1 Introduction

Numerical optimization problems have grown markedly more complex over the last few decades, demanding highly efficient optimization algorithms. For instance, accuracy problems in data mining and design-cost problems in engineering often require finding the best solution among many candidates without squandering effort on sub-optimal regions. Owing to the highly non-convex landscapes and intricate nature of many real-world problems, the associated search spaces pose serious difficulties for optimization approaches [1]. The ruggedness of such landscapes typically grows with problem size and dimensionality [2]. Traditional deterministic methods, built on basic calculus rules, conduct an exhaustive search and cannot deliver effective solutions within the limited computational budgets that heuristic methods operate under [3]. In many cases, traditional optimization methods have provided powerful procedures for obtaining globally optimal solutions to problems of modest, or even considerable, complexity [4]. They lead to a fast and efficient optimization process when the problem at hand has a simple structure with few constraints and decision variables. However, these methods find it difficult to deliver fully effective solutions for real-world problems with complex designs and many constraints [4]. For some intricate problems they can only identify local optima, and there is no guarantee that the global optimum will be found [5]. This poses a serious challenge in locating the global optimum of interest. To address such issues, researchers have turned their attention to meta-heuristic techniques; more background on meta-heuristics is given in the related-works section. A broad range of meta-heuristics has recently been applied in the literature to provide efficient optimization across various problems, and a given meta-heuristic can act effectively on many kinds of problems. Nevertheless, in line with the ‘no-free-lunch’ (NFL) theorem [6], no general meta-heuristic can handle all optimization problems in the best possible manner and surpass all other meta-heuristics on every problem [7]. A successful meta-heuristic may be deemed one that performs well, or at least acceptably, on most problems. A crucial obstacle to achieving this goal is striking a sufficient equilibrium between the exploration and exploitation behaviors of the algorithm of interest. This balance cannot be identified in advance, since the proper trade-off between the two depends on the problem. In practice, some methods have better exploration and exploitation strategies than others. This leaves room for improving existing meta-heuristics to attain a satisfactory balance between exploration and exploitation and thereby reach sound solutions on most optimization problems. Simply put, the NFL theorem keeps this area of study open and motivates researchers to improve the accuracy of existing methods or introduce new ones with broader performance [8].

In this context, this study was motivated to further enhance a recent, well-known meta-heuristic, the Crow Search Algorithm (CSA) [9], to tackle a wide pool of optimization problems, following the philosophy of continuous improvement toward compelling solutions for real-life problems. CSA is a promising swarm intelligence technique [9] inspired by the intelligent behavior of crows while foraging in nature. Although CSA can reach the optimum on various optimization problems [10], it may become trapped in local optima, particularly on complex problems with several local optima. This can be attributed to its narrow search capability and slow convergence. To overcome such hurdles, various strategies and methods have been presented in the literature to further raise the accuracy of CSA, and many CSA variants have been designed from the operators’ perspective. In [11], CSA was improved with chaos-based criteria that embed a chaotic local search to enhance the diversity of solutions. Owing to the efficacy of chaos, chaos theory was also integrated into CSA to exploit its characteristics [12]. A conscious-neighborhood method was employed in CSA to direct the movement of crows according to three search mechanisms; the enhanced CSA showed good performance on benchmark test problems [13]. In [14], a chaos-based strategy implemented a chaotic local search for CSA to hasten the convergence rate and mitigate entrapment in local optima; such chaotic mechanisms strengthen the search power of CSA through their stochasticity and ergodicity, while strategies that amalgamate past experience and craft a non-hideout position help balance the exploration and exploitation potential of CSA during the search. A niching method was integrated into CSA to manage the interaction among crows and empower CSA to locate multiple solutions to optimization problems [15]. In [16], each crow uses two update methods to arrive at a better solution: one achieves intelligence sharing among crows, and the other helps crows escape local optima; in the same work, the awareness probability and flight length were adapted inside the iteration loops of CSA to establish a good balance between local and global search and to boost searchability. These operators efficiently refine the crows’ movements. Beyond operators, various strategies have also been presented to augment the performance of CSA on diverse optimization problems. An opposition-based learning method was incorporated into the position-updating process of CSA to explore solutions and direct the crows’ movements [17, 18]. Kapur’s entropy was incorporated into CSA to reinforce its population diversity and help it escape early convergence [19]. Promising adjustment methods for a time-varying flight length based on the crows’ convergence times were proposed to address the rapid convergence of CSA; meanwhile, an adaptive flight-length variable, depending on the current and maximum iteration values, improves the hunting ability of crows [20]. A spiral search mechanism was used to strengthen CSA by mitigating premature convergence in solving numerical optimization problems [21]. Hybrid strategies that integrate several approaches have also shown outstanding performance in tackling the shortcomings of CSA.
CSA was combined with PSO to exploit the properties of both and strengthen its exploration and exploitation processes [22]. An improved CSA was hybridized with a uniform crossover mechanism to enhance its exploratory search capacity and convergence behavior [23]. Moreover, CSA has been fruitfully applied to many optimization problems; its variants come in continuous, binary, and mixed types. The continuous type, with real-valued variables, can address real-world problems [24], constrained problems [25], and multi-objective problems [18]. For continuous optimization, an optimization strategy was embedded into CSA to boost its performance in diagnosing Parkinson’s disease, validated on twenty benchmark test problems [26]. A binary CSA was presented for binary optimization and applied to feature selection problems [20]. An integration of CSA with the Grey Wolf Optimizer (GWO) was used to tackle feature selection problems in addition to unconstrained test functions [27]. For multi-objective optimization, CSA was combined with the fruit-fly optimization method to evolve combinatorial interaction test suites in the presence of constraints [28]. A hybrid CSA incorporating fuzzy c-means and chaos theory has been applied to medical diagnosis problems [29]. Hybrids of CSA with PSO were developed for feature selection and global optimization [22, 30]. In [31], an arithmetic crossover was integrated with CSA to address engineering design problems. More variants and applications of CSA across a broad scope of problems can be found in [10]. The above strategies and approaches have effectively enhanced the search behavior of CSA and fostered its performance in solving optimization problems. However, although each of the reviewed approaches performed well in its intended applications, some fall short of what particular applications require, and some exhibit limited functionality and modest performance on complex optimization problems. There is thus still a need for further improvement of CSA, particularly for complex real-world problems.

On these grounds, it is worthwhile to investigate improving CSA from the standpoint of its position-updating strategy and of adaptive strategies for its key parameters. A good position-updating process with appropriate control parameters can further guide the local and global search of the crows in the environment. In this paper, an improved CSA with effective flight length and awareness probability is designed to address the early convergence and modest search capability of CSA. First, the awareness probability of the crows is made to grow as a function of the iterations. It is also anticipated that the longer a crow’s flight length, the more likely it is to locate food; thus, we model the flight length as a descending function of time and the awareness probability as the rate of change of the flight length. This fosters the exploration and exploitation aptitudes of CSA. To this end, we adopted three time-dependent functions for the flight length and three corresponding functions for the awareness probability. These adaptive functions not only provide adequate guidance for the crows in the environment, but are also beneficial for mitigating the stagnation of CSA; they are designed to balance the exploration and exploitation features of CSA. In this respect, three versions of CSA were developed, each using a different growth function to update the awareness probability and flight length over the course of the iterations, and each adding a sensible enhancement to the original CSA. These versions are named Exponential CSA (ECSA), Power CSA (PCSA), and S-shaped CSA (SCSA). We then introduced an enhancement to the position-updating process of these versions to further manage the movement of the crows, with the goal of enabling crows to scout and exploit every promising area in the search domain.

Lastly, a new parameter was proposed to provide these versions with further exploration and exploitation abilities; it helps the crows explore diverse directions in the search space and exploit each search region to locate other crows’ food. To assess the performance of the developed ECSA, PCSA, and SCSA, extensive evaluation experiments were conducted comparing them with the basic CSA and several promising meta-heuristic algorithms on three test suites of widely known benchmark functions. Finally, their practicality was demonstrated in solving four engineering design problems. The comparison between the developed algorithms and their rivals indicates the notable performance of ECSA, PCSA, and SCSA, implying that these algorithms are competitive and promising. In sum, the key contributions of this work can be summarized as follows:

  1. Three population-based algorithms, namely ECSA, PCSA, and SCSA, were derived from the basic CSA.

  2. A thorough evaluation comparison was conducted between the developed algorithms and other meta-heuristics on three benchmark groups: the classical benchmark functions and the CEC-2015 and CEC-2017 test suites.

  3. The credibility and practicability of the proposed algorithms were investigated on four engineering design optimization problems.

  4. The statistical significance and convergence behavior of the proposed algorithms were investigated.

The rest of this work is structured as follows: Section 2 reviews the state of the art in optimization problems and methods. Section 3 provides a thorough description of the original CSA. Section 4 presents an elaborated description of the proposed versions of CSA. The evaluation and statistical outcomes of the proposed versions, compared with other selected meta-heuristics on extensive test environments, are presented in Section 5. The applicability of the developed algorithms is verified on four engineering problems in Section 6, and conclusions and future directions are presented in Section 7.

2 Related works

This section describes the field of stochastic optimization. The field has several branches, such as single-objective, unconstrained, multi-objective, and dynamic optimization. As the algorithms proposed in this work solve single-objective optimization problems, the focus here is on the difficulties and relevant results in the area of single-objective optimization problems and algorithms, as described below.

2.1 Single-objective optimization problems

These problems involve a single objective, meaning that exactly one objective needs to be maximized or minimized. This form of optimization may be subject to a set of constraints that fall into the groups of inequality and equality constraints, and it can be formulated as a minimization problem as shown below:

Minimize: \(F\left( \vec {x}\right) = \left\{ f_1\left( \vec {x}\right) \right\} \)

Subject to:

\(w_j\left( \vec {x}\right) \ge 0, j = 1, 2, \ldots , C\)

\(z_j\left( \vec {x}\right) = 0, j = 1, 2, \ldots , L\)

\(lb_j \le x_j \le ub_j, j = 1, 2, 3, \ldots , V\)

where L and C represent the number of equality and inequality constraints, respectively, V denotes the number of variables, and \(ub_j\) and \(lb_j\) are the upper and lower limits of the jth variable, respectively.
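To make the formulation concrete, the following minimal Python sketch (all names are illustrative, not from this paper) evaluates a candidate solution under the above constraint types using a simple static-penalty scheme, one common way to fold the constraints \(w_j\), \(z_j\), and the bounds into a single objective:

```python
import numpy as np

def penalized_objective(f, x, ineq=(), eq=(), lb=None, ub=None, rho=1e6):
    """Evaluate f(x) plus a static penalty for constraint violations.

    ineq: callables w_j, feasible when w_j(x) >= 0
    eq:   callables z_j, feasible when z_j(x) == 0
    """
    x = np.asarray(x, dtype=float)
    violation = 0.0
    for w in ineq:                      # inequality constraints w_j(x) >= 0
        violation += max(0.0, -w(x))
    for z in eq:                        # equality constraints z_j(x) = 0
        violation += abs(z(x))
    if lb is not None:                  # box constraints lb_j <= x_j <= ub_j
        violation += np.maximum(lb - x, 0.0).sum()
    if ub is not None:
        violation += np.maximum(x - ub, 0.0).sum()
    return f(x) + rho * violation

# Example: minimize the sphere function subject to x_1 + x_2 >= 1
sphere = lambda v: float(np.sum(v**2))
val = penalized_objective(sphere, [0.2, 0.3],
                          ineq=[lambda v: v[0] + v[1] - 1.0],
                          lb=np.array([-5.0, -5.0]), ub=np.array([5.0, 5.0]))
```

Any of the optimizers discussed below can then minimize `penalized_objective` directly, since the penalty converts the constrained problem into a box-bounded one.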

The set and ranges of the variables, constraints, and objectives define a d-dimensional search space. For one-, two-, and three-dimensional problems, the search space can be readily displayed in a Cartesian coordinate system, allowing its shape to be viewed. Problems with more than three dimensions, however, cannot be drawn, as such dimensions lie outside everyday experience. Real-world optimization problems with many variables therefore present the greatest difficulty. The ranges of a problem’s variables delimit the region covered by the search process, and these ranges are diverse. The decision variables can be either binary or continuous, so the problems deal with either a binary or a continuous search domain. In the first type, there is a finite number of points between any two given points, while in the second type there is an unlimited number of points between any two points. Locating the global optimum in a continuous search space differs from doing so in a binary search space, and each search has its own challenges. Moreover, even though most optimization problems have well-defined ranges for their decision variables, several problems have no precise ranges that can be assumed during optimization [32].

In any case, solving such problems demands particular attention; for example, an optimization algorithm might start with an initial range and expand it during optimization. The constraints of an optimization problem restrict the search space further. Because constraints create gaps in the search space, solutions falling within those gaps must be revised before they can serve the optimization problem. The constraints may split the search space into several disjoint regions, and infeasible solutions are those lying beyond the constrained areas that have been set.

Conversely, feasible solutions are those that lie within the constrained areas. The portions of the search domain within and outside the constrained regions are referred to as feasible and infeasible areas, respectively. An optimization technique that performs well in an open search space may be less effective in a constrained search domain; optimization algorithms must therefore be equipped with appropriate operators to handle constraints properly [33]. Another challenge when dealing with optimization problems is the presence of several local optima. The search space created by the objective function, decision variables, and constraints may be simple or highly complex. In fact, in most studies in the literature, the main difficulty faced by optimization methods is the number of local solutions present in the problems.

In single-objective search spaces, only one optimal solution (the global best) delivers the optimal objective value, while many other solutions produce objective values not far from the globally best one. Such solutions are referred to as local solutions because they are the best within their own neighborhood, but not the best when the search space is considered as a whole. Having too many local solutions causes several optimization methods to drop into local optima. Because a real search space frequently includes several local solutions, an optimization algorithm should be able to avoid them in order to consistently reach the global best solution. Additionally, convergence speed is a further difficulty that optimization algorithms face when addressing optimization problems.

An optimization method capable of avoiding local solutions is not necessarily able to find the global optimum. Once local solutions are avoided, the approximate region of the global optimum is identified; the next stage is to improve the accuracy of the obtained approximate solutions. A very rapid convergence rate may cause stagnation in local optima, whereas abrupt changes to the solutions help eschew local optima but slow down convergence to optimality. Balancing these two conflicting goals is the main challenge an optimization technique must overcome when solving real-world problems, and convergence speed is crucial for locating a precise approximation of the global optimal solution. When dealing with single-objective search spaces, there are further challenges such as uncertainty, isolation of the optimum, and dynamic or noisy objective functions; each demands particular attention. These concepts are beyond the scope of this study, so interested readers can refer to the work presented in [34].

2.2 Single-objective optimization algorithms

These algorithms can be broadly divided into two key classes: deterministic methods, and heuristic and meta-heuristic (stochastic) methods, with the latter category being the most common [35].

2.2.1 Deterministic algorithms

Deterministic approaches address optimization problems by making predetermined decisions: if the initial conditions are the same, the final solution will also be the same [36]. Deterministic methods consistently find the same solution to a specific problem, provided they launch from identical starting points. The essential advantage of these algorithms is their reliability, as they produce a solution in every independent run, although their computations can become ineffective on large-scale data structures. Moreover, stagnation in local optima is a hurdle for these methods, as they usually exhibit no random behavior when tackling real-world optimization problems [37]. Bharathan et al. [38] developed an integer linear programming model with penalty coefficients; global constraint violations are permitted in this model but are penalized appropriately, a strategy superior to traditional methods when user requirements are complex. Local search and Tabu search are examples of deterministic techniques [39]. Deterministic approaches are often ineffective for high-dimensional problems, as the problem’s conditions constrain them. Since many real-world problems exhibit one or more of the characteristics indicated above, meta-heuristics have been proposed to solve such challenging problems [35].

2.2.2 Stochastic and meta-heuristic algorithms

Stochastic methods use random operators, so different solutions can be found even if the starting point is unaltered, making stochastic methods less repeatable than deterministic algorithms. However, this random behavior helps in avoiding local optima, which is the critical merit of stochastic methods. The accuracy of these methods can be further improved by tuning them and using more runs [40]. Accordingly, heuristic methods can promptly produce high-quality approximate solutions for large-scale data. However, heuristic methods are often created based on problem-specific knowledge of the optimization task [41], so there are several limitations to generalizing such an algorithm.

Stochastic algorithms can be split into two classes: individualist and collective. In the first class, a stochastic algorithm commences and carries out optimization with a single solution, which is randomly altered and improved for a predetermined number of iterations or until a final evaluation criterion is met. Simulated Annealing (SA) [42] and hill climbing [43] are the two most popular methods in this group; this family of algorithms demands low computational overhead and only needs a few function evaluations. By comparison, collective approaches generate and evolve several random solutions during the optimization process; typically, the solutions work together to establish the global optimum in the search space. Prominent algorithms in this family include the Genetic Algorithm (GA) [44], Particle Swarm Optimization (PSO) [45], and the Differential Evolution (DE) algorithm [46]. Such algorithms reduce the chance of stagnation in local optima by maintaining many solutions, which is a crucial benefit of collective techniques. Yet each candidate solution necessitates its own function evaluation, and establishing effective collaboration between the solutions is a significant difficulty.
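As a concrete illustration of the individualist family, a minimal stochastic hill climber can be sketched as follows (an illustrative sketch, not code from the cited works):

```python
import numpy as np

def hill_climbing(f, x0, step=0.1, iters=1000, seed=0):
    """Individualist stochastic search: perturb the single current
    solution at random and keep the move only if it improves f."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.shape)  # random neighbor
        fc = f(cand)
        if fc < fx:                                      # greedy acceptance
            x, fx = cand, fc
    return x, fx

# Example: a 2-D quadratic bowl
best_x, best_f = hill_climbing(lambda v: float(np.sum(v**2)), x0=[3.0, -2.0])
```

A collective method replaces the single `x` with a population and adds an interaction rule between its members, as CSA does below.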

There is no guarantee that the optimal solution will be found when solving many real-world problems, because most such problems are NP-hard. Heuristics, which are approximate approaches using iterative trial-and-error procedures, are typically used instead of exact methods to approach the optimal solution. Many of them take inspiration from nature, and their most recent development is meta-heuristics [22]. A meta-heuristic is a master iterative process that directs and alters the actions of subordinate heuristics to quickly provide solutions of a high standard. Each iteration may change a whole (or partial) single solution or a collection of solutions, and the subordinate heuristics may be low- or high-level methods, a straightforward local search, or a construction technique. Even though meta-heuristic methods share some resemblances with each other, there are fundamental differences between them. In effect, they typically mimic features of natural behavior, biological or physical phenomena present in nature, or manually tailored search strategies [47]. They can be principally classified into four key groups according to their source of inspiration, as explained below:

  • Evolutionary-based Algorithms (EAs): algorithms in this category simulate concepts and models of the biological evolutionary behavior of creatures found in nature, based on natural selection and the idea of survival of the fittest. The most widespread techniques in this class are GA [44], the DE algorithm [46], and the Biogeography-Based Optimizer (BBO) [48]. Other examples of this type of algorithm include the Wildebeest Herd Optimizer (WHO) [49] and Learner Performance-based Behavior (LPB) [50].

  • Physics-Based (PB) methods: these are motivated by prevailing physical laws. The most notable method inspired by physics is SA [42]. The laws of physics have already generated significant results when cast into optimization methods, with eminent examples including Equilibrium Optimization (EO) [51] and the Archimedes Optimization Algorithm (AOA) [3].

  • Swarm Intelligence (SI) techniques: the algorithms in this category simulate the collective social behavior of flocks or colonies, such as swarms of birds, animal herds, insect colonies, schools of fish, and flocks of many other species found in nature. Among the most successful SI-based methods are PSO [45] and the Ant Colony Optimization (ACO) algorithm [52]. The Salp Swarm Algorithm (SSA) [32], Capuchin Search Algorithm (CapSA) [53], Chameleon Swarm Optimizer (CSO) [54], White Shark Algorithm (WSA) [55], and Trees Social Relations (TSR) [56] are only a small number of the long list of SI-based meta-heuristics.

  • Human-based Algorithms (HBA): these methods are generally associated with human activities and behaviors [57]. Some of the newest algorithms in this category include driving training-based optimization [58], the Ali Baba and the Forty Thieves algorithm [47], and stock exchange trading optimization [59].

These and other meta-heuristics have proven practical and effective in solving a wide variety of applications in engineering [32] and science [47, 55]. Many applications of such algorithms can be found in the extensive literature on meta-heuristic research. To name a few, business activities [60], industrial designs [61, 62], Feature Selection (FS) [63], the motif discovery problem [64], agriculture [65], medicine [66], and classification problems [67] are among the beneficiaries [68, 69]. In most cases, these methods have led to encouraging and promising results when solving practical, real-world optimization problems.

Optimization with meta-heuristic algorithms begins with a set of random solutions that are combined and altered rapidly and abruptly, which drives the solutions to move globally. This procedure, known as exploration or diversification, draws solutions to different areas of the search space through abrupt shifts [55]. Its fundamental objective is to identify the most promising regions of the search space and to avert local solutions [54]. After satisfactory regions are found, the solutions begin to change gradually and advance locally. The main goal of this second procedure, known as exploitation or intensification, is to increase the quality of the best solutions obtained during the exploration step. Even though the exploitation stage may still involve avoidance of local optima, its coverage of the search space is less extensive than during the diversification stage; in this situation, local solutions near the global optimum can be avoided. This explanation illustrates how the exploration and exploitation stages pursue two opposing aims. ‘When is the preferable time to shift from exploration to exploitation?’ is a critical question [53] that no one can answer definitively, owing to the randomness of meta-heuristics and the unknowable shape and nature of the search space. Because of this, most meta-heuristic optimization methods require search agents to proceed from exploration to exploitation through flexible mechanisms [70].

The above and other meta-heuristic methods have achieved encouraging performance in reasonable time on the complex real-life problems identified in their intended applications and datasets. Still, they cannot ensure that optimal solutions will be located in all experimental runs [53]. Additionally, several complex problems keep emerging from ongoing technological breakthroughs in various technical and scientific domains [32, 55, 71], and these problems must be solved effectively and continuously by obtaining optimal solutions for all of them. The proficiency of any meta-heuristic in identifying the globally optimal solutions for all types of optimization problems cannot be guaranteed, and only rarely is a single optimization technique adequate for most problem types. Therefore, it is still considered necessary to enhance existing algorithms or create new ones to tackle challenging real-world problems. This led to the motivation for this work, which aims to improve the capability of the basic CSA [9]. This is accomplished by creating three enhanced versions of this algorithm and exploring their efficiency in solving well-known optimization problems of various dimensions and complexity. The following sections detail the basic CSA and the proposed variants of CSA.

3 Basic crow search algorithm

The inspiration for CSA comes from the crows’ behavior in hiding food and the mechanism they follow to find where other crows’ food is stashed. Crows frequently conceal extra food in locations where it can be preserved and retrieved when needed. As greedy birds, crows try to find the hiding places of other crows’ food, following other crows like thieves in order to obtain better food. Meanwhile, crows protect their own food from being robbed: their anti-tailing ability enables them to notice, with a reasonable probability, whether another crow is following them. If a crow notices that it is being followed, it will trick the pursuer by flying to random places in the environment rather than leading it to where its food is hidden. Based on this simulation of crow intelligence, CSA aims to find the optimum solution to the targeted optimization problem.

The key idea in mimicking the intelligent behavior of crows is the search mechanism and the way crows prevent their food from being looted from their caches by tricking other crows into random positions. From their own experience as thieves, crows store their food in safe places to protect their caches from being pilfered [72]. The simulation of crow behavior as a mathematical optimization model can be characterized as follows: the N crows in the flock are the population, and the d-dimensional environment is the search space the crows explore, where d is the number of decision variables. The position of crow i at iteration t is defined as:

$$\begin{aligned} x^{i}_{t} = [x_1^{i,t}, x_2^{i,t}, \ldots , x_d^{i,t}], \end{aligned}$$
(1)

where \(i = 1, 2, \ldots , N\) and \(t = 1, 2, \ldots , T\); here T is the total number of iteration loops and \(x^{i}_{t}\) represents the current position of crow i at iteration t.

Each crow has a corresponding memory m in which it stores the hiding position: crow i memorizes the position with the best food found so far at iteration t and stores it in \(m^{i}_{t}\). While addressing an optimization problem, crow i chases crow j at iteration t toward its hiding place \(m^{j}_{t}\). One of two situations will then happen:

  • Situation 1: Crow i is able to locate the hiding place \(m^{j}_{t}\) because crow j does not know that it is being pursued by crow i. In this case, crow i changes its location as follows:

    $$\begin{aligned} x^{i}_{t+1} = x^{i}_{t} + r_i fl (m^{j}_{t} - x^{i}_{t}) \end{aligned}$$
    (2)

    where \(r_i\) is a uniformly distributed random value \(\in [0, 1]\), which introduces randomization into the search process, and fl is the flight length of the crows.

Figure 1 displays a schematic illustration of the flight length (fl) used in (2). This parameter is one of the main control parameters of CSA and significantly affects its search ability. Small values of fl lead to a local search, where the neighborhood searched is not far from the present position of crow i.

Fig. 1 Flight length of situation 1: (a) \(fl < 1\) and (b) \(fl > 1\), inspired from [9]

As Figure 1 illustrates, if fl is set to a value less than 1, the next position \(x^{i}_{t+1}\) of the ith crow will lie on the dashed line between \(x^{i}_{t}\) and \(m^{j}_{t}\). Alternatively, large values of fl push crows toward global search, where the region searched lies far from the present position \(x^{i}_{t}\): when fl is larger than 1, the next location \(x^{i}_{t+1}\) of the ith crow can lie anywhere on the extension of the dashed line and may overshoot the position \(m^{j}_{t}\), depending on the value of \(r_i\).

  • Situation 2: If the jth crow observes that the ith crow is following it, it will fly to a random place in the search area to draw the pursuer away from its hiding place \(m^{j}_{t}\). Combining the two situations, the mathematical model of CSA can be expressed as in (3):

    $$\begin{aligned} x^{i}_{t+1} = \left\{ \begin{array}{ll} x^{i}_{t} + r_i fl (m^{j}_{t} - x^{i}_{t}) &{} \quad r_j \ge AP \\ \text {a random position} &{} \quad \text {otherwise} \end{array} \right. \end{aligned}$$
    (3)

    where AP is the awareness probability of the crows and \(r_j\) is a uniformly distributed random value \(\in [0, 1]\) that contributes randomness to the search process. The values of fl and AP are fixed at 2.0 and 0.1, respectively.

AP directly controls the balance between diversification (i.e., exploration) and intensification (i.e., exploitation). A small value of AP keeps new solutions in the neighborhood of good ones, which increases intensification. Conversely, large values of the awareness probability give a greater chance of scouting the search space and tend toward global search, which increases diversification.

In summary, CSA-based optimization is implemented through an iterative process in which the initially generated memories and positions of the crows are evaluated and amended at each iteration loop until convergence. The process stops when the termination criterion is attained. The best position reached by the crows that have found food is returned as the solution to the optimization problem. The pseudo-code of the basic CSA is given in Algorithm 1.

Algorithm 1 Pseudo-code of the standard CSA
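Since the pseudo-code figure is not reproduced here, the following is a minimal Python sketch of the standard CSA assembled from the description above and (2)-(3); the parameter defaults follow the text, while names and the feasibility-handling details are illustrative choices rather than the reference implementation of [9]:

```python
import numpy as np

def crow_search(f, lb, ub, n=30, T=1000, fl=2.0, AP=0.1, seed=0):
    """Minimal sketch of the standard CSA, Eqs. (2)-(3).
    Each crow i follows a random crow j: with probability 1 - AP it
    moves toward j's memory m_j (Situation 1); otherwise it flies to
    a random position (Situation 2: crow j noticed it was followed)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    x = rng.uniform(lb, ub, size=(n, d))      # crow positions
    m = x.copy()                              # memories (best caches so far)
    fit_m = np.array([f(xi) for xi in m])
    for _ in range(T):
        j = rng.integers(n, size=n)           # crow followed by each crow
        for i in range(n):
            if rng.random() >= AP:            # Situation 1, Eq. (2)
                cand = x[i] + rng.random() * fl * (m[j[i]] - x[i])
            else:                             # Situation 2: random relocation
                cand = rng.uniform(lb, ub)
            if np.all((cand >= lb) & (cand <= ub)):  # accept feasible moves only
                x[i] = cand
                fx = f(cand)
                if fx < fit_m[i]:             # memory update
                    m[i], fit_m[i] = cand.copy(), fx
    best = np.argmin(fit_m)
    return m[best], fit_m[best]

# Example: minimize the sphere function in 10 dimensions
xb, fb = crow_search(lambda v: float(np.sum(v**2)), lb=[-100]*10, ub=[100]*10)
```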

Every new location of each crow is assessed inside each loop of CSA according to a pre-defined fitness criterion. Crows move to their new positions if the corresponding solutions are more accurate than those of the current positions; otherwise, they remain where they are. Likewise, crows update their memories within each iteration loop if the new positions, assessed by the fitness criterion, are better than the saved ones.

4 Proposed variant algorithms of CSA

4.1 Limitations of the basic CSA

Although CSA can search for optimal solutions when solving optimization problems, its search capacity is confined by its native position-updating mechanism, which may drive it into local optima over the iterations [20, 22, 30]. This manifests as a tendency of CSA to converge prematurely when addressing complex, or even modest, real-world optimization problems [21]. The cause is that the crows in CSA rely on constant flight length and awareness probability parameters to navigate and search for other crows’ food; such fixed values for these key parameters cannot ensure that CSA escapes stagnation or avoids entrapment in local optima. Besides, CSA suffers from weak exploration and exploitation capabilities. This arises because the process of updating the crows’ locations in CSA does not consider the global best positions obtained so far. Therefore, strategies are needed to improve the positioning of the crows, in addition to fine-tuning the key parameters, namely the flight length and awareness probability. A good exploration process can find a favorable region of the search space containing an optimal solution, and a good exploitation process can then locate this optimal solution. However, the exploration power of CSA needs improvement in the initial search phase, and its low exploitation makes it hard to locate the global optimum in the late search stage; as a result, local optima are customarily returned. Thus, a sensible trade-off between exploration and exploitation is needed to reinforce its search aptitude. Accordingly, in this work the movements of the crows in the enhanced CSA are carried out by a new position-updating process that uses the global best position obtained so far by the best crows, as explained in detail in the next section. Furthermore, fixed flight length and awareness probability parameters are not enough to help the best crows find other crows’ food sources; they weaken the best crow’s capacity to guide the flock, so the exploration and exploitation competencies of CSA fade in the late foraging process. Accordingly, the flight length should decay gradually with time while the awareness probability gradually increases, so that the best crow can direct all the crows towards the food and the search abilities of CSA remain effective in the late search process. It is thereby essential to strengthen the exploitation feature of CSA in the final search iterations, accelerating convergence while avoiding premature convergence. To cope with the above issues, a new position-updating model that uses adaptive flight length and awareness probability parameters during the execution of CSA is presented to greatly enhance its performance. The following sections offer detailed descriptions of the proposed position-updating model and of the proposed enhanced versions of CSA, referred to as Exponential CSA (ECSA), Power CSA (PCSA), and S-shaped CSA (SCSA).

4.2 Proposed positioning updating process

In ECSA, PCSA, and SCSA, the positions of the robber and owner crows are updated at each iteration. A crow possesses a food source, and a robber crow endeavors to steal that food; in such a scenario, the positions of both the owner and robber crows change accordingly, and the owner crow’s memory is also updated based on its observation of the robber crow. The positions of the crows in ECSA, PCSA, and SCSA are updated according to the mechanism proposed in (4).

$$\begin{aligned} x^{i}_{t+1} = \left\{ \begin{array}{ll} x^{i}_{t}+fl_{t}\, (m^{j}_{t}-x^{i}_{t})\, r_1 &{} \quad r_j< AP_{t},\; r<0.5 \\ x^{i}_{t}-(1-fl_{t})\, (m^{j}_{t}-x^{i}_{t})\, r_2\, \alpha &{} \quad r_j < AP_{t},\; r \ge 0.5 \\ \tau \left( l_j - (l_j -u_j)\, r_3 \right) &{} \quad r_j \ge AP_{t} \end{array} \right. \end{aligned}$$
(4)

where \(x^{i}_{t+1}\) designates the next position of the ith crow at iteration \(t+1\), \(x^{i}_{t}\) is its present position at the current iteration, \(m^{j}_{t}\) stands for the memory of the best crow throughout the iterative process of the entire swarm at iteration t, \(AP_{t}\) is the awareness probability of the crows at iteration t, updated iteratively through the iteration loops, and \(fl_{t}\) represents the unit step of the crows’ movement at iteration t. \(r_j\), r, \(r_{1}\), \(r_{2}\) and \(r_{3}\) are random values drawn from the range 0 to 1, \(\alpha \) represents the component sgn\((rand-0.5)\), which is either 1 or -1 and affects the search direction, \(l_{j}\) and \(u_{j}\) stand for the lower and upper boundaries of the search domain in dimension j, and \(\tau \) is defined as a function of the iterations, as given in (5).

$$\begin{aligned} \tau = a_0 e^{-(a_1t/T)^{a_2}} \end{aligned}$$
(5)

where the coefficients \(a_0\), \(a_1\) and \(a_2\) are constant values used to automatically update the parameter \(\tau \) at each iteration. These coefficients are beneficial for strengthening exploration and exploitation behavior. The coefficients \(a_0\), \(a_1\) and \(a_2\) are set to 2.0, 4.0, and 2.0, respectively, for all of the problems handled later in this work; these values were obtained by pilot testing on a set of test functions.

The parameter \(\tau \) is introduced as a function of time to govern the random movement of the crows iteratively, and it thus decreases with the number of generations. Specifically, this parameter strengthens the dynamics of convergence by diminishing the search speed while enhancing the exploration and exploitation features of the developed algorithms. It enables crows to explore more of the foraging space and to exploit each area while foraging for food or other crows’ food, yielding an efficient convergence process that further strengthens the performance of ECSA, PCSA, and SCSA in tackling optimization problems.

The first two cases of (4) (i.e., when \(r_j < AP_{t}\)) allow the crows to take advantage of random numbers, and the component sgn\((rand-0.5)\) enables them to effectively exploit and scout the search space in different directions and locations. The third case of (4) (i.e., when \(r_j \ge AP_{t}\)) empowers the crows to scout several random positions in the search domain, improving local and global search capabilities and yielding a sufficient balance between exploration and exploitation. This gives the crows in ECSA, PCSA, and SCSA great power to explore every potential position in the search area. The parameters \(fl_{t}\) and \(AP_{t}\) act as interactive operators that manage these algorithms’ exploration and exploitation capabilities; with different values of \(fl_{t}\) and \(AP_{t}\), the proposed algorithms can alternate between global and local search.

Through the position-updating mechanism in (4), the algorithms tend to evolve a new solution \(x^{i}_{t+1}\) that is better than the present solution \(x^{i}_{t}\) for a particular problem. In the position-updating mechanism of CSA [9], shown in (3), the values of the parameters \(AP_t\) and \(fl_t\) are constant during execution; the exploration and exploitation features of the standard CSA therefore rely on fixed values of these parameters. This restricts the search behavior of CSA, which could achieve good exploration and exploitation if freed from this rigid structure. Accordingly, to improve exploration and exploitation, the values of the \(fl_t\) and \(AP_t\) parameters are updated at each iteration of CSA.
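As a concrete transcription of (4) and (5), the following Python sketch updates a single crow; the function and variable names are illustrative, and \(fl_t\) and \(AP_t\) are supplied by one of the growth models described in the following subsections:

```python
import numpy as np

def tau(t, T, a0=2.0, a1=4.0, a2=2.0):
    """Time-decaying control parameter of Eq. (5): a0 * exp(-((a1*t/T)**a2))."""
    return a0 * np.exp(-((a1 * t / T) ** a2))

def update_position(x_i, m_best, fl_t, AP_t, t, T, lb, ub, rng):
    """One application of the position update in Eq. (4) for one crow.
    lb/ub are per-dimension bound arrays; m_best is the best crow's memory."""
    r_j, r, r1, r2, r3 = rng.random(5)
    alpha = 1.0 if rng.random() >= 0.5 else -1.0   # sgn(rand - 0.5)
    if r_j < AP_t:
        if r < 0.5:        # case 1: move toward the best crow's memory
            return x_i + fl_t * (m_best - x_i) * r1
        # case 2: move away, with alpha flipping the search direction
        return x_i - (1.0 - fl_t) * (m_best - x_i) * r2 * alpha
    # case 3 (r_j >= AP_t): scout a random position scaled by tau
    return tau(t, T) * (lb - (lb - ub) * r3)

# Example use with illustrative values:
rng = np.random.default_rng(1)
x_new = update_position(np.array([0.5, -1.0]), np.array([0.1, 0.2]),
                        fl_t=1.2, AP_t=0.1, t=10, T=1000,
                        lb=np.array([-5.0, -5.0]),
                        ub=np.array([5.0, 5.0]), rng=rng)
```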

To recapitulate, we propose three new variants of the basic CSA. The developed variants aim to boost the convergence rate of CSA toward optimality by carefully enhancing two aspects: exploration and exploitation of the search space. Each algorithm uses a different mathematical growth model for the \(fl_t\) and \(AP_t\) coefficients, as described below.

4.3 Exponential model-based CSA (ECSA)

The exponential model was first introduced in [73]. The exponential growth functions shown in (6) and (7) were proposed to represent the flight length and awareness probability of crows in ECSA, respectively.

$$\begin{aligned} fl_t (k; \beta _0, \beta _1) = \beta _0 (1 - e^{-\beta _1 k}) \end{aligned}$$
(6)
$$\begin{aligned} AP_t (k; \beta _0, \beta _1) = \beta _1 \beta _0 e^{-\beta _1 k} \end{aligned}$$
(7)

where \(k=\frac{T}{t}\), \(\beta _0\) stands for the initial estimate of the flight length, and \(\beta _1\) governs the final estimate of the flight length of the crows, reached approximately at the end of ECSA’s iterative process. These parameters represent the coefficients of the exponential growth function.

It is important to note the following relation between the two functions:

$$\begin{aligned} AP_t(k; \beta _0, \beta _1) = \frac{\partial fl_t(k; \beta _0, \beta _1)}{\partial k} \end{aligned}$$
(8)
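For instance, differentiating (6) with respect to k directly recovers (7):

$$\begin{aligned} \frac{\partial fl_t(k; \beta _0, \beta _1)}{\partial k} = \frac{\partial }{\partial k}\left[ \beta _0 \left( 1 - e^{-\beta _1 k}\right) \right] = \beta _0 \beta _1 e^{-\beta _1 k} = AP_t(k; \beta _0, \beta _1) \end{aligned}$$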

Several conventional and intelligent methods have been proposed in the literature to estimate the parameters \(\beta _0\) and \(\beta _1\) of the flight length and awareness probability functions [74]. One famous traditional parameter-estimation method is least-squares estimation [75]; this method struggles with estimation accuracy and requires many measurements to offer accurate parameter estimates. Other approaches use meta-heuristics, which may require great computational effort to estimate the parameters [74]. In this work, the parameters \(\beta _0\) and \(\beta _1\) used in the adaptive functions \(fl_t\) and \(AP_t\) were picked through a practical design process, by examining ECSA on a significant subset of test problems. The coefficients \(\beta _0\) and \(\beta _1\) are set to 2.0 and 1.0, respectively, for all of the problems solved with the proposed ECSA in this paper. Note, however, that such experimentally obtained values are only near-optimal and not necessarily the ‘best’ ones.

4.4 Power model-based CSA (PCSA)

This model is founded on the non-homogeneous Poisson process [76]. The functions shown in (9) and (10) implement \(fl_t\) and \(AP_t\) of the crows in PCSA, respectively.

$$\begin{aligned} fl_t (k; \beta _0, \beta _1)= \beta _0 k^{\beta _1} \end{aligned}$$
(9)
$$\begin{aligned} AP_t (k; \beta _0, \beta _1)= \beta _0\beta _1 k^{\beta _1 - 1} \end{aligned}$$
(10)

where \(k=\frac{T}{t}\).

Equations (9) and (10) were utilized to amend \(fl_t\) and \(AP_t\) throughout the iterative process of PCSA; note that (8) was again used to derive \(AP_t\) in this model. Values from the range 0 to 5 were tried for \(\beta _0\) and \(\beta _1\). For all of the benchmark problems optimized in this paper with the proposed PCSA, \(\beta _1\) and \(\beta _0\) are set to 0.05 and 2.0, respectively; these values were found by empirical investigation on a large subset of test functions of varied complexity, where they yielded the best performance of the proposed PCSA.

It is evident from (9) and (10) that, with \(k = T/t\), the value of \(fl_t\) starts at its maximum when t is small and drops toward its lowest value as t grows. On the contrary, \(AP_t\) is small at small t and gradually increases towards its maximum value. In this way, the crows in PCSA can home in on a food source at the end of their foraging. As shown in the evaluation results, using power functions for \(fl_t\) and \(AP_t\) improves exploration and exploitation.

4.5 Delayed S-shaped model-based CSA (SCSA)

The S-shaped model used in the proposed SCSA was introduced in [77, 78]. The proposed growth S-shaped model for \(fl_t\) is given in (11).

$$\begin{aligned} fl_t(k; \beta _0, \beta _1) = \beta _0 \left( 1- \left( 1+ \beta _1 k\right) e^{-\beta _1 k}\right) \end{aligned}$$
(11)

where \(k=\frac{T}{t}\).

Equation 8 was used to derive \(AP_t\) of the SCSA model from \(fl_t\) presented in (11), where the formula produced for \(AP_t\) can be expressed as follows:

$$\begin{aligned} AP_t (k; \beta _0, \beta _1) = \beta _0 \beta _1^2 k e^{-\beta _1 k} \end{aligned}$$
(12)

All test problems in this work are solved using the proposed SCSA with \(\beta _1\) and \(\beta _0\) equal to 7.0 and 2.0, respectively. These values were determined by experimental testing on a large number of benchmark functions, in which the coefficients were altered many times until the proposed SCSA obtained a sensible solution.
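The three schedules can be written compactly as below, a minimal sketch assuming \(k = T/t\) as defined above and the coefficient values reported in this section (names are illustrative):

```python
import numpy as np

def schedules(t, T, model, b0=2.0, b1=1.0):
    """Flight length fl_t and awareness probability AP_t = d(fl_t)/dk
    for the three growth models, with k = T / t (Eqs. (6)-(12))."""
    k = T / t                                   # t = 1, 2, ..., T
    if model == "exponential":                  # ECSA, Eqs. (6)-(7)
        fl = b0 * (1.0 - np.exp(-b1 * k))
        ap = b0 * b1 * np.exp(-b1 * k)
    elif model == "power":                      # PCSA, Eqs. (9)-(10)
        fl = b0 * k ** b1
        ap = b0 * b1 * k ** (b1 - 1.0)
    elif model == "s-shaped":                   # SCSA, Eqs. (11)-(12)
        fl = b0 * (1.0 - (1.0 + b1 * k) * np.exp(-b1 * k))
        ap = b0 * b1**2 * k * np.exp(-b1 * k)
    else:
        raise ValueError(model)
    return fl, ap

# Coefficients reported above: ECSA (b0=2.0, b1=1.0), PCSA (2.0, 0.05),
# SCSA (2.0, 7.0); e.g. halfway through a 1000-iteration run:
fl, ap = schedules(t=500, T=1000, model="power", b0=2.0, b1=0.05)
```

In all three models, fl decays from its maximum toward its minimum as t grows, while ap rises, matching the behavior described for (9) and (10) above.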

To sum up, the parameters \(\beta _0\) and \(\beta _1\) in (6), (7), (9), (10), (11) and (12) can be fine-tuned for other problems as demanded. The three new versions of the basic CSA provide adequate settings for \(fl_t\) and \(AP_t\) to foster the exploration and exploitation features of CSA. The above models are expected to achieve effective convergence and improve the performance of the proposed ECSA, PCSA, and SCSA on optimization problems. Moreover, they offer these algorithms great potential to evade stagnation in local optima and assist them in determining the global optimum solution.

4.6 Exploration ability

Several parameters improve exploration in the proposed ECSA, PCSA, and SCSA, as described below:

  • \(\tau \): It manages the amount of exploration of the proposed algorithms and identifies how far the new location will be from the food source. It also intensifies exploitation aptness, eludes premature convergence, and prevents the descent of solutions into local optima.

  • sgn\((rand-0.5)\): It sets the direction of the exploration process. Since rand is a random number in the interval from 0 to 1, positive and negative signs are equally likely.

  • \(fl_t\): This adaptive function was selected based on several empirical tests. In the initial iterations, the owner crow and the robber crow are far away from each other, and all the crows are far away from food or other crows’ food; updating \(fl_t\) improves the proposed algorithms’ ability to search the space globally.

  • \(AP_t\): This adaptive function was likewise selected based on empirical tests. Updating the values of \(AP_t\) in each proposed algorithm helps the crows discover unknown search areas in the initial iterations, when the crows are very far from food and from each other.

4.7 Exploitation ability

The following describes the main parameters used in ECSA, PCSA, and SCSA to perform exploitation and local search in the search space:

  • \(\tau \): As the iterations pass, exploration wanes and exploitation expands. In the final iterations, when a crow approaches a food source, updating the crow’s position with this control variable aids the local search for the food source, which amounts to exploitation.

  • sgn\((rand-0.5)\): It also manages the exploitation feature and sets the direction of the local search.

  • \(a_0\): This quantity manages the exploitation property of the proposed algorithms by drilling down around the optimum solution.

4.8 Computational complexity analysis

Typically, the computational complexity of an optimization algorithm can be represented by a function that links the problem’s input size to the algorithm’s run-time. Accordingly, the complexity of ECSA can be described as presented in (13).

$$\begin{aligned} \mathcal {O}(ECSA)&= \mathcal {O}(initialization) + \mathcal {O}(problem\; definition) \\&\quad + \mathcal {O}(t\, (cost\; function)) + \mathcal {O}(t\, (solution\; update)) \\&\quad + \mathcal {O}(t\, (memory\; update)) \end{aligned}$$
(13)

where n, d, and t denote the number of crows, the problem dimension, and the iteration counter, respectively, and c identifies the cost of the objective function.

Algorithm 2 A pseudo-code summarizing the main steps of the proposed ECSA, PCSA, and SCSA algorithms

Fig. 2 Schematic diagram of the proposed algorithms of CSA for global optimization

The parameters in (13) form the basic components of the complexity of the optimization method. Consequently, the general computational complexity can be written as shown below:

$$\begin{aligned} \mathcal {O} (ECSA) = \mathcal {O} (1 + nd + tcn + tn + tnd) \end{aligned}$$
(14)

As \(nd \ll tnd\) and \(tn \ll tcn\), (14) reduces to (15):

$$\begin{aligned} \mathcal {O} (ECSA) \cong O (tcn + tnd) \end{aligned}$$
(15)

Notably, the complexity of PCSA and SCSA is the same as that of ECSA, given in (15). The complexity of ECSA, PCSA, and SCSA is of polynomial order; in conclusion, these proposed algorithms can be considered efficient optimization algorithms. The fundamental steps of these algorithms are summarized in Algorithm 2, and the flowchart describing their general structure (i.e., of ECSA, PCSA, and SCSA) is presented in Fig. 2.

5 Experimental results and discussion

This section presents and explains the experimental results of the developed ECSA, PCSA, and SCSA on sixty-seven widely known benchmark functions. A characterization of these test functions is also provided. The outcomes are explained and compared with those of promising meta-heuristic optimization methods.

5.1 Description and purpose of the functions used

In this study, 67 optimization functions were used to demonstrate the effectiveness of the developed ECSA, PCSA, and SCSA. These functions can be clustered into the following classes: unimodal, with 7 test functions [79]; multimodal, with 6 test functions [80]; fixed-dimension multimodal, with 10 test functions [79, 80]; CEC-2015, with 15 benchmark functions [81]; and CEC-2017, with 29 stable functions and one unstable test function [47, 82]. Details of these test functions, including the test environment, the functions’ dimensions, the limits of the search spaces, and the optimum value, are presented in Appendix A in Tables 23, 24 and 25, respectively. Each set of test functions was used to assess specific aspects of the developed algorithms.

The first class (unimodal functions), comprising F\(_1\)-F\(_7\), has only one optimum solution per function; these test functions were chosen to judge the proposed algorithms’ exploitation feature and convergence. The multimodal test functions in the second class, F\(_8\)-F\(_{13}\), have more than one optimum solution; they were chosen to assess the exploration behavior of the proposed algorithms, since they have many local optima. A good optimization algorithm needs the ability to search the space globally, identify the global optimum, and bypass local entrapment. The third class (fixed-dimension multimodal functions), F\(_{14}\)-F\(_{23}\), is analogous to the multimodal functions, but with fixed, low dimensions; these functions were employed to assess the proposed algorithms’ exploration feature further. A desirable algorithm must avoid local optimal solutions and approach the global optimum quickly. In a nutshell, the test functions F\(_{1}\)-F\(_{23}\) were chosen because they are adequate for verifying the local-optimum avoidance, diversification, and intensification behaviors of the proposed algorithms, as well as for testing their convergence rates.

The last two classes, the CEC-2015 and CEC-2017 test groups, include composite and hybrid benchmark functions. These functions mimic the complexity of a real search domain by having several local optima and varied function shapes across different regions. As detailed in Appendix A, they are formed by shifting, rotating, expanding, and hybridizing unimodal and multimodal test functions. They implement more challenging optimization problems and were chosen to stress-test the accuracy of the proposed algorithms, as well as to assess local-optimum avoidance and the algorithms’ exploration and exploitation behaviors. As discussed above, an adept optimization algorithm should be capable of bypassing local optimal solutions and converging speedily to the global optimum. The above test groups, especially the highly challenging CEC-2015 and CEC-2017 benchmarks, were therefore selected to assess the efficacy of the proposed algorithms in evading local optima and finding the global optimum. With this setup, estimating and judging the exploration and exploitation aptitudes of the developed algorithms is straightforward.

5.2 Experimental setup

To demonstrate the general efficacy of the developed ECSA, PCSA, and SCSA, their outcomes are compared with those of the standard CSA and other optimization methods on the unimodal, multimodal, fixed-dimension, CEC-2015, and CEC-2017 benchmark test groups presented in Appendix A. The competing methods cover four categories of meta-heuristic optimization algorithms: (i) GA [83] as an evolutionary algorithm; (ii) PSO [84], the Spotted Hyena Optimizer (SHO) [85], GWO [86], the Emperor Penguin Optimizer (EPO) [87], and CSA [9] as swarm intelligence algorithms; (iii) the Gravitational Search Algorithm (GSA) [88] and Multi-Verse Optimizer (MVO) [89] as physics-based algorithms; and (iv) the Sine Cosine Algorithm (SCA) [90] as a mathematics-based algorithm. Among these, PSO and GA are the most popular and well-studied swarm intelligence and evolutionary algorithms; SHO, GWO, EPO, CSA, and SCA are practical meta-heuristic algorithms; and MVO and GSA are reliable and well-known physics-based algorithms. The parameter settings of the proposed ECSA, PCSA, and SCSA, the basic CSA, and the other comparative algorithms are provided in Table 1. The above comparative meta-heuristics were selected because they have been broadly applied in the literature to the aforementioned benchmark functions, where they delivered promising performance. Moreover, these algorithms share many similarities with the proposed algorithms, including flexibility, generality, and simplicity, and they are independent of the nature of the benchmark functions to be addressed.

Table 1 Parameter settings of ECSA, PCSA, and SCSA and other optimization methods

The parameter settings displayed in Table 1 were chosen to match the settings broadly used in the literature. The initialization process of the proposed algorithms is the same as that used in the other algorithms, ensuring a fair comparison with the competitors. The proposed algorithms used 30 search agents and 1000 iterations (a maximum of 30,000 function evaluations (FEs)) for all test functions in all test classes. Likewise, to realize a fair comparison, the other algorithms also used a maximum of 30,000 FEs. The comparisons between the algorithms were made with identical floating-point precision, so the margins of difference between the findings are attributable to the performance of the comparative methods. As presented in Table 1, the algorithms were assessed over 30 separate runs for each test problem in each experiment.
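For illustration, this protocol can be expressed as a minimal Python sketch. The study’s own implementation is in MATLAB, and the `optimizer` callable and all names below are hypothetical placeholders, not the paper’s code:

```python
import numpy as np

# Hypothetical experimental harness mirroring the reported protocol:
# 30 search agents, 1000 iterations (30,000 FEs in total), and 30
# independent runs per function, reporting AVG and STD of the final best.
N_AGENTS, N_ITERS, N_RUNS = 30, 1000, 30

def run_experiment(optimizer, objective, dim, bounds):
    """Run `optimizer` N_RUNS times; return (AVG, STD) of the final best fitness."""
    finals = []
    for seed in range(N_RUNS):
        rng = np.random.default_rng(seed)          # one independent run per seed
        best = optimizer(objective, dim, bounds,
                         n_agents=N_AGENTS, n_iters=N_ITERS, rng=rng)
        finals.append(best)
    return float(np.mean(finals)), float(np.std(finals))
```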

5.3 Performance Evaluation

This section presents the accuracy of the proposed algorithms and compares them with other optimization algorithms on the classical unimodal (F\(_1\)-F\(_7\)), multimodal (F\(_8\)-F\(_{13}\)), and fixed-dimension multimodal (F\(_{14}\)-F\(_{23}\)) benchmark test functions. The average (AVG) and standard deviation (STD) values were employed as the primary statistical measures on the benchmark test functions. These statistics were calculated at the final iteration to evaluate the algorithms’ accuracy convincingly. The standard deviation results were computed to test the stability of the proposed algorithms across the independent runs. The algorithms’ stopping condition was the total number of iterations. The best findings are emboldened in all tables.

5.3.1 Evaluation of functions F\(_1\)-F\(_7\)

These functions are convenient for judging the exploitation ability of the proposed algorithms since they only have one global optimum and no local ones. Table 2 shows the average (AVG) and standard deviation (STD) obtained, over 30 separate runs, by the proposed algorithms and the meta-heuristics mentioned above on unimodal functions.

Table 2 Results of ECSA, PCSA, SCSA, and other optimization algorithms in unimodal benchmark functions

From the findings on unimodal functions in Table 2, it is apparent that the proposed ECSA, PCSA, and SCSA deliver very reasonable outcomes in comparison to the parent CSA and other promising methods. In particular, these proposed algorithms achieved the global optimum solution in more test functions than the others. Notably, the proposed SCSA was the most efficient algorithm for the functions F\(_1\), F\(_3\), and F\(_6\), where it achieved the global optimum solutions. For the function F\(_6\), the average and standard deviation values of SCSA were the best, with values of 0.00E+00 in both measures. Besides, it delivered competitive outcomes in the other test functions and was much better than many competing algorithms. This reliable performance is also seen for the proposed ECSA, whose averages and standard deviations in F\(_4\), F\(_6\), F\(_3\), F\(_2\), and F\(_1\) were very promising, ranking second and third in these test functions after EPO and SCSA. More particularly, ECSA was the most proficient algorithm for function F\(_5\), as it identified the best solution in this function.

The proposed PCSA achieved the second-best result in F\(_1\), F\(_3\), and F\(_6\), with only slight differences in the outcomes in comparison to SCSA. In F\(_5\), ECSA ranked first with an optimal result of 7.97E-01, whereas PCSA ranked second with only a small difference in the mean result, and SCSA took the third rank after ECSA and PCSA in respect of the average score, again with only a very slight difference. PCSA and SCSA are the second and third-best optimization methods regarding the average results in function F\(_6\). Specifically, SCSA ranked first with an optimal result of 0.00E+00, PCSA received the second rank with a mean score of 7.08E-33, and ECSA received the third rank with a mean score of 5.81E-28. The results of ECSA, PCSA, and SCSA on the unimodal functions F\(_1\)-F\(_3\) confirm that these algorithms outperformed the basic CSA, PSO, MVO, SCA, GSA, and others in these test cases. However, in F\(_2\), F\(_4\), and F\(_7\), these algorithms provided results comparable to the EPO and GWO algorithms. Therefore, it can be deduced that the algorithms evolved in this work were capable of finding optimal results on many unimodal functions. Further, the small standard deviation results of the proposed ECSA, PCSA, and SCSA in all unimodal functions show that these algorithms are stable and that this advantage is well-founded. On the basis of the characteristics of the unimodal functions under study, it can confidently be stated that ECSA, PCSA, and SCSA benefit from a high exploitation capability.

5.3.2 Evaluation of functions F\(_8\)-F\(_{23}\)

The functions F\(_8\) to F\(_{23}\) are well-suited for examining the exploration behavior of the developed algorithms. Tables 3 and 4 exhibit the performance scores of the different algorithms, over 30 separate runs, for the high-dimensional functions (F\(_8\)-F\(_{13}\)) and the fixed-dimensional functions (F\(_{14}\)-F\(_{23}\)), respectively.

Table 3 Results of ECSA, PCSA, SCSA, and other optimization algorithms in multimodal test functions

Table 3 substantiates that the proposed ECSA, PCSA, and SCSA algorithms obtained promising results in the majority of the multimodal test functions (i.e., F\(_8\)-F\(_{13}\)), while some of the other methods did not. The results obtained by ECSA, PCSA, and SCSA in these functions were in the vicinity of the global optimum solutions, with only a slight margin of difference. In more detail, it is apparent from the findings in Table 3 that SCSA outperformed the other methods in F\(_{12}\). For F\(_{8}\), the most challenging function in this group, ECSA, PCSA, and SCSA scored better than the parent CSA and reached results reasonably approaching the global optimum reported by SHO. For F\(_9\), HS presented better accuracy than the other optimization methods, while ECSA, PCSA, and SCSA still presented very sensible results for this function. The proposed ECSA, PCSA, and SCSA also delivered highly effective results in F\(_{10}\) and F\(_{11}\). Reading Table 3 once more, one can notice that almost all of the optimization algorithms performed sensibly well, with the proposed ECSA, PCSA, and SCSA achieving performance levels higher than those of CSA. These findings confirm that ECSA, PCSA, and SCSA have good exploration capacity. The standard deviation figures of ECSA, PCSA, and SCSA are tiny in these benchmark test functions, which asserts that their performance is stable.

Table 4 Results of ECSA, PCSA, SCSA, and other optimization algorithms in fixed-dimensional test functions

The experimental tests presented in Table 4 are intended to corroborate the exploration ability of the proposed ECSA, PCSA, and SCSA algorithms on more sophisticated test functions, with more complex search spaces than the tests presented in Tables 2 and 3. It is evident from the outcomes in Table 4 that ECSA, PCSA, and SCSA are superior to many other promising algorithms in the majority of the fixed-dimensional functions in terms of accuracy, and their performance is very comparable to the other rivals in the remaining test functions. The AVG outcomes of the best solutions obtained during the 30 independent runs prove that ECSA, PCSA, and SCSA show excellent and consistent performance on average. In more detail, ECSA, PCSA, and SCSA are the best optimizers in F\(_{14}\)-F\(_{19}\), where their statistical results are markedly better than those obtained by the other algorithms. There is no noteworthy difference between the results of the ECSA, PCSA, and SCSA algorithms themselves; still, there is a large difference between the findings obtained by these algorithms and those obtained by the other algorithms in F\(_{14}\), F\(_{15}\), F\(_{16}\), and F\(_{17}\). The average accuracy values of ECSA, PCSA, and SCSA are better than those of CSA in F\(_{19}\)-F\(_{22}\), whereas the STD values of CSA are better than those of ECSA, PCSA, and SCSA in these test functions. Specifically, there is a minimal statistical difference between ECSA, PCSA, SCSA, GA, EPO, and GWO in F\(_{17}\), and a slight difference between ECSA, PCSA, SCSA, and SHO in F\(_{19}\) and F\(_{20}\). In some functions the best results favor the proposed algorithms, while in others another meta-heuristic finds a better solution. For example, the optimum result of F\(_{21}\) is -10.153; the proposed ECSA, PCSA, and SCSA obtained mean values of -2.05, -6.72, and -6.57, respectively. The same applies to F\(_{22}\) and F\(_{23}\), where the performance of ECSA, PCSA, and SCSA is comparable to SHO. Moreover, the proposed ECSA, PCSA, and SCSA algorithms have the smallest STD results in most of the fixed-dimensional functions in comparison to all other competing algorithms. These small STD results reveal that the dominance of ECSA, PCSA, and SCSA is well-established, and they prove that these algorithms have excellent and stable performance on average. In short, the proposed ECSA, PCSA, and SCSA are statistically superior to CSA, MVO, PSO, GSA, SCA, GA, SHO, GWO, and EPO, as their mean outcomes in the majority of the fixed-dimensional functions over 30 separate runs are, in each function, much lower.

5.3.3 Implementation time

In addition to the average (AVG) and standard deviation (STD) values used in the comparisons above, the computational time taken by the algorithms is also a key criterion frequently used to exemplify the efficacy of algorithms. Alternatively stated, the computational time is crucial to reveal whether the computational burden of newly amended optimization algorithms is satisfactory and within the bounds of other competing methods. The proposed variants of CSA (i.e., ECSA, PCSA, and SCSA) and the other competing algorithms were implemented on the MATLAB 2021a platform. All of these algorithms were run under identical conditions on Windows 10 with an Intel Core i7-5200U\(^{TM}\) CPU at 2.2 GHz and 8.0 GB of RAM, to ascertain that the comparison is fair. Table 5 shows the average execution times, and their associated standard deviation values, taken by ECSA, PCSA, SCSA, and all other competitors, over 30 separate runs, in optimizing the functions F\(_{1}\) to F\(_{23}\). The best results in this table are emboldened to give them more significance than the other results.

Table 5 Results of computational times (in seconds) of the proposed algorithms and other competing algorithms in solving benchmark test functions F\(_{1}\) - F\(_{23}\)
Fig. 3 Convergence curves of the proposed algorithms, ECSA, PCSA, SCSA, and the basic CSA for F\(_{1}\)-F\(_{12}\)

The comparison presented in Table 5 between the competing algorithms was made in respect of the execution time each algorithm takes to accomplish the computation. Although the execution times of ECSA, PCSA, and SCSA, as displayed in this table, are sometimes slightly longer than those of CSA and some of the other algorithms, the increase is not significant in many cases. Accordingly, the execution times of the proposed algorithms are reasonable, since they fall within the range of the execution times of the other algorithms. In short, the proposed ECSA, PCSA, and SCSA algorithms’ computational times are relatively short and associated with small STD values.

5.4 Convergence curves of the developed algorithms

Convergence curves are the most common qualitative results of optimization techniques in the literature. Here, an algorithm’s best result to date is recorded after each iteration loop, and the convergence curves are drawn as lines to demonstrate how closely the algorithm approximates the global optimal solution over a certain number of iterations. The convergence curves of the proposed ECSA, PCSA, and SCSA and the parent CSA, in respect of the best fitness values of all of the standard benchmark functions, were obtained in a two-dimensional environment, as shown in Figs. 3 and 4, where Fig. 3 exhibits the convergence curves for F\(_1\) to F\(_{12}\) and Fig. 4 presents the curves for F\(_{13}\) to F\(_{23}\).

Fig. 4 Convergence curves of the proposed algorithms, ECSA, PCSA, SCSA, and the basic CSA for F\(_{13}\)-F\(_{23}\)

The convergence curves of the three algorithms derived from CSA are displayed over the 1000 iterations specified on the x-axis in Figs. 3 and 4 against the best fitness values obtained so far on the y-axis. In these curve trends, the best optimization method is the one that shows rapid convergence and reaches the smallest error; that is, we favor the algorithm that settles at a low fitness value after a few iterations. A notable outcome is that all the proposed variants of CSA credibly found the minimum of F\(_{1}\)-F\(_{12}\); hence, these problems may be relatively easy to solve with a 100% success rate. Looking deeply at Figs. 3 and 4, one can notice reasonable variations in the behaviors of the three developed algorithms, which are ascribed to how each optimization method acts regarding its exploitation and exploration features.
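For clarity, the following minimal Python sketch illustrates how such a best-so-far convergence curve is assembled and plotted. The `step` callable is a hypothetical placeholder for one iteration of any of the compared optimizers, not the paper’s implementation:

```python
# Hypothetical sketch of how a convergence curve is assembled: after each
# iteration, the best fitness found so far is recorded, giving a
# monotonically non-increasing curve plotted against the iteration index.
def track_convergence(step, n_iters=1000):
    """`step()` advances the optimizer by one iteration and returns its current best."""
    best_so_far, curve = float("inf"), []
    for _ in range(n_iters):
        best_so_far = min(best_so_far, step())
        curve.append(best_so_far)
    return curve

# Plotting, e.g. with matplotlib (log scale is common when errors span
# many orders of magnitude):
#   plt.semilogy(curve); plt.xlabel("Iteration"); plt.ylabel("Best fitness so far")
```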

Fig. 5 Qualitative results for F\(_1\), F\(_4\), F\(_6\), F\(_{10}\), F\(_{11}\) and F\(_{16}\): convergence curve, the average fitness of all crows, search history and path in the first dimension of the first crow

This behavior also varies from one test function to another for the same optimization algorithm, depending on the nature of the function. In general, the convergence curves stabilize at or after approximately 200 iterations. The proposed versions of CSA show good convergence behavior in all test functions and outperform the native CSA. Looking at each plot separately, one can say that SCSA surpassed its companion versions in benchmark function F\(_{21}\), where it reached a very low fitness value in fewer than ten iterations; thereafter, the curve continued to fall, although only very slightly. Again, one can perceive that SCSA excelled in achieving the best visual results, followed by PCSA, which behaved similarly to SCSA; both are superior to ECSA, and CSA is the worst among them. Overall, the convergence curves in Figs. 3 and 4 demonstrate that the proposed ECSA, PCSA, and SCSA maintain an appropriate balance between exploration and exploitation for locating the optimal global solution reliably. Thus, the success rate of these algorithms in solving optimization functions is high.

5.5 Qualitative analysis of ECSA

To show the qualitative outcomes of ECSA, four metric measures were used in a 2-D environment by solving the functions F\(_1\), F\(_4\), F\(_6\), F\(_{10}\), F\(_{11}\) and F\(_{16}\).

The metric measures employed in the qualitative results can be characterized as follows (a minimal logging sketch is given after the list):

  • The first metric displays the convergence of the globally best crow through the iterations. Convergence analysis of optimization algorithms is vital to better grasp a method’s exploration and exploitation features. Remarkably, the convergence curves reveal that ECSA exhibits sensible convergence rates and encouraging conduct in all of the considered functions. Within the first 500 iterations, ECSA quickly converges to the most favorable regions in the search space. In the following 500 iterations, ECSA gradually converges to the global or near-global optimum in F\(_1\), F\(_6\), and F\(_{10}\); for F\(_4\) and F\(_{11}\), ECSA continued to approach the global solutions. The stability of ECSA across various kinds of functions supports its ability to achieve steady convergence, as evidenced by the smooth convergence curves and the regularity with which they converge to the smallest error throughout the iterative process. The crows are first driven toward the global optima and are then prompted to move locally rather than globally, demonstrating how ECSA utilizes the search space.

  • Average fitness curves read the average objective values of all crows at each iteration of ECSA. These fitness curves decrease significantly throughout the iterations in all of the tested functions. In more detail, ECSA gave a fast convergence response for F\(_1\), F\(_6\), F\(_{10}\) and F\(_{16}\), showed rational convergence for F\(_{4}\), and found optimal solutions with a sensible convergence response for F\(_{11}\). This ascertains that ECSA promotes the global best crow and enhances the fitness values of all crows.

  • The search history records the crows’ positions during optimization. ECSA explores the most favorable regions in the search space for the benchmark functions, does not become stuck in local optima, and covers the whole search space. For F\(_1\), F\(_4\), and F\(_{6}\), the sample points stray only slightly into unpromising regions. For the test functions F\(_{10}\), F\(_{11}\), and F\(_{16}\), most of the sample points are distributed around unpromising regions, owing to the complexity of these test functions. This indicates that ECSA explored the whole search space while averting falling into local optima. The sample points are dispersed around the optimal solution, which shows that ECSA can diversify and intensify the search process in the search space efficiently and effectively.

  • At each loop of the iterative ECSA process, the first crow’s path records the value of its first variable. The path curves demonstrate that the crows undergo large changes during the initial stages of optimization as they move through favorable regions.
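The following minimal Python sketch (with hypothetical names; the paper’s own code is in MATLAB) illustrates how these four qualitative metrics can be logged at every iteration of a population-based optimizer:

```python
import numpy as np

# Hypothetical logger for the four qualitative metrics discussed above:
# (1) best-so-far fitness, (2) average fitness of all crows,
# (3) search history of all positions, (4) first dimension of the first crow.
class QualitativeLog:
    def __init__(self):
        self.best_curve, self.avg_curve = [], []
        self.history, self.first_crow_path = [], []

    def record(self, positions, fitness):
        """positions: (n_crows, dim) array; fitness: (n_crows,) array."""
        current_best = float(np.min(fitness))
        prev = self.best_curve[-1] if self.best_curve else float("inf")
        self.best_curve.append(min(prev, current_best))   # best-so-far convergence
        self.avg_curve.append(float(np.mean(fitness)))    # average fitness curve
        self.history.append(positions.copy())             # all crows this iteration
        self.first_crow_path.append(float(positions[0, 0]))
```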

The average fitness and convergence curves in Fig. 5 demonstrate that ECSA has an appropriate balance between exploration and exploitation to locate the global optimal solution effectively. Generally, the above qualitative outcomes substantiate the desired features of ECSA and affirm that its success in addressing benchmark test functions is largely reliable. This is attributed to the robust global and local search mechanisms of the developed ECSA in the search space.

5.6 Evaluation of CEC-2015 benchmark

The performance of the proposed variants of CSA was further evaluated using a more complicated benchmark set, CEC-2015, which includes hybrid and composition test functions [91]. These functions are therefore valuable for assessing the exploration and exploitation features of the proposed algorithms. Each test function has a search space of \([-100,100]\) with a dimension of 30. Table 24 in Appendix A details these functions. The parameter settings for ECSA, PCSA, SCSA, and all other competing algorithms used on this test set are exhibited in Table 1. The number of FEs for each function was set to 30,000, given that each method uses 1000 iterations and 30 crows. The performance of ECSA, PCSA, SCSA, and the other competing methods on the CEC-2015 benchmark group is provided in Table 6.

Table 6 Results of ECSA, PCSA, and SCSA and other methods on the CEC-2015 test group

Table 6 compares the performance of the proposed ECSA, PCSA, and SCSA algorithms with the primary CSA and eight other meta-heuristics on the CEC-2015 benchmark functions. It is evident from this table that ECSA, PCSA, and SCSA surpassed the other algorithms in the majority of the studied functions. In this context, ECSA achieved better mean values for the test functions C15-f1, C15-f4, C15-f8, and C15-f11; PCSA arrived at better mean scores for C15-f2 and C15-f5; and SCSA realized better mean scores for C15-f6 and C15-f10. Additionally, the proposed ECSA, PCSA, and SCSA yielded findings near optimality for C15-f3, C15-f7, C15-f9, C15-f12, C15-f13, and C15-f15, and these results are analogous to those of the other algorithms. For the test functions C15-f3 and C15-f9, all the competing methods achieved the same results of 3.20E+02 and 1.00E+03, respectively. Also, CSA produced near-optimal outcomes for the C15-f4 test function, comparable to the results of ECSA, PCSA, and SCSA. On the C15-f7 test function, CSA reported an average fitness value of 7.11E+02, close to the result of 7.02E+02 reported by ECSA, PCSA, and SCSA. For the C15-f8 test function, ECSA reported an optimal result of 1.43E+03, whereas PCSA, SCSA, and CSA reported results of 1.65E+03, 1.56E+03, and 7.75E+03, respectively. For C15-f10, SCSA reported an optimal result of 1.17E+03, whereas ECSA and PCSA reported 1.24E+03 and 2.87E+03, respectively, comparable to the result obtained by SCSA; CSA, however, did not perform as well on this function. The margins of difference between the mean scores obtained by CSA and those obtained by ECSA, PCSA, and SCSA in the C15-f14 test function are minimal, yet the standard deviations of ECSA, PCSA, and SCSA are small compared to that reported by CSA for this function. For the remaining functions, C15-f11, C15-f12, C15-f13, and C15-f15, ECSA, PCSA, and SCSA scored optimal or near-optimal average outcomes. In sum, the results obtained by ECSA, PCSA, and SCSA are outstanding and almost always better than those of the competitors, namely CSA, GWO, SHO, MVO, PSO, GSA, EPO, SCA, and GA, in the majority of the CEC-2015 test functions. Regarding the standard deviation outcomes, ECSA, PCSA, and SCSA have tiny figures in most of these functions compared to the others, which confirms that the excellence of these algorithms is stable and solid.

5.7 Evaluation of CEC-2017 benchmark

To further test the reliability of the proposed ECSA, PCSA, and SCSA against more challenging benchmark problems, the CEC-2017 suite was employed as a recent and demanding test group. This group comprises hybrid and composition functions, as well as rotated and shifted unimodal and multimodal functions [82]. As a result, it is sufficiently adequate for rating the exploration and exploitation behaviors of the evolved algorithms. It is worth mentioning that, given the unstable behavior of the C17-f2 function, it was excluded from this suite. The search area for all of the functions in this test suite is \([-100, 100]\), with ten dimensions for each test problem. Owing to the high complexity of these problems and the fact that they contain many local optima, they were chosen to judge the strength of the evolved algorithms in avoiding local optima and reaching the global optimal solutions. Table 25 in Appendix A gives more details about these functions.

The performance of ECSA, PCSA, and SCSA was tested on this set, and the outcomes were compared with the previous meta-heuristics. The majority of the functions in this group rank among the most difficult hybrid and composition functions. All competing methods’ findings were gathered using 50 search agents, 1000 iterations, and 30 separate runs; taking into account the predetermined number of iterations and search agents, 50,000 FEs were allocated to each test function. The parameter settings of the competing methods can be found in Table 1. The outcomes of ECSA, PCSA, SCSA, and the other comparative algorithms are displayed in Table 7.

Table 7 Results of ECSA, PCSA, and SCSA and other algorithms in the CEC-2017 test group

The outcomes in Table 7 underscore the superiority of the proposed ECSA, PCSA, and SCSA over the other meta-heuristics in optimizing challenging functions. These algorithms presented the best exclusive average fitness scores in 6 out of 29 functions (C17-f1, C17-f3, C17-f4, C17-f19, C17-f22, and C17-f23). In further detail, the proposed algorithms performed remarkably on the unimodal test problems (C17-f1, C17-f3), where they consistently found the global optimum solution over the 30 separate runs. They could also find the optimum solutions in three of the hybrid functions, namely C17-f15, C17-f18, and C17-f19. ECSA is the best algorithm among all competitors, scoring the best average results in C17-f7, C17-f10, C17-f17, C17-f21, C17-f18, C17-f11, and C17-f14. PCSA is the second-best optimizer, scoring the best results in 8 out of 29 test functions, namely C17-f4, C17-f1, C17-f3, C17-f23, C17-f19, C17-f25, C17-f9 and C17-f16. SCSA is the third-best optimizer, reporting the best average outcomes in C17-f29, C17-f26, C17-f30, C17-f24, C17-f5, and C17-f20. Moreover, ECSA and PCSA reported the best average scores in the C17-f25 test function. This confirms our earlier finding that SCSA attains high accuracy when solving optimization functions in different search domains. However, the proposed ECSA, PCSA, and SCSA failed to attain the optimal solutions in a few test functions, such as C17-f6. In these cases the proposed algorithms are sometimes trapped in local optima, yet they remain not far from the global optimum.

For the C17-f8 test function, CSA reported the best average score of 811.04, whereas SCSA, ECSA, and PCSA reported the second, third, and fourth best scores, with slight differences from that reported by CSA. For the C17-f9 function, PCSA achieved the best mean value, whereas ECSA and SCSA reported the third and fourth best outcomes, with minimal differences from that reported by PCSA. For the C17-f12 test function, GWO performed better than the other methods in respect of the average fitness value. SHO is the best optimizer for C17-f13, reporting the best average score, better than ECSA, PCSA, and SCSA by slight margins. Concerning the composition functions, which are the most complicated functions in CEC-2017, the proposed methods revealed the optimal solutions in the C17-f23, C17-f25, and C17-f26 test cases, and ECSA and SCSA attained the best average fitness value for C17-f27. Hence, the performance of ECSA, PCSA, and SCSA is better than that of the other algorithms regarding average fitness values, and they rank as the top three optimizers for these test functions. Regarding the STD figures in Table 7, the presented ECSA, PCSA, and SCSA performed remarkably better than the other optimization methods in most test functions. This sustains the conviction that the proposed algorithms possess significant stability when applied to complex test functions in different search areas. These results reveal that ECSA, PCSA, and SCSA rank first owing to their strength in exploration and exploitation.

Table 8 Average rank of all methods obtained using Friedman’s test in the first test group of unimodal, multimodal, and fixed-dimensional test functions
Table 9 Results of Holm’s method based on the statistical results of Group 1 for \(\alpha =0.05\)

Lastly, the CSA, SHO, and GWO algorithms provided plausible solutions for several of the CEC-2017 test functions. The SCA, GSA, and GA algorithms behaved rather poorly on these functions, while PSO and MVO behaved modestly. Overall, SCSA, ECSA, PCSA, SHO, and GWO functioned much better than the others in most CEC-2017 functions, and the proposed ECSA, PCSA, and SCSA algorithms occupied the top three positions in the overall ranking on this test bed.

5.8 Statistical test analysis

In this section, a statistical analysis was first performed using the non-parametric Friedman’s test to determine whether the performance differences of all methods on the benchmark test functions examined in this study are statistically significant. For a reliable comparison, more than five algorithms must be tested and compared on more than ten test functions. This study compares the performance of ECSA, PCSA, and SCSA against the primary CSA and eight other meta-heuristics. In consideration of the benchmark functions, this study thoroughly tested three test groups:

  • Group 1, which consists of unimodal, multimodal, and fixed-dimensional functions with 23 test functions,

  • Group 2, which is the CEC-2015 test group, including 15 test functions, and

  • Group 3, which is the CEC-2017 test group, including 29 test functions.

Friedman’s test requires computing the mean rank of each algorithm and comparing the resulting \(p\)-values against a significance level of \(\alpha = 0.05\) to decide whether the null hypothesis is rejected. The mathematical formulation of, and commentary on, Friedman’s test can be found in [92, 93]. The null hypothesis was rejected for all three test sets of benchmark functions, indicating a statistically significant difference between the performances of the competing methods in each test set. The lowest-ranked algorithm by Friedman’s test is commonly utilized as the control for post-hoc analysis. Further steps are needed to determine which optimization methods differ significantly from ECSA or SCSA, which are reported below as the best algorithms in the three test groups, and to find out which algorithms have performance similar to ECSA or SCSA. To this effect, we performed a post-hoc statistical test using Holm’s procedure [92] to conduct pairwise comparisons between the control algorithm and the other algorithms. This method deems the performance of two algorithms considerably dissimilar if the difference in their mean rankings exceeds the critical threshold associated with the \(p\)-values. This statistical test is carried out here to discern which algorithms are better than, similar to, or worse than ECSA, PCSA, and SCSA at a significance level of 0.05.
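For illustration, the Friedman test described above can be reproduced with standard tooling; the Python sketch below uses placeholder data (random values, not results from this study) and assumes a score matrix whose rows are test functions and whose columns are algorithms:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Sketch of the Friedman test as applied here: rows of `scores` are test
# functions (blocks) and columns are algorithms (treatments).
scores = np.random.rand(23, 12)     # e.g. Group 1: 23 functions x 12 algorithms

stat, p_value = friedmanchisquare(*[scores[:, j] for j in range(scores.shape[1])])
mean_ranks = rankdata(scores, axis=1).mean(axis=0)   # lower rank = better algorithm
# Reject the null hypothesis (all algorithms perform alike) when p_value < 0.05;
# the algorithm with the lowest mean rank is taken as the control for post-hoc tests.
```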

Table 10 Average ranking of all algorithms obtained using Friedman’s test on the test functions of the second group (i.e., CEC-2015)
Table 11 Results of Holm’s method based on the average ranking results of Group 2 for \(\alpha =0.05\)

Holm’s method is a widely applicable multiple-comparison procedure based on successive rejection. It ranks all algorithms by their \(p\)-values and compares each with \(\alpha /(k - i)\), where \(k\) is the degree of freedom and \(i\) is the algorithm number. The procedure commences with the smallest (most significant) \(p\)-value and consecutively rejects the null hypotheses as long as \(p_i < \alpha /(k - i)\). Once a hypothesis cannot be rejected, the procedure stops and considers all of the remaining hypotheses acceptable.

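A minimal sketch of this step-down scheme, assuming the \(p\)-values of the pairwise comparisons against the control algorithm are already available, is as follows:

```python
import numpy as np

# Sketch of Holm's step-down procedure: p-values are visited from the most
# significant (smallest) upward, each compared with an adjusted threshold,
# and rejection stops at the first comparison that fails.
def holm(p_values, alpha=0.05):
    p_values = np.asarray(p_values)
    order = np.argsort(p_values)                   # smallest p-value first
    reject = np.zeros(len(p_values), dtype=bool)
    for step, idx in enumerate(order):
        if p_values[idx] < alpha / (len(p_values) - step):
            reject[idx] = True                     # hypothesis idx is rejected
        else:
            break                                  # all remaining hypotheses accepted
    return reject
```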

Table 8 shows the mean rank of all algorithms, obtained using Friedman’s test, based on the algorithms’ results on the unimodal, multimodal, and fixed-dimensional multimodal benchmark functions.

The \(p\)-value retrieved by Friedman’s test based on the mean outcomes of Group 1 is 4.747802E-11. In this test group, SCSA surpassed all evaluated methods with the lowest mean rank of 4.130434, and its performance was substantially better than that of PCSA, ECSA, CSA, EPO, GWO, PSO, SHO, GA, GSA, MVO, and SCA. The statistical outcomes of Holm’s method on the test functions of Group 1 are given in Table 9.

Holm’s test in Table 9 rejects those hypotheses with a \(p\)-value \(\le 0.007142\). It is seen from these results that ECSA, PCSA, and SCSA are capable of yielding results as promising as those of other efficacious optimization methods presented in the literature.

Table 10 displays the average ranking of all algorithms determined by Friedman’s test based on the results of all algorithms in the CEC-2015 test suite (i.e., Group 2).

The p-value evaluated by Friedman’s test based on the average accuracy outcomes for Group 2 is 7.596059E-09. The results for Group 2, presented in Table 10, exhibit that ECSA ranked first, SCSA ranked second, and PCSA ranked fourth. In sum, ECSA significantly outperformed SCSA, EPO, PCSA, PSO, SHO, SCA, GWO, MVO, CSA, GSA, and GA at the examined level of significance.

The results of Holm’s method, obtained after applying Friedman’s test to the test functions of Group 2, are shown in Table 11.

Holm’s test in Table 11 rejects those hypotheses with a \(p\)-value \(\le 0.008333\). It is evident from these findings that the proposed algorithms are as promising as the other optimization algorithms evaluated in this study.

Table 12 displays the average ranking of all algorithms as determined by Friedman’s test on the outcomes of all CEC-2017 test functions (i.e., Group 3).

Table 12 Average ranking of all algorithms obtained using Friedman’s test on the test functions of the third test group (i.e., CEC-2017)

Finally, the \(p\)-value retrieved by Friedman’s test for Group 3 is 6.930422E-11. In the results of this group, shown in Table 12, which compares the performance of each optimization algorithm on all of the CEC-2017 test functions, ECSA placed first, SCSA placed second, and PCSA placed fourth. In short, the lowest average rank belonged to ECSA, with an average rank of 3.762068, and it vastly outperformed SCSA, CSA, PCSA, PSO, SHO, MVO, GWO, GSA, SCA, and GA.

The results of Holm’s test obtained after applying Friedman’s test on Group 3 functions are shown in Table 13.

Table 13 Results of Holm’s test method based on the average statistical results of Group 3 for \(\alpha =0.05\)

In Table 13, Holm’s procedure rejects those hypotheses with a \(p\)-value \(\le 0.016666\). From the outcomes shown in Table 13, it is clear that the proposed versions of CSA are as practical as the other optimization algorithms evaluated in this study.

As a principal inference drawn from the statistical analysis of all the CEC-2015 and CEC-2017 functions in this paper, ECSA performs considerably better than PCSA, SCSA, and CSA. In concise terms, the results reported in Tables 8, 9, 10, 11, 12 and 13 show that the proposed ECSA, PCSA, and SCSA are statistically very close to each other on these test groups, successfully avoid local optimum solutions, and are highly efficient in both exploration and exploitation.

6 Engineering design problems

This section explores the competence of the proposed ECSA, PCSA, and SCSA algorithms in solving four well-known classical engineering design problems: (1) the speed reducer problem, (2) the tension/compression spring problem, (3) the pressure vessel problem, and (4) the welded beam problem. These design problems reflect challenging benchmark test problems with different characteristics, dimensions, and varied levels of complexity and constraints. Because of this, their search spaces are highly comparable to those that the proposed ECSA, PCSA, and SCSA may encounter while addressing constrained optimization problems.

Fig. 6 A structural design of the speed reducer problem

Table 14 A comparison of the results achieved by ECSA, PCSA, SCSA, and other algorithms for the speed reducer design problem
Table 15 Statistical results obtained from ECSA, PCSA and SCSA and other optimization algorithms for speed reducer design problem

As presented below, the effectiveness of the proposed algorithms in solving these design problems was compared with that of the other meta-heuristic algorithms mentioned above and in Table 1. These comparative algorithms were selected because they have been extensively applied in the literature to address these engineering design problems, where they provided promising performance. Further, these algorithms share many similarities with the proposed algorithms, including flexibility, generality, and simplicity, and they are independent of the nature of the engineering design problems to be addressed. To achieve a fair comparison between the proposed ECSA, PCSA, and SCSA algorithms, the primary CSA, and the other competing meta-heuristics (EPO [87], SHO [85], GWO [86], PSO [84], MVO [89], SCA [90], GSA [88] and GA [83]), the standard parameter settings, namely the maximum number of iterations and the number of search agents, were the same for all algorithms and set to 1000 and 30, respectively. Moreover, the optimum solution(s) arrived at while solving these engineering design problems must not infringe any of several constraints. Therefore, when addressing these design problems, the proposed algorithms are equipped with a static penalty function to handle the constraints, as described below:

$$\begin{aligned} \zeta (z) = f(z) \pm \left[ \sum _{i=1}^{m} l_i \cdot max (0, t_i(z))^\alpha + \sum _{j=1}^{n} o_j \left| U_j(z) \right| ^\beta \right] \end{aligned}$$
(16)

where \(\zeta (z)\) is the penalized objective function, \(f(z)\) is the original objective function, \(l_i\) and \(o_j\) stand for positive penalty constants, and \(t_i(z)\) and \(U_j(z)\) are the inequality and equality constraints of the problem, respectively. The values of \(\alpha \) and \(\beta \) were assigned to 1 and 2, respectively.

The static penalty function assigns a penalty value to each infeasible solution, which helps the proposed algorithms’ search agents move through the problem’s search space; a minimal sketch of this scheme is given below. The results of solving the aforementioned engineering design problems by the proposed algorithms, compared to the other competitors, are presented afterwards.
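The sketch below transcribes Eq. (16) for a minimization problem; the helper names and the penalty weights are illustrative assumptions, not values from this study:

```python
# Minimal sketch of the static penalty of Eq. (16) for minimization:
# inequality constraints t_i(z) <= 0 contribute l_i * max(0, t_i(z))**alpha,
# equality constraints U_j(z) = 0 contribute o_j * |U_j(z)|**beta.
# The weights l and o below are illustrative, not values from this study.
def penalized(f, ineq, eq, z, l=1e6, o=1e6, alpha=1.0, beta=2.0):
    """Return f(z) plus the static penalty for an infeasible candidate z."""
    p_ineq = sum(l * max(0.0, t(z)) ** alpha for t in ineq)
    p_eq = sum(o * abs(U(z)) ** beta for U in eq)
    return f(z) + p_ineq + p_eq
```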

6.1 Speed reducer design problem

The structural design of the speed reducer design problem is shown in Fig. 6. This design is complex because it consists of seven design variables [94].

The weight to be minimized in this design problem is subject to four constraints [85], explained as follows:

  • Transverse deflections of the shafts

  • Surface stress

  • Bending stress of the gear teeth

  • Stresses in the shafts

Fig. 7 A schematic diagram of a tension/compression spring design

Table 16 A comparison of the outcomes reached by ECSA, PCSA, SCSA, and other algorithms for the tension/compression spring design problem

The variables of this design problem are \(l_1\), \(l_2\), \(d_1\), \(d_2\), b, m, and z, which denote the first shaft’s length between bearings, the second shaft’s length between bearings, the diameters of the first and second shafts, the face width, the module of teeth, and the number of teeth in the pinion, respectively. These variables are represented while solving this problem by a vector \(\vec {x}= [x_1\, x_2\, x_3\, x_4\, x_5\, x_6\, x_7]\). The mathematical formula for the speed reducer problem can be stated as follows:

$$\begin{aligned} \texttt {Minimize}: f(\vec {x}) =\,&0.7854x_1x_2^2(3.3333x_3^2+14.9334x_3-43.0934)\\ &-1.508x_1(x_6^2+x_7^2)+7.4777(x_6^3+x_7^3)\\ &+0.7854(x_4x_6^2+x_5x_7^2) \end{aligned}$$

This function is subject to the following eleven constraints:

$$\begin{aligned} g_1(\vec {x})&=\frac{27}{x_1x_2^2x_3}-1 \le 0\\ g_2(\vec {x})&=\frac{397.5}{x_1x_2^2x_3^2}-1 \le 0\\ g_3(\vec {x})&=\frac{1.9 x_4^3}{x_2x_6^4x_3}-1 \le 0\\ g_4(\vec {x})&=\frac{1.93 x_5^3}{x_2x_7^4x_3}-1 \le 0\\ g_5(\vec {x})&=\frac{[(745(x_4/x_2x_3))^2+16.9\times 10^6]^{1/2}}{110x_6^3}-1 \le 0\\ g_6(\vec {x})&=\frac{[(745(x_5/x_2x_3))^2+157.5\times 10^6]^{1/2}}{85x_7^3}-1 \le 0\\ g_7(\vec {x})&=\frac{x_2x_3}{40}-1 \le 0\\ g_8(\vec {x})&=\frac{5x_2}{x_1}-1 \le 0\\ g_9(\vec {x})&=\frac{x_1}{12x_2}-1 \le 0\\ g_{10}(\vec {x})&=\frac{1.5x_6+1.9}{x_4}-1 \le 0\\ g_{11}(\vec {x})&=\frac{1.1x_7+1.9}{x_5}-1 \le 0 \end{aligned}$$

where the ranges of the design parameters \(b, m, z, l_1, l_2, d_1\) and \(d_2\) were applied as \(2.6\le x_1\le 3.6\), \(0.7\le x_2\le 0.8\), \(17\le x_3\le 28\), \(7.3\le x_4\le 8.3\), \(7.3\le x_5\le 8.3\), \(2.9\le x_6\le 3.9\) and \(5.0\le x_7\le 5.5\), respectively.

The optimum cost and best designs obtained by ECSA, PCSA, SCSA, and the other optimization methods for the speed reducer design problem are given in Table 14.

Per the optimum costs shown in Table 14, ECSA, PCSA, and SCSA are adept at finding optimum designs for the speed reducer problem with the lowest costs. In terms of best, worst, average, and standard deviation results, a summary of the statistical outcomes for ECSA, PCSA, SCSA, and other meta-heuristics for this design problem, over 30 independent runs, is exhibited in Table 15.

Table 17 Statistical results obtained by ECSA, PCSA, and SCSA and other algorithms for tension/compression spring design problem

As per the results in Table 15, the proposed ECSA, PCSA, and SCSA identified the best statistical solutions with respect to the best, average, worst, and standard deviation values among all competitors. This further confirms that the proposed algorithms are superior to the other existing algorithms in terms of these statistical results.

6.2 Tension/compression spring design

The design of the tension/compression spring problem is shown in Fig. 7 [95].

This problem aims to minimize the weight of the tension/compression spring design. This design problem has several constraints: minimum deflection, shear stress, and surge frequency. The parameters of this problem are the mean coil diameter (D), the wire diameter (d), and the number of active coils (N). These variables can be written as a vector \(\vec {x} = [x_1, x_2, x_3]\), where the elements of \(\vec {x}\) stand for d, D and N, respectively. The mathematical formulation of this problem is given below:

Minimize: \(f(\vec {x}) = (x_3 +2)x_2x^2_1\)

The following restrictions apply to this design problem: \(g_1(\vec {x}) = 1-\frac{x^3_2x_3}{71785x^4_1}\le 0\)

\(g_2(\vec {x}) =\frac{4x^2_2-x_1x_2}{12566(x_2x^3_1-x^4_1)}+\frac{1}{5108x^2_1}-1 \le 0\)

\(g_3(\vec {x}) = 1-\frac{140.45x_1}{x^2_2x_3} \le 0\)

\(g_4(\vec {x}) = \frac{x_1+x_2}{1.5} -1\le 0\)

where \(0.05 \le x_1 \le 2.0\), \(0.25 \le x_2 \le 1.3\) and \(2 \le x_3 \le 15.0\).
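For illustration, this problem can be encoded and combined with the `penalized` helper sketched after Eq. (16); the names below are hypothetical, and the formulas transcribe the objective and constraints above:

```python
import numpy as np

# Sketch of the tension/compression spring problem; x = [d, D, N].
def spring_cost(x):
    return (x[2] + 2.0) * x[1] * x[0] ** 2

spring_constraints = [                                    # each g(x) <= 0
    lambda x: 1 - (x[1] ** 3 * x[2]) / (71785 * x[0] ** 4),
    lambda x: (4 * x[1] ** 2 - x[0] * x[1])
              / (12566 * (x[1] * x[0] ** 3 - x[0] ** 4))
              + 1 / (5108 * x[0] ** 2) - 1,
    lambda x: 1 - 140.45 * x[0] / (x[1] ** 2 * x[2]),
    lambda x: (x[0] + x[1]) / 1.5 - 1,
]
lower, upper = np.array([0.05, 0.25, 2.0]), np.array([2.0, 1.3, 15.0])
# fitness of a candidate x: penalized(spring_cost, spring_constraints, [], x)
```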

A comparison of the optimal designs gained by ECSA, PCSA, SCSA, and those algorithms mentioned above for the tension/compression spring design problem is shown in Table 16.

As per the results in Table 16, the proposed ECSA, PCSA, SCSA, and CSA reached the optimum design for this problem with an optimal cost of 0.01266523. This cost is slightly lower than the costs obtained by the other comparative algorithms. A summary of the statistical results for the tension/compression spring design problem, collected by ECSA, PCSA, SCSA, and the other algorithms over 30 independent runs, is given in Table 17.

It is perceived from the outcomes reported in Table 17 that ECSA, PCSA, and SCSA again performed better, providing improved statistical results in terms of the best, average, worst, and standard deviation values compared to the others.

6.3 Pressure vessel design

The pressure vessel design problem has been widely used in optimization [90]. This problem aims to reduce the total cost of the material, forming, and welding of a cylindrical vessel capped on both ends by hemispherical heads, as shown in Fig. 8.

Fig. 8 A representative structure of the cross-section of a pressure vessel design problem

Table 18 A comparison of the results achieved by ECSA, PCSA, SCSA, and other algorithms for the pressure vessel design problem
Table 19 Statistical results obtained by ECSA, PCSA, SCSA, and other algorithms for pressure vessel design problem

The variables of this design problem are defined as follows:

  • Inner radius (R)

  • Thickness of the shell (\(T_s\))

  • Length of the cylindrical section of the vessel, not including the head (L)

  • Thickness of the head (\(T_h\))

The variable vector of this design problem can be formulated as follows: \(\vec {x} = [x_1, x_2, x_3, x_4]\), where the parameters of this vector represent \(T_s\), \(T_h\), R and L, respectively. The mathematical formulation of this design problem can be defined as shown below:

$$\begin{aligned} f(\vec {x}) = 0.6224x_1x_3x_4 + 1.7781 x_2x^2_3 + 3.1661x^2_1x_4+19.84x^2_1x_3 \end{aligned}$$

This problem is subject to the following constraints:

$$\begin{aligned} g_1(\vec {x}) = -x_1 +0.0193x_3\le 0 \end{aligned}$$
$$\begin{aligned} g_2(\vec {x}) = -x_2 +0.00954x_3 \le 0 \end{aligned}$$
$$\begin{aligned} g_3(\vec {x}) = -\pi x^2_3x_4 -\frac{4}{3}\pi x^3_3+1296000 \le 0 \end{aligned}$$
$$\begin{aligned} g_4(\vec {x}) = x_4 -240 \le 0 \end{aligned}$$

where \(0 \le x_1 \le 99\), \(0 \le x_2 \le 99\), \(10 \le x_3 \le 200\) and \(10 \le x_4 \le 200\).
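The pressure vessel problem can be encoded analogously (again with illustrative names), so that any of the compared optimizers can evaluate penalized candidate designs:

```python
import math

# Sketch of the pressure vessel problem; x = [Ts, Th, R, L].
# The constraints transcribe g1-g4 above (each g(x) <= 0).
def vessel_cost(x):
    return (0.6224 * x[0] * x[2] * x[3] + 1.7781 * x[1] * x[2] ** 2
            + 3.1661 * x[0] ** 2 * x[3] + 19.84 * x[0] ** 2 * x[2])

vessel_constraints = [
    lambda x: -x[0] + 0.0193 * x[2],
    lambda x: -x[1] + 0.00954 * x[2],
    lambda x: -math.pi * x[2] ** 2 * x[3]
              - (4.0 / 3.0) * math.pi * x[2] ** 3 + 1296000,
    lambda x: x[3] - 240,
]
# Bounds: 0 <= x1, x2 <= 99 and 10 <= x3, x4 <= 200;
# fitness of a candidate x: penalized(vessel_cost, vessel_constraints, [], x)
```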

This design problem has been addressed in the literature using various optimization methods, such as those mentioned above. Table 18 illustrates the cost results achieved by ECSA, PCSA, SCSA, and other algorithms for the pressure vessel design problem.

As shown in Table 18, the proposed ECSA, PCSA, and SCSA can find the optimal design for the pressure vessel problem with the lowest cost of 5885.332773. Table 19 displays the statistical results for the pressure vessel design problem obtained by ECSA, PCSA, SCSA, and other algorithms, over 30 independent runs, regarding the best, worst, mean, and standard deviation costs.

Fig. 9 A schematic structure of a welded beam design

Table 20 A comparison of the cost results achieved by ECSA, PCSA, and SCSA, and other algorithms for the welded beam problem

It may be ascertained from Table 19 that ECSA, PCSA, and SCSA outperformed the other algorithms, providing highly competitive statistical results in terms of the standard deviation and average cost values.

6.4 Welded beam design

The welded beam problem seeks to minimize the manufacturing cost of the welded beam structure shown in Fig. 9 [96].

The welded beam structure in Fig. 9 comprises a beam, A, and the weld required to attach it to the member, B. This design problem is subject to a set of constraints identified as follows:

  • End deflection of the beam (\(\delta \))

  • Buckling load on the bar (\(P_c\))

  • Shear stress (\(\tau \))

  • Bending stress in the beam (\(\theta \))

To optimize this design problem, the parameters of the welded beam structure must be found, namely: the length of the clamped bar (l), the thickness of the weld (h), the thickness of the bar (b), and the height of the bar (t). The design variable vector can be written as \(\vec {x} = [x_1, x_2, x_3, x_4]\), where the elements of \(\vec {x}\) stand for h, l, t and b, respectively. The mathematical formulation of the cost function of this design problem, to be minimized, is given as follows:

Minimize: \(f(\vec {x}) = 1.10471x^2_1x_2 +0.04811x_3x_4(14.0+x_2)\)

Subject to the following constraints,

\(g_1(\vec {x}) = \tau (\vec {x})-\tau _{max}\le 0\)

\(g_2(\vec {x}) = \sigma (\vec {x})- \sigma _{max} \le 0\)

\(g_3(\vec {x}) = x_1 -x_4 \le 0\)

\(g_4(\vec {x}) = 1.10471x^2_1 +0.04811x_3x_4(14.0+x_2) -5.0 \le 0\)

\(g_5(\vec {x}) = 0.125- x_1 \le 0\)

\(g_6(\vec {x}) = \delta (\vec {x})- \delta _{max} \le 0\)

\(g_7(\vec {x}) = P-P_c(\vec {x}) \le 0\)

where the other parameters are defined as presented below:

\(\tau (\vec {x})=\sqrt{((\tau ')^2 + (\tau '')^2)+\frac{2\tau '\tau '' x_2}{2R}}, \tau '=\frac{p}{\sqrt{2}x_1x_2}\)

\(\tau ''=\frac{MR}{J}, M=P(L+\frac{x_2}{2}), R=\sqrt{(\frac{x_1+x_3}{2})^2+\frac{x_2^2}{4}}\)

\(J=2\left\{ \sqrt{2}x_1x_2\left[ \frac{x_2^2}{12}+(\frac{x_1+x_3}{2})^2\right] \right\} , \sigma (\vec {x})=\frac{6PL}{x_4x_3^2}\)

\(\delta (\vec {x})=\frac{4PL^3}{Ex_4x_3^3}, P_c(\vec {x})=\frac{4.013\sqrt{EGx^2_3x_4^6/36}}{L^2}\left( 1-\frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right) \)

where \(P =6000\) lb, \(L =14\) in, \(\delta _{max} = 0.25\) in, \(E =30\times 10^6\) psi, \(G = 12\times 10^6\) psi, \(\tau _{max} = 13600\) psi, and \(\sigma _{max} = 30000\) psi. The ranges of the variables were used as \(0.1 \le x_i \le 2.0\) for \(i = 1\) and 4, and \(0.1 \le x_i \le 10.0\) for \(i = 2\) and 3.
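As an illustration of how the above formulas fit together, the following sketch (hypothetical names; constraint \(g_4\) is coded exactly as stated above) computes the welded beam cost and its constraint values:

```python
import math

# Sketch of the welded beam cost and constraints transcribed from the
# formulas above; x = [h, l, t, b]. All seven constraints satisfy g(x) <= 0.
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def beam_cost(x):
    return 1.10471 * x[0] ** 2 * x[1] + 0.04811 * x[2] * x[3] * (14.0 + x[1])

def beam_constraints(x):
    tau_p = P / (math.sqrt(2) * x[0] * x[1])                      # tau'
    M = P * (L + x[1] / 2)
    R = math.sqrt(x[1] ** 2 / 4 + ((x[0] + x[2]) / 2) ** 2)
    J = 2 * (math.sqrt(2) * x[0] * x[1]
             * (x[1] ** 2 / 12 + ((x[0] + x[2]) / 2) ** 2))
    tau_pp = M * R / J                                            # tau''
    tau = math.sqrt(tau_p ** 2 + 2 * tau_p * tau_pp * x[1] / (2 * R) + tau_pp ** 2)
    sigma = 6 * P * L / (x[3] * x[2] ** 2)
    delta = 4 * P * L ** 3 / (E * x[3] * x[2] ** 3)
    p_c = (4.013 * math.sqrt(E * G * x[2] ** 2 * x[3] ** 6 / 36) / L ** 2
           * (1 - x[2] / (2 * L) * math.sqrt(E / (4 * G))))
    return [tau - TAU_MAX,
            sigma - SIGMA_MAX,
            x[0] - x[3],
            1.10471 * x[0] ** 2 + 0.04811 * x[2] * x[3] * (14.0 + x[1]) - 5.0,
            0.125 - x[0],
            delta - DELTA_MAX,
            P - p_c]
```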

This problem was addressed by several algorithms such as EPO [87], SHO [85], GWO [86], PSO [84], MVO [89], SCA [90], GSA [88] and GA [83]. The results obtained by ECSA, PCSA, SCSA, and other algorithms for the welded beam design problem are presented in Table 20.

It is evident from the outcomes in Table 20 that ECSA, PCSA, and SCSA delivered optimal designs for the welded beam structure by finding the optimum cost of approximately 1.72485230, thus surpassing all the other algorithms in terms of optimization accuracy. Table 21 displays the statistical results of ECSA, PCSA, SCSA, and the other algorithms over 30 independent runs with respect to the best, worst, mean, and standard deviation results.

Table 21 Statistical results obtained from ECSA, PCSA, and SCSA and other algorithms for welded beam design problem

The results in Table 21 show that ECSA, PCSA, and SCSA outperformed all the other algorithms, with minimal standard deviation and average cost values.

The efficiency of an optimization algorithm can also be judged by how long it takes to solve a problem, so the processing time was calculated to evaluate the proposed algorithms’ performance accurately. No matter how effective the optimization is, it is of little use if the algorithm takes too long; the running time must therefore be within an acceptable range. The implementation time is influenced by the size and complexity of the problem as well as the capability of the machine being used. The average computational times taken by ECSA, PCSA, SCSA, and the other algorithms to solve the above four classical engineering design problems are presented in Table 22, using the machine and software specifications given previously.

Table 22 Average running time of the proposed algorithms of CSA and other algorithms in solving various engineering design problems

It can be noticed from Table 22 that the average computational times required by the proposed ECSA, PCSA, and SCSA in optimizing the different engineering design problems are within the range of the other algorithms, and are better than those of some of the other rivals. This demonstrates that the proposed algorithms are computationally efficient on a machine with moderate specifications like the one mentioned above.

Table 23 Characteristics of the unimodal, multimodal, and fixed-dimensional test functions. U: Unimodal, M: Multimodal, and F: fixed-dimensional
Table 24 A description of the CEC-2015 benchmark test functions
Table 25 Characteristics of the CEC-2017 benchmark test functions: U: Unimodal, M: Multimodal, H: Hybrid, and C: Composition

In short, as can be seen from the results of the proposed ECSA, PCSA, and SCSA algorithms in solving the above engineering design problems, several advantages can be noted in the performance of these algorithms:

  • First, these algorithms are independent of the nature of these design problems, which means there is no bias in their favor when solving them. This provides a sound basis for judging the efficiency and practicality of the proposed algorithms in solving real-world optimization problems.

  • Second, as per the design costs and statistical results shown in Tables 14 to 21, one can say that the proposed algorithms are suitable for solving these design problems, as their design costs are the lowest among all competing algorithms. In other words, the proposed ECSA, PCSA, and SCSA algorithms identified the optimal designs for these problems by finding the optimal cost in each, which implies that these proposed algorithms could escape local optima.

  • Another benefit can be drawn from the performance levels of the proposed algorithms in solving these problems: these algorithms can serve in further studies as baselines for judging the efficiency of newly developed meta-heuristic algorithms, including comparative studies and the verification of new meta-heuristic optimization algorithms. The above advantages are positive points for recommending the proposed ECSA, PCSA, and SCSA algorithms as promising candidates for solving other engineering design problems.

As per the NFL theorem [6] mentioned in the introduction, it is not possible to find a single algorithm that can solve all kinds of problems with a high level of efficiency; an algorithm may tackle a specific problem very efficiently while delivering poor results on another. Knowing the properties of the fitness function is necessary to decide which algorithm should be selected to solve a problem. Hence, when encountering a new problem, it is best to study it first and then carefully select the most suitable algorithm. In practice, it is advisable to test many algorithms on the problem of interest under the same conditions and settings with the given fitness function; the algorithm with the most efficient and superior performance can then be selected and recommended for solving other, similar problems.

7 Conclusion and future works

In this study, three improved variants of the Crow Search Algorithm (CSA), named Exponential CSA (ECSA), Power CSA (PCSA), and S-shaped CSA (SCSA), were proposed. Exponential, power, and S-shaped growth functions were used to adapt the flight length and awareness probability of these variants to improve their exploration and exploitation capabilities. Another adaptive control parameter was introduced into the position updating mechanism of ECSA, PCSA, and SCSA to improve these features further. Extensive experiments were conducted to reveal the performance of the proposed algorithms by solving three test groups of benchmark functions consisting of 67 familiar optimization problems with different levels of complexity. Besides, the effectiveness of ECSA, PCSA, and SCSA was also proven through their application to four engineering design problems. In line with the computational and statistical results, it can be confirmed that the proposed algorithms provide very competitive outcomes, with several advantages over a group of widely known meta-heuristics regarding the stability and quality of the obtained solutions. The proposed algorithms provide a fundamental framework for relatively low-dimensional optimization problems, and their reliability may be expanded to solve large-scale problems. Further study is needed to address other real-world problems with several search fields and diverse kinds of constraints. It is also promising to extend the applications of ECSA, PCSA, and SCSA to multi-objective problems in diverse fields.