Performance assessment of meta-heuristics for composite layup optimisation

This paper investigates the performance of several meta-heuristic algorithms, including Particle Swarm Optimisation (PSO), different variants of Differential Evolution (DE), Biogeography-Based Optimisation (BBO), the Cultural Algorithm (CA), Optics-Inspired Optimisation (OIO), and the League Championship Algorithm (LCA), for the optimum layup of laminated composite plates. The study provides detailed pseudocodes for the different algorithms. The buckling capacity maximisation of a 64-layer laminated composite plate under various load scenarios is considered as the benchmark problem, in which the design variables are the stacking sequences of the layers. A Deep Statistical Comparison (DSC) method is employed to rank the performance of the algorithms. The DSC uses a nonparametric two-sample Kolmogorov-Smirnov test to conduct the performance comparisons between the algorithms. The overall performance rankings obtained from the DSC suggest that the LCA, OIO, and PSO algorithms perform remarkably better than the other algorithms. The comparisons provide some interesting conclusions on the performance of the different algorithms.


Introduction
Composite materials refer to multi-phase materials with enhanced properties that are fabricated by an artificial combination of two or more distinct materials [1]. These materials have found important structural applications to date. One particular structural application of composite materials is in the laminated composite plates. Recent advances revealed that laminated composite plates are an increasingly popular structural type for a wide range of applications in different industries, such as aerospace, automotive, marine, building, and renewable energy. The high strength-to-weight ratios and flexibility in design are two major advantages of such structures. The different design parameters of laminated composite structures provide a great opportunity for designers to achieve the desired cost-efficient optimum designs for a given application.
A recent literature review by Nikbakt et al. [2] reveals that the optimum design of laminated composite plates, to enhance their mechanical behaviour and minimise costs, has attracted much attention in recent years. Researchers have implemented different optimisation algorithms to attain optimum designs for different kinds of design variables, objective functions, and constraints. Among the objective functions investigated in the literature, buckling load maximisation is a prominent one that has been widely considered for the optimum design of laminated composites [3][4][5].
Meta-heuristic optimisation algorithms, such as Genetic Algorithms (GAs) [6], Simulated Annealing (SA) [7], Differential Evolution (DE) [8], Ant Colony Optimisation (ACO) [9], Particle Swarm Optimisation (PSO) [10], Cultural Algorithms (CAs) [11,12], Biogeography-Based Optimisation (BBO) [13], and Harmony Search (HS) [14], are attractive techniques for solving complex optimisation problems [15][16][17][18][19][20]. Meta-heuristics are nature-inspired search techniques that take advantage of solution perturbation and stochasticity to find acceptable solutions for real-world problems in a reasonable time [21]. As in other engineering disciplines, these algorithms have been successfully applied to the optimisation of laminated composite structures [22][23][24][25][26][27][28][29]. For example, Karakaya and Soykasap [30] employed GA and SA to enhance the buckling capacity and attain optimum stacking sequences for hybrid laminated composite plates with carbon/epoxy and glass/epoxy materials. Kaveh et al. [31] investigated the application of the BBO algorithm to the stacking sequence optimisation of laminated composite plates for various load cases and aspect ratios to maximise the buckling load factor. HS was utilised by Almeida et al. [32] to optimise the buckling load factor of a balanced laminated composite plate under compressive in-plane loads. Akcair et al. [33] applied DE and SA to optimise the buckling load factor of laminated composite plates under different increments in fibre orientations. Kaveh et al. [34] proposed a novel improved rank-based version of the Quantum-Inspired Evolutionary Algorithm (QEA) for the optimum stacking sequence of hybrid laminated composite plates under uncertain buckling loads. The literature review also reveals that researchers have adopted or proposed novel meta-heuristic algorithms for the buckling maximisation of laminated composite plates. For example, Rao et al. [35] employed a Scatter Search Algorithm (SSA), and Kaveh et al. [36,37] applied the Charged System Search (CSS) algorithm, the Jaya Algorithm (JA), and Colliding Bodies Optimisation (CBO).
In recent years, some novel meta-heuristic algorithms have been developed to solve a wide range of engineering problems. The League Championship Algorithm (LCA) [38], inspired by the sporting competitions between teams in sports leagues, is one such novel algorithm, which has exhibited efficient performance on different kinds of problems [39][40][41][42][43]. The Optics-Inspired Optimisation (OIO) [44] algorithm is another recently developed meta-heuristic, inspired by the optical characteristics of spherical mirrors in physics, which has proven remarkably efficient for engineering applications [45][46][47][48]. The main objective of this paper is to assess the performance of different meta-heuristic algorithms, including PSO, different variants of DE, BBO, CA, OIO, and LCA, for the optimum layup of laminated composite plates. The buckling capacity maximisation of a 64-layer laminated composite plate under various load scenarios is considered as the benchmark problem. A Deep Statistical Comparison (DSC) is conducted to assess the performance of the algorithms. The DSC uses the nonparametric two-sample Kolmogorov-Smirnov (KS) test to conduct pairwise performance comparisons between each pair of algorithms. The algorithms are then ranked based on the results of the two-sample KS test. The study provides some interesting conclusions about the performance of the investigated algorithms.
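Since the DSC ranking rests on the two-sample KS statistic, it is worth recalling what that statistic measures: the largest gap between the empirical distribution functions of two result samples. A minimal pure-Python sketch (function names are ours, not from the paper):

```python
def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    return sum(v <= x for v in sample) / len(sample)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs over all observed points."""
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)
```

In practice a library routine such as SciPy's `ks_2samp` would also return a p-value; the DSC applies such a test to the per-problem result distributions of each pair of algorithms.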
The rest of the paper is organised as follows. In Sect. 2, the algorithmic details of the investigated meta-heuristics, as well as their pseudocodes, are presented. Section 3 presents the problem formulation for the buckling load maximisation of laminated composite plates. The numerical tests and comparisons are presented in Sect. 4. Finally, the conclusions are provided in Sect. 5.

Investigated optimisation algorithms
In this section, the algorithmic details of PSO, DE, CA, BBO, LCA, and OIO are presented. Although it is not possible to describe every detail of these algorithms here, this section explains their main elements without dwelling on unnecessary details; the detailed pseudocodes give readers a clear picture of how each algorithm works. All algorithms are described for a minimisation problem with objective function $f(X)$ and vector of variables $X = [x_1, x_2, \ldots, x_n]$, in which the variables must satisfy the search-space constraints represented by the vectors $X^{\min} = [x_1^{\min}, x_2^{\min}, \ldots, x_n^{\min}]$ and $X^{\max} = [x_1^{\max}, x_2^{\max}, \ldots, x_n^{\max}]$, i.e. $x_k^{\min} \le x_k \le x_k^{\max}$ for $k = 1, 2, \ldots, n$.

Particle Swarm Optimisation (PSO)
Cooperative behaviour observed between the individuals in a group of species enhances their capability to deal with environmental challenges. These cooperation capabilities enable them to achieve difficult goals that are almost impossible for any single individual in the group. Swarm Intelligence (SI) techniques draw on the collective intelligence exhibited by a group or swarm of species in nature to perform the optimisation process more efficiently [49]. One of the prominent SI techniques is the PSO algorithm, inspired by the social behaviour of natural swarms. PSO was originally developed by Kennedy and Eberhart [10] in the 1990s. The algorithm assumes a given set of particles in the search space, representing potential solutions to the problem. Each particle has a position and a velocity, which are defined as $n$-dimensional vectors for an $n$-dimensional problem. The algorithm tries to exploit the experience acquired by each individual particle and by the whole swarm to update the positions and velocities of the particles. To explain how the algorithm works, let $X_i^t = [x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,n}^t]$ and $V_i^t = [v_{i,1}^t, v_{i,2}^t, \ldots, v_{i,n}^t]$ be the position and velocity of the $i$th particle in the solution space at iteration $t$, in which $i = 1, 2, \ldots, N_p$ and $N_p$ is the swarm size. PSO updates the velocities and positions of the particles using the following formulas [10]:

$$V_i^{t+1} = \omega^t V_i^t + c_1\,\mathrm{rand}\,(P_i^t - X_i^t) + c_2\,\mathrm{rand}\,(G^t - X_i^t) \quad (1)$$

$$X_i^{t+1} = X_i^t + V_i^{t+1} \quad (2)$$

In Eqs. (1) and (2), $X_i^{t+1}$ and $V_i^{t+1}$ represent the position and velocity of the $i$th particle at iteration $t+1$; $\omega^t$ is the weighting parameter at iteration $t$, which controls the influence of the particle's previous velocity on its new velocity; rand represents a uniformly generated random number between 0 and 1; $c_1$ and $c_2$ are the acceleration parameters; $P_i^t$ indicates the position with the best objective function value experienced by the $i$th individual until iteration $t$; and $G^t$ is the best position experienced by the whole swarm until iteration $t$. This study assumes that the weighting parameter is gradually reduced in each iteration as $\omega^t = \omega_{\mathrm{damp}}\,\omega^{t-1}$, in which $\omega_{\mathrm{damp}}$ is a damping factor; the initial value of the weighting parameter is set to 1 (i.e. $\omega^0 = 1$). Algorithm 1 shows the pseudocode of the PSO algorithm for a minimisation problem.
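To make the update rules concrete, the following is a minimal sketch of the PSO loop described above; parameter values and function names are illustrative, not the tuned values used later in the paper.

```python
import random

def pso(f, x_min, x_max, n_p=30, iters=200, c1=2.0, c2=2.0, w0=1.0, w_damp=0.99):
    """Minimise f over the box [x_min, x_max]^n with a basic PSO.

    Velocity update follows V <- w*V + c1*rand*(P - X) + c2*rand*(G - X),
    with a damped inertia weight w, as described in the text."""
    n = len(x_min)
    X = [[random.uniform(x_min[k], x_max[k]) for k in range(n)] for _ in range(n_p)]
    V = [[0.0] * n for _ in range(n_p)]
    P = [x[:] for x in X]                      # personal best positions
    G = min(P, key=f)[:]                       # global best position
    w = w0
    for _ in range(iters):
        for i in range(n_p):
            for k in range(n):
                V[i][k] = (w * V[i][k]
                           + c1 * random.random() * (P[i][k] - X[i][k])
                           + c2 * random.random() * (G[k] - X[i][k]))
                # position update, clamped to the search-space box
                X[i][k] = min(max(X[i][k] + V[i][k], x_min[k]), x_max[k])
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(G):
                    G = P[i][:]
        w *= w_damp                            # damped inertia weight
    return G
```

A short usage example: `pso(lambda x: sum(v*v for v in x), [-5, -5], [5, 5])` drives the swarm towards the origin of the sphere function.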

Differential Evolution (DE)
The DE algorithm, originally developed by Storn and Price [50], is a population-based Evolutionary Algorithm (EA). The overall idea of the algorithm is to use weighted differences between individuals to generate new individuals in the search space. In DE, the solution-finding process is performed by three main operators: mutation, cross-over, and selection.
The DE kick-starts the optimisation process with $N_I$ randomly generated individuals within the search space.
Neural Computing and Applications (2022) 34:2031-2054

For each solution $X_i^t$, the algorithm applies the mutation operator in each iteration to generate a mutant vector $V_i^{t+1} = [v_{i,1}^{t+1}, v_{i,2}^{t+1}, \ldots, v_{i,n}^{t+1}]$. These mutant vectors are used to form a new generation of individuals. According to the literature, a variety of mutation operators have been developed for the DE algorithm [51], in which the mutant vectors $V_i^{t+1}$ are generated by weighted difference combinations of different individuals. The most popular mutation operators of DE are as follows [51,52]:

• "DE/best/1": $V_i^{t+1} = X_{\mathrm{Best}}^t + F\,(X_{r_1}^t - X_{r_2}^t)$ (3)

• "DE/rand-to-best/1": $V_i^{t+1} = X_{r_1}^t + F\,(X_{\mathrm{Best}}^t - X_{r_1}^t) + F\,(X_{r_2}^t - X_{r_3}^t)$ (4)

• "DE/current-to-rand/1": $V_i^{t+1} = X_i^t + F\,(X_{r_1}^t - X_i^t) + F\,(X_{r_2}^t - X_{r_3}^t)$ (5)

• "DE/current-to-best/1": $V_i^{t+1} = X_i^t + F\,(X_{\mathrm{Best}}^t - X_i^t) + F\,(X_{r_1}^t - X_{r_2}^t)$ (6)

In Eqs. (3)-(6), $V_i^{t+1}$ represents the mutant vector constructed for the $i$th individual at iteration $t+1$; the indexes $r_1, r_2, r_3 \in \{1, 2, \ldots, N_I\}$ are randomly generated numbers that must satisfy $r_1 \ne r_2 \ne r_3 \ne i$; $X_{\mathrm{Best}}^t$ represents the individual with the best objective function value at iteration $t$; and $F$ is the scaling factor, which is usually assumed to be a constant value.
The obtained mutant vectors $V_i^{t+1}$ differ from their parent solutions $X_i^t$ in all dimensions. However, changing the current solutions in all dimensions may not always yield good outcomes, since even small changes in the solution vector can result in significant changes in the objective function value. Hence, DE randomly keeps the original values of some variables in the offspring solutions. To this end, the algorithm employs the cross-over operator to generate a new trial vector $U_i^{t+1} = [u_{i,1}^{t+1}, u_{i,2}^{t+1}, \ldots, u_{i,n}^{t+1}]$ from the mutant vector $V_i^{t+1}$ obtained through one of the above-mentioned mutation operators, as follows:

$$u_{i,k}^{t+1} = \begin{cases} v_{i,k}^{t+1} & \text{if } \mathrm{rand} \le CR \text{ or } k = \mathrm{randi}(1, n) \\ x_{i,k}^t & \text{otherwise} \end{cases} \quad (7)$$

where $u_{i,k}^{t+1}$ denotes the $k$th variable of the trial vector $U_i^{t+1}$ constructed for the $i$th individual at iteration $t+1$; $v_{i,k}^{t+1}$ is the $k$th variable of the mutant vector $V_i^{t+1}$ obtained for the $i$th individual at iteration $t+1$; rand is a random number between 0 and 1; $CR$ is the cross-over rate control, which is usually taken as a constant value; and $\mathrm{randi}(a, b)$ represents a random integer between $a$ and $b$.
The DE algorithm uses the selection operator to decide whether the trial vector $U_i^{t+1}$ generated by the cross-over operator becomes a member of the next generation. The selection operator can be expressed as follows [52]:

$$X_i^{t+1} = \begin{cases} U_i^{t+1} & \text{if } f(U_i^{t+1}) \le f(X_i^t) \\ X_i^t & \text{otherwise} \end{cases} \quad (8)$$

The pseudocode of the DE algorithm for a minimisation problem is presented in Algorithm 2. In this study, we categorise the DE algorithm based on the applied mutation operator into different versions: DE/best/1, DE/rand-to-best/1, DE/current-to-rand/1, and DE/current-to-best/1.
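As an illustration, the "DE/best/1" variant with binomial cross-over and greedy selection can be sketched as follows (a minimal didactic version; names and parameter values are ours, not the tuned settings used later):

```python
import random

def de_best_1(f, x_min, x_max, n_i=30, iters=200, F=0.8, CR=0.9):
    """Minimise f with DE/best/1: mutation V = X_best + F*(X_r1 - X_r2),
    binomial cross-over with rate CR, and greedy one-to-one selection."""
    n = len(x_min)
    X = [[random.uniform(x_min[k], x_max[k]) for k in range(n)] for _ in range(n_i)]
    for _ in range(iters):
        best = min(X, key=f)
        for i in range(n_i):
            r1, r2 = random.sample([j for j in range(n_i) if j != i], 2)
            # mutation ("DE/best/1")
            V = [best[k] + F * (X[r1][k] - X[r2][k]) for k in range(n)]
            # binomial cross-over: keep at least one mutant component (j_rand)
            j_rand = random.randrange(n)
            U = [V[k] if (random.random() <= CR or k == j_rand) else X[i][k]
                 for k in range(n)]
            # clamp the trial vector back into the search-space box
            U = [min(max(U[k], x_min[k]), x_max[k]) for k in range(n)]
            # greedy selection
            if f(U) <= f(X[i]):
                X[i] = U
    return min(X, key=f)
```

Swapping the mutation line for one of the other operators in Eqs.-style form yields the remaining variants compared in this study.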

Cultural Algorithms (CAs)
The CA, developed by Reynolds [11] in the 1990s, is an EA inspired by the principles of human social evolution. In real human societies, culture can be viewed as a source of information exchanged between individuals, which can affect their behaviours [53]. According to the bio-cultural evolution theory [54], the overall human evolutionary process is a combination of genetic and cultural evolutionary processes. The CA was developed based on this bio-cultural evolutionary mechanism [11], which makes it computationally different from conventional EAs. The algorithm has found interesting applications in different research areas, ranging from computer science to different branches of engineering [12,[55][56][57][58][59].
In contrast to conventional EAs, which operate on a single population space, the CA works with two spaces: a population space and a belief space. These two spaces affect each other through communication protocols. As in other EAs, the population space consists of a set of individuals that are potential solutions to the problem. The belief space records the cultural information gained by the individuals during the evolutionary process, which includes different types of knowledge components. The general pseudocode of the CA is illustrated in Algorithm 3. The algorithm consists of a population space $P^t$ and a belief space $B^t$, which interact with each other through the Accept(), Update(), and Influence() functions.
Let $P^t$ be the population space, which consists of $N_S$ individual solutions represented by $X_i^t = [x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,n}^t]$, where $i = 1, 2, \ldots, N_S$. As mentioned earlier, the belief space includes different knowledge components acquired by the individuals. Different kinds of knowledge components could be used in CAs depending on the type of problem to be solved, including situational, normative, historical, topographical, and domain knowledge components. In this study, it is assumed that the belief space consists of situational and normative knowledge components as follows:

$$B^t = \{S^t, N^t\} \quad (9)$$

where $B^t$, $S^t$, and $N^t$ are the belief space, situational knowledge, and normative knowledge, respectively. The situational knowledge component, which can be represented by $S^t = [s_1^t, s_2^t, \ldots, s_n^t]$, contains the best solution obtained so far. The normative knowledge component $N^t$ consists of a set of information for each variable of the problem, which can be expressed as:

$$N^t = \{(I_k^t, L_k^t, U_k^t)\}_{k=1}^{n}, \qquad I_k^t = [x_{\min,k}^t, x_{\max,k}^t] \quad (10)$$

where $I_k^t$ indicates the belief interval for the $k$th dimension of the problem; $x_{\min,k}^t$ and $x_{\max,k}^t$ are the lower and upper normative bounds for the $k$th dimension of the problem, respectively; and $L_k^t$ and $U_k^t$ represent the objective function values corresponding to the lower and upper normative bounds, respectively.
The Accept() function of the CA selects some elite individuals from the population space to update the belief space in each iteration. Usually, a given percentage of the individuals with better objective function values is selected to update the belief space. In this study, it is assumed that the top $N_{\mathrm{Accept}}\%$ of the population is accepted to update the belief space.
The Update() function updates the different knowledge components of the belief space using the accepted individuals. The situational knowledge is replaced whenever an accepted individual improves on it, and the normative bounds are adjusted by the accepted individuals as follows:

$$x_{\min,k}^{t+1} = \begin{cases} x_{l,k}^t & \text{if } x_{l,k}^t \le x_{\min,k}^t \text{ or } f(X_l^t) < L_k^t \\ x_{\min,k}^t & \text{otherwise} \end{cases} \quad (11)$$

$$x_{\max,k}^{t+1} = \begin{cases} x_{l,k}^t & \text{if } x_{l,k}^t \ge x_{\max,k}^t \text{ or } f(X_l^t) < U_k^t \\ x_{\max,k}^t & \text{otherwise} \end{cases} \quad (12)$$

where $l = 1, 2, \ldots, N_{\mathrm{Accept}}\% \times N_S$, $X_l^t$ is the $l$th accepted individual at iteration $t$, and $x_{l,k}^t$ represents its $k$th variable. Finally, the Influence() function generates a new generation of individuals based on the cultural information. Different influence functions have been developed for the CA in the literature. Based on the previous experience of the authors, the following influence function is considered in this study:

$$x_{i,k}^{t+1} = x_{i,k}^t + \beta \,\mathrm{size}(I_k^t)\, N_{i,k}(0, 1) \quad (13)$$

where $x_{i,k}^{t+1}$ is the $k$th variable of the $i$th individual at iteration $t+1$; $\mathrm{size}(I_k^t) = x_{\max,k}^t - x_{\min,k}^t$ is the size of the normative interval for the $k$th variable; $N_{i,k}(0, 1)$ represents a random number generated from a normal distribution with mean 0 and standard deviation 1; and $\beta > 0$ is a constant parameter.
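The normative bookkeeping and the influence step can be sketched in Python. This is a deliberately simplified variant in which the normative interval is just the bounding box of the accepted individuals (the full CA also tracks the objective values $L_k$, $U_k$); helper names are ours.

```python
import random

def update_normative(accepted):
    """Normative knowledge as the bounding box of the accepted individuals:
    per-variable lower and upper bounds (a simplified Update() step)."""
    n = len(accepted[0])
    lo = [min(ind[k] for ind in accepted) for k in range(n)]
    hi = [max(ind[k] for ind in accepted) for k in range(n)]
    return lo, hi

def influence(x, lo, hi, beta=0.2):
    """Influence(): each variable takes a normally distributed step whose
    size is scaled by the normative interval size(I_k) = hi[k] - lo[k]."""
    return [x[k] + beta * (hi[k] - lo[k]) * random.gauss(0.0, 1.0)
            for k in range(len(x))]
```

Note the self-regulating behaviour: as the accepted individuals cluster, the belief intervals shrink and the influence steps automatically become smaller.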
Algorithm 4 shows the detailed pseudocode of the CA. Interested readers are referred to Ref. [12] for more details on the CA and its variants.

Biogeography-based optimisation (BBO)
BBO is another EA, inspired by the emigration and immigration behaviours of different species in nature. It was originally developed by Simon [13] in 2008 based on the mathematical migration models of biogeography science [60] and has found significant applications in different fields [61][62][63][64]. In nature, biological species migrate from one habitat to another to access new resources, and this migration process follows given mathematical patterns. BBO mimics this concept to perform the optimisation process. The method works with a population of habitats, in which each habitat is a potential solution to the problem. The algorithm employs two main operators to form a new generation of solutions: migration and mutation.
Similar to the algorithms explained earlier, let us assume a set of $N_H$ habitats represented by vectors $H_i^t = [x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,n}^t]$. In BBO, the fitness function of each habitat is called the habitat suitability index (HSI); habitats with high HSI values represent better solutions to the problem. In the migration operator of the BBO algorithm, immigration and emigration rates are defined for each habitat based on the objective function values. Let $\lambda_i$ and $\mu_i$ be the immigration and emigration rates of the $i$th habitat. The values of these rates are obtained from the migration models available in biogeography science. Figure 1 shows the simple linear migration model that is usually used in the literature to define the migration rates of BBO. As can be seen from this figure, the sum of the immigration and emigration rates for each habitat equals one (i.e. $\lambda_i + \mu_i = 1$). Based on Fig. 1, habitats with higher (lower) fitness values have lower (higher) immigration rates $\lambda_i$ and higher (lower) emigration rates $\mu_i$. BBO uses these rates to perform the migration operator between the habitats in the search space. The migration operator replaces the $k$th variable of the $i$th habitat by the corresponding variable of the $j$th habitat:

$$x_{i,k}^t \leftarrow x_{j,k}^t \quad (14)$$

where $x_{i,k}^t$ represents the $k$th variable of habitat $i$ at iteration $t$ and $x_{j,k}^t$ is the $k$th variable of habitat $j$ at iteration $t$. It should be noted that the index $j$ is selected based on the emigration rates $\mu_j$ and the roulette wheel selection method. After the migration operator, the mutation operator randomly replaces variables of habitats with random values generated in the feasible search domain. The mutation probability for each habitat depends on the mutation rate $m_i$, which is defined based on the fitness values; the details of calculating the mutation rates $m_i$ are available in Ref. [13]. Algorithm 5 shows the detailed pseudocode of the BBO algorithm.
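The per-variable migration decision and the roulette-wheel selection of the emigrating habitat can be sketched as follows (a minimal illustration with user-supplied rates; variable names are ours, and the mutation operator is omitted):

```python
import random

def roulette(weights):
    """Select an index with probability proportional to its (non-negative) weight."""
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if w > 0 and r <= acc:
            return j
    return len(weights) - 1

def migrate(habitats, lam, mu):
    """BBO migration: each variable of habitat i immigrates with probability
    lam[i]; the emigrating habitat j is drawn by roulette wheel on the
    emigration rates mu, and its variable value is copied over."""
    new = [h[:] for h in habitats]
    for i, h in enumerate(new):
        for k in range(len(h)):
            if random.random() < lam[i]:
                j = roulette(mu)
                h[k] = habitats[j][k]
    return new
```

Under the linear model of Fig. 1 one would set `mu[i]` proportional to habitat fitness and `lam[i] = 1 - mu[i]`, so good habitats mostly export variables and poor habitats mostly import them.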

League Championship Algorithm (LCA)
LCA, developed by Kashan [38,65], is a population-based meta-heuristic algorithm inspired by the sporting competitions between teams in sports leagues. The algorithm simulates a sports league by treating each candidate solution as a team that competes with the other teams to provide the best possible solution to the problem. The position and fitness function of each team represent its formation and playing strength, respectively. LCA performs a match analysis to generate new formations for the teams, simulating the process performed by coaches to find suitable formations for their teams in real sporting competitions. In LCA, each week is equivalent to one optimisation iteration.
The solution finding process in LCA is similar to the championship process in sports leagues, in which different teams play in pairs based on a league schedule in a given week. The outcome of each match is determined based on the strengths of the teams. Then, the teams perform match analysis and change their formations to enhance their strengths for the next week. This process is repeated until the termination criteria are met.
Let $X_i^t = [x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,n}^t]$ be the position of the $i$th team at week $t$, where $i = 1, 2, \ldots, N_{\mathrm{team}}$ and $N_{\mathrm{team}}$ is the number of teams. Let us also assume that the best formation, with the best strength, obtained by the $i$th team until week $t$ is represented by $B_i^t = [b_{i,1}^t, b_{i,2}^t, \ldots, b_{i,n}^t]$. LCA employs a single round-robin schedule to determine the teams that play against each other. Figure 2 illustrates how the matches of each season are arranged between the teams in LCA. For a league with $N_{\mathrm{team}}$ teams, each season consists of $N_{\mathrm{team}}(N_{\mathrm{team}} - 1)/2$ matches. LCA continues the league championship for $S$ seasons, which results in a total of $S(N_{\mathrm{team}} - 1)$ weeks of contests.
When the teams play in pairs, the outcome of a match for a given team can be a win, a loss, or a tie. To determine the outcome of each match, LCA employs a probabilistic approach. Let us assume team $i$ with formation $X_i^t$ plays against team $j$ with formation $X_j^t$. Then, the winning probability of team $i$ is expressed as follows:

$$p_i^t = \frac{f(X_j^t) - \hat{f}}{f(X_i^t) + f(X_j^t) - 2\hat{f}} \quad (17)$$

where $\hat{f}$ is an arbitrary ideal value, which is set to the best objective function value obtained by the teams until week $t$. Based on this definition, the winning probability of team $j$ against team $i$ equals $p_j^t = 1 - p_i^t$. In real sporting competitions, coaches evaluate the strengths and weaknesses of their teams to improve their performance, while also evaluating the opportunities and threats posed by the opponent teams. The coaches try to take advantage of opportunities, while monitoring their opponents to protect their teams against possible external threats. The strengths and weaknesses are called internal factors, while the opportunities and threats are external factors. To simulate these concepts, LCA performs a match analysis to update the formations of the teams. Let us assume team $i$ plays against team $l$ based on the league schedule. The new formation of team $i$, denoted by $X_i^{t+1} = [x_{i,1}^{t+1}, x_{i,2}^{t+1}, \ldots, x_{i,n}^{t+1}]$, depends on the previous experiences of both teams $i$ and $l$ at the previous week $t$. Let $j$ be the team that played against team $i$ at week $t$, and $m$ be the team that played against team $l$ at week $t$.
By considering these definitions, LCA updates the formation of team $i$ through Eqs. (19)-(22), one for each combination of previous-week outcomes:

• Eq. (19): teams $i$ and $l$ both won their matches at the previous week $t$;

• Eq. (20): team $i$ won its previous match at week $t$, but team $l$ lost;

• Eq. (21): team $i$ lost its previous match at week $t$, but team $l$ won;

• Eq. (22): teams $i$ and $l$ both lost their matches at the previous week $t$.

In these equations, $b_{i,k}^t$, $b_{m,k}^t$, and $b_{j,k}^t$ represent the $k$th variable of the best formation experienced by teams $i$, $m$, and $j$ until week $t$, respectively; $w_1$ is the approach coefficient that controls the acceleration of team $i$ towards the winner; and $w_2$ is the retreat coefficient that controls the retreat of team $i$ from the loser. In Eqs. (19)-(22), $y_{i,k}^t$ is a binary variable that determines whether the $k$th variable of the vector $B_i^t$ should be changed. Let $Y_i^t = [y_{i,1}^t, y_{i,2}^t, \ldots, y_{i,n}^t]$ be the binary change array, in which a value of one (zero) for a given element means the corresponding variable in $B_i^t$ would (would not) change. LCA uses a truncated geometric distribution [66] to calculate the number of ones in the array $Y_i^t$, denoted by $q_i^t$, as follows:

$$q_i^t = \left\lceil \frac{\ln\!\left(1 - \left(1 - (1 - p_c)^{\,n - q_0 + 1}\right) r\right)}{\ln(1 - p_c)} \right\rceil + q_0 - 1 \quad (23)$$

where $p_c$ is a control parameter, $r$ is a random number between 0 and 1, and $q_0$ is the lower bound for $q_i^t$, typically taken as 1 [38]. Greater values of the parameter $p_c$ result in smaller changes in the vector $B_i^t$, and vice versa.
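One simple way to realise such a truncated geometric draw is roulette-wheel sampling over the admissible range $[q_0, n]$ with geometrically decaying weights; this is a sketch of the idea (function name is ours), not the paper's exact closed form.

```python
import random

def num_changes(n, p_c, q0=1):
    """Sample q in [q0, n] with P(q) proportional to (1 - p_c)**(q - q0),
    so larger p_c concentrates the probability mass on fewer changes."""
    weights = [(1.0 - p_c) ** (q - q0) for q in range(q0, n + 1)]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for q, w in zip(range(q0, n + 1), weights):
        acc += w
        if w > 0 and r <= acc:
            return q
    return n
```

For example, `p_c = 1.0` always returns the minimum `q0` (only one variable changes), while `p_c` near 0 makes all change counts up to `n` almost equally likely.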
Algorithm 6 shows the detailed pseudocode of LCA.

Optics inspired optimisation (OIO)
The OIO, recently developed by Kashan [44], is a meta-heuristic algorithm inspired by the optical characteristics of spherical mirrors in physics. In OIO, each solution vector is modelled as an artificial light point, and the surface of the objective function is treated as a spherical mirror that reflects incident rays according to the governing equations of optics. Let us assume $N_O$ light points represented by vectors $X_i^t = [x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,n}^t]$ for a minimisation problem. For each light point $X_i^t$, OIO randomly selects another solution $X_j^t = [x_{j,1}^t, x_{j,2}^t, \ldots, x_{j,n}^t]$ from the population as an artificial mirror, where $j \in \{1, 2, \ldots, N_O\}$ and $j \ne i$. The algorithm then computes the image $I_i^t$ of the artificial light point $X_i^t$ formed by the artificial mirror $X_j^t$ using the spherical mirror equations, in which $p_{i,j}^t$ indicates the distance between the artificial light point $i$ and the artificial mirror $j$ on the objective function axis and $r_j^t$ is the curvature radius of the artificial mirror $X_j^t$. The parameter $p_{i,j}^t$ is defined in terms of $s_{i,j}^t$, the position of artificial light point $i$ on the objective function axis. If the artificial mirror is concave, $s_{i,j}^t$ is randomly selected from an interval bounded by the physical infinity $d_\infty$, which can be any sufficiently large positive value; in OIO, the value of the physical infinity is initially assumed as $d_\infty = \max\{f(X_l^t)\}_{l=1,2,\ldots,N_O}$ and is updated in the artificial spherical aberration stage of the algorithm. For the case of a convex mirror, $s_{i,j}^t$ is assigned randomly in an analogous manner. The curvature radius $r_j^t$ of the artificial mirror $X_j^t$ is computed from $m_j^t$, the position of the centre of curvature of mirror $j$, which is determined randomly, with different admissible ranges for concave and convex mirrors.
The image vector $I_i^t$ represents a new solution to the problem, which differs from the artificial light point $X_i^t$ in all dimensions. OIO constructs a new light point (solution) $X_i^{t+1}$ by keeping the number of changes in the vector $X_i^t$ less than $n$. Let $U_i^t = X_i^t$ be a copy of the current light point. OIO uses Eq. (23) to determine the number of changes $q_i^t$; then, $q_i^t$ variables are randomly selected from $I_i^t$, and their values are assigned to the corresponding variables of the vector $U_i^t$. Finally, if the objective function value of $U_i^t$ is better than that of $X_i^t$, then $U_i^t$ becomes the new artificial light point, i.e. $X_i^{t+1} = U_i^t$; otherwise, the algorithm keeps the previous light point for the next iteration, i.e. $X_i^{t+1} = X_i^t$. It should be noted that the value of $m_j^t$ must satisfy a certain condition. In some cases, the difference between the objective function values of the artificial light point $i$ and the artificial mirror $j$ is very large; in such cases, the image of light point $i$ on artificial mirror $j$ would be blurry, which can cause premature convergence of OIO from a numerical viewpoint.
To avoid this spherical aberration phenomenon, OIO repeatedly updates the value of the parameter $m_j^t$. Interested readers are referred to Ref. [44] for more details on the spherical aberration mechanism of OIO. Algorithm 7 shows the detailed pseudocode of the OIO algorithm.
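The trial-construction and greedy-acceptance step described above (copy a limited number of image variables, then accept only on improvement) can be sketched as follows; the helper names are ours, not from Ref. [44].

```python
import random

def build_trial(x, image, q):
    """Copy q randomly chosen variables from the image vector into a copy
    of the current light point x, leaving the other variables unchanged."""
    u = x[:]
    for k in random.sample(range(len(x)), q):
        u[k] = image[k]
    return u

def greedy_select(f, x, u):
    """Keep the trial point only if it improves the objective (minimisation)."""
    return u if f(u) < f(x) else x
```

With `q` drawn from a truncated geometric distribution as in Eq. (23), most trials change only a few variables, which preserves good partial solutions.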

Problem formulation
As mentioned earlier, buckling capacity maximisation for the optimum layup of laminated composite plates is an important problem that has been widely investigated. Laminated composite plates subjected to in-plane compressive loads are susceptible to losing their stability; hence, the buckling capacity of laminated composites under in-plane compressive loads should be considered in the design process. Let us assume a simply supported laminated composite plate, shown in Fig. 3, with dimensions $a$ and $b$ in the $x$ and $y$ directions, respectively, in which $N_x$ and $N_y$ are the in-plane compressive loads in the $x$ and $y$ directions, respectively. According to the classical laminated plate theory, the buckling load factor can be expressed as follows [67]:

$$\lambda_b(p, q) = \pi^2 \, \frac{D_{11}\, p^4 + 2\,(D_{12} + 2 D_{66})\, p^2 q^2 r^2 + D_{22}\, q^4 r^4}{a^2 \left( p^2 N_x + q^2 r^2 N_y \right)} \quad (27)$$

where $\lambda_b$ is the buckling load factor; $r = a/b$ is the aspect ratio, i.e. the ratio of length to width; $D_{ij}$ indicates the bending stiffness terms of the composite plate; $a$ and $b$ are the dimensions of the laminate in the $x$ and $y$ directions, respectively; and $p$ and $q$ are the numbers of half-waves in the $x$ and $y$ directions, respectively. It should be noted that the parameters $D_{16}$ and $D_{26}$ are neglected in Eq. (27), as they are zero for specially orthotropic laminates and very small for symmetrically balanced laminates. Thus, the buckling load factor obtained by Eq. (27) is exact for specially orthotropic laminates and approximately correct for symmetrically balanced laminates. It is clear from Eq. (27) that the buckling load factor is a function of the material properties, length, width, stacking sequence, applied in-plane compressive loads, and the values of the parameters $p$ and $q$. Various values of $p$ and $q$ result in different buckling load factors; the smallest buckling load factor yielded by Eq. (27) over the different values of $p$ and $q$ is the critical buckling load factor of the laminated composite plate.
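As a numerical illustration, the factor for given half-wave numbers and the critical (minimum) factor over a small grid of $p$ and $q$ can be evaluated as follows. This is a sketch that assumes the bending stiffnesses $D_{11}$, $D_{12}$, $D_{22}$, $D_{66}$ have already been computed from the stacking sequence; it implements the $r = a/b$ form of the classical-plate formula given above.

```python
import math

def buckling_factor(D11, D12, D22, D66, a, b, Nx, Ny, p, q):
    """Buckling load factor of a simply supported specially orthotropic
    plate for half-wave numbers (p, q); D16 = D26 = 0 is assumed."""
    r = a / b  # aspect ratio
    num = math.pi ** 2 * (D11 * p ** 4
                          + 2.0 * (D12 + 2.0 * D66) * (p * q * r) ** 2
                          + D22 * (q * r) ** 4)
    den = a ** 2 * (p ** 2 * Nx + (q * r) ** 2 * Ny)
    return num / den

def critical_factor(D11, D12, D22, D66, a, b, Nx, Ny, pq_max=5):
    """Critical buckling load factor: the minimum over half-wave numbers.
    A small grid usually suffices, since the minimiser occurs at low (p, q)."""
    return min(buckling_factor(D11, D12, D22, D66, a, b, Nx, Ny, p, q)
               for p in range(1, pq_max + 1) for q in range(1, pq_max + 1))
```

An optimiser maximising the buckling capacity would call `critical_factor` as the objective for each candidate stacking sequence.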
The optimum stacking sequence problem of laminated composite plates under in-plane compressive loads for maximum buckling capacity can be mathematically expressed as follows:

$$\max_{X} \; f(X) = \lambda_{cr}(X), \qquad X = [x_1, x_2, \ldots, x_n], \quad x_i \in S = \{s_1, s_2, \ldots, s_p\} \quad (28)$$

where $f(\cdot)$ represents the objective function of the problem, i.e. the critical buckling load factor $\lambda_{cr}$; $X$ indicates the vector of design variables; $x_i$ is the fibre orientation of the $i$th ply, which should be selected from a given discrete set; $n$ is the number of design variables; $S$ is the set of allowable fibre orientations; and $p$ is the number of allowable fibre orientations. It should be noted that for a composite laminate with $m$ layers, the number of design variables equals $m/2$ due to symmetry (i.e. $n = m/2$).
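Because the design variables are categorical ply angles rather than real numbers, metaheuristics typically operate on integer genes indexed into the allowed set; a hypothetical encoding/repair pair (the labels and helper names are ours, chosen to match the discrete set used later in the numerical study):

```python
# Hypothetical discrete set of allowable fibre-orientation groups
ANGLES = ["0_2", "±15", "±30", "±45", "±60", "±75", "90_2"]

def decode(genes):
    """Map integer genes (0..len(ANGLES)-1) to stacking-sequence labels for
    the half-laminate; symmetry supplies the other half (n = m/2)."""
    return [ANGLES[g] for g in genes]

def repair(x_continuous):
    """Round a continuous variable onto the nearest valid gene index, so
    real-valued metaheuristics can be applied to the discrete problem."""
    return [min(max(round(v), 0), len(ANGLES) - 1) for v in x_continuous]
```

This kind of round-and-clamp repair is one common way to reuse continuous algorithms such as PSO or DE on discrete layup variables; other schemes (e.g. nearest-neighbour projection or permutation encodings) are equally possible.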

Numerical results
In this section, the performance of the different meta-heuristics in the buckling load maximisation of a 64-layer laminated composite plate is investigated. The plate is assumed to be a symmetrically balanced laminated composite with simply supported boundary conditions. The design variables are the stacking sequences of the layers, which should be selected from the discrete set $[0_2, \pm 15, \pm 30, \pm 45, \pm 60, \pm 75, 90_2]$. In previous studies, the possible fibre orientations were taken as $[0_2, \pm 45, 90_2]$ [27,31]; however, this study considers more possible fibre orientations to make the problem more challenging for the algorithms. It is assumed that the laminated composite plate is made of graphite/epoxy with the mechanical properties presented in Table 1. The length of the plate is taken as 0.508 m, with the various aspect ratios and load cases listed in Table 2.

Internal parameters
According to Sect. 2, the algorithms have a set of internal parameters that can significantly affect their performance. To find the best possible values for these parameters, several performance sensitivity analyses were carried out by considering different values for the internal parameters, using a trial-and-error approach. Table 3 lists the possible values for the internal parameters of each algorithm alongside the suitable values obtained from the sensitivity analyses. From the different combinations of possible values, the suitable parameter values were selected based on the best performance exhibited by each algorithm. To keep the article at a manageable length, only the final results of the sensitivity analyses are presented in Table 3. The internal parameters obtained from the sensitivity analyses are used in the performance assessment that follows.

Performance comparisons
To investigate the performance of the algorithms in the optimum design of the 64-layer laminated composite plate, 5000 function evaluations were defined as the termination criterion. Each algorithm was run for 30 independent runs, and the best, average, and worst results alongside the standard deviations for the different load cases were reported.
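The reported statistics for each algorithm and load case can be collected from the 30 run results as follows (a minimal sketch; `run_results` would hold the best buckling load factor found in each independent run):

```python
import statistics

def summarise(run_results):
    """Best, mean, sample standard deviation, and worst result over a set of
    independent runs of a maximisation problem (at least two runs assumed)."""
    return {
        "best": max(run_results),
        "mean": statistics.mean(run_results),
        "std": statistics.stdev(run_results),
        "worst": min(run_results),
    }
```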
For the different load cases, Tables 4, 5, 6, 7, 8, 9, 10, 11 and 12 present the optimum stacking sequences corresponding to the maximum buckling load factors obtained by each algorithm, alongside the best, mean, standard deviation, and worst results. At first glance, it is observable from these tables that no algorithm outperforms the others in all load cases, and the efficiency of each algorithm varies from one load case to another. This behaviour can be explained by the ''No Free Lunch'' theorem [68], which states that it is almost impossible to develop a general strategy that solves different problem types in an equally efficient manner. With this in mind, the performance of the algorithms is investigated in more detail in this section to find out which algorithms provide the most promising results for the different load cases of the laminated composite layup optimisation problem.
From Tables 4, 5, 6, 7, 8, 9, 10, 11 and 12, it can be seen that the algorithms exhibit quite similar performances for the second, fifth, and eighth load cases, whereas their performances differ for the other load cases. The numerical results can be interpreted in different ways. In terms of the best results, for the first load case, PSO, CA, OIO, and LCA find the maximum buckling load factor. LCA found a better best solution than all other algorithms in the third load case. On the other hand, PSO and BBO exhibit better performances than the others for the fourth load case in terms of the best solution. Moreover, the best solutions obtained by PSO, BBO, and LCA are better than those yielded by the other algorithms in the ninth load case. For the rest of the load cases, all algorithms were able to find the optimum buckling load factors.
If the results reported in Tables 4, 5, 6, 7, 8, 9, 10, 11 and 12 are compared in terms of standard deviations, it can be seen that the standard deviations yielded by the BBO, OIO, and LCA algorithms for the second, fifth, and eighth load cases are equal to zero, which indicates that these algorithms find the optimum solutions in every independent run. Comparison of the standard deviations for the other load cases suggests that LCA and OIO provide the lowest standard deviations for almost all of the remaining load cases except the seventh, which makes their performance more impressive than that of the others. Compared with the other algorithms, the efficiency of LCA and OIO may stem from the fact that their search operators keep the diversity of the population at a high level, avoiding premature convergence to local optimum points in the search space. Although the DE\current-to-rand\1 algorithm yielded the lowest standard deviation for the seventh load case, LCA and OIO remain competitive in this load case as well.
The worst result obtained from the 30 independent runs of each algorithm is another important criterion, showing how well the algorithms perform in the worst-case scenario. For the second, fifth, and eighth load cases, BBO, OIO, and LCA performed better than the other algorithms in terms of worst results. For the rest of the load cases except the seventh and last, LCA obtained better worst results than all other algorithms; DE\current-to-rand\1 and OIO provided better worst buckling load factors for the seventh and last load cases, respectively. Figure 4 presents boxplots that provide an intuitive statistical comparison between the algorithms for the first and fourth load cases. These load cases were selected because they are challenging enough to clearly show the statistical differences between the algorithms. For the first load case, Fig. 4 shows that the ranges and variances of the results obtained by PSO, OIO, and LCA are significantly smaller than those of the other algorithms, which demonstrates the stability of these algorithms in finding maximum buckling load factors. For the fourth load case, the superiority of PSO, OIO, and LCA is still observable; however, BBO also exhibited competitive performance. It can also be seen that the mean values obtained by PSO, OIO, and LCA for the first and fourth load cases are well distributed near the optimum buckling load factors. The overall conclusion that can be drawn from Fig. 4 is that LCA is statistically more stable than all other algorithms, although the statistical performances of PSO and OIO are also remarkable.
To investigate the convergence properties of the algorithms, the average convergence diagrams for the first and fourth load cases are illustrated in Fig. 5. It can be observed that LCA and PSO exhibit faster convergence rates than the other algorithms. Although the convergence rate of OIO is slower, this algorithm finds higher-quality solutions than most of the other algorithms as the iterations proceed. This may reflect the fact that the trade-off between the exploration and exploitation phases in LCA and PSO is more appropriately implemented and that these algorithms are computationally more efficient. It seems that OIO switches from the exploration phase to the exploitation phase with a significant delay in comparison to LCA and PSO; despite this delay, OIO is still capable of converging to better final solutions than most of the investigated algorithms. It is also observable from Fig. 5 that the convergence behaviours of the different variants of DE are somewhat similar.

Deep statistical comparison
Recent research trends in meta-heuristics reveal that comparing the efficiency of algorithms only on the basis of basic statistical parameters, such as best, mean, worst, and standard deviation, is not statistically sufficient for drawing proper conclusions [69,70]. Various statistical tests in the literature can be used to evaluate, compare, and rank the performance of meta-heuristics for a given problem [69]. Recently, Eftimov et al. [70] proposed a Deep Statistical Comparison (DSC) method for the performance comparison of algorithms. The approach uses the two-sample Kolmogorov-Smirnov (KS) test for pairwise performance comparisons between each pair of algorithms, and the algorithms are then ranked based on the results of the test. In this study, in order to provide a fair comparison, the DSC method is employed to rank the performance of the investigated algorithms in the optimum layup problem of laminated composite plates.
The two-sample KS test is a nonparametric statistical test that determines whether two sets of data come from the same continuous distribution. The test assumes the null hypothesis that the results obtained from each pair of algorithms come from the same continuous distribution; in other words, that the performances of each pair of algorithms are statistically equivalent. Let α_KS be the significance level used by the KS test, which indicates the probability threshold for acceptance or rejection of the null hypothesis. For the selected significance level α_KS, the KS test computes a p-value for each pair of algorithms. If the p-value is smaller than α_KS, the null hypothesis is rejected; otherwise, it is accepted. Acceptance of the null hypothesis means that there is no significant difference between the two algorithms and that they perform statistically the same, whereas rejection means that the two algorithms perform statistically differently. The DSC approach uses the p-values to rank the performance of the algorithms; for details of the ranking formulations, the interested reader is referred to Ref. [70].
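Under these definitions, the pairwise KS decision can be sketched with the standard empirical-CDF statistic and its asymptotic critical value. In practice `scipy.stats.ks_2samp` returns a p-value directly; the stdlib sketch below uses the critical-value form of the decision instead, and the DSC ranking scheme built on top of these decisions is more elaborate than shown here.

```python
import math

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical cumulative distribution functions."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for t in sorted(set(xs + ys)):
        fx = sum(v <= t for v in xs) / len(xs)
        fy = sum(v <= t for v in ys) / len(ys)
        d = max(d, abs(fx - fy))
    return d

def same_distribution(x, y, alpha=0.05):
    """Asymptotic KS decision: accept the null hypothesis (statistically
    equivalent performance) when D is below the critical value for alpha."""
    n, m = len(x), len(y)
    crit = math.sqrt(-0.5 * math.log(alpha / 2)) * math.sqrt((n + m) / (n * m))
    return ks_statistic(x, y) <= crit
```

Applying `same_distribution` to the 30-run result sets of every pair of algorithms yields the pairwise equivalence decisions that DSC converts into ranks.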
The selection of α_KS plays an important role in the acceptance or rejection of the null hypothesis. Inappropriate values for this parameter could lead to wrong conclusions about the performances of the algorithms. If α_KS were taken as 1, the null hypothesis would be rejected for all p-values, which would mean that the performances of all algorithms are always statistically different. The value of α_KS is typically chosen between 0.05 and 0.1 [70]. In this study, α_KS is taken as 0.05, and the p-values obtained from the two-sample KS test for each pair of algorithms in the different load cases are presented in Fig. 6. Naturally, the diagonal elements in Fig. 6 are equal to one, since each algorithm comes from the same distribution as itself. Figure 6 illustrates differences between the performance of the DE\best\1 algorithm and the others in load cases 2, 5, and 8. It also implies that there are no significant differences between most of the algorithms in load cases 2, 5, and 8; their performances are statistically equivalent. In the rest of the load cases, which are more challenging than load cases 2, 5, and 8, the null hypotheses are rejected and there are significant differences between the performances of the algorithms. Finally, Table 13 lists the performance rankings of the algorithms for the different load cases. Algorithms with the same rank value in a load case have statistically equivalent performances. From this table, it can be concluded that the LCA algorithm performs better than or equal to the other algorithms in all load cases except load case 7. Among the variants of the DE algorithm, DE\current-to-rand\1 performs best, being ranked first in load cases 6 and 7.
The last column of Table 13 shows the overall ranking of the algorithms, calculated from the overall scores obtained in the different load cases, in which LCA and OIO are ranked as the two most efficient of the nine investigated algorithms.

Concluding remarks
The performance of nine meta-heuristic algorithms, including PSO, different variants of DE, CA, BBO, OIO, and LCA, was assessed for the optimum layup problem of laminated composite plates. The buckling capacity maximisation of a 64-layer laminated composite plate under various load cases was investigated as the benchmark problem, in which the design variables are the stacking sequences of layers. The performance of the algorithms in finding maximum buckling load factors was evaluated in terms of basic statistical parameters, including the best, mean, standard deviation, and worst results. The numerical results revealed that the ranges and variances of the results obtained by PSO, OIO, and LCA are significantly smaller than those of the other algorithms, which demonstrates the stability of these algorithms in finding maximum buckling load factors.
To provide a fair comparison between the algorithms, a Deep Statistical Comparison (DSC) method was employed to rank their performance. The DSC approach uses a nonparametric two-sample Kolmogorov-Smirnov (KS) test for pairwise performance comparisons between each pair of algorithms. The KS test assumes the null hypothesis that the performances of each pair of algorithms are statistically equivalent. The results obtained from the KS test with a significance level of α_KS = 0.05 revealed that there are significant differences between the performances of the algorithms for most of the load cases. The performance rankings obtained from the DSC method suggested that the LCA algorithm performs better than or equal to the other algorithms in most of the load cases, with LCA and OIO ranked overall as the two most efficient algorithms.

The convergence diagrams obtained from 30 independent runs revealed that PSO and LCA exhibit faster convergence rates than the other algorithms. This may reflect the fact that the trade-off between the exploration and exploitation phases in LCA and PSO is more adequately implemented. Despite its remarkable performance in terms of final results, the convergence rate of OIO is slower than that of the other algorithms: OIO switches from the exploration phase to the exploitation phase with a significant delay in comparison to LCA and PSO.

Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.