1 Introduction

Composite materials are multi-phase materials with enhanced properties, fabricated by artificially combining two or more distinct materials [1]. These materials have found important structural applications to date. One particular structural application of composite materials is laminated composite plates. Recent advances reveal that laminated composite plates are an increasingly popular structural type for a wide range of applications in different industries, such as aerospace, automotive, marine, building, and renewable energy. High strength-to-weight ratios and flexibility in design are two major advantages of such structures. The many design parameters of laminated composite structures give designers a great opportunity to achieve cost-efficient optimum designs for a given application.

A recent literature review by Nikbakt et al. [2] reveals that the optimum design of laminated composite plates to enhance their mechanical behaviour and minimise costs has attracted much attention in recent years. Researchers have implemented different optimisation algorithms to attain optimum designs for different kinds of design variables, objective functions, and constraints. Among the objective functions investigated in the literature, buckling load maximisation is a prominent one that has been widely considered for the optimum design of laminated composites [3,4,5].

Meta-heuristic optimisation algorithms, such as Genetic Algorithms (GAs) [6], Simulated Annealing (SA) [7], Differential Evolution (DE) [8], Ant Colony Optimisation (ACO) [9], Particle Swarm Optimisation (PSO) [10], Cultural Algorithms (CAs) [11, 12], Biogeography-Based Optimisation (BBO) [13], and Harmony Search (HS) [14], are attractive techniques for solving complex optimisation problems [15,16,17,18,19,20]. Meta-heuristics are nature-inspired search techniques that take advantage of solution perturbation and stochasticity to find acceptable solutions for real-world problems in a reasonable time [21]. As in other engineering disciplines, these algorithms have been successfully applied to optimise laminated composite structures [22,23,24,25,26,27,28,29]. For example, Karakaya and Soykasap [30] employed GA and SA to enhance the buckling capacity and attain optimum stacking sequences for hybrid laminated composite plates with carbon/epoxy and glass/epoxy materials. Kaveh et al. [31] investigated the application of the BBO algorithm to stacking sequence optimisation of laminated composite plates for various load cases and aspect ratios to maximise the buckling load factor. HS was utilised by Almeida et al. [32] to optimise the buckling load factor of a balanced laminated composite plate under compressive in-plane loads. Akcair et al. [33] applied DE and SA to optimise the buckling load factor of laminated composite plates under different increments in fibre orientation. Kaveh et al. [34] proposed a novel improved rank-based version of the Quantum-Inspired Evolutionary Algorithm (QEA) for the optimum stacking sequence of hybrid laminated composite plates under uncertain buckling loads. The literature also reveals that researchers have adopted or proposed novel meta-heuristic algorithms for buckling maximisation of laminated composite plates. For example, Rao et al. [35] employed a Scatter Search Algorithm (SSA), and Kaveh et al. [36, 37] applied the Charged System Search (CSS) algorithm, the Jaya Algorithm (JA), and Colliding Bodies Optimisation (CBO).

In recent years, some novel meta-heuristic algorithms have been developed to solve a wide range of engineering problems. The League Championship Algorithm (LCA) [38], inspired by the sporting competitions between teams in sports leagues, is one such algorithm and has exhibited efficient performance on different kinds of problems [39,40,41,42,43]. The Optics Inspired Optimisation (OIO) [44] algorithm is another recently developed meta-heuristic, inspired by the optical characteristics of spherical mirrors in physics, which has proven remarkably efficient in engineering applications [45,46,47,48]. The main objective of this paper is to assess the performance of different meta-heuristic algorithms, including PSO, different variants of DE, BBO, CA, OIO, and LCA, for the optimum layup of laminated composite plates. The buckling capacity maximisation of a 64-layer laminated composite plate under various load scenarios is considered as the benchmark problem. A Deep Statistical Comparison (DSC) is conducted to assess the performance of the different algorithms. The DSC uses the nonparametric two-sample Kolmogorov–Smirnov (KS) test for pair-wise performance comparison between each pair of algorithms; the algorithms are then ranked based on the results of the two-sample KS test. The study provides some interesting conclusions about the performance of the investigated algorithms.

The rest of the paper is organised as follows. In Sect. 2, the algorithmic details of the investigated meta-heuristics, as well as their pseudo-codes, are presented. Section 3 presents the problem formulation for buckling load maximisation of laminated composite plates. The numerical tests and comparisons are presented in Sect. 4. Finally, conclusions are provided in Sect. 5.

2 Investigated optimisation algorithms

In this section, the algorithmic details of PSO, DE, CA, BBO, LCA, and OIO are presented. Although describing every detail of these algorithms is beyond the scope of this study, this section explains their main elements without dwelling on unnecessary details; the detailed pseudo-codes should give readers a clear picture of how the algorithms work. All algorithms are described for a minimisation problem with objective function \(f({\varvec{X}})\) and vector of variables \({\varvec{X}}=[{x}_{1}, {x}_{2}, \dots , {x}_{n}]\), in which the variables should satisfy the search space constraints represented by the vectors \({{\varvec{X}}}^{\mathrm{min}}=[{x}_{1}^{\mathrm{min}}, {x}_{2}^{\mathrm{min}}, \dots , {x}_{n}^{\mathrm{min}}]\) and \({{\varvec{X}}}^{\mathrm{max}}=[{x}_{1}^{\mathrm{max}}, {x}_{2}^{\mathrm{max}}, \dots , {x}_{n}^{\mathrm{max}}]\) as follows: \({x}_{k}^{\mathrm{min}}\le {x}_{k}\le {x}_{k}^{\mathrm{max}}, k=1, 2, \dots , n\).

2.1 Particle Swarm Optimisation (PSO)

Cooperative behaviour between the individuals in a group of species enhances their ability to deal with environmental challenges. This cooperation enables them to achieve difficult goals that are almost impossible for any single individual in the group. Swarm Intelligence (SI) techniques mimic the collective intelligence exhibited by a group or swarm of species in nature to perform the optimisation process more efficiently [49]. One of the prominent SI techniques is the PSO algorithm, inspired by the social behaviour of natural swarms and originally developed by Kennedy and Eberhart [10] in the 1990s. The algorithm assumes a given set of particles in the search space, representing potential solutions to the problem. Each particle has a position and a velocity, defined as \(n\)-dimensional vectors for an \(n\)-dimensional problem. The algorithm exploits the experience acquired by each individual particle and by the whole swarm to update the positions and velocities of the particles.

To explain how the algorithm works, let us assume \({{\varvec{X}}}_{i}^{t}=[{x}_{i,1}^{t}, {x}_{i,2}^{t}, \dots , {x}_{i,n}^{t}]\) and \({{\varvec{V}}}_{i}^{t}=[{\upsilon }_{i,1}^{t}, {\upsilon }_{i,2}^{t}, \dots , {\upsilon }_{i,n}^{t}]\) are the position and velocity of the \(i\)th particle in the solution space at iteration \(t\), in which \(i=1, 2, \dots , {N}_{p}\) and \({N}_{p}\) is the swarm size. PSO updates the velocities and positions of the particles using the following formulas [10]:

$${\varvec{V}}_{i}^{t + 1} = \omega^{t} {\varvec{V}}_{i}^{t} + c_{1} {\text{rand}}\left( {{\varvec{P}}_{i}^{t} - {\varvec{X}}_{i}^{t} } \right) + c_{2} {\text{rand}}\left( {{\varvec{G}}^{t} - {\varvec{X}}_{i}^{t} } \right)$$
(1)
$${\varvec{X}}_{i}^{t + 1} = {\varvec{X}}_{i}^{t} + {\varvec{V}}_{i}^{t + 1}$$
(2)

In Eqs. (1) and (2), \({\varvec{X}}_{i}^{t + 1}\) and \({\varvec{V}}_{i}^{t + 1}\) represent the position and velocity of the \(i\)th particle at iteration \(t+1\), \({\omega }^{t}\) is the weighting parameter at iteration \(t\), which controls the influence of the previous velocity of the particle on its new velocity, \(\mathrm{rand}\) represents a uniformly generated random number between 0 and 1, \({c}_{1}\) and \({c}_{2}\) are the acceleration parameters, \({{\varvec{P}}}_{i}^{t}\) indicates the position with the best objective function value experienced by the \(i\)th individual until iteration \(t\), and \({{\varvec{G}}}^{t}\) is the best position experienced by the whole swarm until iteration \(t\). This study assumes that the value of the weighting parameter is gradually reduced in each iteration as \({\omega }^{t}={\omega }_{\mathrm{damp}}{\omega }^{t-1}\), in which \({\omega }_{\mathrm{damp}}\) is a damping factor; the initial value of the weighting parameter is set to 1 (i.e. \({\omega }^{0}=1\)). Algorithm 1 shows the pseudo-code of the PSO algorithm for a minimisation problem.
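
To make the update rules concrete, the following minimal Python sketch implements Eqs. (1) and (2) with the damped inertia weight described above. The function name, the bound clipping, the default parameter values, and the toy sphere objective are our illustrative choices, not part of the original formulation.

```python
import numpy as np

def pso(f, x_min, x_max, n_particles=30, iters=200,
        c1=2.0, c2=2.0, w=1.0, w_damp=0.99, seed=0):
    """Minimal PSO sketch implementing Eqs. (1)-(2) with a damped inertia weight."""
    rng = np.random.default_rng(seed)
    n = len(x_min)
    X = rng.uniform(x_min, x_max, (n_particles, n))  # initial positions
    V = np.zeros((n_particles, n))                   # initial velocities
    P = X.copy()                                     # personal bests P_i^t
    p_val = np.array([f(x) for x in X])
    G = P[np.argmin(p_val)].copy()                   # global best G^t
    g_val = p_val.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, n))
        r2 = rng.random((n_particles, n))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (G - X)  # Eq. (1)
        X = np.clip(X + V, x_min, x_max)                   # Eq. (2), plus bound clipping
        vals = np.array([f(x) for x in X])
        better = vals < p_val                              # update personal bests
        P[better], p_val[better] = X[better], vals[better]
        if p_val.min() < g_val:                            # update global best
            G, g_val = P[np.argmin(p_val)].copy(), p_val.min()
        w *= w_damp                                        # omega^t = omega_damp * omega^(t-1)
    return G, g_val

# Toy usage: minimise the 5-dimensional sphere function
best_x, best_f = pso(lambda x: float(np.sum(x**2)), -np.ones(5), np.ones(5))
```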

2.2 Differential Evolution (DE)

The DE algorithm, originally developed by Storn and Price [50], is a population-based Evolutionary Algorithm (EA). The overall idea of the algorithm is to use weighted differences between individuals to generate new individuals in the search space. In DE, the solution-finding process is performed by three main operators: mutation, cross-over, and selection.

[Algorithm 1: Pseudo-code of the PSO algorithm]

DE kick-starts the optimisation process with \({N}_{\mathrm{I}}\) randomly generated individuals within the search space. For each solution \({{\varvec{X}}}_{i}^{t}\), the algorithm applies the mutation operator in each iteration to generate a mutant vector \({{\varvec{V}}}_{i}^{t+1}=[{v}_{i,1}^{t+1}, {v}_{i,2}^{t+1}, \dots , {v}_{i,n}^{t+1}]\). These mutant vectors are used to form a new generation of individuals. A variety of mutation operators have been developed for the DE algorithm [51], in which the mutant vectors \({{\varvec{V}}}_{i}^{t+1}\) are generated from weighted difference combinations of different individuals. The most popular DE mutation operators are as follows [51, 52]:

  • “DE/best/1”:

    $${\varvec{V}}_{i}^{t + 1} = {\varvec{X}}_{{{\text{Best}}}}^{t} + F\left( {{\varvec{X}}_{{r_{1} }}^{t} - {\varvec{X}}_{{r_{2} }}^{t} } \right)$$
    (3)
  • “DE/rand-to-best/1”:

    $${\varvec{V}}_{i}^{t + 1} = {\varvec{X}}_{{r_{1} }}^{t} + F\left( {{\varvec{X}}_{{{\text{Best}}}}^{t} - {\varvec{X}}_{{r_{2} }}^{t} } \right) + F\left( {{\varvec{X}}_{{r_{3} }}^{t} - {\varvec{X}}_{{r_{4} }}^{t} } \right)$$
    (4)
  • “DE/current-to-rand/1”:

    $${\varvec{V}}_{i}^{t + 1} = {\varvec{X}}_{{r_{1} }}^{t} + F\left( {{\varvec{X}}_{{r_{2} }}^{t} - {\varvec{X}}_{i}^{t} } \right) + F\left( {{\varvec{X}}_{{r_{3} }}^{t} - {\varvec{X}}_{{r_{4} }}^{t} } \right)$$
    (5)
  • “DE/current-to-best/1”:

    $${\varvec{V}}_{i}^{t + 1} = {\varvec{X}}_{{r_{1} }}^{t} + F\left( {{\varvec{X}}_{{{\text{Best}}}}^{t} - {\varvec{X}}_{i}^{t} } \right) + F\left( {{\varvec{X}}_{{r_{2} }}^{t} - {\varvec{X}}_{{r_{3} }}^{t} } \right)$$
    (6)

In Eqs. (3), (4), (5) and (6), \({{\varvec{V}}}_{i}^{t+1}\) represents the mutant vector constructed for the \(i\)th individual at iteration \(t+1\); the indices \({r}_{1},{r}_{2}, {r}_{3}, {r}_{4}\in \{1, 2, \dots , {N}_{\mathrm{I}}\}\) are randomly generated and must satisfy \({r}_{1}\ne {r}_{2}\ne {r}_{3}\ne {r}_{4}\ne i\); \({{\varvec{X}}}_{\mathrm{Best}}^{t}\) represents the individual with the best objective function value at iteration \(t\); and \(F\) is the scaling factor, usually taken as a constant.

The resulting mutant vectors \({{\varvec{V}}}_{i}^{t+1}\) differ from their parent solutions \({{\varvec{X}}}_{i}^{t}\) in all dimensions. However, changing the current solutions in all dimensions may not always produce good outcomes, because small changes in the solution vector can cause significant changes in the objective function value. Hence, DE randomly retains the original values of some variables in the offspring solutions. To this end, the algorithm employs the cross-over operator to generate a new trial vector \({{\varvec{U}}}_{i}^{t+1}=[{u}_{i,1}^{t+1}, {u}_{i,2}^{t+1}, \dots , {u}_{i,n}^{t+1}]\) from the mutant vector \({{\varvec{V}}}_{i}^{t+1}\) obtained through one of the above-mentioned mutation operators as follows:

$$u_{i,k}^{t + 1} = \left\{ {\begin{array}{*{20}c} {v_{i,k}^{t + 1} } & {{\text{if}}\;{\text{rand}} < CR\;{\text{or}}\;k = {\text{randi}}\left( {1,n} \right)} \\ {x_{i,k}^{t} } & {{\text{otherwise}}} \\ \end{array} } \right.$$
(7)

where \({u}_{i,k}^{t+1}\) denotes the \(k\)th variable of the trial vector \({{\varvec{U}}}_{i}^{t+1}\) constructed for the \(i\)th individual at iteration \(t+1\), \({v}_{i,k}^{t+1}\) is the \(k\)th variable of the mutant vector \({{\varvec{V}}}_{i}^{t+1}\), \(\mathrm{rand}\) is a random number between 0 and 1, \(CR\) is the cross-over rate, usually taken as a constant, and \(\mathrm{randi}(a,b)\) represents a random integer between \(a\) and \(b\).

The DE algorithm uses the selection operator to decide whether the trial vector \({{\varvec{U}}}_{i}^{t+1}\) generated by the cross-over operator becomes a member of the next generation. The selection operator can be expressed as follows [52]:

$${\varvec{X}}_{i}^{t + 1} = \left\{ {\begin{array}{*{20}c} {{\varvec{U}}_{i}^{t + 1} } & { {\text{if }} f\left( {{\varvec{U}}_{i}^{t + 1} } \right) < f\left( {{\varvec{X}}_{i}^{t} } \right)} \\ {{\varvec{X}}_{i}^{t} } & {{\text{otherwise}}} \\ \end{array} } \right.$$
(8)

The pseudo-code of the DE algorithm for a minimisation problem is presented in Algorithm 2. In this study, we categorise the DE algorithm into different versions based on the applied mutation operator: DE/best/1, DE/rand-to-best/1, DE/current-to-rand/1, and DE/current-to-best/1.
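
As an illustration, the following sketch implements one of these variants, DE/best/1 (Eq. 3), together with the cross-over of Eq. (7) and the selection of Eq. (8); the other variants differ only in the mutation line. The bound clipping and the default parameter values are our illustrative assumptions.

```python
import numpy as np

def de_best_1(f, x_min, x_max, n_ind=30, iters=200, F=0.8, CR=0.9, seed=0):
    """Minimal DE sketch: DE/best/1 mutation (Eq. 3), binomial cross-over
    (Eq. 7), and greedy selection (Eq. 8)."""
    rng = np.random.default_rng(seed)
    n = len(x_min)
    X = rng.uniform(x_min, x_max, (n_ind, n))
    vals = np.array([f(x) for x in X])
    for _ in range(iters):
        best = X[np.argmin(vals)].copy()                      # X_Best^t
        for i in range(n_ind):
            # pick two distinct indices, both different from i
            r1, r2 = rng.choice([j for j in range(n_ind) if j != i], 2, replace=False)
            V = best + F * (X[r1] - X[r2])                    # mutation, Eq. (3)
            j_rand = rng.integers(n)                          # guarantees >= 1 change
            mask = (rng.random(n) < CR) | (np.arange(n) == j_rand)
            U = np.clip(np.where(mask, V, X[i]), x_min, x_max)  # cross-over, Eq. (7)
            fU = f(U)
            if fU < vals[i]:                                  # selection, Eq. (8)
                X[i], vals[i] = U, fU
    i_best = int(np.argmin(vals))
    return X[i_best], vals[i_best]
```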

[Algorithm 2: Pseudo-code of the DE algorithm]

2.3 Cultural Algorithms (CAs)

CAs, developed by Reynolds [11] in the 1990s, are EAs inspired by the principles of human social evolution. In real human societies, culture can be viewed as a source of information exchanged between individuals, which can affect their behaviour [53]. According to bio-cultural evolution theory [54], the overall human evolutionary process is a combination of genetic and cultural evolutionary processes. CA was developed based on this bio-cultural evolutionary mechanism [11] and is computationally different from conventional EAs. The algorithm has found interesting applications in different research areas, ranging from computer science to different branches of engineering [12, 55,56,57,58,59].

In contrast to conventional EAs, which are based on a single population space, CA works with two spaces: a population space and a belief space. These two spaces affect each other through communication protocols. As in other EAs, the population space consists of a set of individuals that are potential solutions to the problem. The belief space records the cultural information gained by the individuals during the evolutionary process and includes different types of knowledge components. The general pseudo-code of CA is illustrated in Algorithm 3. The algorithm consists of a population space \({\boldsymbol{\wp }}^{t}\) and a belief space \({\mathcal{B}}^{t}\), which interact with each other through the Accept(), Update(), and Influence() functions.

[Algorithm 3: General pseudo-code of CA]

Let \({\boldsymbol{\wp }}^{t}\) be the population space, which consists of \({N}_{\mathrm{S}}\) individual solutions represented by \({{\varvec{X}}}_{i}^{t}=[{x}_{i,1}^{t}, {x}_{i,2}^{t}, \dots , {x}_{i,n}^{t}]\), where \(i=1, 2, \dots , {N}_{\mathrm{S}}\). As mentioned earlier, the belief space includes different knowledge components acquired by the individuals. Different kinds of knowledge components are available in the literature, depending on the type of problem to be solved, including situational, normative, historical, topographical, and domain knowledge. In this study, it is assumed that the belief space consists of situational and normative knowledge components as follows:

$${\mathbf{\mathcal{B}}}^{t} = \left\{ {{\mathbf{\mathcal{S}}}^{t} ,{\mathbf{\mathcal{N}}}^{t} } \right\}$$
(9)

where \({\mathcal{B}}^{t}\), \({\mathcal{S}}^{t}\), and \({\mathcal{N}}^{t}\) are the belief space, situational knowledge, and normative knowledge, respectively. The situational knowledge component, which can be represented by \({\mathcal{S}}^{t}=[{s}_{1}^{t},{s}_{2}^{t},\dots , {s}_{n}^{t}]\), contains the best solution obtained so far. The normative knowledge component \({\mathcal{N}}^{t}\) consists of a set of information for each variable of the problem, which can be mathematically expressed as follows:

$${\mathbf{\mathcal{N}}}^{t} = \left[ {\begin{array}{*{20}c} {{\varvec{I}}_{1}^{t} } & {{\varvec{I}}_{2}^{t} } & \ldots & {{\varvec{I}}_{n}^{t} } \\ {L_{1}^{t} } & {L_{2}^{t} } & \ldots & {L_{n}^{t} } \\ {U_{1}^{t} } & {U_{2}^{t} } & \ldots & {U_{n}^{t} } \\ \end{array} } \right]$$
(10)

where \({{\varvec{I}}}_{k}^{t}=\left[{x}_{\mathrm{min},k}^{t},{x}_{\mathrm{max},k}^{t}\right]\) indicates the belief interval of the \(k\)th dimension of the problem, \({x}_{\mathrm{min},k}^{t}\) and \({x}_{\mathrm{max},k}^{t}\) are the lower and upper normative bounds for the \(k\)th dimension, respectively, and \({L}_{k}^{t}\) and \({U}_{k}^{t}\) represent the objective function values corresponding to the lower and upper normative bounds, respectively.

The Accept() function of CA selects some elite individuals from the population space to update the belief space in each iteration. Usually, a given percentage of the individuals with better objective function values is selected. In this study, it is assumed that the best \({N}_{\mathrm{Accept}}\) per cent of the population is accepted to update the belief space.

The Update() function updates the different knowledge components of the belief space using the accepted individuals as follows:

$${\mathcal{S}}^{t + 1} = \left\{ {\begin{array}{*{20}c} {{\varvec{X}}_{l}^{t} } & {{\text{if}}\;f\left( {{\varvec{X}}_{l}^{t} } \right) < f\left( {{\mathcal{S}}^{t} } \right)} \\ {{\mathcal{S}}^{t} } & {{\text{otherwise}}} \\ \end{array} } \right.$$
(11)
$$x_{{\text{min}},k}^{t + 1} = \left\{ {\begin{array}{*{20}c} {x_{l,k}^{t} } & {{\text{if}}\;x_{l,k}^{t} \le x_{{\text{min}},k}^{t} \;{\text{or}}\;f\left( {{\varvec{X}}_{l}^{t} } \right) < L_{k}^{t} } \\ {x_{{\text{min}},k}^{t} } & {{\text{otherwise}}} \\ \end{array} } \right.$$
(12)
$$x_{{\text{max}},k}^{t + 1} = \left\{ {\begin{array}{*{20}c} {x_{l,k}^{t} } & {{\text{if}}\;x_{l,k}^{t} \ge x_{{\text{max}},k}^{t} \;{\text{or}}\;f\left( {{\varvec{X}}_{l}^{t} } \right) < U_{k}^{t} } \\ {x_{{\text{max}},k}^{t} } & {{\text{otherwise}}} \\ \end{array} } \right.$$
(13)
$$L_{k}^{t + 1} = \left\{ {\begin{array}{*{20}c} {f\left( {{\varvec{X}}_{l}^{t} } \right)} & {{\text{if}}\;x_{l,k}^{t} \le x_{{\text{min}},k}^{t} \;{\text{or}}\;f\left( {{\varvec{X}}_{l}^{t} } \right) < L_{k}^{t} } \\ {L_{k}^{t} } & {{\text{otherwise}}} \\ \end{array} } \right.$$
(14)
$$U_{k}^{t + 1} = \left\{ {\begin{array}{*{20}c} {f\left( {{\varvec{X}}_{l}^{t} } \right)} & {{\text{if}}\;x_{l,k}^{t} \ge x_{{\text{max}},k}^{t} \;{\text{or}}\;f\left( {{\varvec{X}}_{l}^{t} } \right) < U_{k}^{t} } \\ {U_{k}^{t} } & {{\text{otherwise}}} \\ \end{array} } \right.$$
(15)

where \(l = 1, 2, \ldots , \%N_{{\text{Accept}}} \times N_{{\text{S}}}\), \({\varvec{X}}_{l}^{t}\) is the \(l\)th accepted individual at iteration \(t\), and \({x}_{l,k}^{t}\) represents its \(k\)th variable.

Finally, the Influence() function generates a new generation of individuals based on the cultural information. Different influence functions have been developed for CA in the literature. Based on the authors' previous experience, the following influence function is considered in this study:

$$x_{i,k}^{t + 1} = \left\{ {\begin{array}{*{20}c} {x_{i,k}^{t} + \left| {{\text{size}}\left( {{\varvec{I}}_{k}^{t} } \right)N_{i,k} \left( {0,1} \right)} \right|} & {{\text{if}}\;x_{i,k}^{t} < x_{{\text{min}},k}^{t} } \\ {x_{i,k}^{t} - \left| {{\text{size}}\left( {{\varvec{I}}_{k}^{t} } \right)N_{i,k} \left( {0,1} \right)} \right|} & {{\text{if}}\;x_{i,k}^{t} > x_{{\text{max}},k}^{t} } \\ {x_{i,k}^{t} + \beta \,{\text{size}}\left( {{\varvec{I}}_{k}^{t} } \right)N_{i,k} \left( {0,1} \right)} & {{\text{otherwise}}} \\ \end{array} } \right.$$
(16)

where \(x_{i,k}^{t + 1}\) is the \(k\)th variable of the \(i\)th individual at iteration \(t+1\), \(\mathrm{size}\left({{\varvec{I}}}_{k}^{t}\right)={x}_{\mathrm{max},k}^{t}-{x}_{\mathrm{min},k}^{t}\) is the size of the normative interval for the \(k\)th variable, \({N}_{i,k}\left(0,1\right)\) is a random number drawn from a normal distribution with mean 0 and standard deviation 1, and \(\beta >0\) is a constant parameter.
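
The following sketch illustrates one CA iteration with the Accept(), Update() (Eqs. 11–15), and Influence() (Eq. 16) steps described above. The vectorised state representation, the function signature, the repeated objective evaluations, and the default \(\beta\) are our simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def ca_step(f, X, S, x_lo, x_hi, L, U, n_accept, beta=0.5, rng=None):
    """One CA iteration sketch. X is the (N_S, n) population; S the situational
    knowledge; (x_lo, x_hi, L, U) the normative component of Eq. (10)."""
    rng = rng or np.random.default_rng()
    vals = np.array([f(x) for x in X])
    accepted = X[np.argsort(vals)[:n_accept]]                 # Accept(): elite individuals
    for Xl in accepted:                                       # Update()
        fl = f(Xl)
        if fl < f(S):
            S = Xl.copy()                                     # Eq. (11)
        lower = (Xl <= x_lo) | (fl < L)                       # shared condition of Eqs. (12), (14)
        x_lo, L = np.where(lower, Xl, x_lo), np.where(lower, fl, L)
        upper = (Xl >= x_hi) | (fl < U)                       # shared condition of Eqs. (13), (15)
        x_hi, U = np.where(upper, Xl, x_hi), np.where(upper, fl, U)
    size = x_hi - x_lo                                        # size(I_k^t)
    noise = rng.standard_normal(X.shape)                      # N_{i,k}(0, 1)
    step = np.where(X < x_lo, np.abs(size * noise),           # Eq. (16), first branch
           np.where(X > x_hi, -np.abs(size * noise),          # second branch
                    beta * size * noise))                     # third branch
    return X + step, S, x_lo, x_hi, L, U                      # Influence(): new generation
```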

Algorithm 4 shows the detailed pseudo-code of CA. Interested readers are referred to Ref. [12] for more details on CA and its variants.

[Algorithm 4: Detailed pseudo-code of CA]

2.4 Biogeography-Based Optimisation (BBO)

BBO is another EA, inspired by the emigration and immigration behaviour of species in nature. It was originally developed by Simon [13] in 2008 based on the mathematical migration models of biogeography science [60] and has found significant applications in different fields [61,62,63,64]. In nature, biological species migrate from one habitat to another to access new resources, and this migration process follows certain mathematical patterns. BBO mimics this concept to perform the optimisation process. The method works with a population of habitats, in which each habitat is a potential solution to the problem. The algorithm employs two main operators to form a new generation of solutions: migration and mutation.

Similar to the algorithms explained earlier, let us assume a set of \({N}_{\mathrm{H}}\) habitats represented by vectors \({{\varvec{X}}}_{i}^{t}=[{x}_{i,1}^{t}, {x}_{i,2}^{t}, \dots , {x}_{i,n}^{t}]\), where \(i=1, 2, \dots , {N}_{\mathrm{H}}\). In BBO, the fitness function of each habitat is called the habitat suitability index (HSI); habitats with high HSI values represent better solutions to the problem. In the migration operator of BBO, immigration and emigration rates are defined for each habitat based on the objective function values. Let \({\lambda }_{i}\) and \({\mu }_{i}\) be the immigration and emigration rates of the \(i\)th habitat. The values of these rates are obtained from migration models available in biogeography science. Figure 1 shows the simple linear migration model usually used in the literature to define the migration rates of BBO. As can be seen from this figure, the sum of the immigration and emigration rates of each habitat is equal to one (i.e. \({\lambda }_{i}+{\mu }_{i}=1\)). Based on Fig. 1, habitats with higher (lower) fitness will have lower (larger) immigration rates \({\lambda }_{i}\) and larger (lower) emigration rates \({\mu }_{i}\). BBO uses these rates to perform the migration operator between the habitats in the search space: the migration operator replaces the \(k\)th variable of the \(i\)th habitat with the corresponding variable of the \(j\)th habitat as follows:

$$x_{i,k}^{t}\leftarrow x_{j,k}^{t}$$
(17)

where \({x}_{i,k}^{t}\) represents the \(k\)th variable of habitat \(i\) at iteration \(t\) and \({x}_{j,k}^{t}\) is the \(k\)th variable of habitat \(j\) at iteration \(t\). It should be noted that the index \(j\) is selected based on the emigration rates and the roulette wheel selection method. After the migration operator, the mutation operator randomly replaces variables of habitats with random values generated in the feasible search domain. The mutation probability for each habitat depends on the mutation rates \({m}_{i}\), which are defined based on the fitness values; the details of calculating \({m}_{i}\) are available in Ref. [13]. Algorithm 5 shows the detailed pseudo-code of the BBO algorithm.
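
A minimal sketch of one BBO iteration is given below. The linear rank-based migration rates follow the model of Fig. 1, while the constant mutation rate is our simplification of the fitness-dependent rates \(m_i\) of Ref. [13].

```python
import numpy as np

def bbo_step(f, X, x_min, x_max, p_mut=0.02, rng=None):
    """One BBO iteration sketch with the linear migration model of Fig. 1:
    after ranking, habitat i gets immigration rate lambda_i = (rank_i + 1) / N
    and emigration rate mu_i = 1 - lambda_i, so better habitats emigrate more.
    The source habitat j is picked by roulette wheel on the mu values."""
    rng = rng or np.random.default_rng()
    N, n = X.shape
    vals = np.array([f(x) for x in X])
    order = np.argsort(vals)                       # best habitat first (minimisation)
    ranks = np.empty(N)
    ranks[order] = np.arange(N)
    lam = (ranks + 1) / N                          # immigration rates
    mu = 1.0 - lam                                 # emigration rates (lam + mu = 1)
    new_X = X.copy()
    for i in range(N):
        for k in range(n):
            if rng.random() < lam[i]:              # immigrate into variable k
                j = rng.choice(N, p=mu / mu.sum()) # roulette wheel; then apply Eq. (17)
                new_X[i, k] = X[j, k]
            if rng.random() < p_mut:               # simplified constant-rate mutation
                new_X[i, k] = rng.uniform(x_min[k], x_max[k])
    return new_X
```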

Fig. 1 Migration model in BBO [13]

[Algorithm 5: Pseudo-code of the BBO algorithm]

2.5 League Championship Algorithm (LCA)

LCA, developed by Kashan [38, 65], is a population-based meta-heuristic algorithm inspired by the sporting competitions between teams in sports leagues. The algorithm simulates a sports league by treating each candidate solution as a team that competes with other teams to provide the best possible solution to the problem. The position and fitness of each team represent its formation and playing strength, respectively. LCA performs a match analysis to generate new formations for the teams, simulating the process performed by coaches to find suitable formations for their teams in real sporting competitions. In LCA, each week corresponds to one optimisation iteration.

The solution-finding process in LCA mirrors the championship process in sports leagues, in which different teams play in pairs according to a league schedule in a given week. The outcome of each match is determined by the strengths of the teams. The teams then perform match analysis and change their formations to enhance their strengths for the next week. This process is repeated until the termination criteria are met.

Let \({{\varvec{X}}}_{i}^{t}=[{x}_{i,1}^{t}, {x}_{i,2}^{t}, \dots , {x}_{i,n}^{t}]\) be the position of the \(i\)th team at week \(t\), where \(i=1, 2, \dots , {N}_{\mathrm{team}}\) and \({N}_{\mathrm{team}}\) is the number of teams. Let us also assume that the best formation with the best strength obtained by the \(i\)th team until week \(t\) is represented by \({{\varvec{B}}}_{i}^{t}=[{b}_{i,1}^{t}, {b}_{i,2}^{t}, \dots , {b}_{i,n}^{t}]\). LCA employs a single round-robin schedule to determine which teams play against each other. Figure 2 illustrates how the matches of each season are arranged between the teams in LCA. For a league with \({N}_{\mathrm{team}}\) teams, each season consists of \({N}_{\mathrm{team}}\times ({N}_{\mathrm{team}}-1)/2\) matches. LCA continues the league championship for \(S\) seasons, resulting in a total of \(S\times ({N}_{\mathrm{team}}-1)\) weeks of contests.
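
For illustration, a single round-robin schedule of this kind can be generated with the standard "circle method"; this construction is one common realisation of the scheduling in Fig. 2 and is not necessarily the exact routine of Ref. [38].

```python
def round_robin(n_team):
    """Single round-robin schedule via the circle method (assumes an even
    number of teams): n_team - 1 weeks of n_team / 2 matches each, i.e.
    n_team * (n_team - 1) / 2 matches per season."""
    teams = list(range(n_team))
    weeks = []
    for _ in range(n_team - 1):
        weeks.append([(teams[i], teams[n_team - 1 - i]) for i in range(n_team // 2)])
        # fix the first team and rotate the remaining ones by one position
        teams = [teams[0], teams[-1]] + teams[1:-1]
    return weeks

# e.g. round_robin(4) -> [[(0, 3), (1, 2)], [(0, 2), (3, 1)], [(0, 1), (2, 3)]]
```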

Fig. 2 League scheduling algorithm in LCA for the case of \(N_{\mathrm{team}} = 8\) [38]

When the teams play in pairs, the outcome of a match for a given team can be a win, a loss, or a tie. To determine the outcome of each match, LCA employs a probabilistic approach. Let us assume team \(i\) with formation \({{\varvec{X}}}_{i}^{t}\) plays against team \(j\) with formation \({{\varvec{X}}}_{j}^{t}\). Then, the winning probability of team \(i\) is expressed as follows:

$$p_{i}^{t} = \frac{{f\left( {{\varvec{X}}_{j}^{t} } \right) - \hat{f}}}{{f\left( {{\varvec{X}}_{j}^{t} } \right) + f\left( {{\varvec{X}}_{i}^{t} } \right) - 2\hat{f}}}$$
(18)

where \(\widehat{f}\) is an arbitrary ideal value, set to the best objective function value obtained by the teams until week \(t\) (i.e. \(\widehat{f}={\mathrm{min}}_{i=1, 2, \dots ,{N}_{\mathrm{team}}}\left\{f({{\varvec{B}}}_{i}^{t})\right\}\)). Based on this definition, the winning probability of team \(j\) against team \(i\) is \({p}_{j}^{t}=1-{p}_{i}^{t}\).

In real sporting competitions, coaches evaluate the strengths and weaknesses of their teams to improve their performance. They also need to evaluate the opportunities and threats posed by the opposing teams. Coaches try to take advantage of opportunities while monitoring their opponents to protect their teams against possible external threats. The strengths and weaknesses are called internal factors, while the opportunities and threats are external factors. To simulate these concepts, LCA performs a match analysis to update the formations of the teams. Let us assume team \(i\) plays against team \(l\) based on the league schedule. The new formation of team \(i\), denoted by \({{\varvec{X}}}_{i}^{t+1}=[{x}_{i,1}^{t+1}, {x}_{i,2}^{t+1}, \dots , {x}_{i,n}^{t+1}]\), depends on the previous experiences of both teams \(i\) and \(l\) in the previous week \(t\). Let \(j\) be the team that played against team \(i\) at week \(t\), and \(m\) the team that played against team \(l\) at week \(t\). With these definitions, LCA updates the formation of team \(i\) as follows:

  • If teams \(i\) and \(l\) both won their matches at week \(t\):

    $$x_{i,k}^{t + 1} = b_{i,k}^{t} + y_{i,k}^{t} \left( {\psi_{1} {\text{rand}}\left( {b_{i,k}^{t} - b_{m,k}^{t} } \right) + \psi_{1} {\text{rand}}\left( {b_{i,k}^{t} - b_{j,k}^{t} } \right)} \right)$$
    (19)
  • If team \(i\) won its match at week \(t\), but team \(l\) lost:

    $$x_{i,k}^{t + 1} = b_{i,k}^{t} + y_{i,k}^{t} \left( {\psi_{2} {\text{rand}}\left( {b_{m,k}^{t} - b_{i,k}^{t} } \right) + \psi_{1} {\text{rand}}\left( {b_{i,k}^{t} - b_{j,k}^{t} } \right)} \right)$$
    (20)
  • If team \(i\) lost its match at week \(t\), but team \(l\) won:

    $$x_{i,k}^{t + 1} = b_{i,k}^{t} + y_{i,k}^{t} \left( {\psi_{2} {\text{rand}}\left( {b_{i,k}^{t} - b_{m,k}^{t} } \right) + \psi_{2} {\text{rand}}\left( {b_{j,k}^{t} - b_{i,k}^{t} } \right)} \right)$$
    (21)
  • If teams \(i\) and \(l\) both lost their matches at week \(t\):

    $$x_{i,k}^{t + 1} = b_{i,k}^{t} + y_{i,k}^{t} \left( {\psi_{2} {\text{rand}}\left( {b_{m,k}^{t} - b_{i,k}^{t} } \right) + \psi_{2} {\text{rand}}\left( {b_{j,k}^{t} - b_{i,k}^{t} } \right)} \right)$$
    (22)

In the above equations, \({b}_{i,k}^{t}\), \({b}_{m,k}^{t}\), and \({b}_{j,k}^{t}\) represent the \(k\)th variable of the best formation experienced by teams \(i\), \(m\), and \(j\) until week \(t\), respectively; \({\psi }_{1}\) is the approach coefficient, which controls the acceleration of team \(i\) towards the winner; and \({\psi }_{2}\) is the retreat coefficient, which controls the retreat of team \(i\) from the loser. In Eqs. (19), (20), (21) and (22), \({y}_{i,k}^{t}\) is a binary variable that determines whether the \(k\)th variable of vector \({{\varvec{B}}}_{i}^{t}\) should be changed. Let \({{\varvec{Y}}}_{i}^{t}=[{y}_{i,1}^{t}, {y}_{i,2}^{t}, \dots , {y}_{i,n}^{t}]\) be the binary change array, in which a unit (zero) value for a given element means the corresponding variable in \({{\varvec{B}}}_{i}^{t}\) would (would not) change. LCA uses a truncated geometric distribution [66] to calculate the number of ones in the array \({{\varvec{Y}}}_{i}^{t}\), denoted \(q_{i}^{t}\), as follows:

$$q_{i}^{t} = \frac{{\ln \left( {1 - \left( {1 - \left( {1 - p_{c} } \right)^{{n - q_{0} + 1}} } \right){\text{rand}}} \right)}}{{\ln \left( {1 - p_{c} } \right)}} + q_{0} - 1$$
(23)

where \(p_{c}\) is a control parameter and \(q_{0}\) is the lower bound for \({q}_{i}^{t}\), typically taken as 1 [38]. Greater values of \({p}_{c}\) result in smaller changes to the vector \({{\varvec{B}}}_{i}^{t}\), and vice versa.
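
The sampling of \(q_i^t\) and the construction of the binary change array can be sketched as follows. Eq. (23) is real-valued, so rounding it up to an integer here is our reading of the truncation; see Ref. [38] for the exact treatment used in LCA.

```python
import math
import random

def sample_q(n, p_c, q0=1, rng=random):
    """Sample the number of changed variables q_i^t from the truncated
    geometric distribution of Eq. (23)."""
    u = rng.random()
    ratio = math.log(1 - (1 - (1 - p_c) ** (n - q0 + 1)) * u) / math.log(1 - p_c)
    return max(q0, math.ceil(ratio) + q0 - 1)   # larger p_c -> fewer changes

def change_mask(n, q, rng=random):
    """Binary change array Y_i^t with exactly q ones at random positions; the
    ones mark which components of B_i^t are updated via Eqs. (19)-(22)."""
    ones = set(rng.sample(range(n), q))
    return [1 if k in ones else 0 for k in range(n)]

# e.g. with n = 32 variables and p_c = 0.7, most samples change only a few variables:
# y = change_mask(32, sample_q(32, 0.7))
```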

Algorithm 6 shows the detailed pseudo-code of LCA.

2.6 Optics Inspired Optimisation (OIO)

OIO, recently developed by Kashan [44], is a meta-heuristic algorithm inspired by the optical characteristics of spherical mirrors in physics. In OIO, each solution vector is modelled as an artificial light point, and the surface of the objective function is treated as a spherical mirror that reflects the incident ray according to the governing equations of optics.

Let us assume \(NO\) light points represented by vectors \({{\varvec{X}}}_{i}^{t}=[{x}_{i,1}^{t}, {x}_{i,2}^{t}, \dots , {x}_{i,n}^{t}]\) for a minimisation problem. For each light point \({{\varvec{X}}}_{i}^{t}\), OIO randomly selects another solution \({{\varvec{X}}}_{j}^{t}=[{x}_{j,1}^{t}, {x}_{j,2}^{t}, \dots , {x}_{j,n}^{t}]\) from the population as an artificial mirror, in which \(j=1, 2, \dots , NO\) and \(j\ne i\). Depending on the objective function values of light points \(i\) and \(j\), the solution \({{\varvec{X}}}_{j}^{t}\) is treated as a concave or convex mirror for light point \({{\varvec{X}}}_{i}^{t}\): if \(f\left({{\varvec{X}}}_{i}^{t}\right)>f({{\varvec{X}}}_{j}^{t})\), then \({{\varvec{X}}}_{j}^{t}\) acts as a concave mirror for light point \({{\varvec{X}}}_{i}^{t}\); otherwise, it acts as a convex mirror. OIO finds the image position of the artificial light point \({\varvec{X}}_{i}^{t}\) formed by the artificial mirror \({\varvec{X}}_{j}^{t}\) as follows:

$${\varvec{I}}_{i}^{t} = {\varvec{X}}_{j}^{t} - \frac{{r_{j}^{t} }}{{2p_{i,j}^{t} - r_{j}^{t} }}\left( {{\varvec{X}}_{i}^{t} - {\varvec{X}}_{j}^{t} } \right)$$
(24)

where \({{\varvec{I}}}_{i}^{t}\) represents the image position of the artificial light point \({{\varvec{X}}}_{i}^{t}\), \({p}_{i,j}^{t}\) indicates the distance between the artificial light point \(i\) and the artificial mirror \(j\) on the objective function axis, and \({r}_{j}^{t}\) is the curvature radius of the artificial mirror \({{\varvec{X}}}_{j}^{t}\). The parameter \({p}_{i,j}^{t}\) is defined as follows:

$$p_{i,j}^{t} = s_{i,j}^{t} - f\left( {{\varvec{X}}_{j}^{t} } \right)$$
(25)

where \({s}_{i,j}^{t}\) is the position of artificial light point \(i\) on the objective function axis. If the artificial mirror is concave, \({s}_{i,j}^{t}\) is randomly selected from \(U\left[f\left({{\varvec{X}}}_{i}^{t}\right),f\left({{\varvec{X}}}_{i}^{t}\right)+{d}_{\infty }\right]\), where \(U\left[a,b\right]\) represents a random value between \(a\) and \(b\), and \({d}_{\infty }\) indicates the physical infinity, which can be any positive value. In OIO, the physical infinity is initially set to \({d}_{\infty }=\left|{\mathrm{max}}_{l=1, 2, \dots ,NO}\,f\left({{\varvec{X}}}_{l}^{t}\right)\right|\) and is updated in the artificial spherical aberration stage of the algorithm. For a convex mirror, \({s}_{i,j}^{t}\) is randomly assigned from \(U\left[f\left({{\varvec{X}}}_{j}^{t}\right),f\left({{\varvec{X}}}_{j}^{t}\right)+{d}_{\infty }\right]\).

[Algorithm 6: Pseudo-code of LCA]

The curvature radius of the artificial mirror \({{\varvec{X}}}_{j}^{t}\) is calculated as follows:

$$r_{j}^{t} = m_{j}^{t} - f\left( {{\varvec{X}}_{j}^{t} } \right)$$
(26)

In this equation, \({m}_{j}^{t}\) is the position of the centre of curvature of artificial mirror \(j\), which is randomly determined from \(U\left[f\left({{\varvec{X}}}_{i}^{t}\right),f\left({{\varvec{X}}}_{i}^{t}\right)+{d}_{\infty }\right]\) for concave mirrors and from \(U\left[f\left({{\varvec{X}}}_{i}^{t}\right)-{d}_{\infty },f\left({{\varvec{X}}}_{i}^{t}\right)\right]\) for convex mirrors.

The image vector \({{\varvec{I}}}_{i}^{t}\) represents a new solution to the problem, which differs from the artificial light point \({{\varvec{X}}}_{i}^{t}\) in all dimensions. OIO constructs a new light point (solution) \({{\varvec{X}}}_{i}^{t+1}\) by keeping the number of changes to vector \({{\varvec{X}}}_{i}^{t}\) below \(n\). Let us assume \({{\varvec{U}}}_{i}^{t}\leftarrow {{\varvec{X}}}_{i}^{t}\). OIO uses Eq. (23) to determine the number of changes \({q}_{i}^{t}\); then, \({q}_{i}^{t}\) variables are randomly selected from \({{\varvec{I}}}_{i}^{t}\), and their values are assigned to the corresponding variables of vector \({{\varvec{U}}}_{i}^{t}\). Finally, if the objective function value of \({{\varvec{U}}}_{i}^{t}\) is better than that of \({{\varvec{X}}}_{i}^{t}\), then \({{\varvec{U}}}_{i}^{t}\) becomes the new artificial light point, i.e. \({{\varvec{X}}}_{i}^{t+1}={{\varvec{U}}}_{i}^{t}\); otherwise, the algorithm keeps the previous light point for the next iteration, i.e. \({{\varvec{X}}}_{i}^{t+1}={{\varvec{X}}}_{i}^{t}\).
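
The image construction of Eqs. (24)–(26) can be sketched as follows. The branch conditions and sampling intervals follow the text above; the omission of the spherical aberration safeguard (discussed below), which keeps \(2p - r\) away from zero, is our simplification.

```python
import numpy as np

def oio_image(Xi, Xj, f, d_inf, rng=None):
    """Sketch of the image construction of Eqs. (24)-(26). Xj acts as a
    concave mirror when f(Xi) > f(Xj) and as a convex one otherwise."""
    rng = rng or np.random.default_rng()
    fi, fj = f(Xi), f(Xj)
    if fi > fj:                                  # concave mirror
        s = rng.uniform(fi, fi + d_inf)          # light point on the function axis
        m = rng.uniform(fi, fi + d_inf)          # centre of curvature
    else:                                        # convex mirror
        s = rng.uniform(fj, fj + d_inf)
        m = rng.uniform(fi - d_inf, fi)
    p = s - fj                                   # Eq. (25)
    r = m - fj                                   # Eq. (26)
    return Xj - (r / (2 * p - r)) * (Xi - Xj)    # image position, Eq. (24)
```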

It should be noted that the value of \({m}_{j}^{t}\) must satisfy a certain condition. In some cases, the difference between the objective function values of artificial light point \(i\) and artificial mirror \(j\) is very large. In such cases, the image of light point \(i\) in artificial mirror \(j\) is blurry, which can cause premature convergence of OIO from a numerical viewpoint.

To avoid this spherical aberration phenomenon, OIO repeatedly updates the value of the parameter \({m}_{j}^{t}\). Interested readers are referred to Ref. [44] for more details on the spherical aberration mechanism of OIO. Algorithm 7 shows the detailed pseudo-code of the OIO algorithm.

[Algorithm 7: Pseudo-code of the OIO algorithm]

3 Problem formulation

As mentioned earlier, buckling capacity maximisation for the optimum layup of laminated composite plates is an important problem that has been widely investigated. Laminated composite plates subjected to in-plane compressive loads are susceptible to losing their stability; hence, the buckling capacity of laminated composites under in-plane compressive loads should be considered in the design process. Let us assume a simply supported laminated composite plate, shown in Fig. 3, with dimensions \(a\) and \(b\) in the \(x\) and \(y\) directions, respectively, in which \({N}_{x}\) and \({N}_{y}\) are the in-plane compressive loads in the \(x\) and \(y\) directions, respectively. According to the classical laminated plate theory, the buckling load factor can be expressed as follows [67]:

$$\lambda_{b} \left( {p,q} \right) = \pi^{2} \left[ {\frac{{p^{4} D_{11} + 2\left( {D_{12} + 2D_{66} } \right)\left( {rpq} \right)^{2} + \left( {rq} \right)^{4} D_{22} }}{{\left( {ap} \right)^{2} N_{x} + \left( {raq} \right)^{2} N_{y} }}} \right]$$
(27)

where \({\lambda }_{b}\) is the buckling load factor, \(r=a/b\) is the aspect ratio (the ratio of length to width), \({D}_{ij}\) are the bending stiffnesses of the composite plate, \(a\) and \(b\) are the dimensions of the laminate in the \(x\) and \(y\) directions, respectively, and \(p\) and \(q\) are the numbers of half-waves in the \(x\) and \(y\) directions, respectively. It should be noted that the terms \({D}_{16}\) and \({D}_{26}\) are neglected in Eq. (27), as they are zero for specially orthotropic laminates and very small for symmetrically balanced laminates. Thus, the buckling load factor obtained from Eq. (27) is exact for specially orthotropic laminates and approximately correct for symmetrically balanced laminates. It is clear from Eq. (27) that the buckling load factor is a function of the material properties, length, width, stacking sequence, applied in-plane compressive loads, and the values of \(p\) and \(q\). Different values of \(p\) and \(q\) result in different buckling load factors; the smallest buckling load factor yielded by Eq. (27) over the different values of \(p\) and \(q\) is the critical buckling load factor of the laminated composite plate.
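
For illustration, the search for the critical buckling load factor over the half-wave numbers can be sketched as follows. The bending stiffnesses \(D_{ij}\) are assumed to be precomputed from the candidate stacking sequence via classical lamination theory, and the cutoff of five half-waves per direction is our illustrative choice.

```python
import numpy as np

def critical_buckling_factor(D11, D12, D22, D66, a, r, Nx, Ny, max_pq=5):
    """Evaluate Eq. (27) over candidate half-wave numbers (p, q) and return
    the smallest value, i.e. the critical buckling load factor."""
    lam_min = np.inf
    for p in range(1, max_pq + 1):
        for q in range(1, max_pq + 1):
            num = p**4 * D11 + 2.0 * (D12 + 2.0 * D66) * (r * p * q)**2 + (r * q)**4 * D22
            den = (a * p)**2 * Nx + (r * a * q)**2 * Ny
            lam_min = min(lam_min, np.pi**2 * num / den)   # Eq. (27)
    return lam_min
```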

Fig. 3 Laminated composite plate

The optimum stacking sequence problem of laminated composite plates under in-plane compressive loads for maximum buckling capacity can be mathematically expressed as follows:

Find \({\varvec{X}}=\left[{x}_{1}, {x}_{2}, \dots , {x}_{n}\right]\)

To maximise: \(f\left({\varvec{X}}\right)={\lambda }_{b}\)

Subject to:

$$x_{i} \in {\varvec{S}} = \left[ {s_{1} , s_{2} , \ldots , s_{p} } \right]$$
(28)

where \(f\left(\cdot\right)\) represents the objective function of the problem, \({\varvec{X}}=\left[{x}_{1}, {x}_{2}, \dots , {x}_{n}\right]\) is the vector of design variables, \({x}_{i}\) is the fibre orientation of the \(i\)th ply, which should be selected from a given discrete set, \(n\) is the number of design variables, \({\varvec{S}}\) is the vector of allowable fibre orientations, and \(p\) is the number of allowable fibre orientations. It should be noted that for a composite laminate with \(m\) layers, the number of design variables is \(m/2\) due to symmetry (i.e. \(n=m/2\)).

4 Numerical results

In this section, the performance of the different meta-heuristics in the buckling load maximisation of a 64-layer laminated composite plate is investigated. The plate is assumed to be a symmetrically balanced laminate with simply supported boundary conditions. The design variables are the stacking sequences of the layers, which should be selected from the discrete set \(\left[{0}_{2}, \pm 15, \pm 30, \pm 45, \pm 60, \pm 75, {90}_{2}\right]\). In previous studies, the possible fibre orientations were taken as \(\left[{0}_{2}, \pm 45,{90}_{2}\right]\) [27, 31]; this study considers more possible fibre orientations to make the problem more challenging for the algorithms. It is assumed that the laminated composite plate is made of graphite/epoxy material with the mechanical properties presented in Table 1. The length of the plate is taken as 0.508 m, with the various aspect ratios and load cases listed in Table 2.

Table 1 Mechanical properties of the graphite/epoxy [27]
Table 2 Different load cases, lengths, and widths considered for 64-layer laminated composite plate

4.1 Internal parameters

As described in Sect. 2, the algorithms have a set of internal parameters that can significantly affect their performance. To find the best possible values for these parameters, several performance sensitivity analyses were carried out, based on a trial-and-error approach, by considering different values for the internal parameters. Table 3 lists the candidate values for the internal parameters of each algorithm alongside the suitable values obtained from the sensitivity analyses; from the different combinations of candidate values, the suitable parameter values were selected based on the best performance exhibited by each algorithm. To keep the article at a manageable size, only the final results of the sensitivity analyses are presented in Table 3. The internal parameters obtained from the sensitivity analyses are used in the following to assess the performance of the different algorithms.

Table 3 Internal parameters of different algorithms and their appropriate values obtained from the sensitivity analyses

4.2 Performance comparisons

To investigate the performance of the different algorithms in the optimum design of the 64-layer laminate, 5000 function evaluations were defined as the termination criterion. Each algorithm was run for 30 independent runs, and the best, average, and worst results alongside the standard deviations were reported for the different load cases.

For the different load cases, Tables 4, 5, 6, 7, 8, 9, 10, 11 and 12 present the optimum stacking sequences corresponding to the maximum buckling load factors obtained by each algorithm, alongside the best, mean, standard deviation, and worst results. At first glance, it is observable from Tables 4, 5, 6, 7, 8, 9, 10, 11 and 12 that no algorithm performs better than the others in all load cases, and that the efficiency of each algorithm differs from one load case to another. This observation can be explained by the “No Free Lunch” theorem [68], which states that it is almost impossible to develop a general strategy that solves different problem types in an equally efficient manner. With this in mind, the performance of the algorithms is investigated in more detail in this section to find out which algorithms provide the most promising results for the different load cases of the laminated composite layup optimisation problem.

Table 4 Optimum stacking sequences and statistical results obtained by different algorithms for load case 1
Table 5 Optimum stacking sequences and statistical results obtained by different algorithms for load case 2
Table 6 Optimum stacking sequences and statistical results obtained by different algorithms for load case 3
Table 7 Optimum stacking sequences and statistical results obtained by different algorithms for load case 4
Table 8 Optimum stacking sequences and statistical results obtained by different algorithms for load case 5
Table 9 Optimum stacking sequences and statistical results obtained by different algorithms for load case 6
Table 10 Optimum stacking sequences and statistical results obtained by different algorithms for load case 7
Table 11 Optimum stacking sequences and statistical results obtained by different algorithms for load case 8
Table 12 Optimum stacking sequences and statistical results obtained by different algorithms for load case 9

From Tables 4, 5, 6, 7, 8, 9, 10, 11 and 12, it can be seen that the algorithms exhibit quite similar performance for the second, fifth, and eighth load cases, whereas their performances differ for the other load cases. The numerical results can be interpreted in different ways. In terms of the best results, PSO, CA, OIO, and LCA all find the maximum buckling load factor for the first load case. LCA found a better best solution than all other algorithms in the third load case, while PSO and BBO performed better than the others for the fourth load case in terms of the best solution. Moreover, the best solutions obtained by PSO, BBO, and LCA are better than those of the other algorithms in the ninth load case. For the remaining load cases, all algorithms were able to find the optimum buckling load factors.

If the results in Tables 4, 5, 6, 7, 8, 9, 10, 11 and 12 are compared in terms of standard deviations, the standard deviations yielded by the BBO, OIO, and LCA algorithms for the second, fifth, and eighth load cases are equal to zero, which indicates that these algorithms find the optimum solutions in every independent run. Comparison of the standard deviations for the other load cases suggests that LCA and OIO provide the lowest standard deviations for almost all of the remaining load cases except the seventh, which makes their performance more impressive than that of the others. Compared with the other algorithms, the efficiency of LCA and OIO may stem from the fact that their search operators keep the diversity of the population at a high level, avoiding premature convergence to local optima in the search space. Although the DE/current-to-rand/1 algorithm yielded the lowest standard deviation for the seventh load case, LCA and OIO are still competitive in this load case as well.

The worst result obtained from the 30 independent runs of each algorithm is another important criterion, showing how well the algorithms perform in worst-case scenarios. For the second, fifth, and eighth load cases, BBO, OIO, and LCA performed better than the other algorithms in terms of worst results. For the remaining load cases except the seventh and last, LCA obtained better worst results than all other algorithms; DE/current-to-rand/1 and OIO provided better worst buckling load factors for the seventh and last load cases, respectively.

Figure 4 presents boxplots to provide an intuitive statistical performance comparison between the algorithms for the first and fourth load cases. These load cases were selected because they are challenging enough to show the statistical differences between the algorithms clearly. For the first load case, Fig. 4 shows that the ranges and variances of the results obtained by PSO, OIO, and LCA are significantly smaller than those of the other algorithms, which demonstrates the stability of these algorithms in finding maximum buckling load factors. For the fourth load case, the superiority of PSO, OIO, and LCA is still observable, although BBO also exhibited competitive performance. It can also be seen that the mean values obtained by PSO, OIO, and LCA for the first and fourth load cases are distributed close to the optimum buckling load factors. The overall conclusion that can be drawn from Fig. 4 is that LCA is statistically more stable than all other algorithms, although the statistical performances of PSO and OIO are also remarkable.

Fig. 4 Box plots of different algorithms: a first load case, b fourth load case

To investigate the convergence properties of the algorithms, the average convergence diagrams for the first and fourth load cases are illustrated in Fig. 5. It can be observed that LCA and PSO exhibit faster convergence rates than the other algorithms. Although the convergence rate of OIO is slower, this algorithm finds higher-quality solutions than most of the other algorithms as the iterations proceed. This may reflect the fact that the trade-offs between the exploration and exploitation phases in LCA and PSO are more appropriately implemented, making them computationally more efficient. It seems that OIO switches from the exploration phase to the exploitation phase with a significant delay compared to LCA and PSO; despite this delay, OIO is still capable of converging to better final solutions than most of the investigated algorithms. It is also observable from Fig. 5 that the convergence behaviours of the different DE variants are somewhat similar.

Fig. 5 Average convergence diagrams of different algorithms: a first load case, b fourth load case

4.3 Deep statistical comparison

Recent research trends in meta-heuristics reveal that comparing the efficiency of algorithms based only on basic statistical parameters, such as best, mean, worst, and standard deviation, is not statistically sufficient to draw proper conclusions [69, 70]. Various statistical tests in the literature can be applied to evaluate, compare, and rank the performance of meta-heuristics on a given problem [69]. Recently, Eftimov et al. [70] proposed the Deep Statistical Comparison (DSC) method for performance comparison of algorithms. The approach uses the two-sample Kolmogorov–Smirnov (KS) test for pair-wise performance comparison between each pair of algorithms; the algorithms are then ranked based on the results of the two-sample KS test. In this study, to provide a fair comparison, the DSC method is employed to rank the performance of the investigated algorithms on the optimum layup problem of laminated composite plates.

The two-sample KS test is a nonparametric statistical test that determines whether two sets of data come from the same continuous distribution. The test assumes the null hypothesis that the results obtained from each pair of algorithms come from the same continuous distribution; in other words, that the performances of each pair of algorithms are statistically equivalent. Let \({\alpha }_{\mathrm{KS}}\) be the significance level used by the KS test, which indicates the probability threshold for acceptance or rejection of the null hypothesis. For the selected significance level \({\alpha }_{\mathrm{KS}}\), the KS test computes a \({p}_{\mathrm{value}}\) for each pair of algorithms. If \({p}_{\mathrm{value}}\) is smaller than the significance level \({\alpha }_{\mathrm{KS}}\), the null hypothesis is rejected; otherwise, it is accepted. Acceptance of the null hypothesis means there is no significant difference between the two algorithms and that they perform statistically the same; rejection means the two algorithms perform statistically differently. The DSC approach uses these \({p}_{\mathrm{value}}\)s to rank the performance of the algorithms; for details of the ranking formulation, interested readers are referred to Ref. [70].
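
The pair-wise KS step can be sketched with SciPy as follows. The dictionary-based interface and the omission of the subsequent DSC ranking step, which follows Ref. [70], are our illustrative simplifications.

```python
from itertools import combinations
from scipy.stats import ks_2samp

def pairwise_ks(results, alpha=0.05):
    """Pair-wise two-sample KS tests, the pre-processing step of DSC.
    `results` maps an algorithm name to its list of final objective values
    over the 30 independent runs. Returns, for each pair, whether the null
    hypothesis (statistically equivalent performance) is retained."""
    same = {}
    for a, b in combinations(results, 2):
        p_value = ks_2samp(results[a], results[b]).pvalue
        same[(a, b)] = p_value >= alpha   # retain H0 iff p_value >= alpha
    return same

# Hypothetical usage: pairwise_ks({"LCA": lca_runs, "OIO": oio_runs, "PSO": pso_runs})
```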

The selection of \({\alpha }_{\mathrm{KS}}\) plays an important role in the acceptance or rejection of the null hypothesis; inappropriate values for this parameter could lead to wrong conclusions about the performance of the algorithms. If \({\alpha }_{\mathrm{KS}}\) were taken as 1, the null hypothesis would be rejected for all values of \({p}_{\mathrm{value}}\), which would mean the performances of all algorithms are always statistically different. The value of \({\alpha }_{\mathrm{KS}}\) is typically taken between 0.05 and 0.1 [70]; in this study, the significance level is taken as \({\alpha }_{\mathrm{KS}}=0.05\). The \({p}_{\mathrm{value}}\)s obtained from the KS test for each pair of algorithms in the different load cases are illustrated in Fig. 6. The diagonal elements in Fig. 6 are, of course, equal to one: each algorithm comes from the same distribution as itself. Figure 6 reveals some interesting information about the differences between the algorithms in each load case. Firstly, the null hypothesis is accepted for all algorithms except DE/best/1 in load cases 2, 5, and 8. This means that there are significant differences between the performance of DE/best/1 and the others in these load cases, while there are no significant differences between most of the other algorithms in load cases 2, 5, and 8, whose performances are statistically equivalent. In the remaining load cases, which are more challenging than load cases 2, 5, and 8, the null hypotheses are rejected and there are significant differences between the performances of the algorithms.

Fig. 6 \(p\)-values obtained from the two-sample Kolmogorov–Smirnov (KS) test for each pair of algorithms in different load cases

Finally, Table 13 lists the performance rankings of the algorithms for the different load cases. Algorithms with the same rank in a given load case have statistically equivalent performances. From this table, it can be concluded that the LCA algorithm performs better than or equal to the other algorithms in all load cases except load case 7. Among the DE variants, DE/current-to-rand/1 performs best, as it is ranked first in load cases 6 and 7. The last column of Table 13 shows the overall ranking of the algorithms calculated from the overall scores obtained across the load cases, in which LCA and OIO are ranked as the two most efficient of the nine investigated algorithms.

Table 13 Ranking of different algorithms obtained by the DSC approach for different load cases

5 Concluding remarks

The performance of nine meta-heuristic algorithms, including PSO, different variants of DE, CA, BBO, OIO, and LCA, was assessed for the optimum layup problem of laminated composite plates. The buckling capacity maximisation of a 64-layer laminated composite plate under various load cases was investigated as the benchmark problem, with the stacking sequences of the layers as design variables. The performance of the algorithms in finding maximum buckling load factors was evaluated in terms of basic statistical parameters, including best, mean, standard deviation, and worst results. The numerical results revealed that the ranges and variances of the results obtained by PSO, OIO, and LCA are significantly smaller than those of the other algorithms, which demonstrates the stability of these algorithms in finding maximum buckling load factors.

To provide a fair comparison between the algorithms, the Deep Statistical Comparison (DSC) method was employed to rank their performance. The DSC approach uses a nonparametric two-sample Kolmogorov–Smirnov (KS) test for pair-wise performance comparison between each pair of algorithms, under the null hypothesis that the performances of each pair of algorithms are statistically equivalent. The results of the KS test at the significance level \({\alpha }_{\mathrm{KS}}=0.05\) revealed significant differences between the performances of the algorithms for most of the load cases. The performance rankings obtained from the DSC method suggested that the LCA algorithm performs better than or equal to the other algorithms in most of the load cases. The overall ranking of the algorithms, calculated from the overall scores obtained across the load cases, was as follows: LCA > OIO > PSO > BBO > DE/current-to-rand/1 > DE/rand-to-best/1 > CA > DE/current-to-best/1 > DE/best/1. The convergence diagrams obtained from the 30 independent runs revealed that PSO and LCA exhibit faster convergence rates than the other algorithms, which may reflect the fact that the trade-offs between the exploration and exploitation phases in LCA and PSO are more adequately implemented. Despite its remarkable performance in terms of final results, the convergence rate of OIO is slower than that of the other algorithms; OIO was observed to switch from the exploration phase to the exploitation phase with a significant delay compared to LCA and PSO.