1 Introduction

Safety is central to structural engineering (Elms, 1999), and structural reliability describes the level of structural safety. Structural reliability analysis usually involves many random variables, such as the geometry and material properties of a structure and the applied loads. The contributions of these parameters to structural reliability differ: some are very important, whereas others may be insignificant. The uncertainty in the parameters should be taken into account in the reliability analysis. The most probable point (MPP)-based reliability method is one of the most important first-order reliability methods (FORMs). The MPP, often referred to as the design point, is a particular point in the design space, defined by a limit state, that can be related (at least approximately) to the probability of system failure, and it consists of a number of system parameters. Moreover, the time required to compute the failure probability increases rapidly with the number of variables. How to screen out insignificant random variables, and thus improve computational efficiency, is one of the important issues in the sensitivity analysis (SA) of structural reliability. SA supports consistent decisions about the relative significance of system parameters to reliability and has played a key role in structural reliability design (Xiao et al., 2011). System parameters that are not sensitive for reliability can be treated as constants and their randomness neglected; thus, the time consumed in reliability analysis can be significantly reduced.

Great efforts have been made in the field of SA of structural reliability (Madsen, 1988; Bjerager and Krenk, 1989; Karamchandani and Cornell, 1992). Almost all studies are based on the gradient of the limit-state function. In certain cases, gradient-based methods are unavailable or computationally cumbersome, for example, when the limit-state function is complicated or its derivative is very difficult to obtain. In recent years, many SA techniques have become available for multidisciplinary analysis, in chemical engineering, environmental sciences, and risk analysis (Du et al., 2008; Xu and Gertner, 2008; Zhang and Huang, 2010; Zhang et al., 2010; Chakraborty et al., 2012).

In this study, particle swarm optimization (PSO) is employed to calculate the Hasofer-Lind reliability index. PSO is an evolutionary computation technique based on simulating the social behaviors of flocks of birds and schools of fish (Kennedy and Eberhart, 1995a; 1995b). As a relatively new member of the evolutionary algorithm family, PSO shares many similarities with other evolutionary computation techniques. It is a zero-order optimization algorithm which does not require the derivative of the objective function, and it has been used to solve a range of optimization problems, such as neural network training and function minimization (Eberhart and Hu, 1999; Engelbrecht and Ismail, 1999; Shi and Eberhart, 1999; van den Bergh and Engelbrecht, 2000). Elegbede (2005) used PSO to calculate the Hasofer-Lind reliability index, the minimum distance from the origin to the limit-state surface in standard normal space (a constrained optimization problem), and indicated that PSO can be considered an efficient addition to the gradient-based algorithms in the literature, which do not ensure detection of the global optimum.

This study demonstrates that the convergence rate of a random variable during the optimization process of PSO reflects the sensitivity of the objective function with respect to that variable. The origin and specific algorithms of PSO are elaborated. Furthermore, a novel SA method, namely the relative convergence rate based on PSO, is proposed; fluctuations in the convergence rate of a variable during the optimization process are smoothed by a refined optimized group. Then, the detailed calculation process of the relative convergence rate method is illustrated. Finally, three examples are employed to verify the validity of the relative convergence rate method.

2 Structural reliability and PSO

A fundamental problem in structural reliability theory is the computation of the multi-fold probability integral

$${P_{\rm{f}}} = {\rm{Prob}}[g({{X}}) \leq 0] = \int\nolimits_{g({{X}}) \leq 0} {\bar f({{X}}){\rm{d}}{{X}}},$$
(1)

where Pf is the probability of failure; \(X = {[{X_1},\;{X_2},\; \cdots, \;{X_n}]^{\rm{T}}}\) is a vector of random variables representing uncertain structural quantities; \(\bar f({{X}})\) denotes the joint probability density function of X; g(X) is the performance function, such that g(X)<0, g(X)=0, and g(X)>0 represent the failure state, the limit state, and the safe state of the structural system, respectively; and g(X)≤0 (the domain of integration) denotes the failure set. The difficulty of computing this probability has led to the development of various approximation methods, of which the FORM is considered to be one of the most reliable (Zhao and Ono, 1999).
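
Because the integral in Eq. (1) is rarely tractable in closed form, sampling methods are often used as reference solutions. The following minimal sketch, assuming independent normal variables and a user-supplied performance function g, estimates Pf as the fraction of samples falling in the failure set g(X)≤0; the limit state in the last lines is hypothetical and chosen only so the result can be checked analytically.

```python
import numpy as np

def crude_monte_carlo_pf(g, means, stds, n_samples=200_000, seed=0):
    """Estimate Pf = Prob[g(X) <= 0] for independent normal X (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(means, stds, size=(n_samples, len(means)))  # one row per realization
    failures = np.apply_along_axis(g, 1, x) <= 0.0             # indicator of the failure set
    return failures.mean()

# Hypothetical limit state g(X) = 3 - X1 - X2 with X1, X2 ~ N(0, 1):
pf = crude_monte_carlo_pf(lambda x: 3.0 - x[0] - x[1], means=[0.0, 0.0], stds=[1.0, 1.0])
print(pf)   # exact value is Phi(-3/sqrt(2)), about 0.017
```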

In FORM, the Hasofer-Lind reliability index is extensively used in the structural reliability field and is defined as the minimum distance from the origin to the limit-state surface in standard normal space. Therefore, the reliability analysis is transformed into a constrained optimization problem, that is

$$\begin{array}{*{20}c} {\underset {u}{{{\rm{Minimize}}\quad }}f = {{\left({\sum\limits_{i = 1}^n {u_i^2} } \right)}^{1/2}},\quad \quad \quad \quad \quad \quad \;\;}\\ {{\rm{Subject}}\;{\rm{to}}\quad g({{X}}) = g({F^{ - 1}}({{U}})) = G({{U}}) = 0,} \end{array}$$
(2)

where \(u_i\) is a particular realization of the corresponding standard normal variable; U=F(X), where F is the transformation from the original space to standard normal space; and G(U) is the limit-state function in standard normal space. Three main transformation methods for solving Eq. (2) have been summarized by Elegbede (2005). The solution u* of Eq. (2) is the MPP and enables the calculation of the reliability index, βHL, as

$${\beta _{{\rm{HL}}}} = \left\Vert {{{{u}}{\ast}}} \right\Vert {.}$$
(3)

The Hasofer-Lind reliability index enables a first-order approximation of the reliability through the relationship Pf≈Φ(−βHL), which becomes exact (Pf=Φ(−βHL)) when the limit-state function is linear in standard normal space, where Φ is the standard normal distribution function.
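
Given βHL, this first-order approximation is a one-line computation; a small sketch using SciPy's standard normal distribution:

```python
from scipy.stats import norm

def pf_form(beta_hl: float) -> float:
    """First-order approximation Pf = Phi(-beta_HL)."""
    return norm.cdf(-beta_hl)

print(pf_form(2.439))   # about 7.4e-3, using the beta_HL reported for example 1 below
```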

PSO has been employed to solve constrained optimization problems successfully (Ray and Liew, 2001; Hu and Eberhart, 2002; Parsopoulos and Vrahatis, 2002; He and Wang, 2007; Zahara and Hu, 2008; Sun et al., 2011), and numerous studies have implemented it to solve practical civil engineering problems (Perez and Behdinan, 2007; Jansen and Perez, 2011; Khajehzadeh et al., 2011). Therefore, PSO is employed in this study to solve the constrained optimization problem and find the MPP. The optimization procedure of PSO is initialized with a population of uniformly random candidate solutions covering the entire search space, namely particles. Each particle has its own position and velocity, and a fitness value assigned by the objective function. According to a few simple rules, the particles adaptively update their positions and velocities to travel around the solution space and search for optima iteratively. When a particle calculates its new position, two prior values are taken into account: the best position the particle itself has achieved so far, \(p_{id}\), and the global best position the population has obtained so far, \(p_{{\rm g}d}\). The core concept of the PSO algorithm is, at each iteration step, to change the velocity of each particle using the independently and randomly weighted \(p_{id}\) and \(p_{{\rm g}d}\) information, and then to update the particles' positions. Three features characterize this algorithm: (1) particles are initialized as a population of uniformly random solutions, (2) particles search for the optimum by updating generations, and (3) the population evolves based on previous generations.

The update of the particles is accomplished by Eqs. (4) and (5) as

$$\begin{array}{*{20}c} {{V_{id}} = w \times {V_{id}} + {c_1} \times {\rm{rand}}() \times ({p_{id}} - {x_{id}})}\\ { + {c_2} \times {\rm{Rand}}() \times ({p_{{\rm{g}}d}} - {x_{id}}),\,\,} \end{array}$$
(4)
$${x_{id}} = {x_{id}} + {V_{id}},$$
(5)

where the subscript d (d=1, 2, …, n) indexes the dimensions of the solution space and the subscript i (i=1, 2, …, N) denotes the ith particle in the swarm population of size N. Eq. (4) calculates a new velocity for each particle (potential solution) based on its previous velocity, \(V_{id}\), the particle's location \(p_{id}\) at which the best fitness has been achieved so far, and the population's global location \(p_{{\rm g}d}\) at which the best fitness has been achieved so far. Each particle's position, \(x_{id}\), in the solution hyperspace is updated by Eq. (5). The two uniform random numbers within the range (0, 1), rand() and Rand(), are independently generated at each iteration, and c1 and c2 are positive learning factors. The use of an inertia weight, w, can provide improved performance (Shi and Eberhart, 1998).

To ensure the convergence of PSO, Eberhart and Shi (2000) proposed to multiply the right side of Eq. (4) by a constriction factor K:

$$\begin{array}{*{20}c} {{V_{id}} = K \times \left[{V_{id}} + {c_1} \times {\rm{rand}}() \times ({p_{id}} - {x_{id}})\right.}\\ { \left. + {c_2} \times {\rm{Rand}}() \times ({p_{{\rm{g}}d}} - {x_{id}})\right],\;} \end{array}$$
(6)
$$K = {2 \over {\left\vert {2 - \varphi - \sqrt {{\varphi ^2} - 4\varphi } } \right\vert }},$$
(7)

where φ=c1+c2 (φ>4), and K is a function of c1 and c2, as reflected in Eq. (7). Fig. 1 shows the typical procedure of PSO. The appropriate population size may differ between optimization problems. Following Eberhart and Shi (2000) and Elegbede (2005), the population size is set to 70, and the learning factors c1 and c2 are both set to 2.05 in this study.
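
For illustration, a minimal sketch of one constricted update step (Eqs. (5)-(7)) with these settings is given below; the swarm arrays and the random generator are assumed to be set up by the caller.

```python
import numpy as np

c1, c2 = 2.05, 2.05                    # learning factors used in this study
phi = c1 + c2                          # phi = 4.1 > 4
K = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))   # Eq. (7): K is about 0.730

def pso_step(x, v, p_best, g_best, rng):
    """One constricted velocity/position update, Eqs. (6) and (5)."""
    r1 = rng.random(x.shape)           # rand(), drawn per particle and dimension
    r2 = rng.random(x.shape)           # Rand(), drawn independently of rand()
    v = K * (v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x))   # Eq. (6)
    return x + v, v                    # Eq. (5): x <- x + V
```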

Fig. 1 Flow chart of PSO

To solve constrained optimization problems, the feasibility-preserving strategy proposed by Hu and Eberhart (2002) is employed to handle constraints, in which two modifications are made to the PSO algorithm: (1) all particles keep only feasible solutions in their memory when updating the memories (\(p_{id}\) and \(p_{{\rm g}d}\)), and (2) all particles are started from feasible solutions during the initialization process (Hu et al., 2003).

In a reliability analysis in civil engineering, since the origin in standard normal space is always in the safe domain, Eq. (2) can be written as

$$\begin{array}{*{20}c} {\underset {u}{{{\rm{Minimize}}}}f = {{\left({\sum\limits_{i = 1}^n {u_i^2} } \right)}^{{1 \over 2}}},\quad \quad \quad \quad \quad }\\ {{\rm{Subject}}\;{\rm{to}}\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;}\\ {\quad g({{X}}) = g({F^{ - 1}}({{U}})) = G({{U}}) \leq 0{.}} \end{array}$$
(8)

Eqs. (2) and (8) lead to equivalent solutions when the origin in standard normal space is in the safe domain. The initial particles are generated in the failure domain G(U)≤0, and all particles keep only feasible solutions in their memories when updating the memories (\(p_{id}\) and \(p_{{\rm g}d}\)). This approach is relatively fast and simple compared with other constraint-handling techniques: the fitness function and the constraint are handled separately, without complicated manipulation, and the constraint is enforced simply by checking whether a solution satisfies it.
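
A compact, self-contained sketch of the resulting MPP search is given below. It assumes the transformation to standard normal space has already been applied (so the constraint is G(u)≤0), initializes all particles in the failure domain, and admits only feasible solutions into the memories; it is an illustrative implementation under those assumptions, not the authors' exact code.

```python
import numpy as np

def find_mpp(G, n_dims, n_particles=70, n_iters=300, init_range=5.0, seed=1):
    """Feasibility-preserving PSO for Eq. (8): min ||u|| s.t. G(u) <= 0 (sketch)."""
    rng = np.random.default_rng(seed)
    c1 = c2 = 2.05
    phi = c1 + c2
    K = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))

    # Modification (2): start every particle from a feasible (failure-domain) point.
    x = np.empty((0, n_dims))
    while len(x) < n_particles:
        cand = rng.uniform(-init_range, init_range, (n_particles, n_dims))
        x = np.vstack([x, cand[np.apply_along_axis(G, 1, cand) <= 0.0]])
    x = x[:n_particles]
    v = np.zeros_like(x)

    p_best = x.copy()                                   # personal best positions
    p_fit = np.linalg.norm(x, axis=1)                   # fitness f = ||u||
    g_best = p_best[p_fit.argmin()]                     # global best position

    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = K * (v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x))
        x = x + v
        # Modification (1): only feasible solutions may enter the memories.
        feasible = np.apply_along_axis(G, 1, x) <= 0.0
        improved = feasible & (np.linalg.norm(x, axis=1) < p_fit)
        p_best[improved] = x[improved]
        p_fit[improved] = np.linalg.norm(x[improved], axis=1)
        g_best = p_best[p_fit.argmin()]
    return g_best, p_fit.min()                          # MPP u* and beta_HL

# Hypothetical linear limit state G(u) = 3 - u1 - u2: the exact MPP is (1.5, 1.5),
# so beta_HL should approach 3/sqrt(2), about 2.121.
u_star, beta_hl = find_mpp(lambda u: 3.0 - u[0] - u[1], n_dims=2)
print(u_star, beta_hl)
```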

In this study, the modified PSO with the feasibility-preserving strategy is employed to solve the constrained optimization problem. The sequence of successive population generations is usually stopped according to one of the following criteria: (1) when the mean fitness of the individuals in the population reaches an assigned convergence value; (2) when the fitness of the best individual in the population reaches an assigned convergence value (this criterion guarantees that at least one individual is good enough); (3) when the assigned number of population generations is reached.

3 Relative convergence rate

Variance-based methods are the most often used SA techniques. The main idea is to express sensitivity through variance, evaluating how the variance of an input, or a group of inputs, contributes to the variance of the output (Homma and Saltelli, 1996; Jacques et al., 2006). Applications of SA include model calibration, model validation, and decision making, where it is generally useful to know which variables contribute most to the output variability. Inspired by variance-based SA methods, the coefficients of variation (COVs) of the candidate particles of the random variables in the PSO solution space are used as the measure of relative sensitivity. During the optimization process, it is observed that the convergence rates of the various random variables differ, which suggests the potential to obtain the relative sensitivity between variables. The smaller the COV of the candidate particles of a variable, the more sensitive the fitness function is with respect to that variable. Because PSO is a stochastic global optimization algorithm, the convergence rates of the variables may fluctuate during the optimization process. To address this, the COVs of the candidate particles of the random variables are computed over an optimized group, rather than within each population. The procedure for obtaining the COVs of the candidate particles is described in detail below.

An optimized group is a set of particles with relatively better fitness values, selected from S successive generations. Suppose that the size of the swarm population is N and that there are L particles in one optimized group; the product of S and N should be larger than L. The first S successive generations are employed to build the first optimized group Θ(1). For conciseness, four kinds of sets are defined here.

(i) \({\bar X_j}\) is the set of the jth generation of the swarm population, \({\bar X_j} = \{ {\bar x_{i,j}}\vert i = 1, 2, \cdots, N\}\), where the ith particle is \({\bar x_{i,j}} = \{ {x_{id}}\vert d = 1, 2, \cdots, n\}\). As mentioned above, d indexes the dimensions of the solution space.

(ii) \({J_j}\) is the set of fitness values of \({\bar X_j}\), i.e., \({J_j} = \{ f({\bar x_{i,j}})\vert i = 1,\;2,\; \cdots, N\}\), where \(f({\bar x_{i,j}})\) denotes the fitness of the ith particle in the jth generation of the swarm population.

(iii) \({\Theta ^{(l)}}\) is the lth optimized group, \({\Theta ^{(l)}} = \{ \bar x_m^{(l)}\vert m = 1, 2, \cdots, L\}\).

(iv) \(J_\Theta ^{(l)}\) is the set of fitness values of \({\Theta ^{(l)}}\), i.e., \(J_\Theta ^{(l)} = \{ f(\bar x_m^{(l)})\vert m = 1, 2, \cdots, L\}\).

For the union of the swarm populations of the first S successive generations, \({\bar X_1} \bigcup {\bar X_2} \bigcup \cdots \bigcup {\bar X_S} = {\{ {\bar x_{1,1}},{\bar x_{2,1}}, \cdots, {\bar x_{N,1}},{\bar x_{1,2}}, \cdots, {\bar x_{N,2}}, \cdots, {\bar x_{1,S}}, \cdots, {\bar x_{N,S}}\} _{N \times S}}\), the corresponding set of fitness values is

$$\begin{array}{*{20}c} {{J_1} \cup {J_2} \cup \cdots \cup {J_S}\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad }\\ { = {{\left\{ {\begin{array}{*{20}c} {f({{\bar x}_{1,1}}),\;f({{\bar x}_{2,1}}),\; \cdots, \;f({{\bar x}_{N,1}}),}\\ {f({{\bar x}_{1,2}}),\;f({{\bar x}_{2,2}}),\; \cdots, \;f({{\bar x}_{N,2}}),}\\ { \cdots, \quad \quad \quad \quad \quad \quad \quad \quad \quad \;\;}\\ {f({{\bar x}_{1,S}}),\;f({{\bar x}_{2,S}}),\; \cdots, \;f({{\bar x}_{N,S}})} \end{array}} \right\}}_{N \times S}}{.}} \end{array}$$
(9)

The set of the fitness values is sorted in ascending order, and thus \({J_1} \bigcup{J_2} \bigcup \cdots \bigcup {J_S}\) is rewritten as

$$\begin{array}{*{20}c} {{J_1} \cup {J_2} \cup \cdots \cup {J_S}\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad }\\ { = {{\left\{ {\begin{array}{*{20}c} {f(\bar x_1^{(1)}),\;f(\bar x_2^{(1)}),\; \cdots, \;f(\bar x_L^{(1)}),}\\ {f(\bar x_{L + 1}^{(1)}),\; \cdots, \;f(\bar x_{N \times S}^{(1)})\quad \quad \quad } \end{array}} \right\}}_{N \times S}}{.}} \end{array}$$
(10)

The first optimized group Θ(1) can be built by picking the first L elements from \({\{ \bar x_1^{(1)},\bar x_2^{(1)}, \cdots, \bar x_L^{(1)},\bar x_{L + 1}^{(1)}, \cdots, \bar x_{N \times S}^{(1)}\} _{N \times S}}\) as

$${\Theta ^{(1)}} = {\left\{ {\bar x_1^{(1)},\;\bar x_2^{(1)},\; \cdots, \;\bar x_L^{(1)}} \right\}_L}{.}$$
(11)

The corresponding fitness value set \(J_\Theta ^{(1)}\) is

$$J_\Theta ^{(1)} = {\left\{ {f(\bar x_1^{(1)}),\;f(\bar x_2^{(1)}),\; \cdots, \;f(\bar x_L^{(1)})} \right\}_L}{.}$$
(12)

The COVs for the L particles of the n variables in Θ(1) can be calculated as

$$\delta _d^{(1)} = {1 \over {\bar{\bar x}_d^{(1)}}}\sqrt {{1 \over {L - 1}}\sum\limits_{k = 1}^L {{{(\bar x_{k,d}^{(1)} - \bar{\bar x}_d^{(1)})}^2}} },$$
(13)

where \(\bar{\bar x} _d^{(1)}\;(d = 1,\;2,\; \cdots, \;n)\) is the mean value of the candidate particles of the dth variable in the optimized group.
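
In code, Eq. (13) is the per-dimension sample standard deviation divided by the sample mean over the optimized group; a minimal sketch, assuming the group is stored as an L×n array:

```python
import numpy as np

def group_cov(theta: np.ndarray) -> np.ndarray:
    """COVs per variable for an optimized group theta of shape (L, n); Eq. (13).

    Assumes, as Eq. (13) does, that the group means are not close to zero.
    """
    mean = theta.mean(axis=0)            # mean of the candidate particles per variable
    std = theta.std(axis=0, ddof=1)      # sample standard deviation with 1/(L-1)
    return std / mean                    # delta_d for d = 1, ..., n
```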

Next, the particle population moves to the (S+1)th generation, and N new particles are obtained. The second optimized group Θ(2) can be built through the union set \({\Theta ^{(1)}} \bigcup {\bar X_{S + 1}} = {\{ \bar x_1^{(1)},\bar x_2^{(1)}, \cdots, \bar x_L^{(1)},{\bar x_{1,S + 1}},{\bar x_{2,S + 1}}, \cdots, {\bar x_{N,S + 1}}\} _{L + N}} \).

The fitness value set of \({\Theta ^{(1)}}\bigcup {{{\bar X}_{S + 1}}}\) is

$$\begin{array}{*{20}c} {J_\Theta ^{(1)} \cup {J_{S + 1}}\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad }\\ { = {{\left\{ {\begin{array}{*{20}c} {f(\bar x_1^{(1)}),\;f(\bar x_2^{(1)}),\; \cdots, \;f(\bar x_L^{(1)}),\quad \quad }\\ {f({{\bar x}_{1,S + 1}}),\;f({{\bar x}_{2,S + 1}}),\; \cdots, \;f({{\bar x}_{N,S + 1}})} \end{array}} \right\}}_{L + N}}{.}} \end{array}$$
(14)

In the same way, after sorting in ascending order, \(J_\Theta ^{(1)}\bigcup {{J_{S + 1}}}\) can be written as

$$J_\Theta ^{(1)} \cup {J_{S + 1}} = {\left\{ {f(\bar x_1^{(2)}),\;f(\bar x_2^{(2)}),\; \cdots, \;f(\bar x_{L + N}^{(2)})} \right\}_{L + N}}{.}$$
(15)

Similarly, the elements in set \({\Theta ^{(1)}} \bigcup {\bar X_{S + 1}}\) are correspondingly rearranged according to the order of their counterpart fitness values in the sorted union set \(J_\Theta ^{(1)} \bigcup {J_{S + 1}}\). The rearranged set \({\Theta ^{(1)}} \bigcup {\bar X_{S + 1}}\) can be written as \({\Theta ^{(1)}}\bigcup {{{\bar X}_{S + 1}}} = {\{ \bar x_1^{(2)},\bar x_2^{(2)}, \cdots, \bar x_{L + N}^{(2)}\} _{L + N}}\), and its first L elements are picked to build the second optimized group \({\Theta ^{(2)}}\). The COVs of the n variables of the L particles in Θ(2), \(\delta _d^{(2)}\), can be calculated using Eq. (13), with only the superscript (1) replaced by (2).

When the particle population evolves to the (S+l−1)th generation, the lth optimized group Θ(l) is then built based on the union set of the optimized group Θ(l-1) and N new particles \({\bar X_{S + l - 1}}\):

$$\begin{array}{*{20}c} {{\Theta ^{(l - 1)}} \cup {{\bar X}_{S + l - 1}}\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;}\\ { = {{\left\{ {\begin{array}{*{20}c} {\bar x_1^{(l - 1)},\;\bar x_2^{(l - 1)},\; \cdots, \;\bar x_L^{(l - 1)},\quad \quad }\\ {{{\bar x}_{1,S + l - 1}},\;{{\bar x}_{2,S + l - 1}},\; \cdots, \;{{\bar x}_{N,S + l - 1}}} \end{array}} \right\}}_{L + N}}{.}} \end{array}$$
(16)

The fitness value set of \({\Theta ^{(l - 1)}} \bigcup {\bar X_{S + l - 1}}\) can be written as

$$\begin{array}{*{20}c} {J_\Theta ^{(l - 1)} \cup {J_{S + l - 1}}\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;}\\ { = {{\left\{ {\begin{array}{*{20}c} {f(\bar x_1^{(l - 1)}),\;f(\bar x_2^{(l - 1)}),\; \cdots, \;f(\bar x_L^{(l - 1)}),\quad \quad }\\ {f({{\bar x}_{1,S + l - 1}}),\;f({{\bar x}_{2,S + l - 1}}),\; \cdots, \;f({{\bar x}_{N,S + l - 1}})} \end{array}} \right\}}_{L + N}}{.}} \end{array}$$
(17)

\(J_\Theta ^{(l - 1)} \bigcup {J_{S + l - 1}}\) is sorted in ascending order as

$$\begin{array}{*{20}c} {J_\Theta ^{(l - 1)} \cup {J_{S + l - 1}}\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;\quad }\\ { = {{\left\{ {f(\bar x_1^{(l)}),\;f(\bar x_2^{(l)}),\; \cdots, \;f(\bar x_{L + N}^{(l)})} \right\}}_{L + N}}{.}} \end{array}$$
(18)

Finally, the elements in set \({\Theta ^{(l - 1)}} \bigcup {\bar X_{S + l - 1}}\) are rearranged in the same order as their counterpart fitness values in the sorted set \(J_\Theta ^{(l - 1)} \bigcup {J_{S + l - 1}}\), and the first L elements are picked to form the lth optimized group \({\Theta ^{(l)}} = {\left\{ {\bar x_1^{(l)},\bar x_2^{(l)}, \cdots, \bar x_L^{(l)}} \right\}_L}\). The COVs for the candidate particles of the n variables in Θ(l) are defined by

$$\delta _d^{(l)} = {1 \over {\bar{\bar x}_d^{(l)}}}\sqrt {{1 \over {L - 1}}\sum\limits_{k = 1}^L {{{(\bar x_{k,d}^{(l)} - \bar{\bar x}_d^{(l)})}^2}} }{.}$$
(19)

The construction of the optimized group is repeated continuously until the optimization criterion is satisfied and the optimal solution is obtained. Meanwhile, a series of COVs \(\delta _d^{(1)},\delta _d^{(2)}, \cdots, \delta _d^{{\rm{final}}}\) for the candidate particles is obtained, where \(\delta _d^{{\rm{final}}}\) is the COV for the candidate particles of the dth random variable in the last generation. Because the L particles with the best fitness values are selected to construct the optimized group, the COV curves for the candidate particles converge consistently in the solution hyperspace.
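
Computationally, each step of this construction reduces to merging the previous optimized group with the new generation, sorting by fitness, and keeping the best L particles; a sketch of this bookkeeping, under the same array convention as the group_cov sketch above:

```python
import numpy as np

def update_optimized_group(theta, theta_fit, x_new, fit_new, L):
    """Merge the previous group with a new generation and keep the best L particles.

    Implements the bookkeeping of Eqs. (16)-(18): union, ascending sort by
    fitness, truncation to the L best elements.
    """
    merged = np.vstack([theta, x_new])                 # Theta^(l-1) U X_{S+l-1}
    merged_fit = np.concatenate([theta_fit, fit_new])
    order = np.argsort(merged_fit)                     # ascending fitness values
    return merged[order[:L]], merged_fit[order[:L]]    # Theta^(l) and J_Theta^(l)
```

Applying group_cov to each successive group then yields the COV histories of the kind plotted in Figs. 4, 8, and 12.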

The relative convergence rate, which is used to evaluate the sensitivity of the limit-state function with respect to the random variables, is defined as

$${\eta _d} = {{1/\delta _d^{{\rm{final}}}} \over {\sqrt {\sum\limits_{d = 1}^n {{{(1/\delta _d^{{\rm{final}}})}^2}} } }} = {1 \over {\delta _d^{{\rm{final}}}\sqrt {\sum\limits_{d = 1}^n {{{(1/\delta _d^{{\rm{final}}})}^2}} } }},$$
(20)

where ηd is the relative convergence rate of the dth random variable at the design point.
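
Eq. (20) simply normalizes the reciprocal final COVs to unit Euclidean norm, so that the ηd satisfy Ση_d²=1; as a one-function sketch:

```python
import numpy as np

def relative_convergence_rate(delta_final: np.ndarray) -> np.ndarray:
    """Eq. (20): normalized reciprocal final COVs; a larger eta_d means the
    objective function is more sensitive to the dth variable."""
    inv = 1.0 / delta_final
    return inv / np.linalg.norm(inv)
```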

Therefore, the relative sensitivity of the objective function with respect to the random variables can be obtained by the PSO technique. This method can be used not only in structural reliability analysis but also in other optimization analyses.

4 Numerical studies

Three examples are used to demonstrate the methodology of SA using the PSO technique. The first is a numerical example with strong nonlinearity, illustrating the effectiveness and feasibility of PSO for the reliability solution and SA. The remaining two are practical examples that illustrate the feasibility of the proposed relative convergence rate.

4.1 Example 1

The limit-state function is

$$\begin{array}{*{20}c} {g({X_1},{X_2},{X_3},{X_4},{X_5},{X_6})\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;}\\ { = 2{.}5 - {X_1}\left({{X_2}/(1 + {X_3})} \right){X_4}\log \left\vert {({X_5} + {X_6})/{X_5}} \right\vert,} \end{array}$$
(21)

where \(X_i\) (i=1, 2, …, 6) are six independent normal random variables. The limit-state surface is clearly nonlinear. The means and COVs of the random variables are listed in Table 1. The MPP and reliability index obtained using PSO are listed in Table 2, where x* is the corresponding MPP in the original space; PSO required approximately 4.84×10^4 evaluation function calls. The reliability index simulated by the directional Monte Carlo sampling (MCS) method with 2×10^6 samples is 2.497 (vs. 2.439 from PSO), at the cost of 2×10^6 evaluation function calls. Therefore, the PSO technique obtains results of good accuracy more efficiently.

Table 1 Means and COVs of random variables in example 1
Table 2 Results of example 1 calculated by PSO

The minimum, average, and maximum values of fitness in each generation are shown in Fig. 2. It can be seen that the minimum value of fitness of the objective function converges with good accuracy after several generations, and then the average and maximum values of the fitness converge to the same value in succession as the generations increase.

Fig. 2 Convergence history of the minimum, average, and maximum fitness in each generation (example 1)

During the optimization process, it is observed that the convergence rates of the various random variables are different. In this example, there are six variables in each particle; u2 and u3 are selected to describe the convergence process. A population of only 10 particles is used in the solution hyperspace to find the design point. u2 and u3 are randomly generated within the interval (−5, 5) in standard normal space, starting from feasible solutions. After several iterations, the particles converge at the MPP. The positions of the particles in the u2-u3 plane are shown in Fig. 3, in which the triangle, circle, and "+" symbols denote the positions of all particles in the initial state, in the 30th generation, and at the MPP, respectively. It can be observed that the range of u2 between the two broken lines is much smaller than that of u3 between the two dotted lines in the 30th generation. In other words, the particles move more rapidly along the dimension of u2 than along that of u3, implying that the objective function is more sensitive to u2 than to u3.

Fig. 3 Diagrammatic sketch of the relative positions of all particles in the u2-u3 plane

The convergence curves of the COVs for the candidate particles of the random variables during the optimization process are shown in Fig. 4. It can be seen that the COVs begin to converge after the 10th generation. Different random variables stabilize to their final values at different times, in accordance with their sensitivity. Variable u2 converges at the MPP with the fastest rate; as a result, its COV for candidate particles tends towards zero.

Fig. 4 Convergence curves of COV for candidate particles of random variables during the optimization process (example 1)

For comparison, the sensitivity coefficient based on the gradient of the limit-state function in standard normal space is also given here. It is usually defined as

$${\alpha _d} = {\left. {{{\partial {\beta _{{\rm{HL}}}}} \over {\partial {u_d}}}} \right\vert _{{u^\ast}}}{.}$$
(22)

Substituting Eq. (3) into Eq. (22) gives

$${{\partial {\beta _{{\rm{HL}}}}} \over {\partial {u_d}}} = {1 \over {{\beta _{{\rm{HL}}}}}}{({u^\ast})^{\rm{T}}}{{\partial {u^\ast}} \over {\partial {u_d}}}.$$
(23)

Considering that G(u*+du*) approximately equals zero, and (u*+du*) and G(u*+du*) are mutually orthogonal, Eq. (23) is simplified as (Karamchandani and Cornell, 1992)

$${{\partial {\beta _{{\rm{HL}}}}} \over {\partial {u_d}}} = {1 \over {\left\Vert {\nabla G({u^\ast})} \right\Vert }} \cdot {{\partial G({u^\ast})} \over {\partial {u_d}}},$$
(24)
$$\nabla = \left\{ {{\partial \over {\partial {u_d}}}} \right\}{.}$$
(25)
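
When an analytic derivative of G is unavailable, the gradient in Eqs. (24) and (25) can be approximated numerically; the sketch below uses central finite differences at the MPP, with the step size h an assumption of this illustration rather than a value from the paper.

```python
import numpy as np

def sensitivity_coefficients(G, u_star: np.ndarray, h: float = 1e-6) -> np.ndarray:
    """alpha_d of Eqs. (24) and (25) via central finite differences at the MPP u*."""
    grad = np.empty_like(u_star)
    for d in range(u_star.size):
        e = np.zeros_like(u_star)
        e[d] = h
        grad[d] = (G(u_star + e) - G(u_star - e)) / (2.0 * h)   # dG/du_d
    return grad / np.linalg.norm(grad)   # division by ||grad G(u*)||, Eq. (24)
```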

The relative convergence rates and sensitivity coefficients of the random variables are shown in Fig. 5. The comparison indicates that the sensitivities of the variables obtained using PSO and using the gradient of the limit-state function are almost the same in the quantitative sense: u2 is the most sensitive variable, whereas u5 is the least sensitive. The results indicate that the proposed relative convergence rate can be used as a sensitivity measure, obtained by a simple algebraic operation on the COVs of the candidate particles in the optimized group, without using the gradient or derivative information of the objective function during the optimization process.

Fig. 5 SA based on the PSO and the gradient of the limit-state function (example 1)

4.2 Example 2

Next, the behavior of the tower of a cable-stayed bridge in the longitudinal direction is employed as a numerical example (Shen and Gao, 1994). The schematic model of the tower is shown in Fig. 6. A coupled axial force and moment are applied to the tower. Consequently, the limit-state equation for the bending resistance of the bridge tower is expressed as

$$g = M - Pe - W{(h/l)^2}e - Ql = 0,$$
(26)

where l is the height of the resultant force of the stay cables in one cable plane; h is the height of the center of gravity of the tower, h=0.4H, with H the height of the bridge tower; P and Q are the vertical and horizontal components of the forces of all stay cables, respectively; W denotes the deadweight of the bridge tower; M is the moment resistance of the section at the bridge tower foot; and e is the eccentricity, which can be obtained by

$$e = {{Q{l^3}} \over {3EI\left({1 - {2 \over 5}{{P{l^2}} \over {EI}}} \right)}},$$
(27)

where E is the elastic modulus of the bridge tower, and I is the equivalent inertia moment of the variable cross-section of the bridge tower.

Fig. 6 Schematic model of the tower of a cable-stayed bridge along the longitudinal direction

Additionally, vehicle loads and temperature variation may cause lateral deformation of the tower, adding a moment to the section at the tower foot. Here, this additional moment is represented by an equivalent additive horizontal force, i.e., Q is replaced by Q′, and Eq. (26) is rewritten as

$$g = M - {{PQ\prime{l^3}} \over {3EI\left({1 - {2 \over 5}{{P{l^2}} \over {EI}}} \right)}} - {{WQ\prime l{h^2}} \over {3EI\left({1 - {2 \over 5}{{P{l^2}} \over {EI}}} \right)}} - Q\prime l,$$
(28)

where Q′ is the total equivalent horizontal force.

The parameters of the tower of the Brotonne Bridge, constructed in France in 1977, are used here (Girmscheid, 1987; Shen and Gao, 1994): H=70.5 m, l=47.4 m, W=1170 kN, and h=0.4H=28.2 m. I, E, P, Q′, and M are considered random variables and denoted by X1, X2, X3, X4, and X5, respectively; the random variables are mutually independent. Finally, the limit-state function is

$$\begin{array}{*{20}c} {g({X_1},\;{X_2},\;{X_3},\;{X_4},\;{X_5})\quad \quad \quad \quad }\\ { = {X_5} - {{5{X_3}{X_4}{l^3}} \over {3(5{X_1}{X_2} - 2{X_3}{l^2})}}}\\ {\quad \quad - {{5W{X_4}l{h^2}} \over {3(5{X_1}{X_2} - 2{X_3}{l^2})}} - {X_4}l{.}} \end{array}$$
(29)
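
For concreteness, Eq. (29) can be coded directly as the performance function passed to an MPP search such as the one sketched in Section 2. The constants below are the Brotonne Bridge values given above; any sample values for the random variables would be placeholders rather than data from Table 3.

```python
H = 70.5           # height of the bridge tower (m)
l = 47.4           # height of the resultant cable force (m)
W = 1170.0         # deadweight of the tower (kN)
h = 0.4 * H        # height of the center of gravity (m), 28.2 m

def g_tower(x):
    """Limit-state function of Eq. (29); x = [I, E, P, Q', M] = [X1, ..., X5]."""
    I, E, P, Q, M = x
    denom = 3.0 * (5.0 * I * E - 2.0 * P * l**2)
    return (M - 5.0 * P * Q * l**3 / denom
              - 5.0 * W * Q * l * h**2 / denom - Q * l)
```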

The distribution types, means, and COVs of random variables are listed in Table 3.

Table 3 Distribution types, means, and COVs of the random variables in example 2

The minimum, average, and maximum values of fitness in each generation are shown in Fig. 7. The final results are listed in Table 4. The convergence curves of the COVs for the candidate particles of the random variables are shown in Fig. 8. The relative convergence rates and sensitivity coefficients of the random variables are then obtained using PSO and the gradient of the limit-state function at the MPP, respectively, as shown in Fig. 9.

Fig. 7 Convergence history of the minimum, average, and maximum fitness in each generation (example 2)

Fig. 8 Convergence curves of COV for candidate particles of random variables during the optimization process (example 2)

Fig. 9 SA based on the PSO and the gradient of the limit-state function (example 2)

Table 4 Final results of example 2

Results similar to those of the first example can be observed in Figs. 8 and 9. Additionally, it can be seen from Figs. 8 and 9 that the reliability index is very insensitive to the random variables u1 (I), u2 (E), and u3 (P). If these three variables are fixed at their means and the initial particles of u4 (Q′) and u5 (M) are generated based on the former results, the reliability index β equals 3.502, which differs by only 0.2% from the result shown in Table 4.

Simultaneously, there is a substantial increase in computational efficiency. In fact, these three variables have almost no impact on the reliability index, whereas the objective function is much more sensitive to u4 (Q′) and u5 (M); consequently, these two variables have a dramatic influence on the reliability index.

4.3 Example 3

The third example is a linear frame structure with 12 storeys and 3 bays, as shown in Fig. 10. The cross-sectional areas \(A_i\) and the horizontal load P are treated as independent random variables. The sectional moments of inertia are expressed as \({I_i} = {\alpha _i}{A_i}^2\) (α1=α2=α3=0.08333, α4=0.2667, and α5=0.2) (Cheng and Xiao, 2005). The elastic modulus, E=2.0×10^7 kN/m^2, is treated as deterministic. Element types are indicated in Fig. 10. Of interest is the probability that the horizontal displacement at node A (\(u_A\)) exceeds the limit value [u]=H/500=0.096 m, where H is the height of the 12-storey frame. A1, A2, A3, A4, A5, and P are denoted by X1, X2, X3, X4, X5, and X6, respectively. Thus, the limit-state function is expressed as

$$\begin{array}{*{20}c} {g({X_1},\;{X_2},\;{X_3},\;{X_4},\;{X_5},\;{X_6})\quad \quad \quad \quad \quad \quad \quad \;\;}\\ { = 0{.}096 - {u_A}({X_1},\;{X_2},\;{X_3},\;{X_4},\;{X_5},\;{X_6}){.}} \end{array}$$
(30)
Fig. 10 Schematic model of the 12-storey frame structure

Obviously, the limit-state function is implicit. The distribution types, means, and COVs of random variables are listed in Table 5.

Table 5 Distribution types, means, and COVs of the random variables in example 3

The final results are listed in Table 6. PSO required approximately 4.15×10^4 evaluation function calls and about 5.32×10^2 s of CPU time. The reliability index simulated by the directional Monte Carlo sampling method with 2×10^6 samples is 1.439 (vs. 1.454 from PSO), with about 2.33×10^4 s of CPU time for 2×10^6 evaluation function calls. Obviously, the PSO technique is more efficient.

Table 6 Final results of example 3

The minimum, average, and maximum values of fitness in each generation are shown in Fig. 11. The convergence curves of the COVs for the candidate particles of the random variables are shown in Fig. 12. The relative convergence rates and sensitivity coefficients of the random variables are then obtained using PSO and the gradient of the limit-state function at the MPP, respectively, as shown in Fig. 13. It can be seen that the objective function is most sensitive to the load variable u6 (P).

Fig. 11 Convergence history of the minimum, average, and maximum fitness in each generation (example 3)

Fig. 12 Convergence curves of COV for candidate particles of random variables during the optimization process (example 3)

Fig. 13 SA based on the PSO and the gradient of the limit-state function (example 3)

As a matter of fact, PSO as a global optimization technique offers the advantages of simple implementation, the ability to converge quickly to a reasonably good solution, and robustness against local minima. Moreover, the proposed SA method requires no extra evaluation function calls during the optimization process, and the COVs for the candidate particles of the random variables in the PSO solution space can be used as the measure of relative sensitivity; the relative convergence rates of the random variables are calculated from these COVs by a simple algebraic operation on the optimized group. The simulation studies, involving both the reliability index solutions and SA, confirm that the proposed approach is accurate and converges quickly. The results demonstrate that PSO offers a viable tool for reliability index calculation and SA.

5 Conclusions

A novel reliability-based SA method using PSO, namely the relative convergence rate, is proposed in this paper. The relative convergence rate of a random variable during the evolutionary optimization process is related to the sensitivity of the objective function with respect to that variable. To avoid fluctuations of the convergence rate, an optimized group strategy is proposed to ensure that the COV curves for the candidate particles converge consistently in the solution hyperspace. The convergence rate of a random variable is represented by the COV for its candidate particles, which is computed statistically over the optimized group selected across several generations and regarded as the measure of the variable's sensitivity. The smaller the COV for the candidate particles of a variable, the more sensitive the objective function is with respect to that variable. The numerical studies indicate that PSO is efficient for both reliability index calculation and SA, particularly in solving complicated problems.