A two-phase approach based on sequential approximation for reliability-based design optimization

The original problem of reliability-based design optimization (RBDO) is mathematically a nested two-level structure that is computationally time consuming for real engineering problems. In order to overcome the computational difficulties, many formulations have been proposed in the literature. These include SORA (sequential optimization and reliability assessment), which decouples the nested problems. The SLA (single loop approach) further improves efficiency in that reliability analysis becomes an integrated part of the optimization problem. However, even the SLA method can become computationally challenging for real engineering problems involving many reliability constraints. This paper presents an enhanced version of SLA in which the first phase is based on approximation at the nominal design point. After the first iterative phase converges, the process transitions to a second phase where approximations of the reliability constraints are carried out at their respective minimum performance target points (MPTP). The solution is implemented in Altair OptiStruct, where adaptive approximation and constraint screening strategies are utilized in the RBDO process. Examples show that the proposed two-phase approach reduces the number of finite element analyses while preserving solution quality.


Introduction
The original problem of reliability-based design optimization (RBDO) is mathematically a nested two-level structure (Tu et al. 1999), consisting of a design optimization loop that repeatedly calls reliability analyses in a series of inner loops. This approach is computationally inefficient and therefore has limited applicability for real engineering problems. In order to overcome the computational difficulties, many formulations have been proposed in the literature. Du and Chen (2004) decouple the RBDO process into a sequence of deterministic design optimizations followed by a set of reliability assessment loops. Their sequential optimization and reliability assessment (SORA) method uses the reliability information from the previous cycle to shift the deterministic constraints, and the process continues until both sub-problems converge. SORA still contains a structure of two iterative loops; however, because the two processes are decoupled, it is significantly more efficient than the nested two-level process. It has been shown that SORA can achieve high solution accuracy and robustness (Aoues and Chateauneuf 2010; Chen et al. 2013a, b). Zou and Mahadevan (2006) proposed a decoupled approach based on an efficient simulation procedure, where the system reliability is included in the RBDO formulation. Chan et al. (2007) transformed the nonlinear programming RBDO problem into a sequence of deterministic linear programming sub-problems. Torii et al. (2016) proposed a general RBDO decoupling approach that is compatible with different reliability analysis methods, including Monte Carlo simulation, the first-order reliability method and the second-order reliability method. Many other RBDO algorithms belong to the decoupled approach (Royset et al. 2001; Wu and Wang 1998; Qu and Haftka 2004).
Since the reliability analysis of each performance function contains a separate optimization loop, and a sequence of deterministic optimizations is processed before converging to the optimal design, the decoupled approach still involves extensive computation (Lim and Lee 2016).
Various single-loop methods have been proposed to further improve computational efficiency. The single-loop approach (SLA), initiated by Chen et al. (1997) and further developed by Liang et al. (2008), collapses the nested optimization loops into an equivalent single-loop optimization process by incorporating the KKT (Karush-Kuhn-Tucker) optimality conditions of the inner reliability loops within the outer design optimization loop. The process becomes essentially a deterministic optimization problem in which the constraints are shifted using approximate MPTP points resolved from the KKT conditions of the reliability analyses. When the process converges smoothly, the MPTP points derived from the KKT conditions typically converge to accurate solutions. However, for highly nonlinear problems it has been shown that this simultaneous convergence behavior is not guaranteed (Lim and Lee 2016). The possible failure to achieve an accurate RBDO solution is one of the key potential limitations of SLA. Cheng et al. (2006) and Yi et al. (2008) extended the well-known approximation concept (Schmit and Farshi 1974) to RBDO and developed a sequential approximate programming (SAP) method, where the RBDO problem is solved by a sequence of approximate explicit problems. In each sub-programming problem, rather than relying on a direct linear Taylor expansion of the reliability index or probabilistic performance measure (PPM), they developed a formulation for the approximate reliability index or approximate PPM at the current design point and used its linearization instead. The approximate reliability index or approximate PPM and their sensitivities are obtained from a recurrence formula drawn from the reliability optimality conditions. In general, single-loop methods are more efficient than the two-level approach and the unnested SORA, with potential drawbacks of reduced solution accuracy and robustness (Aoues and Chateauneuf 2010).
In more recent work, Lim and Lee (2016) proposed robustness enhancements to the SLA method by (1) monitoring the validity of the SLA process; and (2) adopting an alternative reliability analysis process when the accuracy of SLA is not met. Single-loop methods generally depend on the use of FORM (first-order reliability method), which can result in relatively large errors in reliability analysis when the performance function is highly nonlinear. Although single-loop methods are much more efficient than two-level approaches, they can still be computationally challenging for real engineering problems involving many reliability constraints. Li et al. (2013) proposed using the reliable design space (RDS) (Shan and Wang 2008) to completely convert the RBDO problem into a deterministic optimization problem. The RDS is defined through sensitivities at the current design point, instead of sensitivities at the MPP of each limit-state function. The transformation of the probability constraints into deterministic constraints is done in a single step. This results in a significant reduction of computational effort, since the iterative search for the MPP is removed. However, the authors pointed out clearly that this approach, in general, does not preserve the optimality of the original RBDO problem.
This paper presents an enhanced version of SLA in which the first phase is based on approximation of the performance functions at the nominal design point. After the first iterative phase converges, the process transitions to a second phase, where approximations of the reliability constraints are carried out at their respective minimum performance target points (MPTP) at the current iterative stage. The desired behavior of this process is: (1) to achieve an approximate RBDO solution in the first phase that is as efficient as solving the corresponding deterministic optimization; (2) to reduce the number of iterations in the full SLA process, which then starts at a near optimum, while achieving reasonable solution accuracy. The two-phase approach is carried out within the sequential approximation framework introduced by Schmit and Farshi in 1974 and further expanded by other researchers (see, e.g., Fleury and Braibant 1986; Vanderplaats and Salajegheh 1989; Canfield 1990; Zhou and Thomas 1993). We implemented the solution in Altair OptiStruct (Zhou et al. 2004; OptiStruct 2017) in an MPI (message passing interface) process where FEA solutions at MPTP points are carried out in parallel. The approximation and constraint screening strategies from Altair OptiStruct are adopted in the RBDO process to enhance computational efficiency and reduce memory usage. An overview of the OptiStruct implementation can be found in Zhou et al. (2004).
This paper is structured as follows. Section 2 gives an overview of RBDO problems. Section 3 summarizes the approximation and constraint screening strategies implemented in Altair OptiStruct. The details of the proposed two-phase approach based on sequential approximation are presented in Section 4. Four examples with explicit formulations and four examples with finite element models are given in Section 5. Section 6 offers concluding remarks and suggests topics for future study.
2 Overview of RBDO problems

The RBDO problem can be generally formulated as

  min  C(d, v_x)
  s.t.  Pr[G_i(d, X, P) ≤ 0] ≤ F_i^T,  i = 1, ..., m     (1)
        d^L ≤ d ≤ d^U,  v_x^L ≤ v_x ≤ v_x^U

where d is the vector of deterministic design variables with lower and upper bounds d^L and d^U, respectively; X is the vector of random design variables, also called random control factors, whose mean value v_x is to be designed with lower and upper bounds v_x^L and v_x^U, respectively; P is the vector of random parameters, also called noise factors; C is the objective function; G_i is the ith performance function (or limit state function); Pr[.] is the probability operator; F_i^T is the admissible failure probability; and m is the number of performance functions. The probabilistic constraints define the feasible region by restricting the probability of violating the performance function G_i to the admissible failure probability F_i^T. A typical objective function for structural optimization is the weight of the structure, and a typical performance function is the stress in a structural component. A typical random variable is the thickness of sheet metal subject to manufacturing variance, and a typical random parameter is the allowable stress of the material.
The failure probability is given by the following integral

  Pr[G_i(d, X, P) ≤ 0] = ∫_{G_i≤0} f(X, P) dX dP = F_{G_i}(0; d)     (2)

where f(X, P) is the joint probability density function of X and P, and F_{G_i}(·; d) is the cumulative distribution function of G_i. In general, F_i^T can be represented by the target reliability index β_i^T = -Φ^{-1}(F_i^T), where Φ is the standard normal cumulative distribution function. The optimization problem in (1) defines the so-called double-loop (also known as two-level) RBDO problem. It consists of two nested optimization loops: the outer loop solves the optimization problem in terms of the design variables d and v_x, whilst the inner loop solves the reliability problem in terms of the random variables X and random parameters P. In order to reduce the computational effort, two main formulations are widely used to deal with the probabilistic constraints: the reliability index approach (RIA) and the performance measure approach (PMA).
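The mapping between the admissible failure probability and the target reliability index can be sketched in a few lines (a minimal illustration using the standard normal distribution; the function names are ours, not part of any OptiStruct API):

```python
from statistics import NormalDist

def beta_target(failure_prob):
    """Target reliability index beta^T = -Phi^{-1}(F^T)."""
    return -NormalDist().inv_cdf(failure_prob)

def failure_prob(beta):
    """Inverse mapping: F^T = Phi(-beta^T)."""
    return NormalDist().cdf(-beta)
```

For instance, the 2.28% admissible failure probability used in Example 1 corresponds to a target reliability index of about 2.0, and 0.135% corresponds to about 3.0.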
The classical RBDO formulation is based on RIA, which is defined as:

  min  C(d, v_x)
  s.t.  β_i ≥ β_i^T,  i = 1, ..., m     (3)
        d^L ≤ d ≤ d^U,  v_x^L ≤ v_x ≤ v_x^U

where β_i and β_i^T are, respectively, the structural and the target reliability indexes for the ith performance function. By transforming the random factors X and P into uncorrelated normalized variables u_X and u_P, the reliability index is computed by solving the following constrained optimization problem:

  min  ‖(u_X, u_P)‖
  s.t.  G_i(d, X(u_X), P(u_P)) = 0     (4)

The solution (u_X*, u_P*) is called the most probable failure point (MPFP). The reliability index is given by β_i = ‖(u_X*, u_P*)‖. X* and P* are the images of u_X* and u_P* in the physical space. ‖.‖ is the Euclidean norm.
The PMA is based on the observation that minimizing a complex function under simple constraints is more efficient than minimizing a simple function under complex constraints. Therefore, the PMA is given by an inverse reliability analysis:

  min  C(d, v_x)
  s.t.  G_i^p ≥ 0,  i = 1, ..., m     (5)
        d^L ≤ d ≤ d^U,  v_x^L ≤ v_x ≤ v_x^U

where G_i^p is the performance measure corresponding to the target reliability β_i^T, evaluated by an inverse reliability analysis that searches for the failure point with the lowest performance satisfying the target reliability index:

  min  G_i(d, X(u_X), P(u_P))
  s.t.  ‖(u_X, u_P)‖ = β_i^T     (6)

The solution (u_X*, u_P*) is called the minimum performance target point (MPTP). The performance measure is given as G_i^p = G_i(d, X*, P*). Youn and Choi (2004) showed that PMA is more efficient, more stable and less dependent on probabilistic distribution types than RIA.
Various optimization methods such as SLP (sequential linear programming), SQP (sequential quadratic programming) and MFD (method of feasible directions) (Haftka et al. 1990) can be applied to solve this sphere-constrained optimization problem. Based on the KKT conditions of (6), the following iterative formulae can be derived using the advanced mean value (AMV) method (Yi et al. 2008):

  [u_Xi^{k+1}, u_Pi^{k+1}] = -β_i^T [∂g_i/∂u_X, ∂g_i/∂u_P]^k / ‖[∂g_i/∂u_X, ∂g_i/∂u_P]^k‖     (7a)

  G_i^p = g_i(d, u_Xi^{k+1}, u_Pi^{k+1})     (7b)

where ∂g_i/∂u_Xi are the partial derivatives of g_i with respect to u_Xi, ∂g_i/∂u_Pi are the partial derivatives of g_i with respect to u_Pi, and the superscript k refers to the quantity at the kth iterative step. The MPTP [u_Xi*, u_Pi*] is updated based on (7a), and the probabilistic performance measure G_i^p = g_i(d, u_Xi*, u_Pi*) is updated based on (7b). Using (7a)-(7b), the two-level RBDO problem in (5) can be transformed into the following single-loop optimization problem (Yi et al. 2008):

  min  C(d, v_x)
  s.t.  G_i(d, X_i*, P_i*) ≥ 0,  i = 1, ..., m     (8)
        d^L ≤ d ≤ d^U,  v_x^L ≤ v_x ≤ v_x^U

where X_i* = T^{-1}(u_Xi*), P_i* = T^{-1}(u_Pi*), and T(.) is the function that transforms a value from the original random parameter space (X-space) into the independent standard normal space (U-space). During the RBDO process, the transformation between X-space and U-space at a given design point must be carried out to estimate the probabilistic constraint. Youn and Choi (2004) listed the transformation functions of five different distribution types: normal, lognormal, Weibull, Gumbel and uniform.
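The AMV recursion of (7a)-(7b) can be sketched as follows (a toy implementation under the assumption that the performance function and its U-space gradient are available as callables; in the paper these evaluations come from FEA, and the function names here are illustrative):

```python
import numpy as np

def amv_update(grad_g, beta_t):
    """One AMV step: place the MPTP estimate on the beta_t-sphere,
    opposite the U-space gradient of the performance function."""
    g = np.asarray(grad_g, dtype=float)
    return -beta_t * g / np.linalg.norm(g)

def find_mptp(g_fun, grad_fun, beta_t, n, tol=1e-6, max_iter=100):
    """Iterate the (7a)-(7b)-style update until the MPTP estimate
    settles; g_fun/grad_fun stand in for FEA-based evaluations."""
    u = np.zeros(n)
    for _ in range(max_iter):
        u_new = amv_update(grad_fun(u), beta_t)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, g_fun(u)
```

For a linear performance function the recursion converges in one step, since the gradient is constant; nonlinear functions require repeated updates.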
In the single-loop RBDO formulation (8), there are no iterative search steps to find the MPTP of each performance function at each iteration k. Thus, the computational effort is significantly reduced. Yi et al. (2008) extended the well-known approximation concept (Schmit and Farshi 1974) to RBDO and developed a sequential approximate programming (SAP) method. The RBDO problem is solved by a sequence of approximate explicit problems. The reliability constraints are replaced by first-order Taylor expansions based on the approximate probabilistic performance measures and their sensitivities at the starting design of the current iteration. The approximate probabilistic performance measure is obtained from a recurrence formula drawn from the reliability optimality conditions:

  min  C^{k-1} + (∂C^{k-1}/∂d)^T (d - d^{k-1}) + (∂C^{k-1}/∂v_x)^T (v_x - v_x^{k-1})
  s.t.  G̃_i^{p,k-1} + (∂G̃_i^p/∂d)^T (d - d^{k-1}) + (∂G̃_i^p/∂v_x)^T (v_x - v_x^{k-1}) ≥ 0,  i = 1, ..., m     (9a)
        d^{Lk} ≤ d ≤ d^{Uk}

with the approximate MPTP of each performance function updated by a single AMV step per iteration:

  [u_Xi^k, u_Pi^k] = -β_i^T [∂g_i/∂u_X, ∂g_i/∂u_P]^{k-1} / ‖[∂g_i/∂u_X, ∂g_i/∂u_P]^{k-1}‖     (9b)

where C^{k-1} is the objective function value at the final design (d^{k-1}, v_x^{k-1}) of the (k-1)th approximate problem.
∂C^{k-1}/∂d and ∂C^{k-1}/∂v_x are the sensitivities of the objective function evaluated at (d^{k-1}, v_x^{k-1}). The approximate probabilistic performance measure is the performance function G_i evaluated at the approximate MPTP. d^{Lk} and d^{Uk} are move limits that restrict the design change during the kth iteration step. Note that the sub-programming problem (9a) is a deterministic optimization problem that yields the approximate optimal design. In the kth iteration, each performance function is processed by (9b) for only one iteration step to obtain the approximate MPTP (Yi et al. 2008). SAP is one of the most efficient SLA methods, as demonstrated by Aoues and Chateauneuf (2010), Cheng et al. (2006), Yi et al. (2008) and Chan et al. (2007).

Overview of approximation and constraint screening strategies in OptiStruct
Let's consider a general deterministic optimization problem with inequality constraints:

  min  C(d)
  s.t.  G_i(d) ≤ 0,  i = 1, ..., m     (10)
        d^L ≤ d ≤ d^U

where G_i denotes the ith inequality constraint. For most structural design problems, the objective function and constraints are implicit functions of the design variables and are usually calculated by finite element analysis (FEA). Since the computational cost of FEA is usually high, the number of analyses carried out during optimization has a direct impact on the efficiency of the algorithm. The approximation concept was first introduced by Schmit and Farshi in 1974 and has become the de facto standard for structural optimization. With the approximation concept approach, the original optimization problem is solved by a sequence of approximate optimization problems. The focus of approximation is placed on converting time-consuming implicit functions requiring finite element analyses into explicit functions utilizing sensitivity information. The approximate optimization problem can be expressed as:

  min  C̃^k(d)
  s.t.  G̃_i^k(d) ≤ 0,  i = 1, ..., m_r^k     (11)
        d^{Lk} ≤ d ≤ d^{Uk}

where m_r^k is the number of retained constraints at the kth iteration. Constraints are retained based on a threshold that defines the closeness to the constraint bound (OptiStruct 2017). The concept of constraint screening was also introduced in the seminal paper by Schmit and Farshi (1974). The basic logic is that at each iteration the solution only needs to focus on constraints that are potentially active, so that the costly effort of performing sensitivity analysis on constraints far from their bounds can be saved. When an ignored constraint becomes violated at the (k+1)th design point, it is retained in the (k+1)th approximate optimization problem. The objective and constraint functions in (11) are local approximations of the original functions in (10). Since these local approximations are only valid in the vicinity of the current design point, a trust region defined by move limits d^{Lk} and d^{Uk} is widely used to limit the design change in each iteration step.
With proper management of trust region, the sequence of approximate optimization problems can generally converge to optimal design of the original problem.
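The constraint screening idea described above can be sketched as follows (the 0.5 threshold and the normalization by the bound are illustrative assumptions, not OptiStruct's actual settings):

```python
def screen_constraints(values, bounds, threshold=0.5):
    """Return indices of constraints to retain: those whose normalized
    slack relative to the bound is within the screening threshold,
    which also catches violated constraints (negative slack)."""
    retained = []
    for i, (g, b) in enumerate(zip(values, bounds)):
        slack = (b - g) / max(abs(b), 1.0)  # normalized distance to bound
        if slack <= threshold:
            retained.append(i)
    return retained
```

Only the retained constraints enter the approximate problem (11), so sensitivity analysis is skipped for constraints far from their bounds.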
Typical approximation formulations used in structural optimization are the linear approximation (first-order Taylor expansion) in (12), the reciprocal approximation in (13), and the convex approximation in (14) (Haftka and Starnes 1976; Fleury and Braibant 1986):

  G̃(d) = G(d^k) + Σ_i (∂G/∂d_i)|_{d^k} (d_i - d_i^k)     (12)

  G̃(d) = G(d^k) + Σ_i (∂G/∂d_i)|_{d^k} (d_i^k/d_i)(d_i - d_i^k)     (13)

  G̃(d) = G(d^k) + Σ_i (∂G/∂d_i)|_{d^k} c_i (d_i - d_i^k),  c_i = 1 if (∂G/∂d_i)|_{d^k} ≥ 0, c_i = d_i^k/d_i otherwise     (14)

Altair OptiStruct adopts an adaptive approximation scheme introduced by Thomas (1996). In this approach one of the approximation formulations (12)-(14) is chosen for the first iterative step k = 1. At the (k+1)th design point obtained from solving (11), the approximate function values according to all three formulations (12)-(14) are calculated. These approximate values are compared to the true values obtained from FEA at the (k+1)th design point, and the formulation with the least error for a given constraint is used for the (k+1)th approximate problem. This approach has been shown to improve the approximation quality of constraints during the iterative process, resulting in a reduction in overall iterations (Thomas 1996; Zhou et al. 2004). In the OptiStruct implementation, direct or adjoint analytical sensitivity analysis is selected automatically for best computational efficiency (Zhou et al. 2004). An overview of sensitivity analysis can be found in Haftka et al. (1990). The RBDO solution discussed in this paper is implemented in OptiStruct within the same approximation framework.
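The three approximations and the adaptive selection can be sketched as follows (a simplified illustration: the per-variable convex form follows the CONLIN idea of Fleury and Braibant, and the selection rule mimics the Thomas (1996) scheme by comparing each formulation against the true response at the new point; variables are assumed positive for the reciprocal terms):

```python
import numpy as np

def linear_approx(x, x0, f0, g):
    """First-order Taylor expansion, eq. (12)-style."""
    return f0 + g @ (x - x0)

def reciprocal_approx(x, x0, f0, g):
    """Linear expansion in the reciprocal variables 1/x_i, eq. (13)-style."""
    return f0 + np.sum(g * x0 * (1.0 - x0 / x))

def convex_approx(x, x0, f0, g):
    """Hybrid: linear term where the slope is non-negative,
    reciprocal term where it is negative, eq. (14)-style."""
    lin = g * (x - x0)
    rec = g * x0 * (1.0 - x0 / x)
    return f0 + np.sum(np.where(g >= 0, lin, rec))

def pick_best(f_true, x_new, x0, f0, g):
    """Adaptive scheme: keep the formulation whose prediction at the
    new point is closest to the true (FEA) response value."""
    forms = {"linear": linear_approx, "reciprocal": reciprocal_approx,
             "convex": convex_approx}
    errors = {name: abs(fn(x_new, x0, f0, g) - f_true)
              for name, fn in forms.items()}
    return min(errors, key=errors.get)
```

For a function such as f(x) = x1 + 1/x2, the convex form reproduces the true value exactly at a new point, so the adaptive scheme would select it for the next approximate problem.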

Two-phase approach based on sequential approximation
Let's consider the PMA-based RBDO formulation in (5). Following the approximation and constraint screening strategies outlined in Section 3, the approximate RBDO problem can be formulated as:

  min  C̃^k(d, v_x)
  s.t.  G̃_j^p(d, v_x) ≥ 0,  j = 1, ..., m_r^k     (15)
        d^{Lk} ≤ d ≤ d^{Uk},  v_x^{Lk} ≤ v_x ≤ v_x^{Uk}

The above sub-optimization problem can be solved using the iterative scheme in (8):

  [u_Xj^{t+1}, u_Pj^{t+1}] = -β_j^T [∂G̃_j/∂u_X, ∂G̃_j/∂u_P]^t / ‖[∂G̃_j/∂u_X, ∂G̃_j/∂u_P]^t‖     (16)

where the superscript t refers to the quantity at the tth iterative step within the kth sub-optimization problem. In Altair OptiStruct, an SQP algorithm (Boggs and Tolle 1995; Haftka et al. 1990) is used to solve the corresponding deterministic optimization problem. The kth sub-optimization problem is solved until convergence, and the approximate optimal design (d^{k*}, v_x^{k*}) together with the approximate MPTP (X_j^{k*}, P_j^{k*}) of each retained reliability constraint is obtained. Note that (d^{k*}, v_x^{k*}) and (X_j^{k*}, P_j^{k*}) belong to the kth sub-optimization problem (15) rather than to the original problem (5). It is crucial to construct the approximate performance function G̃_j^p well. Yi et al. (2008) proposed constructing the approximate performance function based on the response value and sensitivity at the approximate MPTP obtained by the iterative formulation (8). In structural FEA, one analysis yields all relevant response information involved in the optimization formulation when uncertainty is not considered. Hence the approximation concept approach has proven highly efficient when analytical sensitivities of responses can be obtained efficiently. Unfortunately, for a reliability constraint the analysis must be carried out at its corresponding MPTP, which is, in general, different from the nominal design point. Therefore, the number of FEA at each iteration grows in proportion to the number of reliability constraints. This represents a major performance limitation of SAP (Yi et al. 2008) and other single-loop RBDO approaches (Chen et al. 1997; Liang et al. 2008).
A two-phase strategy is introduced in this paper to enhance the computational efficiency of RBDO. In the first phase, the approximate expansions of all reliability constraints are formulated with sensitivities at the kth start design (with the random factors at their mean values).
Using linear approximation as an example, the first-phase approximation of G̃_j^p is formulated as

  G̃_j^p = G_j^{pk} + (∂G_j/∂d)^T (d - d^k) + (∂G_j/∂X)^T (X - v_x^k) + (∂G_j/∂P)^T (P - P_n)     (17)

where G_j^{pk} is the performance function G_j evaluated at the kth start design (d^k, v_x^k, P_n); P_n is the mean value of the random parameters; and the sensitivities of G_j are evaluated at (d^k, v_x^k, P_n). In OptiStruct, one FE analysis at the nominal design can, in general, generate all related responses. Efficient sensitivity analysis is carried out utilizing the already decomposed stiffness matrix (Haftka et al. 1990). Thus, the approximate expansions of all responses at (d^k, v_x^k) can be obtained with only one analysis, no matter how many reliability constraints are involved. The first phase is therefore essentially as efficient as the corresponding deterministic optimization. However, since the approximate expansion of a reliability constraint is based on (d^k, v_x^k), and (X_j^k, P_j^k) is never evaluated on the original problem, the reliability assessment in the first phase is approximate in nature. To achieve a final solution with accurate reliability assessment, the iterative process continues with a second phase once the first phase has converged. In the second phase, each retained reliability constraint and its related sensitivities are evaluated at its corresponding approximate MPTP at the start design of the kth approximate problem. Using linear approximation as an example, the second-phase approximation of G̃_j^p is formulated as

  G̃_j^p = G_j^{pk*} + (∂G_j/∂d)^T (d - d^k) + (∂G_j/∂X)^T (X - X_j^k) + (∂G_j/∂P)^T (P - P_j^k)     (18)

where G_j^{pk*} is the performance function G_j evaluated at (d^k, X_j^k, P_j^k), and the sensitivities of G_j are evaluated at (d^k, X_j^k, P_j^k). In essence, the second phase outlined above differs from the SAP approach of Yi et al. (2008) in that (1) constraint screening and adaptive approximation are incorporated; and (2) instead of calculating the MPTPs only once at the beginning of the kth approximation problem, they are updated repeatedly based on (18) during the solution process of the current approximate problem.
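The difference between the two expansion points can be illustrated on a toy performance function (a sketch assuming independent normal random variables X = v + sigma*u; g and grad_x stand in for FEA evaluations and are not the paper's actual functions). For a linear performance function the first-phase estimate coincides with the value at the exact MPTP, which is why phase one can approach the RBDO optimum cheaply; for nonlinear functions the two differ and the second phase is needed:

```python
import numpy as np

def approx_performance_phase1(g, grad_x, sigma, v, beta_t):
    """Phase 1: probabilistic performance measure built from a single
    nominal-point evaluation, g(v) shifted by beta_t times the norm of
    the sigma-scaled (U-space) gradient."""
    grad_u = sigma * grad_x(v)          # chain rule for X = v + sigma*u
    return g(v) - beta_t * np.linalg.norm(grad_u)

def mptp_phase2(g, grad_x, sigma, v, beta_t):
    """Phase 2: one AMV step gives the approximate MPTP in X-space,
    where the performance function is then re-evaluated directly."""
    grad_u = sigma * grad_x(v)
    u_star = -beta_t * grad_u / np.linalg.norm(grad_u)
    x_star = v + sigma * u_star
    return g(x_star)
```

With g(x) = x1 + x2 - 5 the two values agree exactly; a curved g would make the phase-1 value only an approximation, triggering further phase-2 iterations.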
The condition for SLA methods to achieve the optimum RBDO solution is concurrent convergence of both optimization and reliability assessment. In other words, when (d^{k*}, v_x^{k*}) converges to the optimum design (d*, v_x*) of the original problem in (5), (X_j^{k*}, P_j^{k*}) should converge to the MPTP (X_j*, P_j*) at the same time. In the literature discussed earlier, SLA methods have been shown to be effective for many engineering problems, although concurrent convergence can fail for highly nonlinear problems. Lim and Lee (2016) showed that SLA solution robustness can be improved by introducing additional convergence monitoring and a back-up process. In this paper the implementation is limited to the straightforward SLA approach, with a single focus on enhancing computational efficiency. Note that, by utilizing constraint screening, reliability analyses in the second phase are not carried out for potentially inactive constraints. This is important for engineering applications, where typically many constraints remain inactive. The flowchart of the proposed two-phase approach is shown in Fig. 1. In summary, the goal of the proposed approach is to increase overall computational efficiency by: (1) quickly zooming in on a near-optimal RBDO solution at highly reduced computational cost based on approximate reliability assessment; and (2) then refining the solution with accurate, albeit much costlier, full SLA reliability analyses. The two-phase approach is implemented in OptiStruct 2017.2. The effectiveness of the proposed approach is demonstrated on a set of standard RBDO benchmark problems from the literature in the next section.

Numerical examples
In OptiStruct the convergence criteria are defined by the relative change in the objective function between consecutive iterations, supplemented by constraint status verification. The default convergence tolerance of 0.5% is used for all examples. As can be seen in the literature (Valdebenito and Schueller 2010), it is rather difficult to compare results from different research papers. Thus many authors chose to compare results of different RBDO methods based on their own implementations (Du and Chen 2004; Yi et al. 2008; Aoues and Chateauneuf 2010). This paper compares the two-phase approach with SAP, which is one of the most efficient SLA methods as demonstrated in the literature (Aoues and Chateauneuf 2010; Cheng et al. 2006; Yi et al. 2008; Chan et al. 2007). In Altair OptiStruct, analytical sensitivity analysis does not require additional finite element analyses; therefore, the numbers of function evaluations listed in this paper do not include those for sensitivity calculation. The results of SAP are from the paper by Yi et al. (2008), with additional information on the number of function evaluations provided by Yi, who is also a co-author of this paper.
In addition, two different iterative processes are implemented in the same code, OptiStruct 2017.2, for comparison: (1) starting directly with the second phase of the two-phase approach (denoted as Phase_2_only); (2) a modified version of the two-phase approach where the first phase solves the equivalent deterministic optimization problem by ignoring reliability requirements (denoted as DO_phase_2). In essence the Phase_2_only process represents a direct implementation of SAP in OptiStruct. The main differences to the original SAP implementation by Yi et al. (2008) are: (1) constraint screening is applied; (2) the adaptive approximation outlined in Section 3 is adopted. The DO_phase_2 process provides a reference where deterministic optimization is carried out first, before starting RBDO with the SAP approach.
Example 1. Problem with mean of random variables as design variables (Yi et al. 2008)

Consider the following mathematical model with two random design variables, where v_xj are the mean values of the random variables X, and β_i^t = 2.0 (corresponding to a 2.28% failure probability). X_1 and X_2 follow five different types of random distributions with standard deviation 0.6: normal, lognormal, Weibull, Gumbel, and uniform. The initial design is v_x^0 = (5.0, 5.0). The results of this example are summarized in Table 1. For the two-phase approach, one function evaluation per iteration in the first phase generates all the needed response values (the performance functions are all evaluated at the same point), so the total number of function evaluations is much smaller than in the second phase. The values in the middle row of each column, such as 7.268 (3.609, 3.659), are the final objective function value and the optimal design. The values in the bottom row of each column, such as 1.950 (2.561%), are the reliability index and failure probability of the most critical performance function at the optimal design, evaluated by direct MCS (Monte Carlo simulation) with 10^6 sampling points.
In this example, the two-phase approach implemented in Altair OptiStruct is much more efficient than SAP. The distribution types of the random variables have very little influence on the computational efficiency of the two-phase approach, while SAP performs quite differently for the different distribution types. The two-phase approach consumes the same computational effort for all five distribution types except Gumbel, for which the number of function evaluations is slightly lower than for the others. For the Gumbel distribution, SAP consumes more than 1.6 times the computational effort of the normal-distribution case.
In this example, Phase_2_only consumes more computational effort than the two-phase approach (by 15% on average, and up to 63%). Additionally, the distribution type has a large influence on the performance of Phase_2_only: for the Gumbel distribution, Phase_2_only consumes 35% more effort than for the others. DO_Phase_2 also consumes more computational effort than the two-phase approach (by 21% on average, and up to 37%).
It should be noted that, for the uniform distribution, the SAP result of Yi et al. (2008), i.e. 6.869 (3.521, 3.348), violates the constraints when assessed by direct MCS: the validated reliability index of 1.648 is clearly smaller than the target of 2.0. The two-phase approach achieves reasonable reliability accuracy for all five distributions. The SAP implementation in OptiStruct, Phase_2_only, was also able to achieve a feasible design, owing to the adaptive approximation scheme.
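A direct MCS validation of this kind can be sketched as follows (a simplified illustration for normal variables only; g, mean and std are placeholders rather than the example's actual data, and the paper's validation also covers lognormal, Weibull, Gumbel and uniform variants):

```python
import numpy as np

def mcs_failure_prob(g, mean, std, n=10**6, seed=0):
    """Estimate Pr[g(X) < 0] by direct Monte Carlo sampling of
    independent normal random variables; g must accept an (n, d)
    array and return one performance value per row."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, std, size=(n, len(mean)))
    return float(np.mean(g(x) < 0))
```

The estimated failure probability can then be converted back to a validated reliability index via the inverse normal CDF for comparison against the target.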
Example 2. Problem with random parameters independent of design variables (Yi et al. 2008)

The considered cantilever beam is shown in Fig. 2. The deterministic design variables are d = [w, t]^T. The random parameters P = [X, Y, y, E]^T consist of the loads X and Y, the yield strength y and Young's modulus E, whose stochastic properties are given in Fig. 2. The optimization problem is defined with β_i^t = 3.012 (corresponding to a 0.13% failure probability). The optimization results for three different initial designs are listed in Table 2. This example also shows that the two-phase approach is more efficient than SAP. It can also be observed that the initial design has little influence on the computational efficiency of the proposed approach.
In this example, Phase_2_only consumes more computational effort than the two-phase approach (by 40% on average, and up to 49%). DO_Phase_2 also consumes more computational effort than the two-phase approach (by 50% on average, and up to 61%).
In this example, all four approaches achieve reasonable solution accuracy in the reliability analysis.
Example 3. Optimal column design with both deterministic and probabilistic constraints (Yi et al. 2008)

This example considers a short column with a rectangular cross section of dimensions b and h, subjected to biaxial bending moments M_1 and M_2 and an axial force F. The deterministic design variables are d = [b, h]^T and the random parameters are P = [M_1, M_2, F, Y]^T, where Y denotes the yield strength of the material. All random parameters follow lognormal distributions; their means and coefficients of variation are listed in Table 3. Assuming an elastic-perfectly plastic material, the reliability of the column is defined by the limit state function

  G = 1 - 4M_1/(b h² Y) - 4M_2/(b² h Y) - (F/(b h Y))²

The column is to be designed for minimum cross-sectional area A = bh, subject to 0.5 ≤ b/h ≤ 2, b, h ≥ 0, and β^t = 3.0 (corresponding to a 0.135% failure probability).
The initial designs (0.3, 0.6) and (0.5, 0.5) are used for testing. The optimization results are listed in Table 4. It can be seen from the results that the two-phase approach performs more efficiently than SAP, and that the initial design has little influence on the computational efficiency of the proposed approach. In this example, Phase_2_only consumes more computational effort than the two-phase approach (by 60% on average, and up to 114%). DO_Phase_2 also consumes more computational effort than the two-phase approach (by 115% on average, and up to 200%).
It should be noted that, for all four approaches, the validated reliability index of the optimal design is smaller than the target. This accuracy issue is introduced by the high nonlinearity of the transformation between X- and U-spaces in the probability constraint evaluation, and by the approximation error of the first-order reliability method. This is a known weakness of single-loop approaches based on FORM (Lim and Lee 2016).
Example 4. Problem with a highly nonlinear performance function (Chen et al. 2013a) This example has two random design variables and one highly nonlinear probabilistic constraint. The optimization problem is stated below.
where μ_Xj are the mean values of the random variables X, and β_t = 2.0 (corresponding to a 2.28% failure probability). X1 and X2 are statistically independent and normally distributed with standard deviation 0.1. The initial design is μ_X^0 = (2.97, 3.40). The optimization results are listed in Table 5. Since no SAP results for this example have been published, SAP is not referenced here; the SORA results of Chen et al. (2013a) are compared instead. The results show that the two-phase approach is much more efficient than SORA.
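The reliability targets quoted throughout the examples (β_t = 2.0 for a 2.28% failure probability, β_t = 3.0 for 0.135%, and so on) follow the standard normal relation P_f = Φ(−β_t). A one-line check using only the standard library:

```python
from math import erf, sqrt

def beta_to_pf(beta):
    """Failure probability implied by a FORM reliability index:
    Pf = Phi(-beta), where Phi is the standard normal CDF
    (expressed here through the error function)."""
    return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))
```

For instance, beta_to_pf(2.0) is approximately 0.0228 and beta_to_pf(3.0) approximately 0.00135, matching the percentages stated in the text.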
In this example, Phase_2_only consumes 15% less computational effort than the two-phase approach, while DO_Phase_2 consumes 30% more. Here the starting design point is very close to the deterministic optimum. Since the first phase relies on an approximation formed at the nominal design, which can be relatively distant from the MPTP points because of the highly nonlinear performance function, the final solution of the first phase is not close to the RBDO optimum. Consequently, the second phase still takes a considerable number of iteration steps to converge, and the two-phase approach performs slightly less efficiently than Phase_2_only.
For this example, SLSV (single-loop single vector) (Chen et al. 2013a) reached a design with reliability index −0.803 because of the highly nonlinear performance function. Phase_2_only, DO_Phase_2, and SORA achieve similar solution accuracy in the reliability analysis, slightly below the reliability target. The two-phase approach achieves a design with higher solution accuracy than SORA; this is, however, only an effect of the different starting points, since the reliability analyses in the two-phase approach, Phase_2_only, and DO_Phase_2 follow the same formulation and process. It should be noted that, if the initial point is changed to (3.50, 3.50), none of the three methods (the two-phase approach, Phase_2_only, and DO_Phase_2) can find a feasible solution. This again shows the weakness of single-loop approaches based on FORM. SORA implemented in Altair HyperStudy (2017) reaches the optimum 1.304 at (2.816, 3.277) with 157 function evaluations.
Example 5. Optimal design of a 10-bar truss (Yi et al. 2008) A 10-bar plane truss subject to two vertical random forces is shown in Fig. 3. The 10 bar cross-sectional areas are deterministic design variables, with initial value 10 in^2 and lower and upper bounds of 0.01 in^2 and 25 in^2, respectively. The magnitudes of the two forces are independent random parameters, both following a lognormal distribution (1.0E5, 5.0E3) lb. All members are made of the same material with Young's modulus 1.0E7 psi. The material yield stress is also a random parameter, following a normal distribution (2.5E4, 2.5E3) psi. The first limit state function requires the maximum nodal vertical displacement to be less than 4.5 in. Additional limit state functions require the maximum tensile and compressive stress in each element not to exceed the material yield stress. The acceptable reliability index is 3.0 (corresponding to a 0.135% failure probability). The objective is to minimize the total volume. The iteration history of the bar cross-sectional areas is shown in Fig. 4. The two-phase approach is significantly more efficient than SAP for the 10-bar truss example. Note that the much larger number of FE analyses required by SAP is partly due to the absence of constraint screening in the implementation by Yi et al. (2008).
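The paper does not spell out its screening rule, but a typical constraint screening strategy retains only the violated and near-active constraints for the approximate subproblem, which matters here because each truss element contributes its own stress limit states. A minimal sketch under that assumption (the function name and threshold are hypothetical, not the OptiStruct criterion):

```python
def screen_constraints(values, bound=0.0, threshold=-0.5):
    """Return the indices of constraints to retain in the subproblem.

    `values` holds normalized constraint values in the form g <= bound.
    Constraints far inside the feasible region (g - bound < threshold)
    are temporarily dropped and re-checked at later design cycles.
    Illustrative rule only; the actual criterion is adaptive.
    """
    return [i for i, g in enumerate(values) if g - bound >= threshold]
```

With 10 stress constraints plus a displacement constraint, screening like this keeps the number of retained limit states, and hence the number of MPTP searches per cycle, small.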
To further evaluate the performance, a different initial design (0.1, 0.1, …, 0.1) is tested. This design is far from the optimal design and has large constraint violations. The two-phase approach converges to 27,606 in^3 (11.56, 7.56, 14.97, 0.01, 0.01, 0.01, 10.92, 8.27, 0.01, 10.89) after 31 iterations with 79 FE analyses in total. Phase_2_only converges to 27,913 in^3 (11.76, 7.55, 15.19, 0.55, 0.01, 0.55, 10.70, 8.53, 0.69, 9.72) after 32 iterations with 212 FE analyses in total. DO_Phase_2 converges to 27,618 in^3 (11.56, 7.77, 14.97, 0.01, 0.01, 0.01, 10.68, 8.27, 0.01, 11.00) after 37 iterations with 86 FE analyses in total. The two-phase approach is much more efficient than Phase_2_only and slightly better than DO_Phase_2. The results show that a different initial design has little influence on the performance of the two-phase approach, while it can significantly affect the performance of Phase_2_only. If the initial design is far from the optimal design and the constraints are largely violated, Phase_2_only consumes much more computational effort than the two-phase approach.
Example 6. Optimal design of a square plate (Yi et al. 2008) A square plate of 10 × 12 in, shown in Fig. 5, is simply supported at four points along its two diagonals. A distributed load of 150 lb/in is applied along the plate edges, and another distributed load of 200 lb/in is applied along two symmetry lines. Because of symmetry, a quarter of the plate is used to construct the finite element model, shown in Fig. 6, which contains 49 four-node shell elements. All members are made of the same material with density 7.71E-4 lb/in^3. The material Young's modulus is a random parameter with mean 2.8E7 psi. The element thicknesses are linked into seven random variables as shown in Fig. 6. All random variables follow normal distributions with variation factors of 0.01. The mean values of the seven random thickness variables are selected as design variables, with an initial value of 0.71 in and lower and upper bounds of 0.1 in and 2.0 in, respectively. The limit state function requires the vertical displacement of the midpoint to be less than 0.006 in. The acceptable reliability index is 3.1 (corresponding to a 0.1% failure probability). The total mean weight is to be minimized.
A different initial design (0.1, 0.1, …, 0.1) is tested to further evaluate the performance. This design is far from the optimal design and has large constraint violations. The two-phase approach converges to 1.071E-2 lb (1.221, 0.996, 0.812, 0.560, 0.152, 0.713, 0.100) after 21 iterations with 25 FE analyses. Phase_2_only converges to 1.071E-2 lb (1.112, 0.940, 0.850, 0.551, 0.149, 0.728, 0.100) after 20 iterations with 41 FE analyses. DO_Phase_2 converges to 1.060E-2 lb (1.061, 0.823, 0.601, 0.370, 0.152, 0.970, 0.100) after 22 iterations with 30 FE analyses. The two-phase approach is much more efficient than Phase_2_only and slightly more efficient than DO_Phase_2. Since this example includes only one reliability constraint, the performance differences between the methods are not as significant as in Example 5. Nevertheless, it again shows that different starting designs have little influence on the performance of the two-phase approach but a large effect on the performance of Phase_2_only. If the initial design is far from the optimal design and the constraints are largely violated, Phase_2_only consumes much more computational effort than the two-phase approach.
Example 7. Cantilever L-beam shape optimization This example includes both size and shape design variables and is derived from Altair OptiStruct tutorial OS-5010 (OptiStruct 2017). Both deterministic optimization and RBDO are carried out to show that the proposed two-phase approach based on sequential approximation can solve a variety of optimization problems, and that the computational effort for solving the RBDO problem is reasonable compared to deterministic optimization. Figure 8 shows the schematic of this example. The length of the cantilever is 100 mm, and the width and height of the L-beam section are both 20 mm. A vertical force of 1000 N is applied at the free end, while all DOFs at the other end of the beam are fixed. The vertical deflection at point N should be limited to 1.5 mm while minimizing the amount of material required. Aluminum is used, with Young's modulus 7.0E4 MPa and Poisson's ratio 0.3. The finite element model, shown in Fig. 9, contains 1000 four-node shell elements. The example includes four shape variables, shown in Fig. 10, each with a design range allowing a maximum node movement of ±10 mm. Two size variables control the thicknesses of the flange and web of the L-beam, respectively, with initial value 3.0 mm, lower bound 1.0 mm, and upper bound 5.0 mm.
For the above deterministic optimization problem, OptiStruct converges to 5276 mm^3 (−10.000, −0.001, −0.004, −10.000, 1.000, 1.092) after 9 iterations with 10 FE analyses in total. The shape of the optimal design is shown in Fig. 11, and Fig. 12 shows the iteration history of the six design variables. Random factors are now introduced into the model: all shape and size variables are assumed to be random. The four shape variables follow uniform distributions with standard deviation 1.0, and the two size variables follow normal distributions with standard deviation 0.1. The failure probability of the constraint should be less than 1.0%. The two-phase approach converges to 5832 mm^3 (−10.000, 10.000, 10.000, −10.000, 1.000, 1.611) after 11 iterations with 16 FE analyses in total. The shape of the optimal design, shown in Fig. 13, differs from the deterministic optimization result. The iteration history of the six design variables is shown in Fig. 14. Compared to the corresponding deterministic optimization, the two-phase approach consumes 6 additional FE analyses to solve the RBDO problem, a small increase considering that reliability assessment is carried out.
With the same model, the failure probability requirement is then tightened to 0.023% to examine how the proposed method performs on problems with a higher reliability requirement. The two-phase approach converges to 6138 mm^3 (−10.000, 10.000, 10.000, −10.000, 1.000, 1.713) after 11 iterations with 16 FE analyses in total. Phase_2_only and DO_Phase_2 converge to the same design with 13 iterations and 27 FE analyses, and 17 iterations and 26 FE analyses, respectively.
Fig. 7 Iteration history of the thicknesses of the plate elements
Fig. 8 Cantilever L-beam
Fig. 9 The FEM of the cantilever L-beam in HyperMesh
Example 8. Automobile instrument panel assembly optimization NVH (noise, vibration and harshness) performance is one of the most important design criteria for cars. In this example, RBDO is applied to an automobile instrument panel with frequency constraints reflecting NVH requirements. The finite element model (FEM) is created in Altair HyperMesh (2017). There are 49,709 elements in total, including 2994 CHEXA, 12 CTETRA, 150 CPENTA, 43,947 CQUAD4, 2600 CTRIA3, and 6 CBUSH elements. The total number of degrees of freedom is 307,101. The model has two materials with Young's moduli of 2.8E3 MPa and 2.1E5 MPa, respectively, and Poisson's ratio 0.3. The initial volume of the instrument panel is 2.700E7 mm^3. Figure 15 shows the FEM of this example. The objective is to minimize the total volume of the instrument panel subject to constraints on the first five frequencies. The limit state functions are defined as below, where f_i is the i-th frequency and the target reliability index is β_t = 2.33 (corresponding to a 1.0% failure probability).
The thicknesses of thirty shell parts are to be designed, and a normal distribution is applied to all of the design variables.
Fig. 10 The four shape variables in the cantilever L-beam
Fig. 11 The shape of the L-beam achieved by deterministic optimization
Fig. 12 Deterministic optimization iteration history of the six design variables in example 7
The details of the random design variables are listed in the accompanying table. Phase_2_only and DO_Phase_2 consume much more computational effort than the two-phase approach.

Conclusion
In this paper, a two-phase approach based on sequential approximation is proposed for reliability-based structural optimization within the general framework of the single loop approach (SLA). During the first phase, reliability analysis is based on approximations of the performance functions formed at the nominal design point. This phase allows the iterative process to converge to a near-optimal solution, typically within a dozen FE analyses. The process then continues with the second phase, where approximations of the reliability constraints are carried out at their respective MPTPs to improve the precision of the reliability analysis. A constraint screening strategy is adopted in the RBDO process to enhance computational efficiency and reduce memory usage.
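The control flow summarized above can be sketched schematically. This is an illustrative skeleton only: all callables are placeholders standing in for FE evaluation, the approximate subproblem solver, the MPTP search, and the convergence test, not the OptiStruct implementation.

```python
def two_phase_rbdo(x0, evaluate, approx_opt, find_mptp, converged):
    """Schematic two-phase RBDO loop (placeholders, not real APIs).

    Phase 1: performance functions are approximated at the nominal
    design point, so each cycle needs FE analyses only there.
    Phase 2: each retained constraint is re-approximated at its MPTP
    to restore the precision of the reliability analysis.
    """
    x = x0
    # --- Phase 1: approximation at the nominal design point ---
    while True:
        resp = evaluate(x)            # FE analyses at the nominal design
        x_new = approx_opt(x, resp)   # solve the approximate subproblem
        if converged(x, x_new):
            break
        x = x_new
    # --- Phase 2: approximation at the MPTP of each constraint ---
    while True:
        mptps = find_mptp(x)                 # one MPTP per retained constraint
        resp = [evaluate(p) for p in mptps]  # FE analyses at the MPTPs
        x_new = approx_opt(x, resp)
        if converged(x, x_new):
            return x_new
        x = x_new
```

The key efficiency property is visible in the structure: phase 1 costs one nominal-point evaluation per cycle, while phase 2 costs one evaluation per retained constraint per cycle, which is why starting from a near-optimal phase 1 solution shortens the expensive second loop.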
The two-phase approach is implemented in Altair OptiStruct. Benchmark examples, most involving analytical performance functions and two involving simple FE models, are studied. Results and performance are compared with the studies by Yi et al. (2008), except for Example 4, which is compared with results from Chen et al. (2013a). The proposed approach is shown to be superior in performance while achieving comparable solution accuracy. It is further observed that different initial designs and different distribution types have little influence on the computational efficiency of the proposed approach. In addition, an example from the Altair OptiStruct tutorials (OptiStruct 2017) and a practical automobile instrument panel example are studied. To better understand the effect of starting with a first phase based on approximate reliability analysis, the two-phase approach is also compared with two reference processes: (1) Phase_2_only, which bypasses the first phase; and (2) DO_Phase_2, which replaces the first phase with pure deterministic optimization. The numerical examples show that, in general, the two-phase approach is more efficient than both reference processes. In particular, when the initial design is infeasible and far from the optimum, the two-phase approach is more efficient than running the second phase only. This advantage is also observed across different random distribution types. The comparisons also show that, although the constraints are approximated at the nominal design point during the first phase, including approximate reliability analysis in this phase is advantageous compared to a purely deterministic first optimization phase.
It should be noted that the computational cost can increase quickly during the second phase for problems involving a large number of RBDO performance functions, since at each iteration FE analyses need to be carried out at the MPTP points of all retained performance functions. To further reduce run time, the solution process is implemented in an MPI parallel setting so that the multiple FE analyses can be carried out in parallel.
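Since the per-constraint FE analyses within one iteration are independent of each other, they parallelize trivially. The paper's implementation uses MPI; the sketch below stands in with a standard-library thread pool, which is adequate when each call merely launches and waits on an external solver process. The function names are placeholders, not an OptiStruct API.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_at_mptps(fe_analysis, mptp_points, max_workers=4):
    """Run the per-constraint FE analyses of one iteration concurrently.

    `fe_analysis` is a placeholder for a single FE solve at one MPTP
    (e.g., a wrapper that launches an external solver and parses its
    output); `mptp_points` holds one MPTP per retained constraint.
    Results are returned in the same order as the inputs.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fe_analysis, mptp_points))
```

With n retained constraints and w workers, the wall-clock cost of an iteration drops from roughly n solves to about ceil(n / w) solves, which is the effect the MPI setting exploits.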
It should be noted that the proposed two-phase approach is derived from SLA methods and therefore inherits their potential shortcomings: (1) concurrent convergence of the optimization and the reliability assessment is not guaranteed; and (2) FORM-based reliability analysis has limited solution accuracy. Insufficient solution accuracy is observed in Examples 3 and 4, and for Example 4 an infeasible solution is obtained when the initial design is further away from the optimum. Furthermore, since the first phase relies on an approximation formed at the nominal design, the final solution of the first phase may not be close to the optimal RBDO solution when the MPTP points are relatively distant from the nominal design point. This can happen when the performance function is highly nonlinear and/or the uncertainties involved have large variances. In such cases the proposed method could converge slowly, possibly to a less favorable local optimum. Therefore, more studies are needed for a deeper understanding of the limitations of the proposed approach. Some recent papers have also proposed measures to enhance the robustness of SLA solutions (e.g., Lim and Lee 2016).
Effects of such enhancement measures should be studied in future work.