1 Introduction

As simulation techniques have developed rapidly and found extensive application, surrogate models are widely employed in the design and optimization of engineering products whose simulations are computationally expensive (Forrester and Keane 2009). A typical design optimization problem can be formulated as follows,

$$\begin{gathered} {\text{min}}\quad f({\mathbf{x}}) \hfill \\ {\text{s.t.}}:\quad g_{j} ({\mathbf{x}}) \le 0\;,\quad \forall j = 1,2,...,N_{{\text{C}}} \hfill \\ \quad \quad \;\;Lb_{i} \le x_{i} \le Ub_{i} \;,\quad \forall i = 1,2,...,d, \hfill \\ \end{gathered}$$
(1)

where \({\mathbf{x}} = (x_{1} ,x_{2} ,...,x_{d} )\) is the vector of design variables and \(d\) is its dimensionality; \(f({\mathbf{x}})\) and \(g_{j} ({\mathbf{x}})\) are the objective and constraint functions of the problem, which are usually evaluated by analysis models; \(N_{{\text{C}}}\) is the number of constraints; and \(Lb_{i}\) and \(Ub_{i}\) are the lower and upper bounds of the \(i\)th design variable. Generally, multiple analysis models with different accuracy and efficiency can be employed for an engineering optimization problem, and these models can be differentiated by fidelity (Jin 2011). A high-fidelity (HF) analysis model contains physics or details that do not exist or are not accounted for in the low-fidelity (LF) model (Gano et al. 2005). In this context, HF analysis models usually have acceptable accuracy with respect to the real behavior of a system for the intended application, for example, RANS on a highly refined grid or DNS (Giselle Fernández-Godino et al. 2019). LF analysis models are typically cheap to evaluate but less accurate; examples include dimensionality reduction (Robinson et al. 2008), coarser discretization (Biehler et al. 2015), and partially converged simulations (Jonsson et al. 2015). The fidelity level of an analysis model is problem-dependent and can be decided based on its cost and accuracy relative to the other fidelities available, which in turn depends on the accuracy being sought (Giselle Fernández-Godino et al. 2019). Surrogate models built from LF analysis data are referred to as LF surrogates; similarly, surrogate models constructed from HF analysis data are called HF surrogates.

To relieve the conflict between accuracy and computational cost, variable-fidelity (VF) surrogate models have been developed, holding the promise of achieving high accuracy at low computational expense (Park et al. 2017). VF surrogates utilize plentiful LF data to grasp the variation tendency of the analysis model's responses and a handful of HF data to increase accuracy in locating the extrema. Owing to this advantage, VF surrogate models have gained much attention in computationally expensive black-box global optimization (Han et al. 2020; Wang et al. 2020). In variable-fidelity surrogate model-based optimization (VFSBO), a VF surrogate model is constructed first. After that, it is critical to determine a promising point through an infill sampling criterion [also known as the acquisition function in Bayesian optimization (Shahriari et al. 2015)]. Unlike criteria for single-fidelity surrogate models, the infill sampling criteria for VF surrogates must not only determine the location of a potential sample point but also consider the computational budget of each fidelity. The lower confidence bound (LCB) criterion (Cox and John 1992; Sasena 2002) is widely used in single-fidelity global optimization for its capability of balancing global exploration and local exploitation. To extend its usability, variants of the LCB criterion have been developed for VF-based optimization (Jiang et al. 2019; Xiong et al. 2008). These criteria are mainly aimed at unconstrained optimization, yet most engineering design and optimization problems contain various constraints. To solve such computationally expensive constrained optimization problems (COPs), the probability of feasibility (PoF) function is a useful way to transform the original problem into an unconstrained form (Chaudhuri and Haftka 2014; Parr et al. 2012). Its main drawback is that it might degrade the quality of the optimal solution, since the feasible optimum usually lies on the constraint boundary. To fill this gap, a novel variable-fidelity constrained lower confidence bound (VF-CLCB) criterion is presented in this research to deal with computationally expensive COPs involving two levels of fidelity.

When optimizing computationally expensive engineering problems, a parallel computing structure is advantageous since it can significantly shorten the design cycle (Haftka et al. 2016). A parallel surrogate model-based optimization (SBO) method needs a parallel infill sampling criterion to select several sample points at each iteration and then simulate them simultaneously on different computers or processors. Existing VF infill sampling criteria cannot be used directly in parallel computing, and dedicated parallel VF infill sampling criteria are scarce. To this end, a parallel variable-fidelity constrained lower confidence bound (PVF-CLCB) criterion is proposed to parallelize the developed VF-CLCB method.

New contributions of this work are summarized as follows.

  1. (1)

    Currently popular VFSBO methods mostly employ the PoF function to handle the constraints in COPs, which largely neglects improving the prediction accuracy of the constraints. To address this issue, this paper presents a novel VF infill sampling criterion to improve the efficiency and effectiveness of optimizing computationally expensive COPs with two levels of fidelity. The proposed VF-CLCB criterion defines a variable-fidelity constrained lower confidence bound (CLCB) function, which searches for a feasible solution when all existing samples are infeasible and improves the accuracy of the constraint surrogates after a feasible point is found. Besides, a variable-fidelity penalized lower confidence bound (PLCB) function is derived based on the VF surrogate model of the objective to search for optimal feasible points. Additionally, to enhance the efficiency of the optimization, a selection algorithm is developed to adaptively determine the final update point.

  2. (2)

    To fill the gap left by the scarcity of parallel VFSBO methods, the proposed VF-CLCB criterion is extended to a parallel criterion to accelerate the optimization process. Specifically, the proposed PVF-CLCB criterion introduces a VF influence function to approximately calculate the estimation error of the surrogates. Based on the approximate estimation error, multiple update points can be selected in turn through the VF-CLCB criterion. Besides, an allocation algorithm is developed to adaptively distribute the computation resources between the objective-oriented variable-fidelity PLCB function and the constraint-oriented variable-fidelity CLCB function. Finally, the performance of the VF-CLCB and PVF-CLCB criteria is verified on 12 benchmark numerical COPs and an engineering example, and their effectiveness and efficiency are demonstrated through comparisons with recently developed methods.

The remainder of this paper is organized as follows. Section 2 reviews state-of-the-art work on VFSBO. Section 3 elaborates the proposed VF-CLCB method, which is then extended to a parallel algorithm named PVF-CLCB in Sect. 4. Section 5 compares the proposed methods with popular methods on numerical examples. Then, in Sect. 6, an engineering case, the optimization of a stiffened cylindrical shell with variable stiffness, is tested to verify the performance of the proposed methods. Finally, conclusions and future work are provided in Sect. 7.

2 Related work

Much effort and progress has been made in constructing VF surrogate models, which generally fall into three categories. The first is scaling function-based VF surrogate models (Haftka 1991; Lewis and Nash 2000; Zhou et al. 2016), which use a scaling function to capture the discrepancy between HF and LF models. The second is space-mapping VF surrogate models (Koziel et al. 2006), in which a transformation operator maps the LF design space to the HF design space so that the optimal sample point in the HF space can be determined. The last category is Kriging-based VF surrogate models, for instance, the Co-Kriging model (Kennedy and O’Hagan 2000) and the hierarchical Kriging model (Han and Görtz 2012). Owing to their ability to provide an appropriate HF estimation error together with high prediction accuracy, Kriging-based VF surrogate models have been widely applied in the engineering field (Mukhopadhyay et al. 2017; Singh et al. 2017). The Co-Kriging model is built on the covariance between HF and LF sample points; therefore, the HF sample set is required to be nested within the LF sample set, which limits its application in optimization to some extent. In the hierarchical Kriging model, the LF prediction is taken as the trend function of the VF surrogate without requiring nested sample sets. Many extensions of the Co-Kriging and hierarchical Kriging models exist, for example, the improved hierarchical Kriging (Hu et al. 2018).

For single-fidelity SBO, there are three well-known infill sampling criteria: expected improvement (EI) (Jones et al. 1998), LCB (Cox and John 1992), and probability of improvement (PI) (Jones 2001). Besides, Nakayama et al. (2002) introduced a radial basis function (RBF) network-assisted optimization method, in which two update points are determined in parallel for global exploration and local exploitation, respectively. It was further enhanced by adaptive scaling techniques and a density function based on the RBF network (Kitayama et al. 2011). A constrained efficient global optimization method was then proposed by introducing an SVM-based probability of feasibility using a probabilistic SVM model (Basudhar et al. 2012); however, these methods are designed only for single-fidelity problems. To extend them to the VF scenario, VF infill sampling criteria have to be developed that balance global exploration and local exploitation while accounting for the computational cost of, and the correlation between, different fidelities. For example, Huang et al. (2006) first proposed a sequential Co-Kriging-based optimization method, in which an augmented expected improvement (AEI) was developed to adaptively determine the location and fidelity level of promising sample points. To combine the LCB criterion with VF surrogate models, Xiong et al. (2008) put forward a periodically switching LCB criterion for VFSBO; however, the computational cost of the LF analysis model was ignored. Recently, VFSBO has been extensively researched and developed. Zhang et al. (2018) proposed a VF-EI criterion that uses the prediction error to quantify the prediction uncertainty, thereby avoiding empirical parameters. Serani et al. (2019) investigated the performance of four types of VFSBO strategies based on stochastic RBFs for CFD simulations. Yi et al. (2020b) presented a multi-fidelity RBF surrogate-based optimization framework in which the LF and HF surrogate models are exploited sequentially. Apart from RBF, the support vector regression (SVR) model has also been employed to build VF surrogate models (Shi et al. 2020a). To accelerate VFSBO, researchers have begun to develop parallel VF optimization methods; for instance, He et al. (2021) extended the VF-EI approach to a parallel process that accounts for simulation failures, and Guo et al. (2021) presented a novel infill criterion, Filter-GEI, to determine multiple HF and LF samples at each iteration while keeping a good balance between local and global search. However, these methods do not address problems with constraints.

To deal with COPs with multiple levels of fidelity, most work combines the PoF function with a variable-fidelity EI function. Liu et al. (2018) improved the AEI criterion by considering a full-correlation Co-Kriging model and the sample-clustering issue. Afterward, Jiang et al. (2019) extended the standard LCB function to a VF-LCB criterion by considering the cost ratio and the estimated errors; similarly, it was further extended to handle constraints with the PoF function.

Discussion: the aforementioned research has focused largely on unconstrained optimization problems. For COPs, most of these methods employ the PoF function to handle the computationally expensive constraints, which may degrade the quality of the optimal feasible solution since it relies on a probability to determine promising points. Besides, parallel VFSBO methods remain scarce because the existing non-parallel methods cannot be directly parallelized.

3 The proposed VF-CLCB method

3.1 Motivation

Most existing VFSBO methods apply the PoF function to handle the constraints in COPs; however, this may affect the effectiveness and efficiency of the optimization process. The PoF function may underestimate the quantity of interest in the region around the constraint boundary and thus impair the quality of the optimized solution. Besides, the PoF function neglects reducing the uncertainty of the constraint surrogates and may therefore slow down the optimization. To address these issues, the proposed method presents a criterion that deals separately with the surrogates of the objective and the constraints and evaluates the quality of the constraint surrogates at every cycle to improve computational efficiency.

3.2 Variable-fidelity constrained lower confidence bound criterion

Multiple types of surrogate models, for example, RBF and SVR, have been used in VFSBO; nonetheless, the Kriging model can reasonably quantify the estimation uncertainty, which is significantly useful in the optimization process. Therefore, instead of variable-fidelity RBF or SVR models, the hierarchical Kriging model is used in this work to build the VF surrogates owing to its flexibility in sampling. Brief details on constructing hierarchical Kriging models and tuning their hyperparameters are given in Appendix A (Han and Görtz 2012; Toal et al. 2008). The proposed VF-CLCB criterion contains three parts to select promising sample points.

3.2.1 Formulation of variable-fidelity PLCB

To search for the optimum with two levels of fidelity, a variable-fidelity LCB function is defined as,

$${\text{lcb}}_{{{\text{vf}}}} ({\mathbf{x}},l) = \hat{y}_{{{\text{hf}}}} ({\mathbf{x}}) - \left[ {2 + \ln (flag)} \right] \cdot {\text{CR}}(l) \cdot \hat{s}_{{\text{y}}} ({\mathbf{x}},l)$$
(2)

Equation (2) is a linear combination of two parts. The first part \(\hat{y}_{{{\text{hf}}}} ({\mathbf{x}})\) is the HF prediction of the objective surrogate at point \({\mathbf{x}}\), and it concentrates the search on the region around the predicted optimum of the HF surrogate. The optimum of the HF surrogate is used because the aim is the optimum of the HF analysis model. The second part comprises three factors. The first, \(\left[ {2 + \ln (flag)} \right]\), is a weighting parameter that balances global exploration and local exploitation, where \(flag\) is an index counting how many iterations the current optimal solution has remained unchanged. By adding \(\ln (flag)\), the algorithm can escape a local optimum through increased emphasis on global exploration. \({\text{CR}}(l)\) is the analysis cost ratio function (Giselle Fernández-Godino et al. 2019; Huang et al. 2006), defined as the ratio of the computation cost of the HF analysis model to that of the level-\(l\) fidelity model, as shown in Eq. (3); \(cr\) is the cost ratio between the HF and LF models, i.e., the proportion between the computation cost of one HF analysis (obtaining both the objective and the constraint responses) and that of one LF analysis. If evaluating the objective and constraints requires multiple analyses, the cost ratio is determined over the whole analysis process. The last factor \(\hat{s}_{{\text{y}}} ({\mathbf{x}},l)\), presented in (Zhang et al. 2018), represents the prediction uncertainty of the hierarchical Kriging model of the objective function. It is defined in Eq. (4), where \(\beta_{0}\) denotes the scaling factor of the hierarchical Kriging model, given in Eq. (A-3) of the Supplementary Information, and \(\hat{s}_{{\text{y,lf}}} ({\mathbf{x}})\) and \(\hat{s}_{{\text{y,hf}}} ({\mathbf{x}})\) are the estimated error functions of the LF and HF surrogates, respectively. By querying the prediction uncertainty, the criterion judges the model quality and decides which fidelity level to focus on.

$${\text{CR}}(l) = \left\{ {\begin{array}{*{20}l} {cr} \hfill & {l = 1} \hfill \\ 1 \hfill & {l = 2} \hfill \\ \end{array} } \right.$$
(3)
$$\hat{s}_{{\text{y}}} ({\mathbf{x}},l) = \left\{ {\begin{array}{*{20}l} {\beta_{0} \hat{s}_{{\text{y,lf}}} ({\mathbf{x}})\;,\;\;l = 1} \hfill \\ {\hat{s}_{{\text{y,hf}}} ({\mathbf{x}})\;,\;\;l = 2} \hfill \\ \end{array} } \right.$$
(4)

When targeting the objective, the paper employs the penalty method to handle the constraints. After building a hierarchical Kriging model for each constraint function, the variable-fidelity PLCB function is derived by adding a very large constant to the original unconstrained function when the constraints are violated, which can be expressed as,

$${\text{plcb}}_{{{\text{vf}}}} ({\mathbf{x}},l) = {\text{lcb}}_{{{\text{vf}}}} ({\mathbf{x}},l) + \alpha \cdot \max (0,\;\hat{g}^{\prime }_{{{\text{hf}}}} ({\mathbf{x}}))$$
(5)

where \({\text{plcb}}_{{{\text{vf}}}} ({\mathbf{x}},l)\) is the variable-fidelity PLCB value at point \({\mathbf{x}}\) under fidelity level \(l\); \(\alpha\) is a penalty factor; and \(\hat{g}^{\prime }_{{{\text{hf}}}} ({\mathbf{x}})\) is the maximum predicted constraint value. The constraint with the maximum value is considered the active constraint and dominates the feasibility of the solution; it is evaluated by,

$$\hat{g}^{\prime }_{{{\text{hf}}}} ({\mathbf{x}}) = \max \left\{ {\;\hat{g}_{{\text{hf,j}}} ({\mathbf{x}})\;} \right\}{,}\;\;j = 1,2,...,N_{{\text{C}}}$$
(6)

where \(\hat{g}_{{\text{hf,j}}} ({\mathbf{x}})\) is the hierarchical Kriging prediction of the \(j\)th constraint, and \(N_{{\text{C}}}\) is the number of constraints.

By minimizing Eq. (5), the algorithm selects points with a low HF prediction and high prediction uncertainty. Besides, because of the cost difference between the HF and LF analysis models, the algorithm is apt to select the lower fidelity level to reduce the computational cost. By minimizing Eq. (5) at the HF and LF levels respectively, two candidate points, \({\mathbf{x^{\prime}}}_{{{\text{hf}}}}\) and \({\mathbf{x^{\prime}}}_{{{\text{lf}}}}\), are obtained.
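To make the criterion concrete, the following minimal Python sketch evaluates Eqs. (2)–(6). The surrogate interface (`predict_hf`, `error_lf`, `error_hf`, `beta0`) and the default penalty value are illustrative assumptions, not a fixed API of any library.

```python
import numpy as np

def cost_ratio(level, cr):
    """Eq. (3): CR(l); level 1 = LF, level 2 = HF."""
    return cr if level == 1 else 1.0

def s_y(obj_model, x, level):
    """Eq. (4): prediction uncertainty at fidelity `level`."""
    if level == 1:
        return obj_model.beta0 * obj_model.error_lf(x)
    return obj_model.error_hf(x)

def lcb_vf(obj_model, x, level, flag, cr):
    """Eq. (2): variable-fidelity LCB of the objective."""
    weight = 2.0 + np.log(flag)  # emphasises exploration as `flag` grows
    return obj_model.predict_hf(x) - weight * cost_ratio(level, cr) * s_y(obj_model, x, level)

def g_max_hf(con_models, x):
    """Eq. (6): maximum (active) HF constraint prediction."""
    return max(m.predict_hf(x) for m in con_models)

def plcb_vf(obj_model, con_models, x, level, flag, cr, alpha=1e6):
    """Eq. (5): penalised variable-fidelity LCB; `alpha` is an illustrative penalty."""
    return lcb_vf(obj_model, x, level, flag, cr) + alpha * max(0.0, g_max_hf(con_models, x))
```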

3.2.2 Formulation of variable-fidelity CLCB

The variable-fidelity PLCB function is derived mainly from the objective and ignores improving the models' accuracy. Therefore, a variable-fidelity CLCB function is developed to take care of the prediction accuracy around constraint boundaries during the optimization process; it is a natural extension of our previous single-fidelity work (Yi et al. 2020a). The developed function is expressed as,

$${\text{clcb}}_{{{\text{vf}}}} ({\mathbf{x}},l) = \left| {\hat{g}^{\prime }_{{{\text{hf}}}} ({\mathbf{x}})} \right| - {\text{CR}}(l) \cdot \hat{s}^{\prime }_{{\text{g}}} ({\mathbf{x}},l),$$
(7)

where \(\hat{g}^{\prime }_{{{\text{hf}}}} ({\mathbf{x}})\) is the maximum constraint value defined in Eq. (6), and \(\hat{s}^{\prime }_{{\text{g}}} ({\mathbf{x}},l)\) is the uncertainty of the prediction.

To give more weight to the HF analysis models, the predictions of the HF surrogates are employed in Eq. (7). Minimizing the term \(\left| {\hat{g}^{\prime }_{{{\text{hf}}}} ({\mathbf{x}})} \right|\) leads the search toward a feasible solution when none of the initial samples is feasible and improves the accuracy around the constraint boundaries. Meanwhile, the second term reduces the prediction uncertainty and improves the constraint predictions. The proposed criterion judges the model quality through the uncertainty and then chooses the model fidelity. By minimizing Eq. (7) at the HF and LF levels respectively, two candidate points, \({\mathbf{x^{\prime\prime}}}_{{{\text{hf}}}}\) and \({\mathbf{x^{\prime\prime}}}_{{{\text{lf}}}}\), are obtained.
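A companion sketch for Eq. (7), reusing `cost_ratio` and `g_max_hf` from the sketch above; the helper `s_g_active`, returning the uncertainty of the active constraint at the given fidelity, is an assumed interface.

```python
def clcb_vf(con_models, x, level, cr, s_g_active):
    """Eq. (7): |active HF constraint prediction| minus cost-weighted uncertainty."""
    return abs(g_max_hf(con_models, x)) - cost_ratio(level, cr) * s_g_active(x, level)
```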

3.2.3 Selection strategy for determining update points

After the variable-fidelity PLCB and CLCB functions have selected four candidate points, two promising points, one for the objective and one for the constraints, are determined according to the corresponding function values: the point for the objective through Eq. (8) and the point for the constraints through Eq. (9).

$$({\mathbf{x^{\prime}}},l^{\prime}) = \mathop {\arg \min }\limits_{{({\mathbf{x}},l) = ({\mathbf{x^{\prime}}}_{{{\text{hf}}}} ,2)\;or\;({\mathbf{x^{\prime}}}_{{{\text{lf}}}} ,1)}} \;\;{\text{plcb}}_{{{\text{vf}}}} ({\mathbf{x}},l)$$
(8)
$$({\mathbf{x^{\prime\prime}}},l^{\prime\prime}) = \mathop {\arg \min }\limits_{{({\mathbf{x}},l) = ({\mathbf{x^{\prime\prime}}}_{{{\text{hf}}}} ,2)\;or\;({\mathbf{x^{\prime\prime}}}_{{{\text{lf}}}} ,1)}} \;\;{\text{clcb}}_{{{\text{vf}}}} ({\mathbf{x}},l)\;$$
(9)

The goal of updating promising points for the constraints is to improve their prediction accuracy; therefore, the algorithm can forgo the constraint-oriented point if the VF surrogates of the constraints are already accurate enough. The feasibility of the selected points is evaluated using the estimation uncertainty in Eq. (4). Since the constraint predictions follow a normal distribution, whether a point satisfies the constraints cannot be judged from the mean value alone; instead, the probability is calculated from the prediction and its uncertainty. In the spirit of the three-sigma rule, if a selected point's constraint signs can be determined with more than 95% probability, the point is discarded and not updated. In other words, if the point \({\mathbf{x^{\prime\prime}}}\) has more than a 5% probability of a wrong sign, the surrogate models of the constraints are deemed insufficiently accurate, and the point is updated. The point for optimality, \({\mathbf{x^{\prime}}}\), is always updated and evaluated during the optimization process. Through this mechanism, the proposed criterion balances feasibility and optimality.
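One possible reading of this test, sketched below: each constraint prediction is treated as Gaussian, so the probability that its sign is as predicted is \(\Phi(|\hat{g}|/\hat{s})\), and the point \({\mathbf{x^{\prime\prime}}}\) is kept for updating only while some constraint sign remains uncertain. The per-constraint loop is an interpretation, not the paper's exact algorithm.

```python
from scipy.stats import norm

def sign_is_certain(g_mean, g_std, threshold=0.95):
    """P(predicted sign is correct) = Phi(|mean| / std) under a Gaussian prediction."""
    if g_std <= 0.0:
        return True  # deterministic prediction: the sign is known
    return norm.cdf(abs(g_mean) / g_std) >= threshold

def should_update_constraint_point(con_models, x, s_g):
    """Keep the constraint-oriented point x'' only while some sign is uncertain."""
    return any(not sign_is_certain(m.predict_hf(x), s_g(m, x)) for m in con_models)
```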

The whole selection process of the proposed VF-CLCB criterion is summarized in Algorithm 1.


3.3 Steps for the VF-CLCB method

The framework of the VF-CLCB optimization method is shown in Fig. 1. The specific details of each step are provided as follows.

  • Begin

  • Step 1: Generate initial sample points by the Latin hypercube design (LHD) method (Helton and Davis 2003) for HF and LF, respectively. Then evaluate the responses by the HF/LF analysis model.

  • Step 2: Build/re-build the hierarchical Kriging models of the objective and each constraint function based on the current sample set.

  • Step 3: Determine the update points and corresponding fidelity levels based on the proposed VF-CLCB criterion. The particle swarm optimizer (PSO) (Kennedy and Eberhart 1995) is employed to minimize Eqs. (5) and (7), with a swarm of 100 particles searching for 100 generations. Different optimizers have their respective pros and cons, and others, such as differential evolution (DE) (Price et al. 2006) or the grey wolf optimizer (GWO) (Mirjalili et al. 2014), can be employed instead of PSO.

  • Step 4: Simulate the responses at the update points with the corresponding analysis model. If a better solution is found, the best feasible solution is updated and the parameter \(flag\) is reset to 1; otherwise, \(flag\) is updated as \(flag = flag + 1\).

  • Step 5: Judge the termination condition at the end of each iteration. In general, the number of function evaluations is used as the efficiency metric. Since there are multiple levels of fidelity, the cost of LF evaluations is transformed into an equivalent HF cost based on the cost ratio between the fidelities. The number of equivalent function evaluations (NEFE) is evaluated by,

    $$NEFE = HFE + \frac{LFE}{{cr}},$$
    (10)

    where HFE and LFE denote the number of added sample points for HF and LF, respectively.

  • In numerical cases, the maximum NEFE and the optimal solution are used as the termination condition to test the efficiency and effectiveness of the method, which is expressed as follows.

    $$\begin{gathered} \left| {y_{\min }^{(n)} - y_{{{\text{actual}}}} } \right| \le \varepsilon \hfill \\ NEFE \ge NEFE_{\max } , \hfill \\ \end{gathered}$$
    (11)

    where \(y_{\min }^{(n)}\) is the optimal solution at iteration \(n\) of the optimization process, and \(y_{{{\text{actual}}}}\) is the best-known feasible optimum. \(\varepsilon\) is a predefined stopping tolerance, and \(NEFE_{\max }\) is the user-defined maximum NEFE for the optimization process. For engineering problems, only the maximum NEFE is set as the termination condition, as the real feasible optimum is unknown.

  • If either condition in Eq. (11) is met, the algorithm proceeds to Step 6; otherwise, it returns to Step 2 and enters another iteration after the parameter \(flag\) in Eq. (2) is updated. As inspired by Srinivas et al. (2009), \(flag\) increases by one if the current optimal solution is the same as that of the last iteration. A skeletal sketch of this loop is given after this list.

  • Step 6: Once the termination condition is satisfied, the algorithm outputs the best feasible solution and the values of the corresponding design variables.

  • End
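The steps above can be condensed into the following skeletal loop, with the equivalent-cost bookkeeping of Eqs. (10)–(11). The helpers `build_hk_models`, `vf_clcb_select`, and `evaluate` stand in for model construction, the PSO-driven criterion of Sect. 3.2, and the simulators; this is an outline of the control flow under those assumptions, not a full implementation.

```python
def vf_clcb_optimize(problem, cr, nefe_max, eps, y_actual=None):
    X_hf, X_lf = problem.initial_lhd()          # Step 1: LHD samples per fidelity
    hfe = lfe = 0                               # counts of added HF/LF points
    flag, best = 1, None
    while True:
        models = build_hk_models(X_hf, X_lf)    # Step 2: hierarchical Kriging models
        points = vf_clcb_select(models, flag)   # Step 3: VF-CLCB criterion via PSO
        for x, level in points:                 # Step 4: simulate the update points
            evaluate(problem, x, level, X_hf, X_lf)  # appends (x, response) per fidelity
            hfe += (level == 2)
            lfe += (level == 1)
        new_best = problem.best_feasible(X_hf)  # assumed +inf if nothing feasible yet
        if best is None or new_best < best:
            best, flag = new_best, 1            # improvement found: reset flag
        else:
            flag += 1                           # stagnation: emphasise exploration
        nefe = hfe + lfe / cr                   # Step 5: Eq. (10)
        hit_target = y_actual is not None and abs(best - y_actual) <= eps
        if hit_target or nefe >= nefe_max:      # Eq. (11)
            return best                         # Step 6: best feasible solution
```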

Fig. 1 Flowchart of the proposed VF-CLCB method

4 The proposed PVF-CLCB method

To parallelize the proposed VF-CLCB method, two issues should be considered: (1) how to select multiple promising sample points, and (2) how to allocate computation resources between the objective and constraints. To this end, a PVF-CLCB method is introduced in this section, in which a parallel criterion selects multiple promising sample points and an allocation algorithm properly distributes the computation resources.

4.1 Parallel variable-fidelity constrained lower confidence bound criterion

In a Kriging model-assisted parallel approach, an influence function was introduced in (Zhan et al. 2017) to approximately evaluate the EI function value after several update points have been determined. The influence function is defined via the correlation function of the Kriging model as follows,

$${\text{IF}}({\mathbf{x}},{\mathbf{x}}^{{\text{u}}} ) = 1 - {\text{R}}({\mathbf{x}},{\mathbf{x}}^{{\text{u}}} ),$$
(12)

where \({\mathbf{x}}^{{\text{u}}}\) is the promising point to be updated, and \({\text{R}}( \cdot )\) is the correlation function of the Kriging model. The value of the influence function approaches zero when a point is very close to the selected update point and approaches one as the Euclidean distance between them increases. The influence function was originally developed to modify the value of the EI function, even though the EI function has no direct relationship with the Euclidean distance between two points.

In the proposed VF-CLCB criterion, the factor that determines which fidelity to choose is the prediction uncertainty of the VF surrogate models. Moreover, the estimation error function of the hierarchical Kriging model depends on the Euclidean distance-based correlation function. Therefore, it is more rational to approximately evaluate the estimated prediction error through the influence function after the update points are determined, as follows,

$$\hat{s}_{{{\text{estimate}}}} ({\mathbf{x}}) = \hat{s}({\mathbf{x}}) \cdot {\text{IF}}({\mathbf{x}},{\mathbf{x}}^{{\text{u}}} ),$$
(13)

where \(\hat{s}({\mathbf{x}})\) denotes the estimation error provided by the Kriging model.

Considering multiple levels of fidelity, the influence function can be defined as,

$${\text{IF}}_{l} ({\mathbf{x}},{\mathbf{x}}^{{\text{u}}} ) = 1 - {\text{R}}_{l} ({\mathbf{x}},{\mathbf{x}}^{{\text{u}}} ),$$
(14)

where \({\mathbf{x}}^{{\text{u}}}\) is an update point for fidelity \(l\), and \({\text{R}}_{l} ( \cdot )\) is the correlation function of the Kriging model for fidelity \(l\).

Assume that the algorithm has determined HF update points \({\mathbf{x}}_{{{\text{hf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{hf}}}}^{{n + p_{{{\text{hf}}}} }}\) and LF update points \({\mathbf{x}}_{{{\text{lf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{lf}}}}^{{n + p_{{{\text{lf}}}} }}\) at iteration \(n\). The approximate uncertainty functions for the objective and constraints can be expressed as,

$$\hat{s}_{{\text{y}}} \left( {{\mathbf{x}},l,{\mathbf{x}}_{{{\text{hf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{hf}}}}^{{n + p_{{{\text{hf}}}} }} ,{\mathbf{x}}_{{{\text{lf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{lf}}}}^{{n + p_{{{\text{lf}}}} }} } \right) = \left\{ {\begin{array}{*{20}l} {\beta_{0} \hat{s}_{{\text{y,lf}}} ({\mathbf{x}}) \cdot \prod\limits_{i = 1}^{{p_{{{\text{lf}}}} }} {{\text{IF}}_{1} ({\mathbf{x}},{\mathbf{x}}_{{{\text{lf}}}}^{n + i} )} } \hfill & {,\;l = 1} \hfill \\ {\hat{s}_{{\text{y,hf}}} ({\mathbf{x}}) \cdot \prod\limits_{i = 1}^{{p_{{{\text{hf}}}} }} {{\text{IF}}_{2} ({\mathbf{x}},{\mathbf{x}}_{{{\text{hf}}}}^{n + i} )} } \hfill & {,\;l = 2} \hfill \\ \end{array} } \right.$$
(15)
$$\begin{gathered} \hat{s}_{{\text{g,j}}} \left( {{\mathbf{x}},l,{\mathbf{x}}_{{{\text{hf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{hf}}}}^{{n + p_{{{\text{hf}}}} }} ,{\mathbf{x}}_{{{\text{lf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{lf}}}}^{{n + p_{{{\text{lf}}}} }} } \right) = \left\{ {\begin{array}{*{20}l} {\beta_{{\text{0,j}}} \hat{s}_{{\text{g,lf,j}}} ({\mathbf{x}}) \cdot \prod\limits_{i = 1}^{{p_{{{\text{lf}}}} }} {{\text{IF}}_{1} ({\mathbf{x}},{\mathbf{x}}_{{{\text{lf}}}}^{n + i} )} } \hfill & {,\;l = 1} \hfill \\ {\hat{s}_{{\text{g,hf,j}}} ({\mathbf{x}}) \cdot \prod\limits_{i = 1}^{{p_{{{\text{hf}}}} }} {{\text{IF}}_{2} ({\mathbf{x}},{\mathbf{x}}_{{{\text{hf}}}}^{n + i} )} } \hfill & {,\;l = 2} \hfill \\ \end{array} } \right.\;\;, \hfill \\ j = 1,2,...,N_{{\text{C}}} \hfill \\ \end{gathered}$$
(16)
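A minimal sketch of this discounting, assuming access to the level-\(l\) Kriging correlation function \(R_l\) through a callable `corr_l`: pending update points of the same fidelity multiplicatively shrink the estimated error near themselves.

```python
import numpy as np

def influence(x, x_u, corr_l):
    """Eq. (14): IF_l(x, x_u) = 1 - R_l(x, x_u)."""
    return 1.0 - corr_l(x, x_u)

def approx_error(base_error, x, pending_points, corr_l):
    """Eqs. (15)-(16): base estimated error times the product of influence
    factors over pending update points of the same fidelity level."""
    factors = [influence(x, x_u, corr_l) for x_u in pending_points]
    return base_error * np.prod(factors)  # np.prod([]) == 1.0: no pending points
```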

Based on the approximate uncertainty functions, the algorithm can select the desired number of promising update points at each iteration. After the first HF and LF update points are determined, the subsequent update points can be determined through Eqs. (17) and (18).

$${\text{plcb}}_{{{\text{vf}}}} ({\mathbf{x}},l) = \hat{y}_{{{\text{hf}}}} ({\mathbf{x}}) - (2 + \ln (flag)) \cdot {\text{CR}}(l) \cdot \hat{s}_{{\text{y}}} \left( {{\mathbf{x}},l,{\mathbf{x}}_{{{\text{hf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{hf}}}}^{{n + p_{{{\text{hf}}}} }} ,{\mathbf{x}}_{{{\text{lf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{lf}}}}^{{n + p_{{{\text{lf}}}} }} } \right) + \alpha \cdot \max (0,\;\hat{g}^{\prime }_{{{\text{hf}}}} ({\mathbf{x}}))$$
(17)
$${\text{clcb}}_{{{\text{vf}}}} ({\mathbf{x}},l) = \left| {\hat{g}^{\prime }_{{{\text{hf}}}} ({\mathbf{x}})} \right| - CR(l) \cdot \hat{s}^{\prime }_{{\text{g}}} ({\mathbf{x}},l,{\mathbf{x}}_{{{\text{hf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{hf}}}}^{{n + p_{{{\text{hf}}}} }} ,{\mathbf{x}}_{{{\text{lf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{lf}}}}^{{n + p_{{{\text{lf}}}} }} ),$$
(18)

where \(\hat{s}^{\prime }_{{\text{g}}} ({\mathbf{x}},l,{\mathbf{x}}_{{{\text{hf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{hf}}}}^{{n + p_{{{\text{hf}}}} }} ,{\mathbf{x}}_{{{\text{lf}}}}^{n + 1} ,...,{\mathbf{x}}_{{{\text{lf}}}}^{{n + p_{{{\text{lf}}}} }} )\) is the approximate uncertainty function of the active constraint.

In the non-parallel VF-CLCB criterion, a selection algorithm discards redundant update points for the constraints if the constraint surrogates are accurate enough. In the parallel method, however, all update points for the objective and constraints can be simulated and archived in the sample set because multiple computers are available. In addition, owing to the difference in computational cost between the HF and LF simulation models, several LF points costing the same as one HF point can be grouped and allocated to one computer to be simulated together. The pseudo-code of the proposed PVF-CLCB criterion is provided in Algorithm 2.


4.2 Allocation strategy for computation resources

The PVF-CLCB criterion can select multiple promising points, but the computation resources for the objective and constraints must be distributed beforehand. One direct way is to split the available resources equally between the objective and constraints; however, this may lead to infeasible solutions when the constraint surrogates are inaccurate, hurting the optimization efficiency. This paper proposes an allocation strategy to trade off the adequacy of optimality against the accuracy of feasibility.

Assume that \(q\) computers are occupied by an optimization problem, and let \(q_{{{\text{obj}}}}^{(n)}\) and \(q_{{{\text{con}}}}^{(n)}\) denote the computation resources assigned to the objective and constraints at iteration \(n\), respectively. The proposed strategy allocates the resources equally at the first iteration because little information is available at that point. Then, after the update points are simulated at each iteration, the number of points whose feasibility was wrongly predicted by the hierarchical Kriging models of the constraints is counted. If the ratio of that number exceeds a predefined threshold \(\gamma\), the accuracy of the constraint surrogates needs to be improved, and an extra computation resource is assigned to updating the constraints at the next iteration. Otherwise, the constraint surrogates are considered accurate, and the optimal feasible solution is evaluated to allocate more resources to the objective. The allocation strategy is provided in Algorithm 3.

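A hedged sketch of this allocation rule under the description above: an equal split at the first iteration, then one processor shifted toward the constraints whenever the misprediction ratio exceeds \(\gamma\). The rule for shifting resources back toward the objective is simplified here relative to the paper's Algorithm 3.

```python
def allocate(q, q_obj, q_con, n_wrong, n_simulated, gamma, first_iter):
    """Return (q_obj, q_con) for the next iteration; q_obj + q_con == q."""
    if first_iter:
        return q - q // 2, q // 2              # equal split (objective gets any odd one)
    if n_simulated > 0 and n_wrong / n_simulated > gamma:
        if q_obj > 1:                          # constraint surrogates look inaccurate
            return q_obj - 1, q_con + 1
    elif q_con > 1:                            # constraint surrogates look accurate
        return q_obj + 1, q_con - 1
    return q_obj, q_con
```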

4.3 Steps for the PVF-CLCB method

The framework of the PVF-CLCB optimization method is shown in Fig. 2. Code is available through the webpage.Footnote 1

Fig. 2 Flowchart of the proposed PVF-CLCB method

In Step 7, the number of iterations is used to rate the efficiency instead of the NEFE metric, since several computers compute at the same time. The number of equivalent iterations (NEI) is defined based on the NEFE metric and calculated as,

$$NEI = \frac{NEFE}{q} = \left( {HFE + \frac{LFE}{{cr}}} \right)/q$$
(19)

The termination condition for the numerical examples is expressed as,

$$\begin{gathered} \left| {y_{\min }^{(n)} - y_{{{\text{actual}}}} } \right| \le \varepsilon \hfill \\ NEI \ge NEI_{\max } , \hfill \\ \end{gathered}$$
(20)

where \(NEI_{\max }\) is the predefined maximum NEI. Similarly, only the maximum NEI is employed in the engineering problems.

5 Numerical experiments

5.1 Experiment settings

  1. 1)

    Test problems: Twelve benchmark numerical COPs are selected from recent publications, including eight CEC2006 cases (Liang et al. 2006), the two well-known Gano and Hesse cases (Gano et al. 2005), and two classical engineering applications (Coello et al. 2007; Kazemi et al. 2011). The detailed formulas of these cases are provided in Appendix B, and their specific information is listed in Table 1. These functions are treated as the HF models. Following the reference (Shi et al. 2020b), the LF analysis models are directly formulated by scaling the associated HF functions and adding a constant bias according to Eq. (21), where \(f_{{{\text{lf}}}}^{{\text{b}}}\) and \(f_{{{\text{hf}}}}^{{\text{b}}}\) are the objective values of the LF and HF models, respectively, and \(g_{{{\text{lf}}}}^{{\text{b}}}\) and \(g_{{{\text{hf}}}}^{{\text{b}}}\) represent the constraint responses. The scaling factor and constant biases are taken directly from the reference paper as \(\eta = 0.9,\delta_{{\text{f}}}^{{\text{b}}} = 0.5,\) and \(\delta_{{\text{g}}}^{{\text{b}}} = - 0.05\) (a small sketch of this construction is given after this list).

    $$\begin{gathered} f_{{{\text{lf}}}}^{{\text{b}}} = \eta f_{{{\text{hf}}}}^{{\text{b}}} + \delta_{{\text{f}}}^{{\text{b}}} \hfill \\ g_{{{\text{lf}}}}^{{\text{b}}} = \eta g_{{{\text{hf}}}}^{{\text{b}}} + \delta_{{\text{g}}}^{{\text{b}}} \hfill \\ \end{gathered}$$
    (21)
  2. 2)

    Design of experiments: The numbers of initial sample points for HF and LF are set to \(3d\) and \(6d\), respectively, where \(d\) is the dimensionality of the problem.

  3. 3)

    Number of runs: All experiments are repeated 10 times to account for the stochastic influence of initial sampling and the sub-optimizer. The initial designs are the same for all compared methods but differ across the 10 independent runs.
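As referenced in the settings above, the analytical LF models follow directly from Eq. (21); the sketch below shows this construction with the cited values.

```python
def make_lf(f_hf, g_hf, eta=0.9, delta_f=0.5, delta_g=-0.05):
    """Eq. (21): LF objective/constraints as scaled-and-shifted HF responses."""
    f_lf = lambda x: eta * f_hf(x) + delta_f
    g_lf = lambda x, j: eta * g_hf(x, j) + delta_g
    return f_lf, g_lf
```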

Table 1 Specific characteristics of numerical cases

5.2 Comparison for non-parallel methods

To study the performance of the proposed VF-CLCB optimization method, four alternative methods are tested, which are described as follows.

  1. 1)

    CEI (Schonlau 1997): the CEI method is a benchmark single-fidelity constrained optimization method that uses only the HF model. The comparison with it shows the benefit of using the VF surrogate model.

  2. 2)

    AEI (Huang et al. 2006): the AEI method uses the Co-Kriging model, and the cost ratio between HF and LF models is considered.

  3. 3)

    VF-EI (Zhang et al. 2018): the VF-EI method employs the hierarchical Kriging model, which excludes all the empirical parameters including the cost ratio.

  4. 4)

    VF-LCB (Jiang et al. 2019): the VF-LCB method is the authors’ previous work, which handles constraints by the PoF function; it is compared to test the effectiveness of the proposed constraint-handling strategy.

The cost ratio is first set to four, and the statistical results are summarized in Table 2, where SR denotes the success ratio of finding a feasible solution in ten runs, and OS is the finally obtained optimal solution. The item “Mean error” indicates the discrepancy between the mean OS and the best-known optimum. W-test refers to the Wilcoxon rank sum test (Dong et al. 2021), conducted at a significance level of \(\alpha = 0.05\); the symbol + means the proposed method significantly outperforms the compared method, − indicates that the proposed method is significantly worse, and ≈ represents no significant difference between the compared methods.

Table 2 Statistical results on numerical cases (\(cr = 4\)) for non-parallel methods

In Table 2, the best results are marked in bold. Intuitively, the proposed VF-CLCB method always finds a feasible solution in these experiments, whereas the VF-LCB method exhibits unstable performance on problems G5MOD and G8, failing to find a feasible solution within the prescribed budgets. This demonstrates that the variable-fidelity CLCB function can lead to a feasible solution when no feasible sample point is available. From the LFE and HFE results, it can be observed that the VF-CLCB method tends to select more LF sample points owing to their lower computation cost, in contrast to the VF-EI method, which does not consider the effect of the cost ratio. Regarding efficiency, VF-LCB outperformed the other methods in most cases, whereas the VF-CLCB method converged fastest on the highly constrained problems, e.g., G5MOD, G6, and G7. Since the proposed method considers the accuracy of the constraint predictions and selects sample points to improve that accuracy, the optimization process may consume more computation cost in the early stages. Regarding the obtained OS, VF-CLCB outperformed the other methods in terms of “Best OS” on all problems; for the mean results, VF-CLCB was better than the others on all problems except G9, and it also shows better performance on the worst OS. The W-test results for the OS indicate that the proposed method is superior in effectiveness for optimizing highly constrained problems.

In engineering problems, the cost ratio is critical for deciding the computation budget, and it is usually problem-dependent. Therefore, the numerical cases are also tested with two other cost ratios, \(cr = 10\) and \(cr = 25\), to assess the approaches under different conditions. The statistical results are listed in Tables C3 and C4 in the Appendix. It should be noted that a mass of LF sample points is usually selected when the cost ratio is 25, and re-building the LF Kriging models can then waste considerable time owing to the computational complexity of the Kriging model. To relieve this burden, the incremental Kriging model (Zhan and Xing 2021) is employed to re-build the LF Kriging model. The statistical results in Tables C3 and C4 indicate that VF-CLCB significantly outperformed the compared methods in convergence ability and optimization effectiveness.

Figures 3 and 4 provide the comparison results under the three levels of cost ratio for the NEFE metric and the obtained OS, respectively. From Fig. 3, it can be seen that the proposed VF-CLCB method is more robust than the other methods. The NEFE metric decreases as the cost ratio increases, which demonstrates the rationality of including the cost ratio function in VF-CLCB. In Fig. 4, VF-CLCB also shows superior stability in the obtained OS, and its results outperform those of the other methods on all problems except G9.

Fig. 3 Comparisons for the NEFE metric between three levels of cost ratio

Fig. 4 Comparisons for the obtained OS under three levels of cost ratio

To illustrate the effectiveness of employing the VF surrogate model, the optimization cost ratio, i.e., the ratio between the computational cost of the proposed VFSBO and that of single-fidelity optimization, is compared in Fig. 5. The results for the single-fidelity optimization are cited directly from (Cheng et al. 2021). Eight numerical problems are selected for the comparison, and each marker represents one problem under a specific analysis cost ratio. The region below the cost-savings line means that the cost saved by using VF surrogates exceeds the analysis cost ratio. As the LF/HF analysis cost ratio decreases, the corresponding cost reduction achieved by the proposed VFSBO method increases.

Fig. 5 Comparison of optimization cost ratio and analysis cost ratio for the proposed approach and the single-fidelity optimization method

5.3 Comparison for parallel methods

This section presents experiments on the proposed PVF-CLCB method to test its acceleration capacity. The two compared parallel methods are described as follows.

  1. 1)

    PCEI (Qian et al. 2021): PCEI is a Kriging-assisted parallel optimization method that uses the benchmark CEI criterion and the influence function. The comparison can demonstrate the effectiveness of the proposed variable-fidelity method.

  2. 2)

    PVF-CLCBc: the PVF-CLCBc variant carries out the optimization with an equal allocation of computation resources between the objective and constraints. It is tested to show the effectiveness of the proposed allocation strategy.

The experiment was first conducted with cost ratio \(cr = 4\) and total parallel resources \(q = 5\). The statistical results are provided in Table 3. The proposed PVF-CLCB found feasible solutions in all experiments, whereas the single-fidelity PCEI method fails on problems G6 and G7. In the NEI metric, the proposed method used less computation budget than PCEI to find the target solutions, with significant differences on most problems. Moreover, the obtained OSs of the proposed method are remarkably better than those of PCEI, demonstrating the advantage of using the VF surrogate model, although PCEI performs best on problem G9. Compared with PVF-CLCBc, the proposed allocation strategy yields a significant enhancement of efficiency and effectiveness on problems G6 and G7 and a slight improvement of efficiency in the other cases.

Table 3 Statistical results on numerical cases (\(cr = 4\)) for parallel methods (\(q = 5\))

To further test the proposed parallel method, a scenario with more resources (\(q = 10\)) is examined, and the statistical results are provided in Table D1 in the Appendix. In this case, only the single-fidelity PCEI method is compared, and the results demonstrate that the proposed VF parallel method can significantly accelerate the optimization process with more computation resources.

Appendix E provides the convergence histories of the proposed methods; the numbers in parentheses indicate the number of failures to find a feasible solution over the 10 repeated experiments. It can be seen that the compared PCEI method converges faster in the first few iterations but often fails to reach the target solutions, which is apparently due to the drawback of the PoF function. The proposed VF optimization methods tend to improve the accuracy of the LF surrogate first and therefore converge slowly in the first few iterations, after which they converge directly to the required solutions. Besides, the proposed parallel method clearly converges faster than the non-parallel one, which verifies the effectiveness of the parallelization.

To further quantify the parallelization capacity of the proposed method, the efficiency results under three levels of computation resources (q = 1, 5, and 10) are summarized in Table 4. The TCS results are excluded from the comparison because all repeated runs fail to converge. Ideally, a parallel optimization structure would reduce the number of iterations by a factor equal to the number of parallel resources; however, due to estimation errors and other uncertainties, the parallel method can hardly achieve a linear speedup. As shown in Table 4, the performance of the proposed PVF-CLCB method improves gradually as the number of parallel resources increases. On average, PVF-CLCB speeds up the optimization process by 2.83 times with five computers and 4.00 times with ten. In sum, the proposed parallel method can effectively and efficiently reduce the computation time when solving COPs.

Table 4 Comparison of the PVF-CLCB under different computation resources

6 Engineering case: global stability optimization of stiffened cylindrical shell

In this section, the global stability optimization of a stiffened cylindrical shell with variable ribs is investigated to demonstrate the effectiveness and efficiency of the introduced (P)VF-CLCB methods on engineering problems. The structural profile of the stiffened cylindrical shell is depicted in Fig. 6, in which two large ribs divide the shell into three isometric parts. The length of the shell is set by 24 spans of 500 mm each, and its diameter is 6000 mm.

Fig. 6 The structural profile of the stiffened cylindrical shell

The goal of this optimization problem is to maximize the global stability of the stiffened cylindrical shell subject to constraints on strength, local stability, the structural parameters of the T-sections, and weight. To this end, the mathematical optimization model is given by,

$$\begin{gathered} {\text{Find}}\quad {\mathbf{x}} = \left[ {x_{1} ,x_{2} ,...,x_{9} } \right]^{{\text{T}}} \hfill \\ {\text{Maximize}}\quad P_{{{\text{cr2}}}} \hfill \\ {\text{Subject to}} \hfill \\ g_{1} ({\mathbf{x}}) = \frac{{\sigma_{1} }}{{k_{1} \sigma_{s} }} - 1 \le 0\;,\quad g_{2} ({\mathbf{x}}) = \frac{{\sigma_{2} }}{{k_{2} \sigma_{s} }} - 1 \le 0\;,\quad g_{3} ({\mathbf{x}}) = \frac{{\sigma_{3} }}{{k_{3} \sigma_{s} }} - 1 \le 0 \hfill \\ g_{4} ({\mathbf{x}}) = 1 - \frac{{P_{{{\text{cr1}}}} }}{{P_{{\text{c}}} }} \le 0\;,\quad g_{5} ({\mathbf{x}}) = \frac{{h_{1} }}{{23t_{2} }} - 1 \le 0\;,\quad g_{6} ({\mathbf{x}}) = \frac{{b_{1} }}{{6t_{1} }} - 1 \le 0 \hfill \\ g_{7} ({\mathbf{x}}) = \frac{{h_{2} }}{{23t_{4} }} - 1 \le 0\;,\quad g_{8} ({\mathbf{x}}) = \frac{{b_{2} }}{{6t_{3} }} - 1 \le 0\;,\quad g_{9} ({\mathbf{x}}) = \frac{{w({\mathbf{x}})}}{{w_{0} }} - 1 \le 0, \hfill \\ \end{gathered}$$
(22)

where \({\mathbf{x}}\) denotes the design variables, whose detailed information and value ranges are listed in Table 5; \(P_{{{\text{cr2}}}}\) is the global buckling pressure, where a larger value indicates better global stability of the shell. \(\sigma_{1} ,\,\sigma_{2} ,\,{\text{and}}\,\sigma_{3}\) are the mid-span mid-plane stress of the shell, the longitudinal stress at the rib, and the rib stress, respectively. Moreover, \(k_{1} \sim k_{3}\) are three factors determining the critical stress associated with the yield limit of the material \(\sigma_{s} = 650\,{\text{MPa}}\); their values are 0.80, 1.15, and 0.60, respectively, in line with the ship norm (Zhou et al. 2021). \(P_{{{\text{cr1}}}}\) is the local buckling pressure, and \(P_{{\text{c}}} = 3\,{\text{MPa}}\) is the computational pressure simulating an underwater working depth of 300 m. \(w({\mathbf{x}})\) is the weight function, and \(w_{0} = 76\) ton is the weight limit of the stiffened cylindrical shell. In this case, the software ANSYS 18.2 is employed to obtain the simulated responses of \(\sigma_{1} ,\sigma_{2} ,\sigma_{3}\), and \(P_{{{\text{cr2}}}}\) separately. The meshing schemes for the strength and global stability simulations are: more than 137,600 elements for the HF strength simulation and about 14,300 elements for the LF strength simulation; more than 35,000 elements for the HF global stability simulation and about 2,800 elements for the LF global stability simulation. It has been demonstrated that the convergence of the finite element analysis is guaranteed for both the HF strength and stability analyses (Yi et al. 2018). In addition, the mesh models and corresponding simulation results of the stability analysis are depicted in Fig. 7 for illustration.

Table 5 Value space of the design variables
Fig. 7 The mesh models and corresponding results of different fidelity simulations of global stability

It can be seen from Fig. 7 that, although there is a great discrepancy in the number of mesh grids, the simulation results of the HF and LF models have similar contour distributions, which indicates that the coarse model is an excellent simplification. All simulations are executed on a computational platform with a 3.30 GHz AMD Ryzen 9 5900HX eight-core processor and 16 GB RAM. The simulation times for an HF model and an LF model are approximately 60 s and 19 s, respectively; therefore, the cost ratio is set to 3 in this case.

To optimize the structure, 20 HF samples and 30 LF samples are initially generated through the LHD method. Then the optimization is conducted by the five non-parallel optimization methods introduced in the last section, with the maximum NEFE set to 50; the optimization results are provided in Table 6. As the results show, the proposed VF-CLCB obtained the shell with the best global stability among the compared methods. The simulation results for global stability are illustrated in Fig. 8. It can be seen that the global stability of the stiffened cylindrical shell is enhanced by the rational design of the shape parameters, which also indicates that VF-CLCB can effectively handle time-consuming VF engineering optimization problems.

Table 6 Optimization results of the non-parallel methods for the engineering case
Fig. 8 Simulation results of the optimum stiffened cylindrical shell by the proposed VF-CLCB method

The proposed PVF-CLCB approach is then used to optimize this engineering problem with the same initial samples, and the maximum NEI is set to 20. The optimization results are provided in Table 7, from which it can be seen that PVF-CLCB obtains better structures than the PCEI method in both parallel scenarios, although the optimal structures are slightly worse than those obtained by VF-CLCB. Moreover, PCEI and PVF-CLCB with \(q = 10\) outperform those with \(q = 5\), indicating that the parallel optimization method can enhance its performance as parallel computation resources are added.

Table 7 Optimization results of the parallel methods for the engineering case

7 Conclusion

In this work, a VF-CLCB criterion is proposed to solve computationally expensive COPs with two levels of fidelity. Specifically, a constraint-oriented variable-fidelity CLCB function and an objective-oriented variable-fidelity PLCB function are defined based on the VF surrogate models, from which the promising sample points and corresponding fidelity levels are determined. Moreover, an adaptive selection strategy is developed to further enhance the optimization efficiency. To verify the effectiveness of the proposed VF-CLCB method, 12 benchmark numerical examples and an engineering case were tested. The results are summarized as follows: (1) the proposed VF-CLCB method is more efficient and robust than the popular optimization methods; (2) VF-CLCB shows obvious superiority across different cost ratios; and (3) for the engineering case, the global stability optimization of the stiffened cylindrical shell, the proposed method obtained the structure with the best global stability.

To shorten the design cycle, VF-CLCB is further extended to a PVF-CLCB method that can optimize with several computers running simultaneously. Concretely, a VF influence function is defined to approximately calculate the estimation error, and an allocation strategy is developed to adaptively distribute the computation resources between the objective and constraints. The results on the numerical test cases indicate that the proposed PVF-CLCB method speeds up the optimization process by 2.83 times with five computers working simultaneously and by 4.00 times with ten. For the engineering problem, the results indicate that PVF-CLCB can significantly shorten the design cycle compared with the non-parallel methods.

This work improves the effectiveness of using VF surrogate models to solve computationally expensive COPs. However, since the computational complexity of VF modeling is commonly unbearable for high-dimensional problems, the VF surrogate model is hardly employed in that situation. In future work, fast modeling of VF surrogate models will be researched to enhance their applicability in engineering optimization. Besides, the proposed approach will be extended to a more general method considering multiple (more than two) levels of fidelity and applied to more complicated engineering problems.