
A kriging-based active learning algorithm for contour estimation of integrated response with noise factors

  • Original Article
  • Engineering with Computers

Abstract

Contours are commonly employed to gain insight into the influence of inputs when designing engineering systems. Estimating a contour from computer experiments by sequentially updating kriging models [also called Gaussian process (GP) models] has received increasing attention as a way to obtain accurate predictions within a limited simulation budget. In many engineering systems, there are two types of inputs: control factors specified by design engineers and uncontrollable noise factors due to manufacturing errors or environmental variations. To mitigate the undesirable effects of noise factors, the integrated response, defined as the expectation of the response with respect to the noise factors, is a widely used robust performance measure. Predicting a contour of the integrated response is an important task for identifying sets of control factors that keep the integrated response at a desirable level. However, most of the existing literature focuses on estimating contours over control factors only and ignores the inevitable noise factors. In this article, we propose an efficient active learning algorithm, based on GP models, for estimating a contour of the integrated response from time-consuming computer models. Two acquisition functions (AFs) are proposed to determine the next design points for both control factors and noise factors when updating the GP models to better estimate the contour. Closed-form expressions for the AFs are developed to facilitate their optimization. Three numerical examples with different types of contours and a real aerodynamic airfoil example demonstrate that the proposed active learning algorithm efficiently produces more accurate contour estimates.
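To make the workflow described in the abstract concrete, the following is a minimal sketch of a generic kriging-based active-learning loop for contour estimation of an integrated response. It assumes a toy simulator, a normally distributed noise factor approximated by a fixed set of nodes, scikit-learn's GaussianProcessRegressor, and a naive uncertainty-plus-proximity acquisition rule; none of these are the authors' implementation or the paper's acquisition functions, whose closed-form expressions are developed in Sect. 3.

```python
# Illustrative sketch only: a generic kriging-based active-learning loop for
# estimating a contour of the integrated response. The simulator, noise model,
# and acquisition rule below are placeholder assumptions, not the AFs proposed
# in the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(xc, xe):
    """Hypothetical expensive computer model y(x_c, x_e)."""
    return np.sin(3.0 * xc) + 0.5 * xe * np.cos(xc)

rng = np.random.default_rng(1)
level = 0.5                                  # target contour level a
xe_nodes = rng.normal(scale=0.3, size=10)    # nodes approximating the noise distribution
X = rng.uniform(0.0, 1.0, size=(8, 2))       # initial design over (x_c, x_e)
X[:, 1] = rng.normal(scale=0.3, size=8)
y = np.array([simulator(xc, xe) for xc, xe in X])

xc_cand = np.linspace(0.0, 1.0, 200)         # candidate control settings
for _ in range(20):                          # sequential design iterations
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)

    # Integrated response: average GP predictions over the noise nodes,
    # a crude stand-in for E_{x_e}[Y(x_c, x_e)].
    grid = np.array([[xc, xe] for xc in xc_cand for xe in xe_nodes])
    mu, sd = gp.predict(grid, return_std=True)
    mu_int = mu.reshape(len(xc_cand), -1).mean(axis=1)
    sd_int = sd.reshape(len(xc_cand), -1).mean(axis=1)

    # Placeholder acquisition: favour control settings that are both uncertain
    # and close to the target contour level.
    af = sd_int - np.abs(mu_int - level)
    xc_next = xc_cand[np.argmax(af)]

    # Pick the noise setting where the GP is most uncertain at xc_next.
    cand = np.column_stack([np.full_like(xe_nodes, xc_next), xe_nodes])
    xe_next = xe_nodes[np.argmax(gp.predict(cand, return_std=True)[1])]

    X = np.vstack([X, [xc_next, xe_next]])
    y = np.append(y, simulator(xc_next, xe_next))

# Control settings whose predicted integrated response is near the target level
contour_xc = xc_cand[np.abs(mu_int - level) < 0.02]
```

In the paper's algorithm, the placeholder acquisition step above would be replaced by maximizing the two proposed AFs to select the next control-factor and noise-factor settings.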


Availability of data and materials

The data and materials used or analyzed during the current study are available from the corresponding author on reasonable request.

Code availability

The code used during the current study is available from the corresponding author on reasonable request.


Funding

This work was supported by the National Natural Science Foundation of China (No. 71902089, No. 72072089, No. 71702072, No. 71931006, No. 71801126), the Natural Science Foundation of Jiangsu Province (No. BK20190389), the Fundamental Research Fund for the Central Universities (No. NR2019014), the start-up grant of Nanjing University of Aeronautics and Astronautics (No. YAH18091), ShuangChuang Program of Jiangsu Province, and Nanjing’s Science and Technology Innovation Project.

Author information


Contributions

Not applicable.

Corresponding author

Correspondence to Mei Han.

Ethics declarations

Conflict of interest

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Closed-form expression for the integral in Eq. (16)

In this appendix, we provide a closed-form expression for the integral \(\int_{a - \alpha s_{L,n}(\varvec{x}_c)}^{a + \alpha s_{L,n}(\varvec{x}_c)} \left(y - \hat{L}_{n}(\varvec{x}_c)\right)^{2} \phi\left(\frac{y - \hat{L}_{n}(\varvec{x}_c)}{s_{L,n}(\varvec{x}_c)}\right)\mathrm{d}y\) in Eq. (16). It is given by

$$\int_{a - \alpha s_{L,n}(\varvec{x}_c)}^{a + \alpha s_{L,n}(\varvec{x}_c)} \left(y - \hat{L}_{n}(\varvec{x}_c)\right)^{2} \phi\left(\frac{y - \hat{L}_{n}(\varvec{x}_c)}{s_{L,n}(\varvec{x}_c)}\right)\mathrm{d}y = s_{L,n}^{2}(\varvec{x}_c)\left(\hat{L}_{n}(\varvec{x}_c) - a\right)\phi_{\Delta}(u_{1},u_{2}) - \alpha s_{L,n}^{3}(\varvec{x}_c)\left[\phi(u_{1}) + \phi(u_{2})\right] + s_{L,n}^{3}(\varvec{x}_c)\Phi_{\Delta}(u_{1},u_{2}),$$
(28)

where \(u_{1} = \left(a - \hat{L}_{n}(\varvec{x}_c)\right)/s_{L,n}(\varvec{x}_c) + \alpha\) and \(u_{2} = u_{1} - 2\alpha\).
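As a quick sanity check of Eq. (28), the snippet below compares the closed-form value with numerical quadrature. It assumes that \(\phi\) and \(\Phi\) are the standard normal pdf and cdf and that \(\phi_{\Delta}(u_{1},u_{2}) = \phi(u_{1}) - \phi(u_{2})\) and \(\Phi_{\Delta}(u_{1},u_{2}) = \Phi(u_{1}) - \Phi(u_{2})\), definitions not repeated in this excerpt; the test values for \(\hat{L}_{n}(\varvec{x}_c)\), \(s_{L,n}(\varvec{x}_c)\), \(a\), and \(\alpha\) are arbitrary.

```python
# Numerical check of the closed-form integral in Eq. (28) against quadrature.
# Assumed definitions: phi/Phi are the standard normal pdf/cdf, and
# phi_Delta(u1, u2) = phi(u1) - phi(u2), Phi_Delta(u1, u2) = Phi(u1) - Phi(u2).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

L_hat, s_L, a, alpha = 1.3, 0.7, 0.9, 2.0    # arbitrary test values

numeric, _ = quad(lambda y: (y - L_hat) ** 2 * norm.pdf((y - L_hat) / s_L),
                  a - alpha * s_L, a + alpha * s_L)

u1 = (a - L_hat) / s_L + alpha
u2 = u1 - 2.0 * alpha
closed = (s_L ** 2 * (L_hat - a) * (norm.pdf(u1) - norm.pdf(u2))
          - alpha * s_L ** 3 * (norm.pdf(u1) + norm.pdf(u2))
          + s_L ** 3 * (norm.cdf(u1) - norm.cdf(u2)))

print(numeric, closed)   # the two values agree to quadrature accuracy
```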

Appendix 2: Proof of Eq. (20)

In this appendix, we prove Eq. (20) in Sect. 3.2 to derive the closed-form expression of the proposed EVSD criterion. Note that the vector of posterior means \(\hat{\varvec{y}}_{C}^{n+1}(\varvec{x}_c^{n+1})\) is linear in the training data \(\varvec{Y}_{n+1} = \left(\varvec{Y}_{n}^{T}, Y(\varvec{x}_c^{n+1}, \varvec{x}_e)\right)^{T}\) and is given by

$$\hat{\varvec{y}}_{C}^{n+1}(\varvec{x}_c^{n+1}) = \varvec{1}_{m}\hat{\beta} + \varvec{r}_{n+1,m}^{T}\varvec{R}_{n+1}^{-1}\left(\varvec{Y}_{n+1} - \varvec{1}_{n+1}\hat{\beta}\right),$$
(29)

where \(\varvec{r}_{n+1,m} = \left[\varvec{r}_{n+1}(\varvec{x}_c^{n+1}, \varvec{x}_{e,1}), \ldots, \varvec{r}_{n+1}(\varvec{x}_c^{n+1}, \varvec{x}_{e,m})\right]\), \(\varvec{r}_{n+1}(\varvec{x}_c, \varvec{x}_{e,i})\) is an \((n+1) \times 1\) vector of correlations between \(Y(\varvec{x}_c, \varvec{x}_{e,i})\) and \(\varvec{Y}_{n+1}\), and \(\varvec{R}_{n+1}\) denotes the correlation matrix of \(\varvec{Y}_{n+1}\). Then, the posterior mean of \(l(\varvec{x}_c)\), i.e., \(\hat{L}_{n+1}^{x_e}(\varvec{x}_c^{n+1}) = \varvec{w}^{T}\hat{\varvec{y}}_{C}^{n+1}(\varvec{x}_c^{n+1})\), is also linear in \(\varvec{Y}_{n+1}\) and is given by

$$\hat{L}_{n+1}^{x_e}(\varvec{x}_c^{n+1}) = \varvec{w}^{T}\varvec{1}_{m}\hat{\beta} + \varvec{w}^{T}\varvec{r}_{n+1,m}^{T}\varvec{R}_{n+1}^{-1}\left(\varvec{Y}_{n+1} - \varvec{1}_{n+1}\hat{\beta}\right).$$
(30)

We further rewrite \(\hat{L}_{n+1}^{x_e}(\varvec{x}_c^{n+1})\) as a linear function of \(Y(\varvec{x}_c^{n+1}, \varvec{x}_e)\) by expressing \(\varvec{R}_{n+1}^{-1}\) in partitioned form:

$$\varvec{R}_{n+1}^{-1} = \left(\begin{array}{cc} \varvec{R}_{n} & \varvec{r}_{0} \\ \varvec{r}_{0}^{T} & 1 \end{array}\right)^{-1} = \left(\begin{array}{cc} \varvec{R}_{n}^{-1}\left(\varvec{I}_{n} + v^{-1}\varvec{r}_{0}\varvec{r}_{0}^{T}\varvec{R}_{n}^{-1}\right) & -v^{-1}\varvec{R}_{n}^{-1}\varvec{r}_{0} \\ -v^{-1}\varvec{r}_{0}^{T}\varvec{R}_{n}^{-1} & v^{-1} \end{array}\right) = \left(\begin{array}{cc} \varvec{a}_{1} & \varvec{a}_{2} \\ \varvec{a}_{2}^{T} & a_{3} \end{array}\right),$$
(31)

where \(v = 1 - \varvec{r}_{0}^{T}\varvec{R}_{n}^{-1}\varvec{r}_{0}\). Then, \(\hat{L}_{n+1}^{x_e}(\varvec{x}_c^{n+1})\) given in Eq. (30) can be rewritten as

$$\hat{L}_{n+1}^{x_e}(\varvec{x}_c^{n+1}) = l_{1} + l_{2}\left(Y(\varvec{x}_c^{n+1}, \varvec{x}_e) - \hat{\beta}\right),$$
(32)

where \(l_{1} = \varvec{w}^{T}\varvec{1}_{m}\hat{\beta} + \varvec{w}^{T}\left\{\left(\varvec{r}_{n+1,m}^{1:n}\right)^{T}\varvec{a}_{1} + \left(\varvec{r}_{n+1,m}^{n+1}\right)^{T}\varvec{a}_{2}^{T}\right\}\left(\varvec{Y}_{n} - \varvec{1}_{n}\hat{\beta}\right)\), \(l_{2} = \varvec{w}^{T}\left\{\left(\varvec{r}_{n+1,m}^{1:n}\right)^{T}\varvec{a}_{2} + \left(\varvec{r}_{n+1,m}^{n+1}\right)^{T}a_{3}\right\}\), \(\varvec{r}_{n+1,m}^{1:n}\) denotes the first \(n\) rows of \(\varvec{r}_{n+1,m}\), and \(\varvec{r}_{n+1,m}^{n+1}\) denotes the \((n+1)\)th row of \(\varvec{r}_{n+1,m}\).
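The partitioned inverse in Eq. (31) can also be verified numerically. The snippet below builds the correlation matrix of a few assumed design points under a squared-exponential correlation function, borders it with the correlations of a new point, and checks that the block expressions \(\varvec{a}_{1}\), \(\varvec{a}_{2}\), \(a_{3}\) reproduce \(\varvec{R}_{n+1}^{-1}\); the correlation function, design points, and range parameter are arbitrary choices for the test.

```python
# Numerical check of the partitioned inverse in Eq. (31).
import numpy as np

def corr(x1, x2, theta=10.0):
    """Squared-exponential correlation; an assumed choice for this test."""
    return np.exp(-theta * (x1[:, None] - x2[None, :]) ** 2)

x_old = np.linspace(0.0, 1.0, 6)             # n = 6 existing design points
x_new = np.array([0.37])                     # the (n+1)th point

R_n = corr(x_old, x_old)                     # n x n correlation matrix
r0 = corr(x_old, x_new)                      # n x 1 correlation vector r_0
R_np1 = np.block([[R_n, r0], [r0.T, np.ones((1, 1))]])

Rn_inv = np.linalg.inv(R_n)
v = 1.0 - (r0.T @ Rn_inv @ r0).item()        # Schur complement of R_n
a1 = Rn_inv @ (np.eye(len(x_old)) + (r0 @ r0.T @ Rn_inv) / v)
a2 = -Rn_inv @ r0 / v
a3 = 1.0 / v
block_inv = np.block([[a1, a2], [a2.T, np.array([[a3]])]])

print(np.allclose(block_inv, np.linalg.inv(R_np1)))   # expected: True
```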


About this article


Cite this article

Han, M., Huang, Q., Ouyang, L. et al. A kriging-based active learning algorithm for contour estimation of integrated response with noise factors. Engineering with Computers 39, 1341–1362 (2023). https://doi.org/10.1007/s00366-021-01516-2

