New learning functions for active learning Kriging reliability analysis using a probabilistic approach: KO and WKO functions

  • Research Paper
  • Published in Structural and Multidisciplinary Optimization

Abstract

Reducing the cost of computation without compromising the accuracy of the solution is a recognized challenge in reliability analysis, one that has become attainable through surrogate models trained with robust techniques such as active learning Kriging (AK) reliability methods. In an AK reliability method, a Kriging predictor is built from a small design of experiments (DoE) and is refined stepwise in the vicinity of the limit state function (LSF), a procedure called the learning process, until a stopping criterion is met. The motivation of the current study is to enhance the accuracy and efficiency of AK reliability analysis by developing new learning functions, new stopping criteria, and a new method for selecting the next candidate used to update the DoE during the learning process. In this paper, two new learning functions, named Kriging occurrence (KO) and weighted KO (WKO), are proposed based on a probabilistic approach. A hybrid selection scheme for the next candidate is introduced, which simultaneously considers the probability of improvement and the density of the DoE, and a new stopping criterion based on the relative mean of the learning function is recommended. A thorough study of the literature is conducted in which 12 learning functions are summarized, and their performance is compared with that of the newly developed learning functions through five comparative examples. The results show that the new learning functions enhance the accuracy and efficiency of the learning process.


References

  • ACI 318 Committee (2019) Building Code requirements for structural concrete (ACI 318–19). American Concrete Institute, Farmington Hills

  • Arregui-Mena JD, Margetts L, Mummery PM (2016) Practical application of the stochastic finite element method. Arch Comput Methods Eng 23(1):171–190

  • Au S-K, Beck JL (2003) Subset simulation and its application to seismic risk based on dynamic analysis. J Eng Mech 129(8):901–917

  • Balesdent M, Morio J, Marzat J (2013) Kriging-based adaptive importance sampling algorithms for rare event estimation. Struct Saf 44:1–10

  • Bartlett FM, Hong HP, Zhou W (2003) Load factor calibration for the proposed edition of the national building code of Canada statistics of loads and load effects. Can J Civ Eng 30(2):429–439

  • Bichon BJ, Eldred MS, Swiler LP, Mahadevan S, McFarland JM (2008) Efficient global reliability analysis for nonlinear implicit performance functions. AIAA J 46(10):2459–2468

  • Bourinet J-M, Deheeger F, Lemaire M (2011) Assessing small failure probabilities by combined subset simulation and support vector machines. Struct Saf 33(6):343–353

  • Cadini F, Santos F, Zio E (2014) An improved adaptive Kriging-based importance technique for sampling multiple failure regions of low probability. Reliab Eng Syst Saf 131:109–117

  • Cadini F, Lombardo SS, Giglio M (2020) Global reliability sensitivity analysis by Sobol-based dynamic adaptive Kriging importance sampling. Struct Saf 87:101998

  • Chai X, Sun Z, Wang J, Zhang Y, Yu Z (2019) A new Kriging-based learning function for reliability analysis and its application to fatigue crack reliability. IEEE Access 7:122811–122819

  • CSA A23.3-19 (2019) Design of concrete structures. Canadian Standards Association

  • Echard B, Gayton N, Lemaire M (2011) AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Struct Saf 33(2):145–154

  • Echard B, Gayton N, Lemaire M, Relun N (2013) A combined importance sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models. Reliab Eng Syst Saf 111:232–240

  • El Haj A-K, Soubra A-H (2021) Improved active learning probabilistic approach for the computation of failure probability. Struct Saf 88:102011

  • François S, Schevenels M, Dooms D, Jansen M, Wambacq J, Lombaert G, Degrande G, De Roeck G (2021) Stabil: an educational Matlab toolbox for static and dynamic structural analysis. Comput Appl Eng Educ 29(5):1372–1389

  • Liu F, Wei P, Zhou C, Yue Z (2020) Reliability and reliability sensitivity analysis of structure by combining adaptive linked importance sampling and Kriging reliability method. Chin J Aeronaut 33(4):1218–1227

  • Gaspar B, Teixeira AP, Soares CG (2017) Adaptive surrogate model with active refinement combining Kriging and a trust region method. Reliab Eng Syst Saf 165:277–291

  • Gayton N, Bourinet JM, Lemaire M (2003) CQ2RS: a new statistical approach to the response surface method for reliability analysis. Struct Saf 25(1):99–121

  • Griffiths DV, Fenton GA (2008) Risk assessment in geotechnical engineering. Wiley, Hoboken

  • Huang X, Chen J, Zhu H (2016) Assessing small failure probabilities by AK–SS: an active learning method combining Kriging and subset simulation. Struct Saf 59:86–95

  • Huh J, Haldar A (2011) A novel risk assessment for complex structural systems. IEEE Trans Reliab 60(1):210–218

  • Jensen HA, Muñoz A, Papadimitriou C, Millas E (2016) Model-reduction techniques for reliability-based design problems of complex structural systems. Reliab Eng Syst Saf 149:204–217

  • Jing Z, Chen J, Li X (2019) RBF-GA: an adaptive radial basis function metamodeling with genetic algorithm for structural reliability analysis. Reliab Eng Syst Saf 189:42–57

  • Jones DR, Schonlau M, Welch WJ (1998) Efficient global optimization of expensive black-box functions. J Global Optim 13(4):455–492

  • Kaymaz I (2005) Application of Kriging method to structural reliability problems. Struct Saf 27(2):133–151

  • Khorramian K, Oudah F (2022) Active learning Kriging-based reliability for assessing the structural safety of infrastructure: theory and application. In: Naser MZ (ed) Leveraging artificial intelligence into engineering, management, and safety of infrastructure. Taylor and Francis (CRC), Cham

  • Khorramian K, Sadeghian P, Oudah F (2021a) A preliminary reliability-based analysis for slenderness limit of FRP reinforced concrete columns. In: 8th International Conference on Advanced Composite Materials in Bridges and Structures (ACMBS), Virtual

  • Khorramian K, Oudah F, Sadeghian P (2021b) Reliability-based evaluation of the stiffness reduction factor for slender GFRP reinforced concrete columns. In: CSCE Annual Conference, Canadian Society for Civil Engineering, Virtual

  • Khorramian K, Sadeghian P, Oudah F (2021c) Second-order analysis of slender GFRP reinforced concrete columns using artificial neural network. Virtual

  • Khorramian K, Sadeghian P, Oudah F (2022) Slenderness limit for glass fiber-reinforced polymer reinforced concrete columns: reliability-based approach. ACI Struct J 119(3):249–262

  • Kim J, Song J (2020) Probability-adaptive Kriging in n-ball (PAK-Bn) for reliability analysis. Struct Saf 85:101924

  • Lee I, Choi KK, Du L, Gorsich D (2008) Inverse analysis method using MPP-based dimension reduction for reliability-based design optimization of nonlinear and multi-dimensional systems. Comput Methods Appl Mech Eng 198(1):14–27

  • Li F, Liu J, Yan Y, Rong J, Yi J, Wen G (2020) A time-variant reliability analysis method for non-linear limit-state functions with the mixture of random and interval variables. Eng Struct 213:110588

  • Liu Y, Li L, Zhao S (2022) Efficient Bayesian updating with two-step adaptive Kriging. Struct Saf 95:102172

  • Lophaven SN, Nielsen HB, Søndergaard J (2002a) A Matlab Kriging toolbox. Technical University of Denmark, Kongens Lyngby, Technical Report No. IMM-TR-2002-12

  • Lophaven SN, Nielsen HB, Søndergaard J (2002b) DACE: a Matlab Kriging toolbox. Vol. 2. IMM Informatics and Mathematical Modelling. The Technical University of Denmark, pp. 1–34.

  • Lv Z, Lu Z, Wang P (2015) A new learning function for Kriging and its applications to solve reliability problems in engineering. Comput Math Appl 70(5):1182–1197

  • Moustapha M, Marelli S, Sudret B (2022) Active learning for structural reliability: survey, general framework and benchmark. Struct Saf 96:102174

  • Nowak AS, Collins KR (2000) Reliability of structures. McGraw-Hill

  • Nowak AS, Szerszen MM (2003) Calibration of Design code for buildings (ACI 318): Part 1—statistical models for resistance. ACI Struct J 100(3):377–382

  • Oudah F, El Naggar MH, Norlander G (2019) Unified system reliability approach for single and group pile foundations-theory and resistance factor calibration. Comput Geotech 108:173–182

  • Owen A, Zhou Y (2000) Safe and effective importance sampling. J Am Stat Assoc 95(449):135–143

  • Owen AB, Maximov Y, Chertkov M (2019) Importance sampling the union of rare events with an application to power systems analysis. Electron J Stat 13(1):231–254

  • Rajashekhar MR, Ellingwood BR (1993) A new look at the response surface approach for reliability analysis. Struct Saf 12(3):205–220

  • Schueremans L, Van Gemert D (2005) Benefit of splines and neural networks in simulation based structural reliability analysis. Struct Saf 27(3):246–261

  • Shi Y, Lu Z, He R, Zhou Y, Chen S (2020) A novel learning function based on Kriging for reliability analysis. Reliab Eng Syst Saf 198:106857

  • Shield CK, Galambos TV, Gulbrandsen P (2011) On the history and reliability of the flexural strength of FRP reinforced concrete members in ACI 440.1 R. Spec Publ 275:1–18

  • Stefanou G (2009) The stochastic finite element method: past, present and future. Comput Methods Appl Mech Eng 198(9–12):1031–1051

  • Stewart MG, Rosowsky DV (1998) Time-dependent reliability of deteriorating reinforced concrete bridge decks. Struct Saf 20(1):91–109

  • Sun Z, Wang J, Li R, Tong C (2017) LIF: a new Kriging based learning function and its application to structural reliability analysis. Reliab Eng Syst Saf 157:152–165

  • Teixeira R, Nogal M, O’Connor A (2021) Adaptive approaches in metamodel-based reliability analysis: a review. Struct Saf 89:102019

  • Torre E, Marelli S, Embrechts P, Sudret B (2019) Data-driven polynomial chaos expansion for machine learning regression. J Comput Phys 338:601–623

  • Tsagris M, Beneki C, Hassani H (2014) On the folded normal distribution. Mathematics 2(1):12–28

  • Wang Z, Shafieezadeh A (2019) ESC: an efficient error-based stopping criterion for Kriging-based reliability analysis methods. Struct Multidisc Optim 59(5):1621–1637

  • Wang Z, Shafieezadeh A (2020) Highly efficient Bayesian updating using metamodels: an adaptive Kriging-based approach. Struct Saf 84:101915

  • Wang Z, Almeida J Jr, St-Pierre L, Wang Z, Castro SG (2020) Reliability-based buckling optimization with an accelerated Kriging metamodel for filament-wound variable angle tow composite cylinders. Compos Struct 254:112821

  • Wang Z, Almeida JHS Jr, Ashok A, Wang Z, Castro SG (2022a) Lightweight design of variable-angle filament-wound cylinders combining Kriging-based metamodels with particle swarm optimization. Struct Multidisc Optim 65(5):140

  • Wang J, Xu G, Li Y, Kareem A (2022b) AKSE: a novel adaptive Kriging method combining sampling region scheme and error-based stopping criterion for structural reliability analysis. Reliab Eng Syst Saf 219:108214

  • Wen Z, Pei H, Liu H, Yue Z (2016) A sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability. Reliab Eng Syst Saf 153:170–179

  • Xiang Z, Chen J, Bao Y, Li H (2020) An active learning method combining deep neural network and weighted sampling for structural reliability analysis. Mech Syst Signal Process 140:106684

  • Xiao N-C, Zuo MJ, Zhou C (2018) A new adaptive sequential sampling method to construct surrogate models for efficient reliability analysis. Reliab Eng Syst Saf 169:330–338

  • Xiao S, Oladyshkin S, Nowak W (2020) Reliability analysis with stratified importance sampling based on adaptive Kriging. Reliab Eng Syst Saf 197:106852

  • Xiong B, Tan H (2018) A robust and efficient structural reliability method combining radial-based importance sampling and Kriging. Sci China Technol Sci 61(5):724–734

  • Yang X, Liu Y, Zhang Y, Yue Z (2015a) Probability and convex set hybrid reliability analysis based on active learning Kriging model. Appl Math Model 39(14):3954–3971

  • Yang X, Liu Y, Gao Y, Zhang Y, Gao Z (2015b) An active learning Kriging model for hybrid reliability analysis with both random and interval variables. Struct Multidisc Optim 51(5):1003–1016

  • You X, Zhang M, Tang D, Niu Z (2022) An active learning method combining adaptive Kriging and weighted penalty for structural reliability analysis. Proc Inst Mech Eng, Part o: J Risk Reliab 236(1):160–172

  • Zhang Y-M, Zhu L-S, Wang X-G (2010) Advanced method to estimate reliability-based sensitivity of mechanical components with strongly nonlinear performance function. Appl Math Mech 31(10):1325–1336

  • Zhang X, Wang L, Sørensen JD (2019a) REIF: a novel active-learning function toward adaptive Kriging surrogate models for structural reliability analysis. Reliab Eng Syst Saf 185:440–454

  • Zhang J, Xiao M, Gao L, Zhang Y (2019b) MEAK-MCS: metamodel error measure function based active learning Kriging with Monte Carlo simulation for reliability analysis. In: IEEE 23rd International Conference on Computer Supported Cooperative Work in Design (CSCWD)

  • Zhang X, Wang L, Sørensen JD (2020) AKOIS: an adaptive Kriging oriented importance sampling method for structural system reliability analysis. Struct Saf 82:101876

  • Zhou T, Peng Y (2020) Structural reliability analysis via dimension reduction, adaptive sampling, and Monte Carlo simulation. Struct Multidisc Optim 62(5):2629–2651

Funding

The project was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), Dalhousie University, Mathematics of Information Technology and Complex Systems (MITACS), and Norlander Oudah Engineering Ltd. (NOEL).

Author information

Corresponding author

Correspondence to Koosha Khorramian.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Replication of results

Some or all data, models, or codes generated or used during the study are available from the corresponding author by request.

Ethical approval

Not applicable.

Informed consent

Not applicable.

Additional information

Responsible Editor: Yoojeong Noh

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 116 KB)

Appendix 1: Hc learning function—A revisit to H learning function

During a check of the derivation of the learning function H in the literature (Lv et al. 2015), a mistake was observed. Therefore, the authors independently derived the equation for the H learning function, which is presented in the following as Hc, the corrected H learning function. In addition, the sources of inaccuracy in the original derivation of the H learning function are summarized in Table 13 at the end of this appendix. Eq. (A1) shows the information entropy of the surrogate response estimator (i.e., \(\widehat{G}\left({\varvec{x}}\right)\)), whose solution is the Hc learning function.

$${\mathrm{H}}_{\mathrm{c}}\left(\widehat{G}\left({\varvec{x}}\right)\right)=\left|-{\int }_{\overline{G }\left(x\right)-\varepsilon }^{\overline{G }\left(x\right)+\varepsilon }\mathrm{ln}\left[f\left(\widehat{G}\left({\varvec{x}}\right)\right)\right]f\left(\widehat{G}\left({\varvec{x}}\right)\right)d\left(\widehat{G}\left({\varvec{x}}\right)\right) \right|=\left|-{\int }_{{G}^{-}}^{{G}^{+}}f\left(\widehat{G}\right)\mathrm{ln}\left[f\left(\widehat{G}\right)\right]d\widehat{G}\right|,$$
(A1)

where \(f\left(\widehat{G}\left({\varvec{x}}\right)\right)\) is the PDF of \(\widehat{G}\left({\varvec{x}}\right)\), which is assumed to be a normal distribution, \(d\left(\widehat{G}\left({\varvec{x}}\right)\right)\) is the differential of \(\widehat{G}\left({\varvec{x}}\right)\), \(\varepsilon\) corresponds to \(2{\sigma }_{\widehat{G}\left(x\right)}\), \({G}^{+}\) is the upper bound of the integration (i.e., \({G}^{+}=\overline{G }\left(x\right)+\varepsilon\)), \({G}^{-}\) is the lower bound of the integration (i.e., \({G}^{-}=\overline{G }\left(x\right)-\varepsilon\)), and x is the desired design site, which is omitted from the remaining equations for simplicity. By substituting the PDF of \(\widehat{G}\left({\varvec{x}}\right)\) into Eq. (A1) and simplifying the integral, the Hc learning function can be written in terms of two other integrals (i.e., I1 and I2), as presented in Eq. (A2).

$${\mathrm{H}}_{\mathrm{c}}\left(\widehat{G}\right)=\left|-{\int }_{{G}^{-}}^{{G}^{+}}\frac{1}{\sqrt{2\pi }{\sigma }_{\widehat{G}}}\mathrm{exp}\left\{-\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\right\}\mathrm{ln}\left[\frac{1}{\sqrt{2\pi }{\sigma }_{\widehat{G}}}\mathrm{exp}\left\{-\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\right\}\right]d\widehat{G} \right|=\left|-{\int }_{{G}^{-}}^{{G}^{+}}\frac{1}{\sqrt{2\pi }{\sigma }_{\widehat{G}}}\mathrm{exp}\left\{-\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\right\}\left[\mathrm{ln}\left(\frac{1}{\sqrt{2\pi }{\sigma }_{\widehat{G}}}\right)+\mathrm{ln}\left(\mathrm{exp}\left\{-\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\right\}\right)\right]d\widehat{G} \right|=\left|-{\int }_{{G}^{-}}^{{G}^{+}}\frac{1}{\sqrt{2\pi }{\sigma }_{\widehat{G}}}\mathrm{exp}\left\{-\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\right\}\left[-\mathrm{ln}\left(\sqrt{2\pi }{\sigma }_{\widehat{G}}\right)-\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\right]d\widehat{G} \right|=\left|\mathrm{ln}\left(\sqrt{2\pi }{\sigma }_{\widehat{G}}\right){\int }_{{G}^{-}}^{{G}^{+}}\frac{1}{\sqrt{2\pi }}\mathrm{exp}\left\{-\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\right\}\frac{d\widehat{G}}{{\sigma }_{\widehat{G}}}+\frac{1}{\sqrt{2\pi }{\sigma }_{\widehat{G}}}{\int }_{{G}^{-}}^{{G}^{+}}\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\mathrm{exp}\left\{-\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\right\}d\widehat{G}\right|=\left|{I}_{1}+{I}_{2}\right|.$$
(A2)

The solution to the integral I1 is straightforward and is presented in Eq. (A3).

$${I}_{1}=\mathrm{ln}\left(\sqrt{2\pi }{\sigma }_{\widehat{G}}\right){\int }_{{G}^{-}}^{{G}^{+}}\frac{1}{\sqrt{2\pi }}\mathrm{exp}\left\{-\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\right\}\frac{d\widehat{G}}{{\sigma }_{\widehat{G}}}=\mathrm{ln}\left(\sqrt{2\pi }{\sigma }_{\widehat{G}}\right){\int }_{{G}^{-}}^{{G}^{+}}\phi \left(\frac{\widehat{G}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)d\left(\frac{\widehat{G}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)=\mathrm{ln}\left(\sqrt{2\pi }{\sigma }_{\widehat{G}}\right)\left[\Phi \left(\frac{\widehat{G}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right]_{{G}^{-}}^{{G}^{+}}=\mathrm{ln}\left(\sqrt{2\pi }{\sigma }_{\widehat{G}}\right)\left[\Phi \left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)-\Phi \left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right].$$
(A3)
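
As a quick numerical check on Eq. (A3), the following sketch compares the closed-form expression for I1 with direct quadrature of the defining integral. It is written in Python with NumPy/SciPy purely for illustration (an assumption on our part; it is not the authors' code), using arbitrary trial values for the Kriging mean and standard deviation.

```python
# Minimal sketch (assumed Python/SciPy, not the authors' code): numerical check of Eq. (A3).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 0.3, 0.8                 # arbitrary Kriging mean and standard deviation at a design site x
eps = 2.0 * sigma                    # epsilon = 2 * sigma_G_hat
G_lo, G_hi = 0.0 - eps, 0.0 + eps    # integration bounds with G_bar = 0

# Left-hand side of Eq. (A3): quadrature of the integral defining I1
I1_quad, _ = quad(lambda g: np.log(np.sqrt(2 * np.pi) * sigma) * norm.pdf(g, mu, sigma), G_lo, G_hi)

# Right-hand side of Eq. (A3): closed form using the standard normal CDF
I1_closed = np.log(np.sqrt(2 * np.pi) * sigma) * (
    norm.cdf((G_hi - mu) / sigma) - norm.cdf((G_lo - mu) / sigma))

print(I1_quad, I1_closed)            # the two values should agree to quadrature accuracy
```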

The solution to integral I2 requires a change of variables and integration by parts. Eq. (A4) shows the integral I2.

$${I}_{2}=\frac{1}{\sqrt{2\pi }{\sigma }_{\widehat{G}}}{\int }_{{G}^{-}}^{{G}^{+}}\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\mathrm{exp}\left\{-\frac{{\left(\widehat{G}-{\mu }_{\widehat{G}}\right)}^{2}}{2{\sigma }_{\widehat{G}}^{2}}\right\}d\widehat{G}.$$
(A4)

To solve I2, an auxiliary variable z is defined as the standardized form of \(\widehat{G}\) (i.e., \(z=\frac{\widehat{G}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\) and \(dz=\frac{d\widehat{G}}{{\sigma }_{\widehat{G}}}\)), and the corresponding integration bounds are defined (i.e., \({z}^{+}=\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\) and \({z}^{-}=\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\)), so that Eq. (A4) can be written in the form of Eq. (A5).

$${I}_{2}=\frac{1}{2\sqrt{2\pi }}{\int }_{{z}^{-}}^{{z}^{+}}{z}^{2}\mathrm{exp}\left\{-\frac{{z}^{2}}{2}\right\}dz.$$
(A5)

To solve Eq. (A5), integration by parts is used (i.e., \(\int udv=uv-\int vdu\)), with \(u=z\) and \(dv=z\mathrm{exp}\left\{-\frac{{z}^{2}}{2}\right\}dz\) (i.e., \(v= -\mathrm{exp}\left\{-\frac{{z}^{2}}{2}\right\}\) and \(du= dz\)). Applying the integration by parts, the solution to Eq. (A5) can be presented as Eq. (A6).

$${I}_{2}=\frac{1}{2\sqrt{2\pi }}\left[-z\,\mathrm{exp}\left\{-\frac{{z}^{2}}{2}\right\}\right]_{{z}^{-}}^{{z}^{+}}+\frac{1}{2\sqrt{2\pi }}{\int }_{{z}^{-}}^{{z}^{+}}\mathrm{exp}\left\{-\frac{{z}^{2}}{2}\right\}dz=-\frac{1}{2}\left[z\,\phi \left(z\right)\right]_{{z}^{-}}^{{z}^{+}}+\frac{1}{2}{\int }_{{z}^{-}}^{{z}^{+}}\phi \left(z\right)dz=-\frac{1}{2}\left[z\,\phi \left(z\right)\right]_{{z}^{-}}^{{z}^{+}}+\frac{1}{2}\left[\Phi \left(z\right)\right]_{{z}^{-}}^{{z}^{+}}=-\frac{1}{2}\left[{z}^{+}\phi \left({z}^{+}\right)-{z}^{-}\phi \left({z}^{-}\right)\right]+\frac{1}{2}\left[\Phi \left({z}^{+}\right)-\Phi \left({z}^{-}\right)\right]=-\frac{1}{2}\left[\left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\phi \left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)-\left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\phi \left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right]+\frac{1}{2}\left[\Phi \left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)-\Phi \left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right].$$
(A6)
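
The integration-by-parts result in Eq. (A6) can be checked in the same way. The sketch below (again an assumed Python/SciPy illustration, not the authors' implementation) integrates I2 directly from its definition in Eq. (A4) and compares it with the closed form of Eq. (A6).

```python
# Minimal sketch (assumed Python/SciPy, not the authors' code): numerical check of Eq. (A6).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 0.3, 0.8                                     # arbitrary Kriging mean and standard deviation
G_lo, G_hi = -2.0 * sigma, 2.0 * sigma                   # bounds with G_bar = 0 and eps = 2 * sigma
z_lo, z_hi = (G_lo - mu) / sigma, (G_hi - mu) / sigma    # standardized bounds z- and z+

# Direct quadrature of I2 as defined in Eq. (A4)
integrand = lambda g: (g - mu) ** 2 / (2 * sigma ** 2) * np.exp(-(g - mu) ** 2 / (2 * sigma ** 2))
I2_quad = quad(integrand, G_lo, G_hi)[0] / (np.sqrt(2 * np.pi) * sigma)

# Closed form of I2 from Eq. (A6)
I2_closed = (-0.5 * (z_hi * norm.pdf(z_hi) - z_lo * norm.pdf(z_lo))
             + 0.5 * (norm.cdf(z_hi) - norm.cdf(z_lo)))

print(I2_quad, I2_closed)                                # should agree to quadrature accuracy
```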

By substituting the solutions to the I1 and I2 integrals, from Eq. (A3) and Eq. (A6), into Eq. (A2), the solution for learning function Hc can be presented as Eq. (A7).

$${\mathrm{H}}_{\mathrm{c}}\left(\widehat{G}\right)=\left|{I}_{1}+{I}_{2}\right|=\left|\mathrm{ln}\left(\sqrt{2\pi }{\sigma }_{\widehat{G}}\right)\left[\Phi \left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)-\Phi \left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right]+\frac{1}{2}\left[\Phi \left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)-\Phi \left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right]-\frac{1}{2}\left[\left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\phi \left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)-\left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\phi \left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right]\right|.$$
(A7)

By factoring out the common terms, Eq. (A7) can be written in the form of Eq. (A8).

$${\mathrm{H}}_{\mathrm{c}}\left(\widehat{G}\right)=\left|\left(\mathrm{ln}\left(\sqrt{2\pi }{\sigma }_{\widehat{G}}\right)+\frac{1}{2}\right)\left[\Phi \left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)-\Phi \left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right]-\frac{1}{2}\left[\left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\phi \left(\frac{{G}^{+}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)-\left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\phi \left(\frac{{G}^{-}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right]\right|.$$
(A8)

By substituting the values of the \({G}^{+}\) and \({G}^{-}\) boundaries (i.e., \({G}^{-}=\overline{G }-\varepsilon =-2{\sigma }_{\widehat{G}}\) and \({G}^{+}=\overline{G }+\varepsilon =2{\sigma }_{\widehat{G}}\)), using \(\varepsilon =2{\sigma }_{\widehat{G}}\) and \(\overline{G }=0\), Eq. (A8) can be written as Eq. (A9).

$${\mathrm{H}}_{\mathrm{c}}\left(\widehat{G}\right)=\left|\left(\mathrm{ln}\left(\sqrt{2\pi }{\sigma }_{\widehat{G}}\right)+\frac{1}{2}\right)\left[\Phi \left(\frac{2{\sigma }_{\widehat{G}}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)-\Phi \left(\frac{-2{\sigma }_{\widehat{G}}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right]-\frac{1}{2}\left[\left(\frac{2{\sigma }_{\widehat{G}}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\phi \left(\frac{2{\sigma }_{\widehat{G}}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)-\left(\frac{-2{\sigma }_{\widehat{G}}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\phi \left(\frac{-2{\sigma }_{\widehat{G}}-{\mu }_{\widehat{G}}}{{\sigma }_{\widehat{G}}}\right)\right]\right|.$$
(A9)

Therefore, using Eq. (A9) as the learning function Hc for the desired design site x, the full form of the equation can be presented as Eq. (A10).

$${\mathrm{H}}_{\mathrm{c}}\left(\widehat{G}(x)\right)=\left|\left(\mathrm{ln}\left(\sqrt{2\pi }{\sigma }_{\widehat{G}}\left(x\right)\right)+\frac{1}{2}\right)\left[\Phi \left(\frac{2{\sigma }_{\widehat{G}}\left(x\right)-{\mu }_{\widehat{G}}\left(x\right)}{{\sigma }_{\widehat{G}}\left(x\right)}\right)-\Phi \left(\frac{-2{\sigma }_{\widehat{G}}\left(x\right)-{\mu }_{\widehat{G}}\left(x\right)}{{\sigma }_{\widehat{G}}\left(x\right)}\right)\right]-\frac{1}{2}\left[\left(\frac{2{\sigma }_{\widehat{G}}\left(x\right)-{\mu }_{\widehat{G}}\left(x\right)}{{\sigma }_{\widehat{G}}\left(x\right)}\right)\phi \left(\frac{2{\sigma }_{\widehat{G}}\left(x\right)-{\mu }_{\widehat{G}}\left(x\right)}{{\sigma }_{\widehat{G}}\left(x\right)}\right)+\left(\frac{2{\sigma }_{\widehat{G}}\left(x\right)+{\mu }_{\widehat{G}}\left(x\right)}{{\sigma }_{\widehat{G}}\left(x\right)}\right)\phi \left(\frac{-2{\sigma }_{\widehat{G}}\left(x\right)-{\mu }_{\widehat{G}}\left(x\right)}{{\sigma }_{\widehat{G}}\left(x\right)}\right)\right]\right|.$$
(A10)
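
To make the corrected learning function concrete, the sketch below (assumed Python/SciPy; the helper names h_c and h_c_numerical are ours, not from the paper) evaluates Hc per Eqs. (A9)–(A10) from the Kriging mean and standard deviation at a design site and cross-checks it against direct numerical integration of the truncated entropy in Eq. (A1) with \(\overline{G }=0\) and \(\varepsilon =2{\sigma }_{\widehat{G}}\).

```python
# Minimal sketch (assumed Python/SciPy, not the authors' code): the corrected Hc learning
# function of Eqs. (A9)-(A10), cross-checked against the entropy integral of Eq. (A1)
# with G_bar = 0 and eps = 2 * sigma. Helper names are hypothetical, not from the paper.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def h_c(mu, sigma):
    """Hc per Eqs. (A9)-(A10), given the Kriging mean mu and standard deviation sigma at x."""
    z_hi = (2.0 * sigma - mu) / sigma      # upper standardized bound
    z_lo = (-2.0 * sigma - mu) / sigma     # lower standardized bound
    term1 = (np.log(np.sqrt(2.0 * np.pi) * sigma) + 0.5) * (norm.cdf(z_hi) - norm.cdf(z_lo))
    term2 = 0.5 * (z_hi * norm.pdf(z_hi) - z_lo * norm.pdf(z_lo))
    return abs(term1 - term2)

def h_c_numerical(mu, sigma):
    """Direct quadrature of Eq. (A1): |-integral of f*ln(f) over [G_bar - eps, G_bar + eps]|."""
    integrand = lambda g: norm.pdf(g, mu, sigma) * np.log(norm.pdf(g, mu, sigma))
    val, _ = quad(integrand, -2.0 * sigma, 2.0 * sigma)
    return abs(-val)

mu, sigma = 0.3, 0.8   # arbitrary Kriging prediction at a candidate design site
print(h_c(mu, sigma), h_c_numerical(mu, sigma))   # should agree to quadrature accuracy
```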

It should be noted that Eq. (A10) differs from the expression shown in the referenced study (Lv et al. 2015). Therefore, the performance of both learning functions was examined in the current study. To document the difference, the mistakes in the referenced derivation (Lv et al. 2015) are presented in Table 13.

Table 13 Mistakes in the H learning function

Correcting the mistakes listed in Table 13 leads to a result that is compatible with the finding in Eq. (A10).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Khorramian, K., Oudah, F. New learning functions for active learning Kriging reliability analysis using a probabilistic approach: KO and WKO functions. Struct Multidisc Optim 66, 177 (2023). https://doi.org/10.1007/s00158-023-03627-4
