A novel data-driven sparse polynomial chaos expansion for high-dimensional problems based on active subspace and sparse Bayesian learning

Published in: Structural and Multidisciplinary Optimization

Abstract

Polynomial chaos expansion (PCE) has recently drawn growing attention in the stochastic uncertainty quantification (UQ) community. However, the curse of dimensionality limits its application to complex and large-scale structures, and constructing a PCE requires complete knowledge of the probability distributions of the input variables, which may be unavailable in real-world problems. To overcome these difficulties, this study proposes an active learning active subspace-based data-driven sparse PCE method (AL-AS-DDSPCE). First, the active subspace (AS) theory is used to reduce the dimension of the original input space, and measure-consistent data-driven polynomial chaos bases are established in the reduced input space from samples of the original input random variables. Subsequently, to bypass the gradient calculation required by the traditional AS method, sparse Bayesian learning is combined with manifold learning theory in an active learning AS method that yields the subspace mapping matrix. With this active learning algorithm, the proposed AL-AS-DDSPCE finds the low-dimensional subspace of the original input space without requiring the probability distributions of the input variables; it is driven only by the sample data of the input random variables and the response data of the design samples, and constructs the PCE model efficiently and accurately. The proposed method is verified on two classical high-dimensional numerical examples, a 200-bar truss without explicit expression, and a practical engineering problem. The results show that AL-AS-DDSPCE is a good choice for solving high-dimensional UQ problems.

References

  • Abraham S, Raisee M, Ghorbaniasl G, Contino F, Lacor C (2017) A robust and efficient stepwise regression method for building sparse polynomial chaos expansions. J Comput Phys 332:461–474
  • Ali W, Duong PLT, Khan MS, Getu M, Lee M (2018) Measuring the reliability of a natural gas refrigeration plant: uncertainty propagation and quantification with polynomial chaos expansion based sensitivity analysis. Reliab Eng Syst Saf 172:103–117
  • Blatman G, Sudret B (2011) Adaptive sparse polynomial chaos expansion based on least angle regression. J Comput Phys 230(6):2345–2367
  • Chen L, Qiu H, Gao L, Yang Z, Xu D (2022) Exploiting active subspaces of hyperparameters for efficient high-dimensional Kriging modeling. Mech Syst Signal Process 169:108643
  • Cheng K, Lu Z (2018a) Adaptive sparse polynomial chaos expansions for global sensitivity analysis based on support vector regression. Comput Struct 194:86–96
  • Cheng K, Lu Z (2018b) Sparse polynomial chaos expansion based on D-MORPH regression. Appl Math Comput 323:17–30
  • Cheng K, Lu Z, Zhen Y (2019) Multi-level multi-fidelity sparse polynomial chaos expansion based on Gaussian process regression. Comput Methods Appl Mech Eng 349:360–377
  • Constantine PG, Dow E, Wang Q (2014) Active subspace methods in theory and practice: applications to kriging surfaces. SIAM J Sci Comput 36(4):A1500–A1524
  • Constantine PG, Eftekhari A, Hokanson J, Ward RA (2017) A near-stationary subspace for ridge approximation. Comput Methods Appl Mech Eng 326:402–421
  • Deng Z, Hu X, Lin X, Che Y, Xu L, Guo W (2020) Data-driven state of charge estimation for lithium-ion battery packs based on Gaussian process regression. Energy 205:118000
  • Duong PLT, Qyyum MA, Lee M (2018) Sparse Bayesian learning for data driven polynomial chaos expansion with application to chemical processes. Chem Eng Res Des 137:553–565
  • Duong PLT, Yang Q, Park H, Raghavan N (2019) Reliability analysis and design of a single diode solar cell model using polynomial chaos and active subspace. Microelectron Reliab 100:113477
  • Eckert C, Beer M, Spanos PD (2020) A polynomial chaos method for arbitrary random inputs using B-splines. Probab Eng Mech 60:103051
  • Erfani SM, Rajasegarar S, Karunasekera S, Leckie C (2016) High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning. Pattern Recogn 58:121–134
  • Faramarzi A, Heidarinejad M, Mirjalili S, Gandomi AH (2020) Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst Appl 152:113377
  • Geladi P, Kowalski BR (1986) Partial least-squares regression: a tutorial. Anal Chim Acta 185:1–17
  • Hariri-Ardebili MA, Pourkamali-Anaraki F (2018) Support vector machine based reliability analysis of concrete dams. Soil Dyn Earthq Eng 104:276–295
  • Hariri-Ardebili MA, Sudret B (2020) Polynomial chaos expansion for uncertainty quantification of dam engineering problems. Eng Struct 203:109631
  • He W, Zeng Y, Li G (2019) A novel structural reliability analysis method via improved maximum entropy method based on nonlinear mapping and sparse grid numerical integration. Mech Syst Signal Process 133:106247
  • He W, Zeng Y, Li G (2020) An adaptive polynomial chaos expansion for high-dimensional reliability analysis. Struct Multidisc Optim 62(4):2051–2067
  • He W, Hao P, Li G (2021a) A novel approach for reliability analysis with correlated variables based on the concepts of entropy and polynomial chaos expansion. Mech Syst Signal Process 146:106980
  • He W, Yang H, Zhao G, Zeng Y, Li G (2021b) A quantile-based SORA method using maximum entropy method with fractional moments. J Mech Des. https://doi.org/10.1115/1.4047911
  • He S, Xu J, Zhang Y (2022a) Reliability computation via a transformed mixed-degree cubature rule and maximum entropy. Appl Math Model 104:122–139
  • He W, Li G, Nie Z (2022b) A novel polynomial dimension decomposition method based on sparse Bayesian learning and Bayesian model averaging. Mech Syst Signal Process 169:108613
  • He W, Zhao G, Li G, Liu Y (2022c) An adaptive dimension-reduction method-based sparse polynomial chaos expansion via sparse Bayesian learning and Bayesian model averaging. Struct Saf 97:102223
  • Hoeffding W (1992) A class of statistics with asymptotically normal distribution. In: Breakthroughs in statistics. Springer, New York, pp 308–334
  • Jahanbin R, Rahman S (2022) Stochastic isogeometric analysis on arbitrary multipatch domains by spline dimensional decomposition. Comput Methods Appl Mech Eng 393:114813
  • Jaynes ET (1957) Information theory and statistical mechanics. Phys Rev 106(4):620
  • Kessy A, Lewin A, Strimmer K (2018) Optimal whitening and decorrelation. Am Stat 72(4):309–314
  • Kevasan HK, Kapur JN (1992) Entropy optimization principles with applications
  • Kougioumtzoglou IA, Petromichelakis I, Psaros AF (2020) Sparse representations and compressive sampling approaches in engineering mechanics: a review of theoretical concepts and diverse applications. Probab Eng Mech 61:103082
  • Lee D, Rahman S (2022) Reliability-based design optimization under dependent random variables by a generalized polynomial chaos expansion. Struct Multidisc Optim 65(1):1–29
  • Li M, Wang Z (2020) Deep learning for high-dimensional reliability analysis. Mech Syst Signal Process 139:106399
  • Li G, He W, Zeng Y (2019a) An improved maximum entropy method via fractional moments with Laplace transform for reliability analysis. Struct Multidisc Optim 59(4):1301–1320
  • Li J, Cai J, Qu K (2019b) Surrogate-based aerodynamic shape optimization with the active subspace method. Struct Multidisc Optim 59(2):403–419
  • Li G, Wang YX, Zeng Y, He WX (2022) A new maximum entropy method for estimation of multimodal probability density function. Appl Math Model 102:137–152
  • Lin Q, Xiong F, Wang F, Yang X (2020) A data-driven polynomial chaos method considering correlated random variables. Struct Multidisc Optim 62(4):2131–2147
  • Liu B, Lin G (2020) High-dimensional nonlinear multi-fidelity model with gradient-free active subspace method. Commun Comput Phys 28(5):1937–1969
  • Lukaczyk TW, Constantine P, Palacios F, Alonso JJ (2014) Active subspaces for shape optimization. In: 10th AIAA multidisciplinary design optimization conference, p 1171
  • Marelli S, Sudret B (2015) UQLab user manual – polynomial chaos expansions. Chair of Risk, Safety & Uncertainty Quantification, ETH Zürich, 0.9–104 edition, pp 97–110
  • Marelli S, Lamas C, Konakli K, Mylonas C, Wiederkehr P, Sudret B (2019) UQLab user manual – sensitivity analysis. Report UQLab-V1.2-106
  • Meng Z, Zhang Z, Zhang D, Yang D (2019) An active learning method combining Kriging and accelerated chaotic single loop approach (AK-ACSLA) for reliability-based design optimization. Comput Methods Appl Mech Eng 357:112570
  • Meng Z, Zhang Z, Li G, Zhang D (2020) An active weight learning method for efficient reliability assessment with small failure probability. Struct Multidisc Optim 61(3):1157–1170
  • Oladyshkin S, Nowak W (2012) Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliab Eng Syst Saf 106:179–190
  • Panagant N, Pholdee N, Bureerat S, Yildiz AR, Mirjalili S (2021) A comparative study of recent multi-objective metaheuristics for solving constrained truss optimisation problems. Arch Comput Methods Eng 28(5):4031–4047
  • Psaros AF, Kougioumtzoglou IA, Petromichelakis I (2018) Sparse representations and compressive sampling for enhancing the computational efficiency of the Wiener path integral technique. Mech Syst Signal Process 111:87–101
  • Rabitz H, Aliş ÖF (1999) General foundations of high-dimensional model representations. J Math Chem 25(2):197–233
  • Rahman S (2011) Global sensitivity analysis by polynomial dimensional decomposition. Reliab Eng Syst Saf 96(7):825–837
  • Rahman S (2018a) A polynomial chaos expansion in dependent random variables. J Math Anal Appl 464(1):749–775
  • Rahman S (2018b) Mathematical properties of polynomial dimensional decomposition. SIAM/ASA J Uncertain Quantif 6(2):816–844
  • Rahman S (2019) Uncertainty quantification under dependent random variables by a generalized polynomial dimensional decomposition. Comput Methods Appl Mech Eng 344:910–937
  • Rahman S, Jahanbin R (2022) A spline dimensional decomposition for uncertainty quantification in high dimensions. SIAM/ASA J Uncertain Quantif 10(1):404–438
  • Rahman S, Ren X (2014) Novel computational methods for high-dimensional stochastic sensitivity analysis. Int J Numer Meth Eng 98(12):881–916
  • Ren O, Boussaidi MA, Voytsekhovsky D, Ihara M, Manzhos S (2022) Random sampling high dimensional model representation Gaussian process regression (RS-HDMR-GPR) for representing multidimensional functions with machine-learned lower-dimensional terms allowing insight with a general method. Comput Phys Commun 271:108220
  • Roy A, Manna R, Chakraborty S (2019) Support vector regression based metamodeling for structural reliability analysis. Probab Eng Mech 55:78–89
  • Shao Q, Younes A, Fahs M, Mara TA (2017) Bayesian sparse polynomial chaos expansion for global sensitivity analysis. Comput Methods Appl Mech Eng 318:474–496
  • Slotnick JP, Khodadoust A, Alonso J, Darmofal D, Gropp W, Lurie E, Mavriplis DJ (2014) CFD vision 2030 study: a path to revolutionary computational aerosciences (No. NF1676L-18332)
  • Tang K, Congedo PM, Abgrall R (2016) Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation. J Comput Phys 314:557–589
  • Thapa M, Mulani SB, Walters RW (2020) Adaptive weighted least-squares polynomial chaos expansion with basis adaptivity and sequential adaptive sampling. Comput Methods Appl Mech Eng 360:112759
  • Tipping ME (2001) Sparse Bayesian learning and the relevance vector machine. J Mach Learn Res 1:211–244
  • Tipping ME, Faul AC (2003) Fast marginal likelihood maximisation for sparse Bayesian models. In: International workshop on artificial intelligence and statistics, pp 276–283. PMLR
  • Tripathy R, Bilionis I, Gonzalez M (2016) Gaussian processes with built-in dimensionality reduction: applications to high-dimensional uncertainty propagation. J Comput Phys 321:191–223
  • Wan HP, Ren WX, Todd MD (2020) Arbitrary polynomial chaos expansion method for uncertainty quantification and global sensitivity analysis in structural dynamics. Mech Syst Signal Process 142:106732
  • Wang H, Yan Z, Xu X, He K (2020) Probabilistic power flow analysis of microgrid with renewable energy. Int J Electr Power Energy Syst 114:105393
  • Weinan E, Han J, Jentzen A (2017) Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Commun Math Stat 5(4):349–380
  • Wen Z, Yin W (2013) A feasible method for optimization with orthogonality constraints. Math Program 142(1):397–434
  • Xie W, Huang P (2021) Extreme estimation of wind pressure with unimodal and bimodal probability density function characteristics: a maximum entropy model based on fractional moments. J Wind Eng Ind Aerodyn 214:104663
  • Xie W, Huang P, Gu M (2021) A maximum entropy model with fractional moments for probability density function estimation of wind pressures on low-rise building. J Wind Eng Ind Aerodyn 208:104461
  • Xiong J, Cai X, Li J (2022) Clustered active-subspace based local Gaussian process emulator for high-dimensional and complex computer models. J Comput Phys 450:110840
  • Xiu D, Karniadakis GE (2003) Modeling uncertainty in flow simulations via generalized polynomial chaos. J Comput Phys 187(1):137–167
  • Xu J, Wang D (2019) Structural reliability analysis based on polynomial chaos, Voronoi cells and dimension reduction technique. Reliab Eng Syst Saf 185:329–340
  • Yadav V, Rahman S (2014) Adaptive-sparse polynomial dimensional decomposition methods for high-dimensional stochastic computing. Comput Methods Appl Mech Eng 274:56–83
  • Yan H, Hao C, Zhang J, Illman WA, Lin G, Zeng L (2021) Accelerating groundwater data assimilation with a gradient-free active subspace method. Water Resour Res 57(12):e2021WR029610
  • Yang X, Karniadakis GE (2013) Reweighted ℓ1 minimization method for stochastic elliptic differential equations. J Comput Phys 248:87–108
  • Yin J, Du X (2022) Active learning with generalized sliced inverse regression for high-dimensional reliability analysis. Struct Saf 94:102151
  • Zhang X, Pandey MD (2013) Structural reliability analysis based on the concepts of entropy, fractional moment and dimensional reduction method. Struct Saf 43:28–40
  • Zhang X, Wang L, Sørensen JD (2019) REIF: a novel active-learning function toward adaptive Kriging surrogate models for structural reliability analysis. Reliab Eng Syst Saf 185:440–454
  • Zhang X, Wang L, Sørensen JD (2020) AKOIS: an adaptive Kriging oriented importance sampling method for structural system reliability analysis. Struct Saf 82:101876
  • Zhang X, Pandey MD, Luo H (2021) Structural uncertainty analysis with the multiplicative dimensional reduction-based polynomial chaos expansion approach. Struct Multidisc Optim 64(4):2409–2427
  • Zhang Q, Wu Y, Lu L, Qiao P (2022) An adaptive dendrite-HDMR metamodeling technique for high-dimensional problems. J Mech Des 144(8):081701
  • Zhang K, Zuo W, Gu S, Zhang L (2017) Learning deep CNN denoiser prior for image restoration. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 3929–3938
  • Zhou Y, Lu Z (2020) An enhanced Kriging surrogate modeling technique for high-dimensional problems. Mech Syst Signal Process 140:106687
  • Zhou T, Peng Y (2021) Active learning and active subspace enhancement for PDEM-based high-dimensional reliability analysis. Struct Saf 88:102026
  • Zhou Y, Lu Z, Hu J, Hu Y (2020) Surrogate modeling of high-dimensional problems via data-driven polynomial chaos expansions and sparse partial least square. Comput Methods Appl Mech Eng 364:112906
  • Zhou Y, Lu Z, Cheng K (2022) Adaboost-based ensemble of polynomial chaos expansion with adaptive sampling. Comput Methods Appl Mech Eng 388:114238
  • Zhou H, Ibrahim C, Zheng WX, Pan W (2021) Sparse Bayesian deep learning for dynamic system identification. arXiv preprint arXiv:2107.12910
  • Zhu Y, Zabaras N, Koutsourelakis PS, Perdikaris P (2019) Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. J Comput Phys 394:56–81

Acknowledgements

The support of the National Key Research and Development Program (Grant No. 2019YFA0706803), the National Natural Science Foundation of China (Grant No. 11872142), and the China Postdoctoral Science Foundation (Grant No. 2022T150086) is greatly appreciated.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Gang Li.

Ethics declarations

Conflict of interest

The authors declare that they have no competing financial interest or personal relationships that could have appeared to influence the work reported in this paper.

Replication of results

The algorithm provided in this article is part of the software we are developing. As the software cannot be published due to confidentiality requirements of the funded project, a detailed explanation of how the algorithm is implemented is given in Sect. 3. Therefore, no additional data or code are appended. Readers are welcome to contact the authors for details and further explanations.

Additional information

Responsible Editor: Chao Hu

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Brief introduction of fractional moment-based maximum entropy method

Reconstructing the PDF of a random variable Q with a finite number of moments as constraint conditions is a classical moment problem. One of the popular approaches for solving this problem is the maximum entropy method proposed by Jaynes (1957). For a continuous random variable Q, its information-theoretic entropy is defined as follows:

$$H(f(q)) = - \int_{Q} {f(q)\ln (f(q)){\text{d}}q} ,$$
(44)

where f(q) is the PDF of the random variable Q. According to the maximum entropy principle, if κ accurate moments of Q are obtained, the unknown PDF, f(q), can be estimated by the maximum entropy method, whose general optimization formulation is given in Eq. (13). When the orders of the statistical moment constraints are fractional numbers, Eq. (13) is referred to as the fractional moment-based maximum entropy method (FM-MEM).

To solve Eq. (13), its Lagrange function is constructed as follows:

$$L = - \int_{Q} {f_{{\text{e}}} (q)\ln f_{{\text{e}}} (q){\text{d}}q} + \lambda_{0} \left( {\int_{Q} {f_{{\text{e}}} (q){\text{d}}q} - 1} \right) + \sum\limits_{i = 1}^{\kappa } {\lambda_{i} \left( {\int_{Q} {q^{{\gamma_{i} }} f_{{\text{e}}} (q){\text{d}}q} - E[Q^{{\gamma_{i} }} ]} \right)} .$$
(45)

The stationary condition of L can be solved analytically, as shown in Eq. (14). According to the normalization axiom of probability theory, the closed-form solution of λ0 can be derived as follows:

$$\lambda_{0} = \ln \left[ {\int_{Q} {\exp ( - \sum\limits_{i = 1}^{\kappa } {\lambda_{i} } q^{{\gamma_{i} }} ){\text{d}}q} } \right].$$
(46)

It should be noted that there are two sets of undetermined variables in Eq. (13), namely, the fractional orders of the statistical moments γ = [γ1, γ2, …, γκ] and the Lagrange multipliers λ = [λ1, λ2, …, λκ]. When γ is given, λ can be solved from the statistical moment constraint conditions. Therefore, Eq. (13) is a double-loop optimization problem, where the γ-level is the outer loop and the λ-level is the inner loop. In practice, handling the statistical moment constraint conditions directly is quite difficult; thus, the following equivalent form of the maximum entropy method is adopted, namely, the minimization of the Kullback–Leibler divergence between fe(q) and the true PDF of Q (Kevasan and Kapur 1992):

$$\min :\;K[f,f_{{\text{e}}} ] = \int_{Q} {f(q)\log \left[ {f(q)/f_{{\text{e}}} (q)} \right]} {\text{d}}q = - H(f(q)) - \int_{Q} {f(q)\log \left( {f_{{\text{e}}} (q)} \right)} {\text{d}}q$$
(47)

Although H(f(q)) is unknown, it is a constant independent of fe(q), as shown in Eq. (44). Therefore, substituting Eq. (14) for fe(q), Eq. (47) can be rewritten as follows:

$$\min :\;\Gamma ({{\varvec{\upgamma}}},{{\varvec{\uplambda}}}) = \lambda_{0} + \sum\limits_{i = 1}^{\kappa } {\lambda_{i} } E(Q^{{\gamma_{i} }} ).$$
(48)

Finally, the FM-MEM reduces to solving the following double-loop minimization problem:

$$\mathop {\min }\limits_{{{\varvec{\upgamma}}}} \left\{ {\mathop {\min }\limits_{{{\varvec{\uplambda}}}} \left\{ {\Gamma ({{\varvec{\upgamma}}},{{\varvec{\uplambda}}})} \right\}} \right\}.$$
(49)

To guarantee convergence and accuracy, the simplex search (Nelder–Mead) method is usually used for the minimization in Eq. (49) (Zhang and Pandey 2013).
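As a minimal sketch of this double-loop scheme (not the authors' implementation), the snippet below fits a two-term FM-MEM density to an illustrative sample set using SciPy's Nelder–Mead simplex for both loops. The lognormal sample data, the truncated integration support, and the initial fractional orders γ = [0.5, 1.0] are assumptions made only for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in sample data for Q (illustrative assumption: lognormal).
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=5000)

# Midpoint integration grid over a truncated support of Q.
n = 4000
q_lo, q_hi = 1e-6, samples.max() * 2.0
dq = (q_hi - q_lo) / n
qs = q_lo + (np.arange(n) + 0.5) * dq

def inner_min(gam):
    """Inner loop: minimize Gamma(gamma, lambda) over lambda, Eq. (48).

    lambda_0 follows the closed form of Eq. (46); the fractional moments
    E[Q^gamma_i] are estimated directly from the sample data.
    """
    q_pow = qs[:, None] ** gam                        # q^gamma_i on the grid
    moments = np.mean(samples[:, None] ** gam, axis=0)

    def gamma_fun(lam):
        with np.errstate(over="ignore", under="ignore"):
            lam0 = np.log(np.sum(np.exp(-q_pow @ lam)) * dq)  # Eq. (46)
        return lam0 + lam @ moments                   # Eq. (48)

    res = minimize(gamma_fun, np.zeros_like(gam), method="Nelder-Mead",
                   options={"maxiter": 600, "fatol": 1e-9})
    return res.fun, res.x

# Outer loop: simplex search over the fractional orders gamma, Eq. (49).
start = np.array([0.5, 1.0])                          # kappa = 2 (assumed)
outer = minimize(lambda gam: inner_min(gam)[0], start,
                 method="Nelder-Mead", options={"maxiter": 40})
gam_opt = outer.x
_, lam_opt = inner_min(gam_opt)

# Reconstructed PDF f_e(q) in the exponential form of Eq. (14).
q_pow_opt = qs[:, None] ** gam_opt
lam0_opt = np.log(np.sum(np.exp(-q_pow_opt @ lam_opt)) * dq)
fe_vals = np.exp(-lam0_opt - q_pow_opt @ lam_opt)
print(fe_vals.sum() * dq)   # ≈ 1: normalization enforced by lambda_0
```

Note that the normalization of fe(q) holds by construction of λ0 in Eq. (46), so it serves as a sanity check of the implementation rather than of the optimization quality.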

Appendix 2: The concept of Sobol’ indices

A multi-dimensional function can be expressed as the summation of a series of low-order component functions:

$$g({\mathbf{x}}) = g_{0} + \sum\limits_{{i_{1} = 1}}^{M} {g_{{i_{1} }} } (x_{{i_{1} }} ) + \sum\limits_{\begin{subarray}{l} i_{1} ,i_{2} = 1 \\ \;i_{1} < i_{2} \end{subarray} }^{M} {g_{{i_{1} ,i_{2} }} (x_{{i_{1} }} ,x_{{i_{2} }} )} + ... + g_{1,2,...,M} (x_{1} ,\;...,\;x_{M} ),$$
(50)

where x = [x1, x2, …, xM] is the vector of input random variables. Equation (50) can be rewritten in a more compact form:

$$g(x_{1} ,\;...,\;x_{M} ) = \sum\limits_{{{\mathbf{u}} \subseteq \{ 1,2,...,M\} }} {g_{{\mathbf{u}}} ({\mathbf{x}}_{{\mathbf{u}}} )} ,$$
(51)

where the component functions yield

$$\begin{gathered} g_{\emptyset } = \int_{{{\mathbf{R}}^{M} }} {g({\mathbf{x}})} w({\mathbf{x}}){\text{d}}{\mathbf{x}}, \hfill \\ g_{1} = \int_{{{\mathbf{R}}^{M - 1} }} {g({\mathbf{x}})} w(x_{2} ,x_{3} ,...,x_{M} ){\text{d}}x_{2} {\text{d}}x_{3} ...{\text{d}}x_{M} - g_{\emptyset } , \hfill \\ ... \hfill \\ g_{{\mathbf{u}}} ({\mathbf{x}}_{{\mathbf{u}}} ) = \int_{{{\mathbf{R}}^{{M - ||{\mathbf{u}}||_{0} }} }} {g({\mathbf{x}}_{{\mathbf{u}}} ,{\mathbf{x}}_{{ - {\mathbf{u}}}} )} w_{{ - {\mathbf{u}}}} ({\mathbf{x}}_{{ - {\mathbf{u}}}} )d{\mathbf{x}}_{{ - {\mathbf{u}}}} - \sum\limits_{{{\mathbf{v}} \subset {\mathbf{u}}}} {g_{{\mathbf{v}}} ({\mathbf{x}}_{{\mathbf{v}}} )} , \hfill \\ \end{gathered}$$
(52)

where w(x) is the joint probability density function (PDF) of the input random variables, -u = {1, 2, …, M}\u, and ||•||0 is the 0-norm (i.e., the number of elements of a set). The component function, gu(xu), is a ||u||0-dimensional function representing the cooperative effect of \({\mathbf{x}}_{{\mathbf{u}}} = \left[ {x_{{i_{1} }} ,\;x_{{i_{2} }} ,\;...,\;x_{{i_{{||{\mathbf{u}}||_{0} }} }} } \right],\;1 \le i_{1} < \ldots < i_{{||{\mathbf{u}}||_{0} }} \le M\). The component functions in Eq. (52) are mutually orthogonal.
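To make the decomposition concrete, the following sketch (an illustrative example, not taken from the paper) computes the component functions of Eq. (52) for the hypothetical function g(x1, x2) = x1 + x2 + x1x2 with independent Uniform(0, 1) inputs, and numerically checks their orthogonality and the exactness of the decomposition.

```python
import numpy as np

# Hypothetical test function with independent Uniform(0, 1) inputs,
# so the joint PDF w(x) = 1 on [0, 1]^2.
def g(x1, x2):
    return x1 + x2 + x1 * x2

# Dense midpoint grid: grid means approximate the integrals in Eq. (52).
n = 1000
t = (np.arange(n) + 0.5) / n
X1, X2 = np.meshgrid(t, t, indexing="ij")
G = g(X1, X2)

g0 = G.mean()                              # g_0 = E[g]  (= 1.25 here)
g1 = G.mean(axis=1) - g0                   # g_1(x1) = 1.5*x1 - 0.75
g2 = G.mean(axis=0) - g0                   # g_2(x2) = 1.5*x2 - 0.75
G12 = G - g0 - g1[:, None] - g2[None, :]   # g_12 = (x1 - 0.5)(x2 - 0.5)

# Orthogonality of the component functions: cross-moments vanish.
print(np.mean(g1[:, None] * G12))          # ≈ 0
# Variance decomposition, Eqs. (53)-(54): D = D1 + D2 + D12.
D = G.var()
D1, D2, D12 = np.mean(g1 ** 2), np.mean(g2 ** 2), np.mean(G12 ** 2)
print(D, D1 + D2 + D12)                    # the two values agree
```

For this function, D1 = D2 = 1.5²/12 = 0.1875 and D12 = 1/144, so most of the variance is additive and the interaction contribution is small.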

The variance of g(x) can be expressed as follows:

$$D = \int_{{{\mathbf{R}}^{M} }} {\left( {g({\mathbf{x}})} \right)^{2} } w({\mathbf{x}}){\text{d}}{\mathbf{x}} - \left( {g_{\emptyset } } \right)^{2} .$$
(53)

Due to the orthogonality of the component functions, Eq. (53) can be decomposed as follows:

$$D = \sum\limits_{i_{1} = 1}^{M} {D_{i_{1} }} + \sum\limits_{\begin{subarray}{l} i_{1} ,i_{2} = 1 \\ \;i_{1} < i_{2} \end{subarray} }^{M} {D_{i_{1} ,\;i_{2} }} + ... + D_{1,\;2,...,M} ,$$
(54)

where

$$D_{{i_{1} ,\;i_{2} ...i_{s} }} = \int_{{{\mathbf{R}}^{s} }} {\left( {g_{{i_{1} ,\;i_{2} ...i_{s} }} (x_{{i_{1} }} ,\;x_{{i_{2} }} ,\;...,\;x_{{i_{s} }} )} \right)^{2} } w(x_{{i_{1} }} ,\;x_{{i_{2} }} ,\;...,\;x_{{i_{s} }} ){\text{d}}x_{{i_{1} }} {\text{d}}x_{{i_{2} }} ...{\text{d}}x_{{i_{s} }} .$$
(55)

Thus, the partial Sobol’ index due to the cooperative effect of the input random variables \(\left[ {x_{{i_{1} }} ,\;x_{{i_{2} }} ,\;...,\;x_{{i_{s} }} } \right]\) can be defined as follows:

$$S_{{i_{1} ,\;i_{2} ...i_{s} }} = \frac{{D_{{i_{1} ,\;i_{2} ...i_{s} }} }}{D}.$$
(56)

Therefore, the total Sobol’ index that represents the contribution of xi to the variance of g(x) can be expressed as follows:

$$S_{i}^{{\text{T}}} = \sum\limits_{{{\mathbf{u}}:\;i \in {\mathbf{u}}}} {S_{{\mathbf{u}}} } .$$
(57)

The partial and total Sobol' indices are usually referred to as global sensitivity indices. When g(x) is replaced with its PCE model of Eq. (1), Eqs. (56) and (57) can be expressed analytically in terms of the PCE coefficients. Interested readers can refer to Rahman (2011) and Shao et al. (2017) for details.
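For readers without a PCE surrogate at hand, the indices of Eqs. (56) and (57) can also be estimated by pick-and-freeze Monte Carlo sampling. The sketch below (an illustration, not the paper's PCE-based computation) uses the standard Saltelli and Jansen estimators on the Ishigami function, a common hypothetical test case with known analytical indices.

```python
import numpy as np

# Ishigami test function (assumed example), inputs Uniform(-pi, pi).
def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(1)
N, M = 1 << 15, 3
A = rng.uniform(-np.pi, np.pi, (N, M))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (N, M))
fA, fB = ishigami(A), ishigami(B)
D = np.var(np.concatenate([fA, fB]))     # total variance, Eq. (53)

S = np.zeros(M)      # first-order indices S_i = D_i / D, Eq. (56)
ST = np.zeros(M)     # total indices S_i^T, Eq. (57)
for i in range(M):
    AB = A.copy()
    AB[:, i] = B[:, i]                   # "pick" column i, "freeze" the rest
    fAB = ishigami(AB)
    S[i] = np.mean(fB * (fAB - fA)) / D          # Saltelli estimator
    ST[i] = 0.5 * np.mean((fA - fAB) ** 2) / D   # Jansen estimator

print(np.round(S, 3), np.round(ST, 3))
# Analytical values: S ≈ [0.314, 0.442, 0], ST ≈ [0.558, 0.442, 0.244]
```

The gap between ST[2] ≈ 0.244 and S[2] = 0 illustrates a variable whose influence is purely interactive: x3 contributes to the variance only through its interaction with x1.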

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

He, W., Li, G., Zhong, C. et al. A novel data-driven sparse polynomial chaos expansion for high-dimensional problems based on active subspace and sparse Bayesian learning. Struct Multidisc Optim 66, 29 (2023). https://doi.org/10.1007/s00158-022-03475-8
