Approximation of Probability Density Functions for PDEs with Random Parameters Using Truncated Series Expansions

Abstract

The probability density function (PDF) of a random variable associated with the solution of a partial differential equation (PDE) with random parameters is approximated using a truncated series expansion. The random PDE is solved using two stochastic finite element methods, Monte Carlo sampling and the stochastic Galerkin method with global polynomials. The random variable is a functional of the solution of the random PDE, such as the average over the physical domain. The truncated series are obtained by retaining only a finite number of terms in the Gram–Charlier or Edgeworth series expansions. These expansions approximate the PDF of a random variable in terms of another PDF, and involve coefficients that are functions of the known cumulants of the random variable. To the best of our knowledge, their use in the framework of PDEs with random parameters has not yet been explored.

Introduction

In the field of uncertainty quantification, the problem of describing the probability density function (PDF) of the output of a model given a known distribution of the input is of major relevance [13]. For instance, for the solution of inverse problems involving partial differential equations (PDEs) with Bayesian inference [10, 20, 55], the PDF for the forward problem may be sought to gain insight into the forward uncertainty propagation [37, 38]. In such a framework, the forward PDF is usually inferred from the histogram, which may be obtained according to two different strategies. The first is a direct approach, where Monte Carlo (MC) sampling is used to obtain realizations of the quantity of interest (QoI) by solving the forward problem for each sample. Alternatively, once an approximation of the stochastic solution of the forward problem is obtained, realizations may be obtained with MC sampling without solving the problem for each sample. This is the method used for instance in [38], where the stochastic solution of the forward problem is approximated with polynomial chaos, or in [37], where adaptive sparse grid collocation is employed. It is important to observe that, despite providing a simple and inexpensive approximation of a density function, histograms are not continuous, and hence cannot be differentiated when derivatives are needed. The differentiability of the estimator is an important feature in many applications [12, 14, 27, 44, 52]. Another popular density estimation technique, which provides a smooth alternative to the histogram, is kernel density estimation (KDE) [32]. The performance of the KDE depends on the choice of a parameter called the bandwidth, which regulates the smoothness of the estimator. The kernels, which also influence the smoothness, are usually centered at the samples; alternatively, the sample set can first be binned, although this usually introduces an extra dependence on the bin size.

In this work, we propose an alternative strategy to the histogram and the KDE for the estimation of the PDF of a quantity of interest, given a known distribution for the input data. The quantities of interest are associated with the solution of a PDE with random parameters, which is approximated using a stochastic finite element (SFE) method. The SFE methods used in this work are Monte Carlo sampling and the stochastic Galerkin (SG) method with global polynomials. Once the QoI has been obtained from the solution of the random PDE using an SFE method, its PDF is estimated by means of a truncated Gram–Charlier (GC) or Edgeworth (ED) series expansion [35]. The use of truncated GC and ED series is the alternative approach we are considering. These series have been used in several fields such as chemistry [22, 46], finance [33, 41, 50], physics and astrophysics [7, 17, 21, 23, 34, 45], material science [51], oceanology [60], power systems engineering [24], and other branches of applied mathematics [47]. However, to the best of our knowledge, their use in the field of random PDEs has not yet been explored. The GC and ED expansions approximate a PDF by adding successive corrections to a known PDF that is used as a first approximation. The known PDF is usually chosen to be Gaussian, and only a few efforts have been made to generalize the GC expansion to the case of a non-Gaussian kernel [6, 8]. The terms in the GC and ED series involve derivatives of the known input PDF and are functions of the cumulants of the output PDF to be approximated. The cumulants of a given random variable can be obtained analytically from its moments, which involve integrals of powers of the random variable [35]. The advantage of employing the SG method with global polynomials for the solution of the random PDE is that, with this choice, QoIs can be defined as polynomial functions in the random variable, so exact moments can be computed with appropriate quadrature rules. Consequently, the coefficients of the GC and ED expansions can also be computed exactly. The SG method is validated against the Monte Carlo method for small input variances of the random data. The approach we propose still makes use of the histogram to determine the PDF of the QoI, although the histogram is used only as a crude approximation to determine the most appropriate truncation order of the series. The approximate PDF is given analytically as a truncated series and does not depend directly on the histogram, so it is affected neither by the size of the bins used to construct the histogram nor by the number of samples. Moreover, it does not depend on a smoothing parameter, such as the bandwidth of the KDE. Finally, if the kernel is a \(C^{\infty }\) function (e.g., a Gaussian distribution), the truncated GC and ED expansions are continuous and infinitely differentiable, and therefore potentially more useful than a histogram approximation and simpler to deal with than the KDE, given that smoothness is achieved without extra parameters.

The paper is structured as follows: in Section 2, the mathematical problem and the input data description are introduced; in Section 3, the two SFE methods employed for the solution of a random PDE are described, namely the Monte Carlo method and the stochastic Galerkin method; the theory on the Gram–Charlier and Edgeworth expansions is laid out in Section 4. Arguments on asymptotic expansions and convergence are given in Sections 5 and 6, respectively. Numerical results are reported in Section 7 for different types of output distributions, and computational times are compared in Section 8. The paper is concluded with Section 9, where our findings are discussed.

Formulation of the Problem

For PDEs with random parameters, the stochasticity is taken into account by assuming that the input data depend on a random variable, in addition to the physical variable as in standard partial differential equations [16, 42, 43, 56, 58, 59]. The input data consist of coefficient functions and a forcing term. For simplicity, here it is assumed that the stochastic contribution is introduced only by the coefficients and not by the forcing term. The random model considered is the Poisson problem defined on D ×Ω, where \(D \subset \mathbb {R}^{d}\) and Ω is the sample space

$$ \left\{\begin{array}{ll} - \nabla \cdot (a(\mathbf{x}, \omega) \nabla u(\mathbf{x}, \omega)) = f(\mathbf{x}) &\quad \text{in }~D \times {\Omega}, \\ u(\mathbf{x}, \omega)=0 &\quad \text{on }~\partial D \times {\Omega}. \end{array}\right. $$
(1)

As in [2, 29, 42, 43], the following assumption is made.

Assumption 1

The random coefficient function a(x,ω) in system (1) has the following properties:

  1. 1.

    There exists a positive constant \(a_{\min \limits }\) such that \(a_{\min \limits } \leq a(\mathbf {x}, \omega )\), almost surely on Ω, for all x ∈ D.

  2. 2.

    a(x,ω) = a(x,ε(ω)) in \(\overline {D} \times {\Omega }\), where ε(ω) = (ε1(ω),ε2(ω),…,εN(ω)) is a vector of real-valued uncorrelated random variables defined on a probability space \(({\Omega }, \mathcal {F}, \mathbb {P})\).

  3. 3.

    a(x,ε(ω)) is measurable with respect to ε.

For any n = 1,…,N, let \({\Gamma }_{n} := \varepsilon _{n}({\Omega }) \subset \mathbb {R}\) and let us define the parameter space \({\Gamma } := {\prod }_{n=1}^{N} {\Gamma }_{n}\). The joint probability density function for \(\{\varepsilon _{n}\}_{n=1}^{N}\) is denoted by \(\rho (\boldsymbol {\varepsilon }): {\Gamma } \rightarrow \mathbb {R}^{+}\). Several choices of coefficient functions are possible to ensure that Assumption 1 is satisfied. Some examples are given in [29]. The choice made in this work is described in the next section.

Karhunen–Loève Expansion

The Karhunen–Loève (KL) series expansion has been widely used in the field of uncertainty quantification to represent random fields, such as the coefficient function in system (1), as an infinite sum of random variables [26, 36, 53]. For numerical simulations, due to the finite computational capability available, the series is truncated, resulting in the following approximation

$$ a(\mathbf{x}, \boldsymbol{\varepsilon}(\omega)) \approx \mathbb{E}[a(\mathbf{x}, \cdot)] + \sum\limits_{n=1}^{N} \sqrt{\lambda_{n}} b_{n}({\mathbf{x}}) \varepsilon_{n}(\omega), $$

where, for n = 1,…,N, λn and bn are, respectively, the eigenvalues and eigenfunctions of the covariance function of a(x,ε(ω)).

Remark 1

We assume that the input random processes approximated with a truncated KL expansion are Gaussian, hence the random variables \(\{ \varepsilon _{n}(\omega )\}_{n=1}^{N}\) are independent and identically distributed standard Gaussian variables. This allows us to generate realizations of the truncated KL expansion by Monte Carlo sampling from Gaussian distributions.

To bound the coefficient function away from zero, the KL expansion is used on the logarithm of \(a(\mathbf {x}, \boldsymbol {\varepsilon }(\omega )) - a_{\min \limits }\) rather than on the function itself. Thus, let us define the function γ(x,ε(ω)) as follows

$$ \gamma(\mathbf{x}, \boldsymbol{\varepsilon}(\omega)) := \log(a(\mathbf{x}, \boldsymbol{\varepsilon}(\omega)) - a_{\min}). $$
(2)

The above definition implies that

$$ a(\mathbf{x}, \boldsymbol{\varepsilon}(\omega)) = a_{\min} + \exp \left( \gamma(\mathbf{x}, \boldsymbol{\varepsilon}(\omega)) \right). $$
(3)

Using a truncated KL expansion to approximate γ(x,ε(ω)), we get

$$ \gamma(\mathbf{x}, \boldsymbol{\varepsilon}(\omega)) \approx \mu_{\gamma} + \sum\limits_{n=1}^{N} \sqrt{\lambda_{n}} b_{n}({\mathbf{x}}) \varepsilon_{n}(\omega), $$
(4)

where \(\mu _{\gamma } := \mathbb {E}[\gamma (\mathbf {x}, \cdot )]\), and N is the dimension of Γ. Note that, in (4), (λn,bn) are the eigenpairs associated with the covariance function of γ(x,ε(ω)). Moreover, according to Remark 1, when the truncated KL is used to approximate γ(x,ε(ω)), the assumption of a Gaussian distribution is made on γ(x,ε(ω)), and consequently a(x,ε(ω)) − \(a_{\min}\) has a log-normal distribution, as shown in (3). It follows from (3) that a(x,ε(ω)) is approximated by

$$ a(\mathbf{x}, \boldsymbol{\varepsilon}(\omega)) \approx a_{\min} + \exp \left( \mu_{\gamma} + \sum\limits_{n=1}^{N} \sqrt{\lambda_{n}} b_{n}({\mathbf{x}}) \varepsilon_{n}(\omega) \right). $$
(5)

The eigenvalues and eigenfunctions in (5) are obtained solving the generalized eigenvalue problem

$$ {\int}_{D} C_{\gamma}(\mathbf{x},\widehat {\mathbf{x}})b_{n}(\mathbf{x}) d\mathbf{x} = \lambda_{n} b_{n}(\widehat{\mathbf{x}}), $$
(6)

where \(C_{\gamma }(\mathbf {x},\widehat {\mathbf {x}})\) is the covariance function of the field γ(x,ε(ω)). The covariance structure is generally unknown and usually a specific covariance function is assumed.

In this work, the solution of (6) is obtained according to a Galerkin approach, following the procedure presented, for instance, in [31, 53]. Assume that the physical domain D is discretized with a regular finite element grid \(\mathcal {T}_{h}\) of size h [1, 9, 15], and let Jh be the total number of degrees of freedom. If \(\{\phi _{j}\}_{j=1}^{J_{h}}\) denotes the global nodal basis associated with \(\mathcal {T}_{h}\), the eigenfunction bn of the covariance function of γ(x,ε(ω)) in (5) is approximated by its nodal interpolant

$$ b_{n}(\mathbf{x}) \approx \sum\limits_{j=1}^{J_{h}} b_{j,n} \phi_{j}(\mathbf{x}), $$
(7)

where bj,n := bn(xj), with xj being the j-th degree of freedom of \(\mathcal {T}_{h}\). A substitution of (7) in (6) transforms the continuous problem into a finite dimensional one

$$ \sum\limits_{j=1}^{J_{h}} b_{j,n} \left( {\int}_{D} C_{\gamma}(\mathbf{x},\widehat {\mathbf{x}})\phi_{j}(\mathbf{x}) d\mathbf{x} - \lambda_{n}\phi_{j}(\widehat{\mathbf{x}}) \right) = 0. $$
(8)

Next, (8) is multiplied by ϕi, and integrated with respect to \(\widehat {\mathbf {x}}\), to obtain

$$ \sum\limits_{j=1}^{J_{h}} b_{j,n} \left( {\int}_{D}{\int}_{D} C_{\gamma}(\mathbf{x},\widehat {\mathbf{x}})\phi_{j}(\mathbf{x}) \phi_{i}(\widehat{\mathbf{x}}) d\mathbf{x} d\widehat{\mathbf{x}} - \lambda_{n} {\int}_{D} \phi_{j}(\widehat{\mathbf{x}}) \phi_{i}(\widehat{\mathbf{x}}) d\widehat{\mathbf{x}} \right) = 0. $$
(9)

Let us define the two real symmetric Jh × Jh matrices C and M by

$$ C_{ij} := {\int}_{D} {\int}_{D} C_{\gamma}(\mathbf{x},\widehat {\mathbf{x}})\phi_{j}(\mathbf{x}) \phi_{i}(\widehat{\mathbf{x}}) d\mathbf{x} d\widehat{\mathbf{x}}, \qquad M_{ij} := {\int}_{D} \phi_{j}(\widehat{\mathbf{x}}) \phi_{i}(\widehat{\mathbf{x}}) d\widehat{\mathbf{x}}. $$
(10)

Note that M is the standard finite element mass matrix, and is positive definite. With these matrices, (9) can be rewritten as the vector equation

$$ \boldsymbol{C} \boldsymbol{b}_{n} = \lambda_{n} \boldsymbol{M} \boldsymbol{b}_{n}. $$
(11)

Because C is symmetric and M is symmetric positive definite, the eigenvectors bn associated with distinct eigenvalues are orthogonal with respect to the mass matrix M, that is, \(\boldsymbol {b}_{n}^{T} \boldsymbol {M} \boldsymbol {b}_{m} = 0\) if m ≠ n. This orthogonality property implies that the approximations of the functions bn(x) in (7) are orthogonal in L2(D). Orthonormality can be obtained by dividing these functions by their L2(D) norm. Note that orthonormality in L2(D) is a requirement for the functions bn(x).

For the numerical tests, the generalized eigenvalue problem in (11) is implemented in the in-house finite element code FEMuS [11], and solved with the SLEPc library [30].
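To make the discrete KL construction concrete, the following is a minimal sketch, not taken from FEMuS or SLEPc, that solves (11) with dense SciPy routines under toy assumptions: a uniform one-dimensional grid, a lumped (diagonal) mass matrix, and nodal quadrature for the double integral defining C in (10).

```python
# Minimal sketch of the discrete KL eigenproblem (11); dense SciPy routines
# stand in for the FEMuS/SLEPc solve. All names here are illustrative.
import numpy as np
from scipy.linalg import eigh

nodes = np.linspace(0.0, 1.0, 51)
h = nodes[1] - nodes[0]
M = np.diag(np.full(nodes.size, h))            # lumped mass matrix (toy assumption)
sigma_g, L = 0.08, 0.1                         # input std. deviation, correlation length
K = sigma_g**2 * np.exp(-np.abs(nodes[:, None] - nodes[None, :]) / L)
C = M @ K @ M                                  # nodal quadrature of the matrix in (10)

# Generalized symmetric eigenproblem C b = lambda M b; eigh returns
# M-orthonormal eigenvectors, the normalization required above.
lam, B = eigh(C, M)
lam, B = lam[::-1], B[:, ::-1]                 # sort eigenvalues in descending order

# One realization of the truncated KL sum in (4), with mu_gamma = 0:
N = 2
eps = np.random.standard_normal(N)             # Remark 1: i.i.d. standard Gaussians
gamma = sum(np.sqrt(lam[n]) * B[:, n] * eps[n] for n in range(N))
```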

Quantities of Interest

The aim of the paper is to approximate probability density functions of functionals of the random PDE solution, which we refer to as quantities of interest. Let \(u_{J_{h}}(\mathbf {x}, \boldsymbol {\varepsilon })\) be the approximate solution of system (1), obtained with an SFE method. The methods used to solve such a system are discussed in the next section. Given \(u_{J_{h}}(\mathbf {x}, \boldsymbol {\varepsilon })\), examples of random quantities of interest include the spatial average over the physical domain

$$ \mathcal{Q}_{u}(\boldsymbol{\varepsilon}) = \frac{1}{|D|} {\int}_{D} u_{J_{h}}(\mathbf{x}, \boldsymbol{\varepsilon}) d\mathbf{x}, $$
(12)

the integral of the square

$$ \mathcal{Q}_{u}(\boldsymbol{\varepsilon}) = {\int}_{D} \left( u_{J_{h}}(\mathbf{x}, \boldsymbol{\varepsilon})\right)^{2} d\mathbf{x}, $$

or the maximum over the physical domain

$$ \mathcal {Q}_{u}(\boldsymbol{\varepsilon}) = \underset{\mathbf{x} \in D}{\max} u_{J_{h}}(\mathbf{x}, \boldsymbol{\varepsilon}). $$

Note that the dependence on the physical variable x is eliminated in the quantities of interest, as \(\mathcal {Q}_{u}(\boldsymbol {\varepsilon })\) is a scalar random variable that only depends on the stochastic variable ε.
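As a concrete illustration (not part of the paper's formulation), the first two QoIs can be evaluated directly from the nodal coefficients of \(u_{J_{h}}\) and the mass matrix M defined in (10); the sketch below assumes a nodal Lagrange basis, which forms a partition of unity, so that the row sums of M reproduce the integrals of the basis functions.

```python
# Hedged sketch: evaluating the QoIs (12) and the integral of the square from
# a nodal FE coefficient vector u and the mass matrix M of (10).
import numpy as np

def qoi_average(u, M, domain_measure):
    # ones^T M u = integral of u_h over D, for a basis summing to one
    return np.ones(len(u)) @ (M @ u) / domain_measure

def qoi_square_integral(u, M):
    # u^T M u = integral of u_h^2 over D (exact for the FE function u_h)
    return u @ (M @ u)
```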

Solution of the Random PDE

The PDE with random parameters (1) is solved numerically using stochastic finite element methods. A detailed description of SFE methods for PDEs with random input data such as the one in system (1) can be found in the review article [29] and references therein. The SFE methods used here are described in the next two sections, together with the procedures to compute stochastic quantities such as moments, once the approximate SFE solution is obtained.

The Monte Carlo Method

The first SFE method considered is the classical Monte Carlo (MC) method, which is a stochastic sampling method [29]. With MC, M points \(\{\boldsymbol {\varepsilon }_{m}\}_{m=1}^{M}\) are chosen randomly in the parameter domain Γ, and system (1) is solved independently for each of these points. In such a way, M realizations of the solution of the random PDE are obtained and M uncoupled finite element systems are solved. For more details on MC and error estimates, see [25, 29, 39]. From now on, the number M will refer to the number of MC samples. According to Remark 1, the MC samples are randomly drawn from Gaussian distributions. The mean μγ and standard deviation σγ of γ(x,ε(ω)) in (2) are in general not zero and one, respectively. Hence, if one chooses samples for the KL expansion from a standard Gaussian distribution, because of the presence of μγ in the KL expansion and of σγ in the covariance function, the result is the same as sampling from a non-standard Gaussian distribution with mean μγ and standard deviation σγ. If εm denotes one of the M samples obtained with the Monte Carlo method, let \(u_{J_{h}}(\mathbf {x}, \boldsymbol {\varepsilon }_{m})\) be the MC solution of system (1) associated with the sample εm and let \(\mathcal {Q}_{u}(\boldsymbol {\varepsilon }_{m})\) be the value of a quantity of interest obtained from the realization \(u_{J_{h}}(\mathbf {x}, \boldsymbol {\varepsilon }_{m})\). Then \(\mathcal {Q}_{u}(\boldsymbol {\varepsilon }_{m})\) is not a random variable but rather just a scalar quantity. On the other hand, the quantity of interest \(\mathcal {Q}_{u}(\boldsymbol {\varepsilon })\) is a function of a random variable, and hence a random variable itself, so stochastic quantities such as moments and cumulants can be computed. In the framework of MC, the mean of \(\mathcal {Q}_{u}(\boldsymbol {\varepsilon })\) is approximated with Monte Carlo integration as follows

$$ \mathbb{E}\left[\mathcal{Q}_{u}(\boldsymbol{\varepsilon})\right] \approx \frac{1}{M}\sum\limits_{m=1}^{M} \mathcal{Q}_{u}(\boldsymbol{\varepsilon}_{m}) := \mu_{\mathcal{Q}_{u}}. $$

For l > 1, the l-th moment is approximated in the same fashion by

$$ \mathbb{E}\left[\left( \mathcal{Q}_{u}(\boldsymbol{\varepsilon}) \right)^{l}\right] \approx \frac{1}{M} \sum\limits_{m=1}^{M} \left( \mathcal{Q}_{u}(\boldsymbol{\varepsilon}_{m}) \right)^{l}. $$

An estimate of the variance is given by

$$ \mathbb{E}\left[\left( \mathcal{Q}_{u}(\boldsymbol{\varepsilon}) - \mathbb{E}\left[\mathcal{Q}_{u}(\boldsymbol{\varepsilon})\right] \right)^{2}\right] \approx \frac{1}{M} \sum\limits_{m=1}^{M} \left( \mathcal{Q}_{u}(\boldsymbol{\varepsilon}_{m}) - \mu_{\mathcal{Q}_{u}}\right)^{2}. $$

The accuracy of Monte Carlo integration improves with the number of samples M, but the sampling error increases with the magnitude of the input variance [29]. For this reason, MC is used here only for small values of the input variance and to validate the stochastic Galerkin method, which is later employed for larger values of the input variance.
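A minimal sketch of the estimators above follows; `solve_pde` and `qoi` are hypothetical placeholders for the per-sample finite element solve and the evaluation of the functional, not functions from FEMuS.

```python
# Monte Carlo estimators for the raw moments and (biased) variance of the QoI,
# as defined above; one uncoupled FE solve per sample.
import numpy as np

def mc_moments(solve_pde, qoi, N, M, lmax=6, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty(M)
    for m in range(M):
        eps_m = rng.standard_normal(N)         # Remark 1: Gaussian input samples
        samples[m] = qoi(solve_pde(eps_m))
    moments = [np.mean(samples**l) for l in range(1, lmax + 1)]
    variance = np.mean((samples - samples.mean())**2)
    return moments, variance
```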

The Stochastic Galerkin Method

The second SFE method used to obtain an approximation of the solution of system (1) is a stochastic Galerkin (SG) method. With SG, the stochastic function space is approximated using a Galerkin procedure, as is done for the physical function space in standard finite element methods [3, 4, 28]. Let us assume that the parameters are independent, so that the input joint probability density function ρ(ε) can be expressed as \(\rho (\boldsymbol {\varepsilon }) = {\prod }_{n=1}^{N} \rho _{n}(\varepsilon _{n})\). According to a Galerkin methodology, the infinite dimensional space \(L^{2}_{\rho }({\Gamma })\) is approximated by the finite dimensional space

$$ \begin{array}{@{}rcl@{}} &&\mathcal{P}_{\mathcal{J}(p)}({\Gamma}) = \text{span}\left\{\prod\limits_{n=1}^{N} \varepsilon_{n}^{p_{n}}~\Big|~\boldsymbol{p} \in \mathcal{J}(p), \varepsilon_{n} \in {\Gamma}_{n} \right\}, \\ \text{where } \quad&&\mathcal{J}(p) = \left\{\boldsymbol{p} \in \mathbb{N}^{N}~ \Big|~\sum\limits_{n=1}^{N}p_{n} \leq p \right\}, \end{array} $$
(13)

which corresponds to the total degree (TD) multivariate polynomial space from [29]. Gaussian PDFs are assumed for the input variables, hence the multivariate probabilist Hermite polynomials are employed as an orthogonal basis for \(\mathcal {P}_{\mathcal {J}(p)}({\Gamma })\). Moreover, it holds that \({\Gamma }_{n} = \mathbb {R}\) for all n = 1,…,N, and so \({\Gamma } = \mathbb {R}^{N}\). The multivariate polynomials are obtained in a tensor product fashion from the univariate probabilist Hermite polynomials, which are appropriately scaled so that they form an orthonormal basis with respect to the PDF

$$ \rho_{n}(\varepsilon_{n}) = s(\varepsilon_{n}) := \frac{\exp{(-{\varepsilon_{n}^{2}}/2)}}{\sqrt{2 \pi}}, $$
(14)

which is a standard Gaussian. The scaling is carried out by dividing the univariate probabilist Hermite polynomial of degree pn, \(H_{e_{p_{n}}}\), by \(\sqrt{p_{n}!}\), to obtain

$$ {\int}_{\mathbb{R}} \frac{H_{e_{p_{n}}}}{\sqrt{p_{n}!}} \frac{H_{e_{p_{m}}}}{\sqrt{p_{m}!}} \rho_{n}(\varepsilon_{n}) d\varepsilon_{n} = \delta_{p_{n} p_{m}}, $$
(15)

where ρn(εn) is as in (14) and \(\delta_{p_{n} p_{m}}\) is Kronecker’s delta. Multivariate \(L^{2}_{\rho }({\Gamma })\)-orthonormal Hermite polynomials are then defined as

$$ H_{e_{\boldsymbol{p}}}(\boldsymbol{\varepsilon}) = \prod\limits_{n=1}^{N} H_{e_{p_{n}}}(\varepsilon_{n}). $$

The SG approximation of the solution of system (1) is defined as [29]

$$ u^{gSG}_{J_{h} M_{p}}(\mathbf{x}, \boldsymbol{\varepsilon}) = \underset{\boldsymbol{p} \in \mathcal{J}(p)}{\sum} u_{\boldsymbol{p}}(\mathbf{x}) H_{e_{\boldsymbol{p}}}(\boldsymbol{\varepsilon}), \qquad \text{where }~u_{\boldsymbol{p}}(\mathbf{x}) = \sum\limits_{j=1}^{J_{h}} u_{\boldsymbol{p},j} \phi_{j}(\mathbf{x}). $$

We consider quantities of the form

$$ \mathcal{Q}_{u}(\boldsymbol{\varepsilon}) = \underset{\boldsymbol{p} \in \mathcal{J}(p)}{\sum} \beta_{\boldsymbol{p}} H_{e_{\boldsymbol{p}}}(\boldsymbol{\varepsilon}). $$
(16)

When βp is the average of up(x), we have

$$ \mathcal{Q}_{u}(\boldsymbol \varepsilon) = \underset{\boldsymbol{p} \in \mathcal{J}(p)}{\sum} \left( \frac{1}{|D|}{\int}_{D} u_{\boldsymbol{p}}(\mathbf{x}) d\mathbf{x} \right) H_{e_{\boldsymbol{p}}}(\boldsymbol{\varepsilon}), $$
(17)

as in (12). When βp is the integral of the square of up(x), we have

$$ \mathcal{Q}_{u}(\boldsymbol \varepsilon) = \underset{\boldsymbol{p} \in \mathcal{J}(p)}{\sum} \left( {\int}_{D} u^{2}_{\boldsymbol{p}}(\mathbf{x}) d\mathbf{x} \right) H_{e_{\boldsymbol{p}}}(\boldsymbol{\varepsilon}). $$
(18)

As opposed to what is obtained with MC, the quantities of interest given by the SG method are polynomial functions in the variable ε. Such a feature gives a considerable advantage in terms of accuracy, because moments can now be computed exactly with appropriate quadrature rules, rather than being approximated by Monte Carlo integration. The l-th moment of \(\mathcal {Q}_{u}(\boldsymbol {\varepsilon })\) in (17) is defined as

$$ \mathbb{E}\left[\left( \mathcal{Q}_{u}(\boldsymbol{\varepsilon})\right)^{l}\right] := {\int}_{\mathbb{R}^{N}} \left( \mathcal{Q}_{u}(\boldsymbol{\varepsilon})\right)^{l} \rho(\boldsymbol{\varepsilon}) d\boldsymbol{\varepsilon}. $$
(19)

The multivariate integral in the above equation can be computed exactly with a multidimensional Hermite quadrature rule. Exploiting (15), one-dimensional quadrature points can be obtained as the zeros of the probabilist Hermite polynomials. The quadrature weights are given by the following formula

$$ w_{i} = \frac{(n-1)!}{n (H_{e_{n-1}}(x_{i}))^{2}}, $$

where xi is the i-th quadrature point and \(H_{e_{n-1}}\) is the (n − 1)-th probabilist Hermite polynomial. By construction, the weights wi have the property that \({\sum }_{i=1}^{N_{q}} w_{i} = 1\), where Nq is the total number of quadrature points employed in the numerical integration. Multidimensional quadrature points and weights can be obtained in a tensor product fashion. For the SG method, given \(q \in \mathbb {N}\), the coefficient function a(x,ε(ω)) in system (1) is computed as [29]

$$ \begin{array}{@{}rcl@{}} a_{SG}(\mathbf{x}, \boldsymbol{\varepsilon}) &=& \underset{\boldsymbol{q} \in \mathcal{J}(q)}{\sum} a_{\boldsymbol{q}}(\mathbf{x}) H_{e_{\boldsymbol{q}}}(\boldsymbol{\varepsilon}), \end{array} $$
(20a)
$$ \begin{array}{@{}rcl@{}} a_{\boldsymbol{q}}(\mathbf{x}) &=& {\int}_{{\Gamma}} a(\mathbf{x}, \boldsymbol{\varepsilon}(\omega)) H_{e_{\boldsymbol{q}}}(\boldsymbol{\varepsilon}) \rho(\boldsymbol{\varepsilon}) d\boldsymbol{\varepsilon}, \end{array} $$
(20b)

with a(x,ε(ω)) as in (3). In the numerical results, with the exception of the integral for the data projection in (20b), all integrals are computed by an exact quadrature rule. Among the integrals that are computed exactly are the moments in (19). It is worth mentioning that, if a non-standard Gaussian distribution with mean μγ and standard deviation σγ were considered for the input variables, the change of variable (x − μγ)/σγ would have to be made in all integrals over the parameter space. However, one can avoid this change of variable by taking the non-standard distribution into account in the input data, as discussed in Section 3.1.
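The moment computation (19) can be sketched as follows; this is an illustration under the stated tensor-product construction, using NumPy's probabilists' Gauss–Hermite rule with the weights rescaled to sum to one, and `qoi` is a placeholder for the SG polynomial (16). Since the QoI has total degree p, its l-th power has degree pl, which a rule with \(Q_{p} \geq (pl+1)/2\) points per dimension integrates exactly.

```python
# Sketch of the exact moment evaluation (19) by tensorized probabilists'
# Gauss-Hermite quadrature; `qoi` maps a length-N parameter vector to a value.
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermegauss

def sg_moment(qoi, N, p, l):
    Qp = (p * l) // 2 + 1                      # exact for polynomials of degree p*l
    x, w = hermegauss(Qp)                      # weight function exp(-eps^2/2)
    w = w / np.sqrt(2.0 * np.pi)               # rescale so the weights sum to 1
    moment = 0.0
    for idx in product(range(Qp), repeat=N):   # tensor-product grid
        eps = x[list(idx)]
        moment += np.prod(w[list(idx)]) * qoi(eps) ** l
    return moment
```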

The Gram–Charlier and Edgeworth Expansions

The goal of this paper is to obtain the PDF \(f: \mathbb {R} \rightarrow \mathbb {R}^{+}\) of a given quantity of interest \(\mathcal {Q}_{u}(\boldsymbol {\varepsilon })\) using the Gram–Charlier (GC) and the Edgeworth (ED) series expansions, which express the PDF in terms of a reference PDF. The coefficients of these expansions are functions of the cumulants of \( \mathcal {Q}_{u}(\boldsymbol {\varepsilon })\), which can be easily obtained once the moments of \( \mathcal {Q}_{u}(\boldsymbol {\varepsilon })\) have been computed. In general, if we denote by ml and κl the l-th moment and the l-th cumulant respectively, the first six cumulants are expressed in terms of the moments as follows [6, 35]:

$$ \begin{array}{@{}rcl@{}} \kappa_{1} &=& m_{1}, \\ \kappa_{2} &=& m_{2} - {m_{1}^{2}}, \\ \kappa_{3} &=& m_{3} + 2 {m_{1}^{3}} - 3 m_{1} m_{2},\\ \kappa_{4} &=& m_{4} - 6 {m_{1}^{4}} + 12 {m_{1}^{2}} m_{2} - 3{m_{2}^{2}} - 4 m_{1} m_{3},\\ \kappa_{5} &=& m_{5} - 5 m_{4} m_{1} - 10 m_{3} m_{2} + 20 m_{3} {m_{1}^{2}} + 30 {m_{2}^{2}} m_{1} - 60 m_{2} {m_{1}^{3}} + 24 {m_{1}^{5}},\\ \kappa_{6} &=& m_{6} - 6 m_{5} m_{1} - 15 m_{4} m_{2} + 30 m_{4} {m_{1}^{2}} -10 {m_{3}^{2}} + 120 m_{3} m_{2} m_{1} \\ && -120 m_{3} {m_{1}^{3}} + 30 {m_{2}^{3}} - 270 {m_{2}^{2}} {m_{1}^{2}} + 360 m_{2} {m_{1}^{4}} - 120 {m_{1}^{6}}. \end{array} $$

In practical applications, the GC and ED series are truncated and only a finite number of terms is retained. The truncation results in an approximation of the PDF. The behavior of the truncated series as the number of terms increases is discussed in the next sections.
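For reference, the relations above translate directly into code; the following is a plain transcription, with `m` holding the raw moments m1,…,m6 of the QoI.

```python
# Moment-to-cumulant conversion, transcribing the six relations listed above.
def cumulants_from_moments(m):
    m1, m2, m3, m4, m5, m6 = m
    k1 = m1
    k2 = m2 - m1**2
    k3 = m3 + 2*m1**3 - 3*m1*m2
    k4 = m4 - 6*m1**4 + 12*m1**2*m2 - 3*m2**2 - 4*m1*m3
    k5 = (m5 - 5*m4*m1 - 10*m3*m2 + 20*m3*m1**2 + 30*m2**2*m1
          - 60*m2*m1**3 + 24*m1**5)
    k6 = (m6 - 6*m5*m1 - 15*m4*m2 + 30*m4*m1**2 - 10*m3**2
          + 120*m3*m2*m1 - 120*m3*m1**3 + 30*m2**3
          - 270*m2**2*m1**2 + 360*m2*m1**4 - 120*m1**6)
    return [k1, k2, k3, k4, k5, k6]
```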

Derivation of the Expansions

We choose the formulation given in [8] to formally describe the GC and ED series. Let X be any random variable with PDF fX(x). The characteristic function ΨX(t) of X is the Fourier transform of fX(x) [35]. The cumulant generating function KX(t) is defined as \(K_{X}(t) := \ln ({\Psi }_{X}(t))\), and it holds that

$$ \ln({\Psi}_{X}(t)) = \sum\limits_{l=1}^{\infty}\kappa_{X,l}\frac{(i t)^{l}}{l!}, $$
(21)

where i denotes the imaginary unit and κX,l is the l-th cumulant of X; that is, the cumulants are defined as the coefficients of the series expansion of \(\ln ({\Psi }_{X}(t))\) in powers of it. Let φ denote a random variable that is distributed according to the standard Gaussian distribution; then, according to the notation adopted in this section, we have fφ(x) = s(x), with s(x) defined in (14). We also observe that in the present section, unlike the previous ones, we explicitly specify the random variable with which a PDF is associated, as this is important for the clarity of the presentation. With the same notation used for X, the following equation can be obtained

$$ \ln({\Psi}_{\varphi}(t)) = \sum\limits_{l=1}^{\infty}\kappa_{\varphi,l}\frac{(i t)^{l}}{l!}. $$
(22)

Using the properties of exponentials and convergent series, (21) and (22) can be combined to get

$$ {\Psi}_{X}(t) = \exp \left( \sum\limits_{l=1}^{\infty} (\kappa_{X,l} - \kappa_{\varphi,l})\frac{(i t)^{l}}{l!}\right) {\Psi}_{\varphi}(t). $$
(23)

The GC and ED expansions follow from different manipulations of (23).

The GC is obtained with the use of Bell polynomials Bl(x1,…,xl) [5, 40]. These polynomials have the property that

$$ \exp \left( \sum\limits_{l=1}^{\infty} x_{l} \frac{(i t)^{l}}{l!}\right) = \sum\limits_{l=0}^{\infty} B_{l}(x_{1}, \ldots, x_{l}) \frac{(i t)^{l}}{l!}, $$

with B0 := 1. Therefore, (23) becomes

$$ {\Psi}_{X}(t) = \left( 1 + \sum\limits_{l=1}^{\infty} B_{l}(\kappa_{X,1} - \kappa_{\varphi,1}, \ldots, \kappa_{X,l} - \kappa_{\varphi,l})\frac{(i t)^{l}}{l!} \right) {\Psi}_{\varphi}(t). $$
(24)

The inverse Fourier transform applied to (24) gives the GC expansion

$$ f_{X}(x) = \left( 1 + \sum\limits_{l=1}^{\infty} B_{l}(\kappa_{X,1} - \kappa_{\varphi,1}, \ldots, \kappa_{X,l} - \kappa_{\varphi,l})\frac{(-1)^{l} D_{x}^{(l)}}{l!} \right) s(x), $$
(25)

where \(D_{x}^{(l)}\) represents the l-th derivative operator with respect to x and

$$ D^{(l)}_{x} \left( s(x)\right)= (-1)^{l} H_{e_{l}}(x) s(x). $$

The first six Bell polynomials are given by [8]

$$ \begin{array}{@{}rcl@{}} B_{1}(x_{1}) &=& x_{1},\\ B_{2}(x_{1},x_{2}) &=& {x_{1}^{2}} + x_{2},\\ B_{3}(x_{1},x_{2},x_{3}) &=& {x_{1}^{3}} + 3 x_{1} x_{2} + x_{3},\\ B_{4}(x_{1},x_{2},x_{3},x_{4}) &=& {x_{1}^{4}} + 6 {x_{1}^{2}} x_{2} + 3 {x_{2}^{2}} + 4 x_{1} x_{3} + x_{4},\\ B_{5}(x_{1},x_{2},x_{3},x_{4},x_{5}) &=& {x_{1}^{5}} + 10 {x_{1}^{3}} x_{2} + 15 x_{1} {x_{2}^{2}} + 10 {x_{1}^{2}} x_{3} + 10 x_{2} x_{3}+ 5 x_{1} x_{4} + x_{5} ,\\ B_{6}(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}) &=& {x_{1}^{6}} + 15 {x_{1}^{4}} x_{2} + 45 {x_{1}^{2}} {x_{2}^{2}} + 15 {x_{2}^{3}} + 20 {x_{1}^{3}} x_{3} + 60 x_{1} x_{2} x_{3} \\ && + 10 {x_{3}^{2}} + 15 {x_{1}^{2}} x_{4} + 15 x_{2} x_{4} + 6 x_{1} x_{5} + x_{6}. \end{array} $$

Note that when X is a standardized variable, κX,1 = 0 and κX,2 = 1. Since κφ,1 = 0, κφ,2 = 1, and κφ,l = 0 for l ≥ 3, (25) reduces to

$$ f_{X}(x) = \left( 1 + \sum\limits_{l=3}^{\infty} B_{l}(0,0,\kappa_{X,3}, \ldots, \kappa_{X,l})\frac{(-1)^{l} D_{x}^{(l)}}{l!} \right) s(x). $$
(26)
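Using the identity \(D^{(l)}_{x}(s(x)) = (-1)^{l} H_{e_{l}}(x) s(x)\) from above, a truncation of (26) is straightforward to evaluate; the following sketch assumes a standardized QoI, for which the Bell polynomials with the first two arguments set to zero reduce to B3 = κ3, B4 = κ4, B5 = κ5, and B6 = κ6 + 10κ3².

```python
# Sketch of the truncated Gram-Charlier expansion (26) for a standardized
# variable; kappa = [k1, ..., k6] with k1 = 0 and k2 = 1.
import numpy as np
from numpy.polynomial.hermite_e import HermiteE

def gram_charlier(x, kappa, order=6):
    s = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    k3, k4, k5, k6 = kappa[2], kappa[3], kappa[4], kappa[5]
    bell = {3: k3, 4: k4, 5: k5, 6: k6 + 10 * k3**2}   # B_l(0,0,k3,...,k_l)
    corr = np.zeros_like(x)
    fact = 2.0                                         # running l!
    for l in range(3, order + 1):
        fact *= l
        corr += bell[l] * HermiteE.basis(l)(x) / fact  # He_l(x) correction
    return s * (1.0 + corr)
```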

To derive ED, one assumes that X can be written as the standardized sum [48, 49]

$$ X = \frac{1}{\sqrt{r}} \sum\limits_{i=1}^{r} \frac{Z_{i} - \mu}{\sigma}, $$
(27)

where the random variables Z1,Z2,…,Zr are independent and identically distributed, each with mean μ, standard deviation σ, and l-th cumulant κZ,l. Note that κZ,1 = μ and κZ,2 = σ2. Define νl = κZ,l/σl, then [8]

$$ \kappa_{X,1}=0, \qquad \kappa_{X,2}=1, \qquad \kappa_{X,l} = \frac{\nu_{l}}{r^{\frac{l}{2}-1}}, \quad l \geq 3. $$

Substituting the above equation in (23), we get

$$ {\Psi}_{X}(t) = \exp \left( \sum\limits_{l=3}^{\infty} \frac{\nu_{l}}{r^{\frac{l}{2}-1}}\frac{(i t)^{l}}{l!}\right) {\Psi}_{\varphi}(t). $$

With a proper shift of the summation index and an inverse Fourier transform, the above series gives the ED expansion

$$ f_{X}(x) = \left( 1 + \sum\limits_{l=1}^{\infty} B_{l}(a_{1},a_{2}, \ldots,a_{l})\frac{1}{r^{l/2}~l!} \right) s(x). $$
(28)

The interested reader can consult [8] for more details on how the above equation is obtained. The coefficients al are given by

$$ a_{l} = \frac{\nu_{l+2}(-1)^{l+2}D_{x}^{(l+2)}}{(l+1)(l+2)}. $$

For ease of notation, if we write the series in (28) as

$$ f_{X}(x) = \sum\limits_{l=0}^{\infty} (-1)^{l} \frac{\vartheta_{l}(x)}{r^{l/2}}, $$

the first five coefficient functions 𝜗l(x) are given by [7]

$$ \begin{array}{@{}rcl@{}} \vartheta_{0}(x) &=& s(x),\\ \vartheta_{1}(x) &=& \frac{1}{3!} \nu_{3} D_{x}^{(3)}\left( s(x)\right),\\ \vartheta_{2}(x) &=& \frac{1}{4!} \nu_{4} D_{x}^{(4)}\left( s(x)\right) + \frac{1}{72} {\nu^{2}_{3}} D_{x}^{(6)}\left( s(x)\right),\\ \vartheta_{3}(x) &=& \frac{1}{5!} \nu_{5} D_{x}^{(5)}\left( s(x)\right) + \frac{1}{144} \nu_{3} \nu_{4} D_{x}^{(7)}\left( s(x)\right) + \frac{1}{1296} {\nu_{3}^{3}} D_{x}^{(9)}\left( s(x)\right),\\ \vartheta_{4}(x) &=& \frac{1}{6!} \nu_{6} D_{x}^{(6)}\left( s(x)\right) + \left( \frac{1}{1152} {\nu_{4}^{2}} + \frac{1}{720} \nu_{3} \nu_{5} \right)D_{x}^{(8)}\left( s(x)\right)\\ \ && + \frac{1}{1728} {\nu_{3}^{2}} \nu_{4} D_{x}^{(10)}\left( s(x)\right) + \frac{1}{31104} {\nu_{3}^{4}} D_{x}^{(12)}\left( s(x)\right). \end{array} $$
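A corresponding sketch for the truncated ED expansion, with r = 1 as in the numerical tests below and the coefficient functions 𝜗0,…,𝜗4 listed above; `nu` holds the standardized cumulants ν3,…,ν6, and derivatives of the Gaussian kernel are evaluated through \(D^{(l)}_{x}(s(x)) = (-1)^{l} H_{e_{l}}(x) s(x)\).

```python
# Sketch of the truncated Edgeworth expansion:
# f ~ sum_l (-1)^l theta_l(x) / r^(l/2), with the theta_l listed above.
import numpy as np
from numpy.polynomial.hermite_e import HermiteE

def edgeworth(x, nu, n_terms=2, r=1.0):
    s = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    D = lambda l: (-1)**l * HermiteE.basis(l)(x) * s   # l-th derivative of s
    n3, n4, n5, n6 = nu
    theta = [
        s,
        n3 / 6 * D(3),
        n4 / 24 * D(4) + n3**2 / 72 * D(6),
        n5 / 120 * D(5) + n3 * n4 / 144 * D(7) + n3**3 / 1296 * D(9),
        n6 / 720 * D(6) + (n4**2 / 1152 + n3 * n5 / 720) * D(8)
        + n3**2 * n4 / 1728 * D(10) + n3**4 / 31104 * D(12),
    ]
    return sum((-1)**l * theta[l] / r**(l / 2) for l in range(n_terms + 1))
```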

The steps needed to go from the coefficient function a(x,ε(ω)) to the approximation of the PDF \(f_{\mathcal {Q}_{u}}\) are summarized schematically in Fig. 1. Referring to Fig. 1, the physical variable x and the stochastic variable ε are given as inputs to the random coefficient function a(x,ε(ω)), which is used to assemble the SFE system associated with the random PDE in (1). The system is then solved with an SFE method and an approximate solution is obtained. From this solution, quantities of interest are evaluated and their moments are computed, in the way associated with the SFE method chosen. Once the moments have been obtained, they are fed to the truncated series expansions described in this section and an approximation of the PDF \(f_{\mathcal {Q}_{u}}\) is produced. In the next two sections, important considerations on the convergence of the GC and ED expansions are made. First, the concept of asymptotic expansion is discussed.

Fig. 1
figure1

Summary of the steps to obtain an approximation of the PDF \(f_{\mathcal {Q}_{u}}\)

Asymptotic Expansions

Let {Fr(x)} be a sequence of functions to be approximated by any partial sum of the series

$$ \sum\limits_{i=0}^{\infty} \frac{A_{i}(x)}{(\sqrt{r})^{i}}. $$
(29)

According to [54, 57], for a given r, the series (29) is an asymptotic expansion valid to n terms if the first n + 1 partial sums have the property

$$ \left|F_{r}(x) - \sum\limits_{i=0}^{n} \frac{A_{i}(x)}{(\sqrt{r})^{i}}\right| \leq \frac{C_{n}(x)}{(\sqrt{r})^{n+1}}. $$
(30)

Moreover, if Cn(x) does not depend on x, the expansion is said to be valid uniformly in x. As pointed out by Wallace [57], the asymptotic property is a property of finite partial sums and, for a given r, the series (29) may or may not be convergent. If the series (29) converges to Fr(x), we know that for every 𝜖 > 0 there is an \(R \in \mathbb {N}\) such that

$$ \left|F_{r}(x) - \sum\limits_{i=0}^{n} \frac{A_{i}(x)}{(\sqrt{r})^{i}}\right| < \epsilon, \qquad \forall n \geq R. $$

While the above inequality states that for all partial sums with n ≥ R the error will be less than 𝜖, it does not give specific information about what happens to the error before n reaches R. On the other hand, if r is sufficiently large, an asymptotic expansion valid to n terms has the property that the error will be uniformly reduced as more terms are added to the finite partial sum, from 1 up to n + 1 terms, after which there is no longer the guarantee that adding successive terms will provide a uniform reduction of the error [54, 57]. With small r, the situation is a little different. Because the bounds Cn(x) typically increase rapidly with n, a small value of r may be unable to make the denominator grow faster than the numerator of the right-hand side in Ineq. (30), and only the first few terms would improve the approximation, as pointed out in [54, 57]. If a series is convergent, regardless of it being also asymptotic, the error will eventually go to zero as the number of terms in the partial sums is increased. If a series is asymptotic but not convergent, regardless of the value of r, there will be a minimum error that can be achieved, which limits the accuracy of the series. To summarize: convergent series give information on what happens to the error as the number of terms in the partial sum goes to infinity, whereas asymptotic series give information on the error as more terms are added starting from the first, and up to the (n + 1)-th term.

We note that divergent asymptotic expansions have a long and useful history, especially with regard to applications. Excellent expositions on the subject are found in [54, 57].

Considerations on Convergence

In [35, p. 152], the following sufficient condition for the convergence of the GC expansion is given

Proposition 1

If the PDF f(x) is of bounded variation on every finite interval and

$$ {\int}_{-\infty}^{\infty} | f(x) | \exp(-x^{2}/4) dx < \infty, $$

then the GC expansion of f(x) is convergent.

While the integral in Proposition 1 is finite for PDFs with bounded support, the requirement of bounded variation on every finite interval may in general be harder to satisfy. We could not find in the literature any explicit sufficient condition addressing the convergence of the Edgeworth (ED) expansion.

However, as pointed out by several authors [19, 24, 35, 41], convergence of the GC or ED series need not be a concern, because for practical applications the series are truncated. Of course, if the series are convergent, the error will eventually go to zero, but this is of little practical use because the number of terms necessary to achieve convergence will likely be too large. The real question is how well a truncated GC or ED series can approximate the PDF of interest. This is directly connected to the definition of asymptotic expansion introduced in the previous section. With an asymptotic expansion, there is the guarantee that the initial error will be reduced by adding a certain number of successive terms. This property is relevant for our purpose regardless of the convergence of the series, because only a finite number of terms after the initial approximation are considered. Unfortunately, the GC series is not an asymptotic expansion [7, 34, 47, 57]. On the other hand, it has been shown in [18] that the ED expansion is asymptotic uniformly in x. Even though explicit bounds on errors were not given in [18], the asymptotic nature of the ED expansion still makes it more valuable than the GC for the applications of our interest, where the series are truncated to a small number of terms.

In light of these observations, we believe that the best approach is composed of the following components. First we assume that we have in hand a stochastic Galerkin approximation of the quantity of interest. We do not include this step in the description of our algorithm because having in hand an SG approximation of the QoI is something one might want for any approach for estimating PDFs of QoIs.

At the heart of our algorithms is the construction of a truncated Edgeworth or truncated Gram–Charlier expansion approximation of the PDF for the QoI.

  • 1. Recursively for l = 3,…, compute the l-th moment of the QoI. Note that we initialize by computing three moments.

    • Supposing we have in hand l moments so obtained, we use those moments to construct an l-term truncated ED or GC expansion, determining cumulants from the moments.

    • The moments are obtained via exact numerical integration of the SG approximation of the QoI so that one need only integrate polynomials.

We need to determine the “optimal” number of terms in the ED or GC expansions.

  • 2. This is done by comparing, each time we have incremented l, the l-term expansion with the (l–1)-term expansion previously determined.

    • The comparison can be done visually or by computing, e.g., using a sampling of the two expansions, the 2-norm of the difference between them.

At this point we have two possibilities.

  • 3a. If the ED or GC expansion is known to be convergent or, using the comparisons done in 2 above, the expansion “seems” to be convergent, we stop the recursion when the difference computed in 2 is smaller than a prescribed tolerance.

    • In this case, the “optimal” number of terms in the truncated ED or GC expansion is determined by the prescribed tolerance.

  • 3b. If the ED or GC expansion is known to be asymptotic but divergent or, using the comparisons done in 2 above, the expansion “seems” to be divergent, increasing the number of terms in the expansion may not result in a better approximation, so we instead determine the “optimal” number of terms kept in the expansion as follows.

    • Run a Monte Carlo simulation to obtain samples of the QoI that are used to construct a histogram approximation of the PDF for the QoI, including bounds on the support of that PDF, in case it is bounded.

      • The Monte Carlo samples are determined from the SG approximation of the QoI and not by doing expensive solves of the discretized random PDE.

      • As a result, the histogram approximation of the PDF for the QoI can be obtained at almost no cost. Even so, we do not need to take a huge number of samples because we only need to have in hand a crude histogram approximation.

    • By comparing the crude histogram so computed with the ED or GC approximate PDFs for all values of the number of terms used, one can determine the optimal number of terms as that for which the expansion approximation is “closest” to the histogram.

We also remark that the number of terms kept in ED or GC expansion approximations of PDFs is often considerably lower than in, e.g., kernel density estimators (KDE) [32]. The number of terms in the latter may be as large as the number of samples taken of the SG approximation of the QoI. On the other hand, for ED or GC approximations of the PDF, the SG approximation is sampled only at the quadrature points used to approximate moment integrals, so that the number of terms kept in the expansions is not otherwise related to the number of samples.

For the sake of simplicity, the numerical results reported in the next section do not follow every detail of the above algorithm, but those results are sufficient to illustrate the efficacy of our approach. A full implementation of the above algorithm would cast our approach in an even more favorable light as would some implementation steps that we have not discussed. An example of the latter is to take advantage of the use of SG approximations of the QoI to avoid approximating integrals of polynomials that are known to vanish due to the use of orthogonal polynomials in the construction of the SG approximation.
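To make steps 1–3 concrete, the following is a schematic transcription; it reuses the `sg_moment` and `cumulants_from_moments` sketches from the previous sections, assumes the QoI has already been standardized (cf. Remark 2 in the next section), computes all six moments upfront rather than recursively for simplicity, and takes `expansion(x, kappa, order)` to stand for either truncated series.

```python
# Schematic selection of the truncation order (steps 1-3 of the text).
import numpy as np

def select_truncation(qoi, N, p, x, expansion, lmax=6, tol=1e-3):
    # Step 1: exact moments via SG quadrature, then cumulants (Section 4).
    moments = [sg_moment(qoi, N, p, l) for l in range(1, 7)]
    kappa = cumulants_from_moments(moments)
    # Step 2: compare successive truncations through a sampled 2-norm.
    prev = expansion(x, kappa, order=3)
    for l in range(4, lmax + 1):
        curr = expansion(x, kappa, order=l)
        if np.linalg.norm(curr - prev) < tol:   # step 3a: "seems" convergent
            return curr, l
        prev = curr
    # Step 3b (divergent case): compare the stored truncations against a crude
    # histogram of SG samples and keep the closest one (not shown here).
    return prev, lmax
```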

Numerical Results

We consider random variables associated with the solution of (1), obtained with the SFE methods discussed above. The methods are implemented in the in-house C++ FEM solver FEMuS [11], whereas the generalized eigenvalue problem in (11) is solved with the SLEPc library [30]. All numerical tests consider f ≡− 1 and dimension d = 2, for simplicity. The physical domain D is a unit square, with a coarse grid composed of four bi-quadratic quadrilateral elements. The mesh for the simulations is obtained by refining the coarse grid three times according to a midpoint refinement procedure.

The random field a(x,ε(ω)) is given by (5) with μγ = 0 and \(a_{\min \limits } = 0.01\) and with stochastic dimension N = 2. The covariance function of γ(x,ε(ω)) in (3) is chosen to be

$$ C_{\gamma}(\mathbf{x},\widehat{\mathbf{x}}) = \sigma_{\gamma}^{2} \exp\left[-\frac{1}{L}\left( \sum\limits_{i=1}^{d} | x_{i} - \widehat{x}_{i}|\right)\right], $$
(31)

where σγ denotes the standard deviation of γ(x,ε(ω)), d is the dimension of the spatial variable, and L > 0 is a correlation length satisfying L ≤ diam(D). The input standard deviation σγ is varied during the simulations, whereas the correlation length is set to L = 0.1. Note that although analytic expressions are known for the eigenvalues and eigenfunctions of the covariance function (31), we do not presume such knowledge. Instead, for the numerical results given here, the eigenpairs are approximated as described in (7)–(11). We do so to mimic the usual situation in which eigenpairs of covariance functions are not known analytically. We also note that the approach taken in this paper applies to anisotropic covariance functions as well; the choice of a correlation length L that is the same for all variables xi is made merely for simplicity.

The quantities of interest used are given in (17) and (18). For the MC method, the number of samples is \(M = 10^{5}\). Unless otherwise stated, the same number of samples is used to obtain the crude histogram approximations. We point out that a smaller value of M could be chosen, but a fairly large number of samples allows the crude histogram to also be used for comparison with the GC and ED expansions. Within the proposed method, the histogram is obtained by sampling random values of ε from a standard Gaussian distribution and plugging them into (16), once the SG system has been solved. With MC, the histogram is obtained in the standard way. For the SG method, we choose the values p = 4 in (13) and q = 5 in (20a).

Remark 2

When using GC, the quantity of interest is standardized before computing the moments, so the expansion adopted for the tests is (26). For ED, considering (28), it is assumed that r = 1 and that \(Z_{1} = \mathcal {Q}_{u}\). Hence, for both the GC and the ED expansions, the PDF refers to the standardized quantity of interest.

Tests with Nearly Gaussian Output Distribution

We begin with simulations where the output distribution of the quantity of interest is close to a standard Gaussian. For the QoI in (17), this is achieved with σγ ∈{0.02,0.04,0.06,0.08}. Because the truncated GC and ED expansions perform successive corrections of a standard Gaussian distribution, it is expected that the expansions will perform very well in this context. In Fig. 2, the eigenfunctions b1(x) and b2(x) from (5) are reported for σγ = 0.08.

Fig. 2
figure2

First eigenfunction (left) and second eigenfunction (right) in the KL expansion from (5) for the case of σγ = 0.08 and N = 2

The values of the moments and cumulants for the quantity of interest in (17) are shown in Table 1. Note that such values are associated with the non-standardized quantity of interest. As discussed in [29], the sampling error of the MC method increases with the input standard deviation, hence we use this method only for small values of σγ, such as those considered in this section. The results obtained with the MC method are relevant because they serve as a validation of the SG method, which is employed in the next section for non-Gaussian output distributions. Observing Table 1, it is fair to say that convergence has been reached for the moments, given that there is good agreement between the MC and the SG predictions. Therefore, the SG method can be considered validated by the MC results. All cumulants κl with l ≥ 3 are essentially zero, confirming that the PDF is close to a Gaussian distribution, the only distribution for which κl = 0 for all l ≥ 3.

Table 1 Behavior of moments and cumulants of the QoI in (17) for σγ ∈{0.02,0.04,0.06,0.08}

The histograms for σγ = 0.08 are in Fig. 3. As expected, due to the small values of the input variance, both suggest a nearly-Gaussian PDF: the major difference from an actual Gaussian distribution is that in this case the PDFs have bounded support, because the quantity of interest is bounded. In fact, the leading term of the expansion is a normal distribution, with the additional terms being higher-moment contributions. If σγ is small, those contributions are relatively small compared to the leading term, so that the overall distribution is nearly normal. Our tests have shown that larger values of σγ cause an increased skewness in the histograms, although this effect is very weak for the values of σγ considered here, and so histograms for σγ = 0.02, 0.04, and 0.06 are not shown due to their strong similarity to those in Fig. 3. The increased skewness will be clearly visible when larger values of σγ are considered in the next section.

Fig. 3
figure3

Gram–Charlier expansions of the standardized QoI in (17) for σγ = 0.08 and N = 2. I: GC - MC, II: GC - SG

In Fig. 3, the truncated GC expansion is displayed for σγ = 0.08. GC expansions for other values of σγ are not reported due to their strong resemblance to those in Fig. 3. The notation GCj means that the GC expansion in (26) has been truncated at l = j. Truncations up to l = 6 have been computed. The GC expansion is a function defined on all of \(\mathbb {R}\), hence it could ideally be graphed for any x. However, once the range of values of the quantity of interest is obtained from the crude histogram approximation, the GC curve is plotted only for values of x in that range. The same is done for the truncated Edgeworth expansions. Referring to Fig. 3, we see that the GC expansions are in good agreement with the respective histograms.

In Fig. 4 the truncated ED expansions are shown for MC and SG. Only the results for σγ = 0.08 are reported, because other values produced similar graphs. Note that, for GC, all the truncated series computed are displayed in Fig. 3, namely GC3, GC4, GC5 and GC6. As discussed above, GC is not an asymptotic expansion and so it might be that, for instance, GC4 is a worse approximation than GC3 but GC5 is better than both GC3 and GC4. Hence, it makes sense to display all computed curves, to have an idea of the behavior of the truncated series. For ED, truncations up to four terms have been computed. However, not all curves are displayed in the figures, but only those that show a monotone reduction of the error. Such curves are typically ED1 and ED2. This scenario is expected considering the discussion in Section 5, and that r = 1 in (27). Figure 4 shows great agreement of the ED expansion with the histograms, for both the MC method and the SG method.

Fig. 4
figure4

Edgeworth expansions of the standardized QoI in (17) for N = 2 and σγ = 0.08. I: ED - MC, II: ED - SG

When the distribution of the quantity of interest is nearly Gaussian, our results suggest that the Gram–Charlier and Edgeworth expansions can effectively be used to describe the PDF of the quantity of interest. This is consistent with the nature of such expansions, which add successive corrections to a standard Gaussian distribution. The natural question that arises next is how well GC and ED can perform when the PDF to approximate is not nearly Gaussian. Numerical results are presented in the next section to address this question.

Tests with non-Gaussian Output Distribution

We now investigate how well the GC and ED expansions can approximate a PDF that is not nearly Gaussian; in particular, we are interested in PDFs whose cumulants of order three and higher are not negligible. This is achieved with larger values of the input standard deviation than those considered in the previous section. The QoIs in (17) and (18) are considered. With the former, σγ ∈{1,1.2,1.4,1.6}, while with the latter, σγ ∈{2.7,2.8,2.9,3}. Only the SG method is used, because for the values of σγ considered here, the sampling error of the MC method would produce results that are not accurate enough. The value N = 2 is chosen for the stochastic dimension. In Tables 2 and 3, the values of the moments and cumulants are reported for the standardized QoIs. As the input standard deviation grows, all moments increase in magnitude, suggesting that larger values of σγ give increasingly non-Gaussian distributions. In Fig. 5, the histograms for the QoI in (17) are reported: larger values of σγ produce an increasing negative skewness. The GC curves are also in Fig. 5. For all values of σγ, GC3 provides the best approximation of the PDF among the GC curves that have been computed. The quality of the approximation slightly decreases as the input standard deviation grows but remains overall satisfactory, especially for the lower values of σγ. GC4 and GC5 are also good approximations for σγ = 1 and σγ = 1.2, whereas GC6 may be considered good enough only for σγ = 1. In general, since the GC and ED series keep correcting a standard Gaussian, it may be that more terms are required for a good approximation when the PDF is far from a Gaussian.

Table 2 Moments and cumulants of the standardized QoI in (17) with N = 2, for different values of the input standard deviation
Table 3 Moments and cumulants of the standardized QoI in (18) with N = 2, for different values of the input standard deviation
Fig. 5
figure5

Gram–Charlier expansion of the standardized QoI in (17) for N = 2, obtained with the SG method. I: σγ = 1.6, II: σγ = 1.4, III: σγ = 1.2, IV: σγ = 1

The ED results are shown in Fig. 6. The best approximation among the computed curves is given by ED2, which fits all histograms well. The quality of the approximation slightly decreases as σγ grows, but the magnitude of this deterioration is much smaller than in the GC case. The ED expansions are qualitatively superior to the GC curves, because ED2 is a better approximation than GC3. Note that GC3 is by definition the same curve as ED1.

Fig. 6
figure6

Edgeworth expansion of the standardized QoI in (17) for N = 2, obtained with the SG method. I: σγ = 1.6, II: σγ = 1.4, III: σγ = 1.2, IV: σγ = 1

Next, we consider the quantity of interest given in (18). The histograms are in Fig. 7. Larger values of σγ cause an increasing positive skewness, especially going from σγ = 2.9 to σγ = 3. The GC curves are also in Fig. 7. For σγ = 2.7, all computed GC curves approximate the PDF well, with GC4 and GC5 lying on top of each other. For all other values of σγ, GC3 is again the best approximation, as the other computed curves progressively deviate from the histogram. For σγ = 2.8, GC4 and GC5 are still acceptable approximations; however, they are not accurate enough for the larger values of σγ. GC6 is acceptable only for σγ = 2.7.

Fig. 7
figure7

Gram–Charlier expansion of the standardized QoI in (18) for N = 2, obtained with the SG method. I: σγ = 3, II: σγ = 2.9, III: σγ = 2.8, IV: σγ = 2.7

The ED curves are in Fig. 8. ED2 approximates the histogram well for all values of σγ, similarly to what was observed in the previous example. Once again, due to the better approximation provided by ED2 compared to GC3, we conclude that the ED expansion is more valuable than the GC expansion for the examples considered.

Fig. 8
figure8

Edgeworth expansion of the standardized QoI in (18) for N = 2, obtained with the SG method. I: σγ = 3, II: σγ = 2.9, III: σγ = 2.8, IV: σγ = 2.7

We conclude this section with a comparison of the GC and ED expansions with the kernel density estimator. Given that our analysis is set in a univariate setting, the KDE is given by

$$ f_{K}(x) = \frac{1}{h M} \sum\limits_{m=1}^{M} s\left( \frac{x - \mathcal{Q}_{u}(\boldsymbol{\varepsilon}_{m})}{h} \right), $$
(32)

where s is the standard Gaussian density defined in (14), and M is the size of the sample set. The parameter h is the bandwidth, and we selected it to be the same as the bin width used for the histograms in the previous figures. We chose a standard Gaussian kernel for the KDE because the GC and ED expansions also use a standard Gaussian kernel, hence the comparison is fair. In Fig. 9, GC3 and ED2 are compared to the KDE estimator in (32) with \(M = 10^{5}\): results for the QoI in (17) are shown in I and II using σγ = 1.6, whereas results for the QoI in (18) are shown in III and IV using σγ = 3. The plots in Fig. 9 show that the GC and ED expansions are comparable to the KDE in terms of accuracy.
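For completeness, (32) transcribes directly into code; `samples` holds the sampled QoI values, and the per-evaluation sum over all samples is the source of the MsMe cost discussed in the computational-times comparison below.

```python
# Direct transcription of the KDE (32) with a standard Gaussian kernel.
import numpy as np

def kde(x, samples, h):
    z = (x[:, None] - samples[None, :]) / h    # one kernel per sample and point
    return np.exp(-z**2 / 2).sum(axis=1) / (h * len(samples) * np.sqrt(2 * np.pi))
```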

Fig. 9
figure9

GC3 and ED2 expansions compared with the KDE estimator in (32). I and II: QoI in (17) σγ = 1.6, III and IV: QoI in (18) σγ = 3

Computational Times

Next, we compare the computational time required to construct a histogram approximation using \(M = 10^{6}\) samples with the time required by the proposed method. For the latter, the moments in (19) are evaluated with numerical quadrature, and a crude histogram approximation using \(M = 10^{4}\) samples is computed. The comparison is carried out for different values of N, i.e., the dimension of the parameter space. The aim of this comparison is to show that the computational time required by the proposed approach is comparable to the one required to obtain an accurate histogram approximation. It is important to remark that the histogram is a discontinuous approximation, regardless of the number of samples employed, whereas the GC and ED expansions are continuous and infinitely differentiable. All computations were performed on a Dell Inspiron 15 5000-series laptop with an Intel Core i3-4030U CPU (1.90 GHz) and 8 GB of RAM. The CPU times are shown in Table 4 and refer to the quantity of interest in (17) with σγ = 1.6. The other simulation parameters are as in the previous section, except N, which is varied from 1 to 4. For the proposed method, the total CPU time reported is the sum of three different costs: the cost of the crude approximation with \(M = 10^{4}\) samples, the cost of the numerical quadrature, which does not involve any sampling, and the cost of performing enough function evaluations to plot the curves, which also does not involve any sampling.

Table 4 Computational times of histogram and proposed method

The results show that, for the values of N considered, the CPU times of the proposed method are lower than those of the histogram. It is true, however, that as N increases, the number of quadrature points necessary for the evaluation of the moments in (19) grows, likely causing the histogram to eventually become faster. In Table 5 we compare the CPU time of a function evaluation using the KDE with the CPU time of a function evaluation using the proposed method. The main difference between the two methods is that the dominant cost for the KDE is associated with online operations, whereas for the proposed method the most expensive operations are done offline. For the KDE, every function evaluation requires the computation of as many kernel values as the total number of samples, and function evaluations count as online operations for the estimation of the PDF. If the sample set is composed of \(M_s\) points and \(M_e\) function evaluations are required to estimate the PDF, the estimated cost of the KDE is proportional to the product \(M_s M_e\) (the influence of the parameter space dimension is negligible). For the proposed method, the cost is dominated by the computation of the moments with numerical quadrature and by the crude histogram computation. These operations are all offline and can be performed once and for all, regardless of the number of function evaluations necessary to estimate the PDF. If \(M_c < M_s\) is the number of samples used for the crude histogram and \(Q_p\) is the number of quadrature points required for the exact one-dimensional numerical quadrature, the estimated cost of the proposed method is proportional to \(M_c + Q_p^{N}\), with N being the dimension of the parameter space.

Table 5 shows CPU time results for N = 1, 2, 3, 4, considering \(M_c = 10^{4}\) and \(M_s = 10^{6}\) or \(M_s = 500\) for the KDE. The KDE with \(M_s = 500\) is the fastest approximation; however, it is also the most inaccurate, as can be seen from Fig. 10. The KDE with \(M_s = 10^{6}\), GC3, and ED2 have a comparable level of accuracy, hence we focus our analysis on the comparison between the proposed method and the KDE with \(M_s = 10^{6}\). For N = 1, 2, 3, the proposed method is faster than a single function evaluation of the KDE with \(M_s = 10^{6}\), whereas for N = 4, one KDE evaluation is approximately nine times faster than the proposed method. It is very likely that in general more than just nine function evaluations will be necessary to appropriately describe the approximated PDF; for instance, in Fig. 10, the plots have been obtained with 46 function evaluations. Hence, the proposed method can compete with the KDE in terms of CPU time.
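To illustrate why pointwise evaluations of the proposed method are cheap once the offline stage is complete, the sketch below evaluates the truncated expansions for a standardized variable, assuming GC3 keeps the Hermite terms up to He3 and ED2 denotes the second-order Edgeworth truncation, in the standard forms (see, e.g., [7]); the cumulants k3 and k4 are assumed to have been computed offline from the quadrature moments.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n

def s(x):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def gc3(x, k3):
    """Gram-Charlier truncation with terms up to He_3 (standardized variable)."""
    he3 = hermeval(x, [0, 0, 0, 1])                # He_3(x) = x**3 - 3x
    return s(x) * (1.0 + k3 / 6.0 * he3)

def ed2(x, k3, k4):
    """Second-order Edgeworth truncation (standardized variable)."""
    he3 = hermeval(x, [0, 0, 0, 1])
    he4 = hermeval(x, [0, 0, 0, 0, 1])             # He_4(x) = x**4 - 6x**2 + 3
    he6 = hermeval(x, [0, 0, 0, 0, 0, 0, 1])       # He_6(x)
    return s(x) * (1.0 + k3 / 6.0 * he3 + k4 / 24.0 * he4 + k3**2 / 72.0 * he6)
```

Each call has a fixed cost per evaluation point, independent of the sample size \(M_s\), in contrast with the \(M_s M_e\) online cost of the KDE.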

Table 5 Computational times for a single evaluation of the KDE and of the proposed method (\(10^{4}\) samples are used for the crude histogram)
Fig. 10 KDE and proposed method (GC3 and ED2) curves associated with the results in Table 5

Discussion

It has been shown that the GC and ED truncated expansions represent a valid alternative to existing methods for the approximation of probability density functions associated with solutions of PDEs with random parameters. Our numerical results suggest that GC and ED provide an accurate estimate when the PDF is nearly Gaussian, which is consistent with the fact that both expansions are built on a Gaussian kernel. Even in the case of a non-Gaussian PDF, the truncated expansions approximated the distributions well. The asymptotic character of the ED expansion makes it more valuable than the GC expansion, because the error can be monitored more reliably as the truncation order is increased. Moreover, ED approximated the distributions better than GC, at least up to the truncation orders considered in this work.

The proposed method is easy to implement and takes advantage of the fact that exact moments can be computed with the SG method, provided that enough quadrature points are considered. Moreover, all the computational burden is associated with offline computations, and pointwise evaluations have a negligible cost. A limitation is the lack of a rigorous procedure to determine the number of terms in the truncated series that yields the best possible approximation. This issue is inherent in the use of truncated series and is present in all works on ED and GC that we were able to find in the literature. While the optimal number of terms to retain will likely be too large for the desired error in the case of a convergent series, it could in principle be determined a priori in the case of an asymptotic expansion such as the Edgeworth. Unfortunately, the lack of explicit error bounds makes this infeasible at the moment.

References

1. Aulisa, E., Capodaglio, G., Ke, G.: Construction of h-refined continuous finite element spaces with arbitrary hanging node configurations and applications to multigrid algorithms. arXiv:1804.10632 (2018)
2. Babuška, I., Nobile, F., Tempone, R.: A stochastic collocation method for elliptic partial differential equations with random input data. SIAM J. Numer. Anal. 45, 1005–1034 (2007)
3. Babuška, I., Tempone, R., Zouraris, G.E.: Galerkin finite element approximations of stochastic elliptic partial differential equations. SIAM J. Numer. Anal. 42, 800–825 (2004)
4. Babuška, I., Tempone, R., Zouraris, G.E.: Solving elliptic boundary value problems with uncertain coefficients by the finite element method: the stochastic formulation. Comput. Methods Appl. Mech. Eng. 194, 1251–1294 (2005)
5. Bell, E.T.: Partition polynomials. Ann. Math. 29, 38–46 (1927)
6. Berberan-Santos, M.N.: Expressing a probability density function in terms of another PDF: a generalized Gram–Charlier expansion. J. Math. Chem. 42, 585–594 (2007)
7. Blinnikov, S., Moessner, R.: Expansions for nearly Gaussian distributions. Astron. Astrophys. Suppl. Ser. 130, 193–205 (1998)
8. Brenn, T., Anfinsen, S.N.: A revisit of the Gram–Charlier and Edgeworth series expansions. Preprint, https://hdl.handle.net/10037/11261 (2017)
9. Brenner, S.C., Scott, L.R.: The Mathematical Theory of Finite Element Methods. Texts in Applied Mathematics, 3rd edn., vol. 15. Springer, New York (2007)
10. Bui-Thanh, T., Ghattas, O., Martin, J., Stadler, G.: A computational framework for infinite-dimensional Bayesian inverse problems part I: the linearized case, with application to global seismic inversion. SIAM J. Sci. Comput. 35, A2494–A2523 (2013)
11. Capodaglio, G.: GitHub webpage. https://github.com/gcapodag/MyFEMuS
12. Chacón, J.E., Duong, T.: Data-driven density derivative estimation, with applications to nonparametric clustering and bump hunting. Electron. J. Statist. 7, 499–532 (2013)
13. Chen, P., Schwab, C.: Model order reduction methods in computational uncertainty quantification. In: Ghanem, R., Higdon, D., Owhadi, H. (eds.) Handbook of Uncertainty Quantification, pp. 937–990. Springer, Cham (2017)
14. Cheng, Y.: Mean shift, mode seeking, and clustering. IEEE Trans. Pattern Anal. Mach. Intell. 17, 790–799 (1995)
15. Ciarlet, P.G.: The Finite Element Method for Elliptic Problems. Classics in Applied Mathematics, vol. 40. SIAM, Philadelphia (2002)
16. Cliffe, K.A., Giles, M.B., Scheichl, R., Teckentrup, A.L.: Multilevel Monte Carlo methods and applications to elliptic PDEs with random coefficients. Comput. Visual. Sci. 14, 3–15 (2011)
17. Contaldi, C.R., Bean, R., Magueijo, J.: Photographing the wave function of the universe. Phys. Lett. B 468, 189–194 (1999)
18. Cramér, H.: On the composition of elementary errors: first paper: mathematical deductions. Scand. Actuar. J. 1928, 13–74 (1928)
19. Cramér, H.: Mathematical Methods of Statistics. Princeton Mathematics Series, vol. 9. Princeton University Press, Princeton (2016)
20. Dashti, M., Stuart, A.M.: The Bayesian approach to inverse problems. In: Ghanem, R., Higdon, D., Owhadi, H. (eds.) Handbook of Uncertainty Quantification, pp. 311–428. Springer, Cham (2017)
21. de Kock, M.B., Eggers, H.C., Schmiegel, J.: Edgeworth versus Gram–Charlier series: x-cumulant and probability density tests. Phys. Part. Nuclei Lett. 8, 1023–1027 (2011)
22. Di Marco, V.B., Bombi, G.G.: Mathematical functions for the representation of chromatographic peaks. J. Chromatogr. A 931, 1–30 (2001)
23. Eggers, H.C., de Kock, M.B., Schmiegel, J.: Determining source cumulants in femtoscopy with Gram–Charlier and Edgeworth series. Modern Phys. Lett. A 26, 1771–1782 (2011)
24. Fan, M., Vittal, V., Heydt, G.T., Ayyanar, R.: Probabilistic power flow studies for transmission systems with photovoltaic generation using cumulants. IEEE Trans. Power Syst. 27, 2251–2261 (2012)
25. Fishman, G.S.: Monte Carlo: Concepts, Algorithms, and Applications. Springer, New York (1996)
26. Frauenfelder, P., Schwab, C., Todor, R.A.: Finite elements for elliptic problems with stochastic coefficients. Comput. Methods Appl. Mech. Eng. 194, 205–228 (2005)
27. Fukunaga, K., Hostetler, L.: The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Trans. Inf. Theory 21, 32–40 (1975)
28. Ghanem, R.G., Spanos, P.D.: Stochastic finite element method: response statistics. In: Ghanem, R.G., Spanos, P.D. (eds.) Stochastic Finite Elements: A Spectral Approach, pp. 101–119. Springer, New York (1991)
29. Gunzburger, M.D., Webster, C.G., Zhang, G.: Stochastic finite element methods for partial differential equations with random input data. Acta Numer. 23, 521–650 (2014)
30. Hernandez, V., Roman, J.E., Vidal, V.: SLEPc: a scalable and flexible toolkit for the solution of eigenvalue problems. ACM Trans. Math. Softw. 31, 351–362 (2005)
31. Huang, S., Quek, S., Phoon, K.: Convergence study of the truncated Karhunen–Loeve expansion for simulation of stochastic processes. Int. J. Numer. Methods Eng. 52, 1029–1043 (2001)
32. Izenman, A.J.: Review papers: recent developments in nonparametric density estimation. J. Amer. Statist. Assoc. 86, 205–224 (1991)
33. Jondeau, E., Rockinger, M.: Gram–Charlier densities. J. Econ. Dyn. Control 25, 1457–1483 (2001)
34. Juszkiewicz, R., Weinberg, D., Amsterdamski, P., Chodorowski, M., Bouchet, F.: Weakly non-linear Gaussian fluctuations and the Edgeworth expansion. arXiv:astro-ph/9308012 (1993)
35. Kendall, M.G.: Advanced Theory of Statistics, vol. I. Charles Griffin, London (1943)
36. Li, C.F., Feng, Y.T., Owen, D.R.J., Li, D.F., Davis, I.M.: A Fourier–Karhunen–Loève discretization scheme for stationary random material properties in SFEM. Int. J. Numer. Methods Eng. 73, 1942–1965 (2008)
37. Ma, X., Zabaras, N.: An efficient Bayesian inference approach to inverse problems based on an adaptive sparse grid collocation method. Inverse Probl. 25, 035013 (2009)
38. Marzouk, Y.M., Najm, H.N., Rahn, L.A.: Stochastic spectral methods for efficient Bayesian solution of inverse problems. J. Comput. Phys. 224, 560–586 (2007)
39. Metropolis, N., Ulam, S.: The Monte Carlo method. J. Amer. Statist. Assoc. 44, 335–341 (1949)
40. Mihoubi, M.: Bell polynomials and binomial type sequences. Discrete Math. 308, 2450–2459 (2008)
41. Ñíguez, T.M., Perote, J.: Forecasting heavy-tailed densities with positive Edgeworth and Gram–Charlier expansions. Oxf. Bull. Econ. Statist. 74, 600–627 (2012)
42. Nobile, F., Tempone, R., Webster, C.G.: An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J. Numer. Anal. 46, 2411–2442 (2008)
43. Nobile, F., Tempone, R., Webster, C.G.: A sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J. Numer. Anal. 46, 2309–2345 (2008)
44. Noh, Y.K., Sugiyama, M., Liu, S., du Plessis, M.C., Park, F.C., Lee, D.D.: Bias reduction and metric learning for nearest-neighbor estimation of Kullback–Leibler divergence. In: Artificial Intelligence and Statistics, pp. 669–677 (2014)
45. O'Brien, M.: Using the Gram–Charlier expansion to produce vibronic band shapes in strong coupling. J. Phys. Condens. Matter 4, 2347 (1992)
46. Olivé, J., Grimalt, J.O.: Gram–Charlier and Edgeworth–Cramér series in the characterization of chromatographic peaks. Anal. Chimica Acta 249, 337–348 (1991)
47. Pender, J.: Gram–Charlier expansion for time varying multiserver queues with abandonment. SIAM J. Appl. Math. 74, 1238–1265 (2014)
48. Petrov, V.V.: Limit Theorems of Probability Theory: Sequences of Independent Random Variables. Oxford University Press, New York (1995)
49. Petrov, V.V.: Sums of Independent Random Variables. Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 82. Springer, Berlin (2012)
50. Popovic, R., Goldsman, D.: Easy Gram–Charlier valuations of options. J. Deriv. 20, 79–97 (2012)
51. Rickman, J., Lawrence, A., Rollett, A., Harmer, M.: Calculating probability densities associated with grain-size distributions. Comput. Mater. Sci. 101, 211–215 (2015)
52. Sasaki, H., Noh, Y.K., Sugiyama, M.: Direct density-derivative estimation and its application in KL-divergence approximation. In: Artificial Intelligence and Statistics, pp. 809–818 (2015)
53. Schevenels, M., Lombaert, G., Degrande, G.: Application of the stochastic finite element method for Gaussian and non-Gaussian systems. In: ISMA2004 International Conference on Noise and Vibration Engineering, pp. 3299–3314 (2004)
54. Sedgewick, R., Flajolet, P.: An Introduction to the Analysis of Algorithms. Pearson Education India, Bengaluru (2013)
55. Stuart, A.M.: Inverse problems: a Bayesian perspective. Acta Numer. 19, 451–559 (2010)
56. Tartakovsky, D.M., Broyda, S.: PDF equations for advective–reactive transport in heterogeneous porous media with uncertain properties. J. Contam. Hydrol. 120–121, 129–140 (2011)
57. Wallace, D.L.: Asymptotic approximations to distributions. Ann. Math. Statist. 29, 635–654 (1958)
58. Wan, X., Karniadakis, G.E.: An adaptive multi-element generalized polynomial chaos method for stochastic differential equations. J. Comput. Phys. 209, 617–642 (2005)
59. Xiu, D., Hesthaven, J.S.: High-order collocation methods for differential equations with random inputs. SIAM J. Sci. Comput. 27, 1118–1139 (2005)
60. Zapevalov, A., Bol'shakov, A., Smolov, V.: Simulating of the probability density of sea surface elevations using the Gram–Charlier series. Oceanology 51, 407–414 (2011)


Acknowledgements

MG and HW thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the program Uncertainty Quantification for Complex Systems: Theory and Methodologies where work on this paper was undertaken. This work was supported by UK EPSRC grant numbers EP/K032208/1 and EP/R014604/1. GC and MG were supported in part by the US Air Force Office of Scientific Research grant FA9550-15-1-0001 and by the US Department of Energy Office of Science grant DE-SC0016591.

Author information

Corresponding author

Correspondence to Max Gunzburger.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Dedicated to Professor Enrique Zuazua on the occasion of his 60th birthday.


Cite this article

Capodaglio, G., Gunzburger, M. & Wynn, H.P. Approximation of Probability Density Functions for PDEs with Random Parameters Using Truncated Series Expansions. Vietnam J. Math. (2021). https://doi.org/10.1007/s10013-020-00465-5


Keywords

  • Gram–Charlier
  • Edgeworth
  • Density estimation
  • Random PDEs
  • SFEM
  • Stochastic Galerkin

Mathematics Subject Classification (2010)

  • 62G07
  • 65N30
  • 65C60