This section is concerned with the derivation of a reliable a posteriori error estimator based on the stochastic residual. In contrast to the derivations in [15, 16, 21], the lognormal coefficient requires a more involved approach, directly related to the weighted function spaces introduced in Sect. 3. In principle, an additional error occurs due to the discretization of the coefficient, which we assume to be negligible. The developed adaptive algorithm enables a computable a posteriori steering of the error components by refining the FE mesh and the anisotropic Hermite polynomial chaos of the solution. An efficient implementation results from the formulation in the TT format, the ranks of which are also set adaptively.
The definition of the operators as in (4.6) leads to a decomposition of the residual
$$\begin{aligned} {\mathcal {R}}(v) = {\mathcal {R}}_+(v) + {\mathcal {R}}_-(v), \end{aligned}$$
with
$$\begin{aligned} {\mathcal {R}}_+(v) := f - {\mathcal {A}}_+(v),\qquad {\mathcal {R}}_-(v) := - {\mathcal {A}}_-(v). \end{aligned}$$
The discrete solution \(w_N \in {\mathcal {V}}_N\) reads
$$\begin{aligned} w_N = \sum _{i=0}^{N-1} \sum _{\mu \in \varLambda } W(i,\mu ) \varphi _i H^{\tau _{\theta \varrho }}_\mu . \end{aligned}$$
We assume that the operator is given in its approximate semi-discrete form \({\mathcal {A}}_+\) and aim to estimate the energy error
$$\begin{aligned} {\Vert u - w_N \Vert _{{\mathcal {A}}_+}^2} = \int _\varGamma \int _D a_{\varDelta ,s} |\nabla (u - w_N)|^2 \, \,\mathrm {d}{x} \mathrm {d}\gamma _{\vartheta \varrho }(y). \end{aligned}$$
Remark 5.1
As stated before, we assume that the error that results from approximating the coefficient is small. Estimation of this error is subject to future research. Work in this direction has e.g. been carried out in [7, 23]. Additionally, we require that the bounds (3.2) and (3.3) still hold, possibly with different constants \({\hat{c}}_{\vartheta \varrho }^+\) and \({\check{c}}_{\vartheta \varrho }^+\). This is for example guaranteed if \(a_{\varDelta ,s}\) is positive, i.e., if
$$\begin{aligned} a_{\varDelta ,s}(x,y) > 0 \qquad \forall x \in D, y \in \varGamma . \end{aligned}$$
Then, since the approximated coefficient is polynomial in y, the arguments in Lemma 3.5 yield the same constants
$$\begin{aligned} {\hat{c}}_{\vartheta \varrho }^+ = {\hat{c}}_{\vartheta \varrho }, \qquad {\check{c}}_{\vartheta \varrho }^+ = {\check{c}}_{\vartheta \varrho }. \end{aligned}$$
We recall Theorem 5.1 from [15] and also provide the proof for the sake of a complete presentation. Note that the result allows for non-orthogonal approximations \(w_N\in {{\mathcal {V}}}_N\).
Theorem 5.2
Let \({{\mathcal {V}}}_N\subset {{\mathcal {V}}}_{\vartheta \varrho }\) be a closed subspace, let \(w_N\in {{\mathcal {V}}}_N\), and let \(u_N\) denote the \({{\mathcal {A}}}_+\)-Galerkin projection of \(u\) onto \({{\mathcal {V}}}_N\). Then it holds
$$\begin{aligned} \Vert u - w_N\Vert _{{{\mathcal {A}}}_+}^2&\le \left( \sup _{v \in {{\mathcal {V}}}_{\theta \varrho } \setminus \{ 0 \}} \frac{| \langle {\mathcal {R}}_+(w_N), ({{\,\mathrm{id}\,}}- {{\mathcal {I}}})v \rangle _{\theta \varrho } |}{{\check{c}}_{\theta \varrho }^+ \Vert v\Vert _{L^2(\varGamma , \gamma ;{{\mathcal {X}}})}} + c_{{\mathcal {I}}} \Vert u_N - w_N\Vert _{{{\mathcal {A}}}_+} \right) ^2 \\&\quad + \Vert u_N - w_N\Vert _{{{\mathcal {A}}}_+}^2. \end{aligned}$$
Here, \({\mathcal {I}}\) denotes the Clément interpolation operator in (3.4) and \(c_{{\mathcal {I}}}\) is the operator norm of \({{\,\mathrm{id}\,}}- {\mathcal {I}}\) with respect to the energy norm \(\Vert \cdot \Vert _{{{\mathcal {A}}}_+}\). The constant \({\check{c}}_{\theta \varrho }^+\) is derived from the assumed coercivity of the bilinear form induced by \({{\mathcal {A}}}_+\) similar to (3.2) and (3.3).
Proof
Due to Galerkin orthogonality of \(u_N\), it holds
$$\begin{aligned} \Vert u-w_N\Vert _{{{\mathcal {A}}}_+}^2 = \Vert u-u_N\Vert _{{{\mathcal {A}}}_+}^2 + \Vert u_N-w_N\Vert _{{{\mathcal {A}}}_+}^2. \end{aligned}$$
By the Riesz representation theorem, the first part is
$$\begin{aligned} \Vert u - u_N\Vert _{{{\mathcal {A}}}_+} = \sup _{v \in {{\mathcal {V}}}_{\theta \varrho } \setminus \{ 0 \}} \frac{|\langle {{\mathcal {R}}}_+(u_N),v\rangle _{\theta \varrho }|}{\Vert v \Vert _{{{\mathcal {A}}}_+}}. \end{aligned}$$
We now utilise the Galerkin orthogonality and introduce the bounded linear map \({{\mathcal {I}}}: {{\mathcal {V}}}_{\theta \varrho } \rightarrow {{\mathcal {V}}}_N\) to obtain
$$\begin{aligned} \Vert u - u_N\Vert _{{{\mathcal {A}}}_+} = \sup _{v \in {{\mathcal {V}}}_{\theta \varrho } \setminus \{ 0 \}} \frac{|\langle {{\mathcal {R}}}_+(u_N), ({{\,\mathrm{id}\,}}- {{\mathcal {I}}})v \rangle _{\theta \varrho }|}{\Vert v \Vert _{{{\mathcal {A}}}_+}}. \end{aligned}$$
Since we do not have access to the Galerkin solution \(u_N\), we reintroduce \(w_N\)
$$\begin{aligned} \Vert u - u_N\Vert _{{{\mathcal {A}}}_+}&\le \sup _{v \in {{\mathcal {V}}}_{\theta \varrho } \setminus \{ 0 \}} \frac{|\langle {{\mathcal {R}}}_+(w_N), ({{\,\mathrm{id}\,}}- {\mathcal {I}}) v \rangle _{\theta \varrho }|}{\Vert v \Vert _{{{\mathcal {A}}}_+}} \\&\qquad + \frac{|\langle {{\mathcal {R}}}_+(u_N) - {{\mathcal {R}}}_+(w_N), ({{\,\mathrm{id}\,}}- {\mathcal {I}}) v \rangle _{\theta \varrho }|}{\Vert v \Vert _{{{\mathcal {A}}}_+}} \\&\le \sup _{v \in {{\mathcal {V}}}_{\theta \varrho } \setminus \{ 0 \}} \frac{|\langle {{\mathcal {R}}}_+(w_N), ({{\,\mathrm{id}\,}}- {\mathcal {I}}) v \rangle _{\theta \varrho }|}{\Vert v \Vert _{{{\mathcal {A}}}_+}} \\&\qquad + \frac{\Vert u_N - w_N\Vert _{{{\mathcal {A}}}_+} \Vert ({{\,\mathrm{id}\,}}- {\mathcal {I}}) v\Vert _{{{\mathcal {A}}}_+}}{\Vert v \Vert _{{{\mathcal {A}}}_+}} \\&\le \sup _{v \in {{\mathcal {V}}}_{\theta \varrho } \setminus \{ 0 \}} \frac{|\langle {{\mathcal {R}}}_+(w_N), ({{\,\mathrm{id}\,}}- {\mathcal {I}}) v \rangle _{\theta \varrho }|}{\Vert v \Vert _{{{\mathcal {A}}}_+}} + c_{{\mathcal {I}}} \Vert w_N - u_N \Vert _{{\mathcal {A}}_+}. \end{aligned}$$
We apply the coercivity of the operator \({{\mathcal {A}}}_+\) to the denominator, which yields the desired result. For the last inequality, we used the boundedness of \({\mathcal {I}}\) in the energy norm by defining the constant as the operator norm
$$\begin{aligned} c_{{\mathcal {I}}} := \sup _{v \in {{\mathcal {V}}}_{\theta \varrho } \setminus \{ 0 \}} \frac{\Vert ({{\,\mathrm{id}\,}}- {\mathcal {I}}) v\Vert _{{{\mathcal {A}}}_+}}{\Vert v \Vert _{{{\mathcal {A}}}_+}}. \end{aligned}$$
\(\square \)
Since the product of the Hermite polynomials for each \(m = 1,\ldots ,M\) has degree at most \(q_m + d_m -2\), it is useful to define the index set
$$\begin{aligned} \varXi := \varDelta + \varLambda&:= \bigl \{ \eta = (\eta _1,\ldots ,\eta _L,0,\ldots ) :\\&\qquad \eta _m = 0,\ldots ,q_m+d_m-2, \; m = 1,\ldots ,M; \\&\qquad \eta _\ell = 0,\ldots ,q_\ell -1, \; \ell = M+1,\ldots ,L \bigr \}. \end{aligned}$$
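Since \(\varXi \) is a tensor product of integer ranges, it can be enumerated directly. The following sketch illustrates the construction with placeholder names q, d, M for the degree bounds above; it is an illustration, not the implementation used here.

```python
from itertools import product

def tensor_index_set(q, d, M):
    """Enumerate eta with eta_m = 0,...,q_m+d_m-2 for the first M dimensions
    and eta_l = 0,...,q_l-1 for the remaining ones (0-based m below)."""
    L = len(q)
    ranges = [range(q[m] + d[m] - 1) if m < M else range(q[m])
              for m in range(L)]
    return list(product(*ranges))

# Example with L = 3 dimensions, the coefficient active in the first M = 2:
q, d, M = [3, 2, 2], [2, 3, 1], 2
Xi = tensor_index_set(q, d, M)
# |Xi| = (3+2-1) * (2+3-1) * 2 = 32 multi-indices
```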
Then, the residual can be split into an active and an inactive part by using the tensor sets \(\varXi \) and \(\varLambda \),
$$\begin{aligned} {\mathcal {R}}_+(w_N)&= f - {\mathcal {A}}_+(w_N) \\&= f + \sum _{\eta \in \varXi } {{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(\cdot ,\eta ) H^{\tau _{\theta \varrho }}_\eta \\&= {\mathcal {R}}_{+,\varLambda }(w_N) + {\mathcal {R}}_{+,\varXi \setminus \varLambda }(w_N), \end{aligned}$$
with
$$\begin{aligned} {\mathcal {R}}_{+,\varLambda }(w_N)&= f + \sum _{\eta \in \varLambda } {{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(\cdot ,\eta ) H^{\tau _{\theta \varrho }}_\eta , \\ {\mathcal {R}}_{+,\varXi \setminus \varLambda }(w_N)&= \sum _{\eta \in \varXi \setminus \varLambda } {{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(\cdot ,\eta ) H^{\tau _{\theta \varrho }}_\eta , \end{aligned}$$
where \({{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(\cdot ,\eta ) \in {\mathcal {X}}^*\) for all \(\eta \in \varXi \).
For all \(\eta \in \varXi \), the function \({{\,\mathrm{res}\,}}\) is given as
$$\begin{aligned} {{\,\mathrm{res}\,}}(x,\eta )&= \sum _{k_1=1}^{r_1} \cdots \sum _{k_M=1}^{r_M} \sum _{k_1'=1}^{s_1} \cdots \sum _{k_L'=1}^{s_L} {{\,\mathrm{res}\,}}_0[k_1,k_1'](x) \\&\quad \times \left( \prod _{m=1}^{M} R_m(k_m,k_m',\eta _m,k_{m+1},k_{m+1}') \prod _{\ell =M+1}^{L} A_\ell (k_\ell ',\eta _\ell ,k_{\ell +1}') \right) \end{aligned}$$
with continuous first component
$$\begin{aligned} {{\,\mathrm{res}\,}}_0[k_1,k_1'](x) = \sum _{i=0}^{N-1} a_0[k_1'](x) W_0(i,k_1) \nabla \varphi _i(x) \end{aligned}$$
and stochastic components for \(m = 1,\ldots ,M\),
$$\begin{aligned}&R_m(k_m,k_m',\eta _m,k_{m+1},k_{m+1}') \\&\quad = \sum _{{\nu _m = 0}}^{q_m-1} \sum _{{\mu _m = 0}}^{d_m-1} A(k_m',\nu _m,k_{m+1}') W_m(k_m,\mu _m,k_{m+1}) \kappa _{\mu _m,\nu _m,\eta _m}. \end{aligned}$$
The function \({{\,\mathrm{res}\,}}\) is again a TT tensor with a continuous first component; its TT ranks are \(r_ms_m\) for \(m=1,\ldots ,M\) and \(s_\ell \) for \(\ell = M+1,\ldots ,L\). The physical dimensions are \(d_m + q_m - 2\) for all \(m = 1,\ldots ,M\) and \(d_\ell -1\) for \(\ell = M+1,\ldots ,L\).
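To make the contraction structure of \({{\,\mathrm{res}\,}}\) concrete, the following sketch evaluates a tiny TT tensor with one doubly-ranked core and one trailing single-ranked core at a fixed spatial point. All shapes, names and data are illustrative stand-ins for \({{\,\mathrm{res}\,}}_0\), \(R_m\) and \(A_\ell \), not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: one coupled core R1 with rank pair (r1, s1) -> (1, s2)
# and one trailing core A2 closing with rank 1.
r1, s1, n1 = 2, 3, 4   # rank pair and mode size of the first stochastic core
s2, n2 = 2, 5          # rank and mode size of the trailing core

res0 = rng.standard_normal((r1, s1))            # res_0[k1, k1'] at a fixed x
R1 = rng.standard_normal((r1, s1, n1, 1, s2))   # R_1(k1, k1', eta_1, k2, k2')
A2 = rng.standard_normal((s2, n2, 1))           # A_2(k2', eta_2, k3')

# Full tensor res(x, .) by summing over all rank indices, as in the formula.
res = np.einsum('ab,abicd,dje->ij', res0, R1, A2)

# Cross-check one entry against the explicit nested sum.
brute = sum(res0[a, b] * R1[a, b, 0, c, d] * A2[d, 1, e]
            for a in range(r1) for b in range(s1)
            for c in range(1) for d in range(s2) for e in range(1))
```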
The above considerations suggest that the error can be decomposed into errors that derive from the respective approximations in the deterministic domain, in the parametric domain and in the ranks. This is indeed the case, as we will see in the following. In a nutshell, if \(u_N\) is the Galerkin solution in \({{\mathcal {V}}}_N\) and \(u_\varLambda \) is the Galerkin solution in the semi-discrete space \({{\mathcal {V}}}(\varLambda )\), then the deterministic error \({{\,\mathrm{err}\,}}_{\mathrm {det}} = \Vert u_\varLambda - u_N\Vert _{{{\mathcal {A}}}_+}\) corresponds to the error of the active residual \({\mathcal {R}}_{+,\varLambda }\), the parametric error \({{\,\mathrm{err}\,}}_{\mathrm {param}} = \Vert u - u_\varLambda \Vert _{{{\mathcal {A}}}_+}\) corresponds to the inactive residual \({\mathcal {R}}_{+,\varXi \setminus \varLambda }\), and the error made by restricting the ranks is the error in the discrete space \({{\,\mathrm{err}\,}}_{\mathrm {disc}}(w_N) = \Vert u_N - w_N\Vert _{{{\mathcal {A}}}_+}\); see Fig. 2 for an illustration.
Deterministic error estimation
We define the deterministic error estimator
$$\begin{aligned} {{\,\mathrm{est}\,}}_{\mathrm {det}}(w_N)^2 := \sum _{T \in {\mathcal {T}}} h_T^2 \Vert \bigl ( f + {{\,\mathrm{div}\,}}\sigma _{\varLambda }^{\theta \varrho }\bigr ) \zeta _{\vartheta \varrho } \Vert _{L^2(\varGamma ,\gamma ;L^2(T))}^2 + \sum _{F} h_F \Vert [\sigma _{\varLambda }^{\theta \varrho }]_F \, \zeta _{\vartheta \varrho } \Vert _{L^2(\varGamma ,\gamma ;L^2(F))}^2, \end{aligned}$$
where the sums run over the elements \(T\) with diameter \(h_T\) and the interior faces \(F\) with diameter \(h_F\) of the triangulation \({{\mathcal {T}}}\), \([\cdot ]_F\) denotes the normal jump across \(F\), and the flux is given by the residual contributions
$$\begin{aligned} \sigma _{\varLambda }^{\theta \varrho } := \sum _{\eta \in \varLambda } {{\,\mathrm{res}\,}}(\cdot , \eta )H_\eta ^{\tau _{\theta \varrho }}. \end{aligned}$$
This estimates the active residual as follows.
Proposition 5.3
For any \(v \in {{\mathcal {V}}}_{\vartheta \varrho }\) and any \(w_N \in {{\mathcal {V}}}_N\), it holds
$$\begin{aligned} \frac{|\langle {\mathcal {R}}_{+,\varLambda }(w_N), ({{\,\mathrm{id}\,}}- {{\mathcal {I}}})v \rangle _{\theta \varrho }|}{\Vert v\Vert _{L^2(\varGamma , \gamma ;{{\mathcal {X}}})}} \le c_{\mathrm {det}}{{\,\mathrm{est}\,}}_{\mathrm {det}}(w_N). \end{aligned}$$
Proof
By localization to the elements of the triangulation \({{\mathcal {T}}}\) and integration by parts, the residual pairing splits into volume contributions on the elements \(T\) and jump contributions on the faces \(F\). The Cauchy-Schwarz inequality and the interpolation properties (3.4) then bound these contributions by the local terms of the deterministic estimator.
Since the overlaps of the patches \(\omega _T\) and \(\omega _F\) are bounded uniformly, a Cauchy-Schwarz estimate leads to
$$\begin{aligned} |\langle {\mathcal {R}}_{+,\varLambda }(w_N), ({{\,\mathrm{id}\,}}- {\mathcal {I}})v \rangle _{\theta \varrho }| \le c_{\mathrm {det}}{{\,\mathrm{est}\,}}_{\mathrm {det}}(w_N) \Vert v \Vert _{L^2(\varGamma , \gamma ;{{\mathcal {X}}})}. \end{aligned}$$
Here, the constant \(c_{\mathrm {det}}\) depends on the properties of the interpolation operator (3.4). \(\square \)
Remark 5.4
Note that an \(L^2\)-integration of the residual, which is an element of the dual space \({{\mathcal {V}}}_{\vartheta \varrho }^*\), is possible since the solution consists of finite element functions. These are piecewise polynomial and thus smooth on each element \(T \in {\mathcal {T}}\).
Tail error estimation
The parametric or tail estimator is given by
$$\begin{aligned} {{\,\mathrm{est}\,}}_{\mathrm {param}}(w_N) := \left( \int _\varGamma \int _D \Bigl ( \sum _{\eta \in \varXi \setminus \varLambda } {{\,\mathrm{res}\,}}(x,\eta ) H^{\tau _{\theta \varrho }}_{\eta }(y) \, \zeta _{\vartheta \varrho }(y) \Bigr )^2 \, \,\mathrm {d}{x} \mathrm {d}\gamma (y) \right) ^{1/2} \end{aligned}$$
and bounds the parametric error as follows.
Proposition 5.5
For any \(v \in {{\mathcal {V}}}_{\vartheta \varrho }\) and any \(w_N \in {{\mathcal {V}}}_N\), it holds
$$\begin{aligned} \frac{|\langle {\mathcal {R}}_{+,\varXi \setminus \varLambda }(w_N) , ({{\,\mathrm{id}\,}}- {{\mathcal {I}}})v \rangle _{\theta \varrho }|}{\Vert v\Vert _{L^2(\varGamma , \gamma ;{{\mathcal {X}}})}} \le {{\,\mathrm{est}\,}}_{\mathrm {param}}(w_N). \end{aligned}$$
Proof
Recall that \(\langle {\mathcal {R}}_{+,\varXi \setminus \varLambda }(w_N) , {\mathcal {I}} v \rangle _{\theta \varrho } = 0\) since \({\mathcal {I}} v \in {{\mathcal {V}}}_N\).
Instead of factorizing out the \(L^\infty \)-norm of the diffusion coefficient as in [15, 16, 21], we use the Cauchy-Schwarz inequality to obtain
$$\begin{aligned}&\langle {\mathcal {R}}_{+, \varXi \setminus \varLambda }(w_N), v \rangle _{\theta \varrho } \\&\quad = \int _\varGamma \int _D \Bigl ( \sum _{\eta \in \varXi \setminus \varLambda } {{\,\mathrm{res}\,}}(x,\eta ) H^{\tau _{\theta \varrho }}_\eta (y) \Bigr ) \cdot \nabla v(x,y) \, \zeta _{\vartheta \varrho }(y) \,\mathrm {d}{x} \mathrm {d}\gamma (y) \\&\quad \le \left( \int _\varGamma \int _D \Bigl ( \sum _{\eta \in \varXi \setminus \varLambda } {{\,\mathrm{res}\,}}(x,\eta ) H^{\tau _{\theta \varrho }}_\eta (y) \, \zeta _{\vartheta \varrho }(y) \Bigr )^2 \,\mathrm {d}{x} \mathrm {d}\gamma (y) \right) ^{1/2} \Vert v \Vert _{L^2(\varGamma , \gamma ;{{\mathcal {X}}})} \\&\quad = {{\,\mathrm{est}\,}}_{\mathrm {param}}(w_N) \Vert v \Vert _{L^2(\varGamma , \gamma ;{{\mathcal {X}}})}. \end{aligned}$$
\(\square \)
Algebraic error estimation
In order to define the algebraic error estimator, we need to state the linear basis change operator that translates integrals over two Hermite polynomials in the measure \(\gamma _{\vartheta \varrho }\) to the measure \(\gamma \):
$$\begin{aligned} {\mathbf {H}}_{\vartheta \varrho \rightarrow 0}&: {\mathbb {R}}^{N \times d_1 \times \cdots \times d_M } \rightarrow {\mathbb {R}}^{N \times d_1 \times \cdots \times d_M}, \\ {\mathbf {H}}_{\vartheta \varrho \rightarrow 0}&:= Z_0 \otimes Z_1 \otimes \cdots \otimes Z_M, \\ Z_0(i,j)&:= \int _D \nabla \varphi _i \cdot \nabla \varphi _j \,\mathrm {d}{x}, \\ Z_m(\mu _m,\mu _m')&:= \int _{{\mathbb {R}}} H^{\tau _{\theta \varrho }}_{\mu _m}(y_m) H^{\tau _{\theta \varrho }}_{\mu _m'}(y_m) \, \,\mathrm {d}{\gamma _m}(y_m) \quad \text { for all } m=1,\ldots ,M. \end{aligned}$$
This yields the estimator
$$\begin{aligned} {{\,\mathrm{est}\,}}_{\mathrm {disc}}(w_N) := \Vert ({\mathbf {A}}(W) - F) {\mathbf {H}}_{\vartheta \varrho \rightarrow 0}^{-1/2} \Vert _{\ell ^2({\mathbb {R}}^{N \times d_1 \times \cdots \times d_M})}. \end{aligned}$$
(5.1)
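The norm in (5.1) satisfies \(\Vert R\,{\mathbf {H}}^{-1/2}\Vert _{\ell ^2}^2 = R^\top {\mathbf {H}}^{-1} R\), so it can be evaluated with a linear solve instead of forming an inverse square root. A minimal sketch with random Kronecker factors and a random residual as stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def spd(n):
    """Random symmetric positive definite factor (stand-in for a Z_m)."""
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

# Kronecker-structured operator H = Z_0 (x) Z_1 (x) Z_2 (small sizes only).
Z0, Z1, Z2 = spd(3), spd(2), spd(2)
H = np.kron(np.kron(Z0, Z1), Z2)

R = rng.standard_normal(H.shape[0])   # flattened residual A(W) - F (stand-in)

# est_disc = ||H^{-1/2} R||_2 = sqrt(R^T H^{-1} R), via a solve with H.
est_disc = np.sqrt(R @ np.linalg.solve(H, R))
```

In the actual TT setting the solve would of course exploit the Kronecker factors dimension-wise instead of assembling \({\mathbf {H}}\) as a dense matrix.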
Proposition 5.6
For any \(w_N \in {{\mathcal {V}}}_N\) and the Galerkin solution \(u_N \in {{\mathcal {V}}}_N\), it holds
$$\begin{aligned} \Vert u_N - w_N\Vert _{{{\mathcal {A}}}_+} \le ({\check{c}}_{\vartheta \varrho }^+)^{-1} {{\,\mathrm{est}\,}}_{\mathrm {disc}}(w_N). \end{aligned}$$
Proof
For \(v_N = \sum _{i=0}^{N-1} \sum _{\mu \in \varLambda } V(i,\mu ) \varphi _i H^{\tau _{\theta \varrho }}_\mu \in {\mathcal {V}}_N\), it holds
$$\begin{aligned} \int _\varGamma \int _D \nabla v_N \cdot \nabla v_N \, \,\mathrm {d}{x} \mathrm {d}\gamma (y) = \langle V {\mathbf {H}}_{\vartheta \varrho \rightarrow 0}, V \rangle = {\Vert V {\mathbf {H}}_{\vartheta \varrho \rightarrow 0}^{1/2} \Vert ^2_{\ell ^2({\mathbb {R}}^{N \times d_1 \times \cdots \times d_M})}}. \end{aligned}$$
With this and using the coercivity of \({{\mathcal {A}}}_+\), we can see that
$$\begin{aligned} \Vert w_N - u_N \Vert _{{{\mathcal {A}}}_+}^2&={ \langle {\mathcal {A}}_+(w_N - u_N), (w_N - u_N) \rangle _{\theta \varrho }} \\&= \langle {\mathbf {A}}W - F , W - U \rangle \\&= \langle ({\mathbf {A}}W - F) {\mathbf {H}}_{\vartheta \varrho \rightarrow 0}^{-1/2} , (W - U) {\mathbf {H}}_{\vartheta \varrho \rightarrow 0}^{1/2} \rangle \\&\le \Vert ({\mathbf {A}}W - F) {\mathbf {H}}_{\vartheta \varrho \rightarrow 0}^{-1/2}\Vert _{\ell ^2({\mathbb {R}}^{N \times d_1 \times \cdots \times d_M})} \Vert w_N - u_N \Vert _{L^2(\varGamma ,\gamma ;{\mathcal {X}})} \\&\le ({\check{c}}_{\vartheta \varrho }^+)^{-1} \Vert ({\mathbf {A}}W - F) {\mathbf {H}}_{\vartheta \varrho \rightarrow 0}^{-1/2}\Vert _{\ell ^2({\mathbb {R}}^{N \times d_1 \times \cdots \times d_M})} \Vert w_N - u_N \Vert _{{\mathcal {A}}_+} \end{aligned}$$
and thus
$$\begin{aligned} \Vert w_N - u_N \Vert _{{\mathcal {A}}_+} \le ({\check{c}}_{\vartheta \varrho }^+)^{-1} {{\,\mathrm{est}\,}}_{\mathrm {disc}}(w_N). \end{aligned}$$
\(\square \)
Overall error estimation
A combination of the above estimates yields an overall error estimator.
Corollary 5.7
For any \(w_N \in {{\mathcal {V}}}_N\), the energy error can be bounded by
$$\begin{aligned} \Vert u - w_N\Vert _{{{\mathcal {A}}}_+}^2&\le ({\check{c}}_{\vartheta \varrho }^+)^{-2} {{\,\mathrm{est}\,}}_{\mathrm {all}}(w_N)^2 \end{aligned}$$
with the error estimator given by
$$\begin{aligned} {{\,\mathrm{est}\,}}_{\mathrm {all}}(w_N)^2&:= \Bigl ( c_{\mathrm {det}}{{\,\mathrm{est}\,}}_{\mathrm {det}}(w_N) + {{\,\mathrm{est}\,}}_{\mathrm {param}}(w_N) + c_{{{\mathcal {I}}}} {{\,\mathrm{est}\,}}_{\mathrm {disc}}(w_N) \Bigr )^2 \\&\quad + {{\,\mathrm{est}\,}}_{\mathrm {disc}}(w_N)^2. \end{aligned}$$
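A direct transcription of \({{\,\mathrm{est}\,}}_{\mathrm {all}}\) with purely illustrative values for the constants and component estimators:

```python
# All numbers below are illustrative stand-ins, not values from the paper.
c_det, c_I = 1.5, 2.0                      # assumed interpolation constants
e_det, e_param, e_disc = 0.3, 0.1, 0.05    # assumed component estimators
est_all_sq = (c_det * e_det + e_param + c_I * e_disc) ** 2 + e_disc ** 2
est_all = est_all_sq ** 0.5
```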
Remark 5.8
In order to get suitable measures for the estimators, the squared density \(\zeta _{\vartheta \varrho }^2\) appears, which upon scaling with
$$\begin{aligned} c_{\sigma } := \prod _{\ell =1}^L \frac{1}{\sigma _\ell \sqrt{2 - \sigma _\ell ^2}} \end{aligned}$$
again is a Gaussian measure with standard deviation \(\sigma ' = (\sigma _\ell ')_{1\le \ell \le L}\) for
$$\begin{aligned} \sigma _\ell ' := \frac{\sigma _\ell }{\sqrt{2 - \sigma _\ell ^2}}. \end{aligned}$$
First, this adds the restriction on \(\vartheta \) that the argument of the square root must be positive. This is fulfilled if \(\exp (2 \vartheta \varrho \alpha _1) < 2\), since \((\alpha _m)_{1\le m \le M}\) is a decreasing sequence, and can be ensured by choosing \(\vartheta \) small enough. Second, it is important to check whether the new measure is weaker or stronger than \(\gamma _{\vartheta \varrho }\) [49], i.e., which space contains the other. Since
$$\begin{aligned} \sigma _\ell ' = \frac{\sigma _\ell }{\sqrt{2 - \sigma _\ell ^2}} = \frac{\exp (\vartheta \varrho \alpha _\ell )}{\sqrt{2 - \exp (2\vartheta \varrho \alpha _\ell )}} \ge \exp (\vartheta \varrho \alpha _\ell ) = \sigma _\ell , \end{aligned}$$
functions that are integrable with respect to the measure \(\gamma _{\vartheta \varrho }\) are not necessarily integrable with respect to the squared measure. However, since f is independent of the parameters and \({\mathcal {A}}_+(w_N) \in {\mathcal {X}}^* \otimes {\mathcal {Y}}(\varXi )\) has a polynomial chaos expansion of finite degree, the residual \({\mathcal {R}}_+(w_N)\) is integrable over the parameters for any Gaussian measure and therefore it is also integrable with respect to the squared measure.
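The measure manipulation in this remark can be checked numerically in one dimension. Assuming that \(\zeta _\sigma \) denotes the density of \(N(0,\sigma ^2)\) with respect to the standard Gaussian \(\gamma \) (our reading of \(\zeta _{\vartheta \varrho }\) per coordinate), the sketch below verifies that \(\zeta _\sigma ^2 \,\mathrm {d}\gamma \) has total mass \(1/(\sigma \sqrt{2-\sigma ^2})\) and, after normalization, variance \(\sigma ^2/(2-\sigma ^2)\):

```python
import numpy as np

sigma = 1.2                      # some sigma with sigma^2 < 2, as required
y = np.linspace(-12.0, 12.0, 200001)
dy = y[1] - y[0]

gamma = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)                 # density of gamma
zeta = (1.0 / sigma) * np.exp(-y**2 / 2 * (1 / sigma**2 - 1))  # dN(0,s^2)/dgamma

mass = np.sum(zeta**2 * gamma) * dy                   # total mass of zeta^2 dgamma
second = np.sum(y**2 * zeta**2 * gamma) * dy / mass   # variance after normalization

c_sigma = 1.0 / (sigma * np.sqrt(2 - sigma**2))       # predicted mass
sigma_prime_sq = sigma**2 / (2 - sigma**2)            # predicted variance (sigma')^2
```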
Efficient computation of the different estimators
The error estimators can be calculated efficiently in the TT format. For each element \(T \in {\mathcal {T}}\) of the triangulation, the residual estimator is given by
$$\begin{aligned} {{\,\mathrm{est}\,}}_{\mathrm {det},T}(w_N)^2&= {h_T^2 \Vert \bigl ( f + {{\,\mathrm{div}\,}}\sigma _{\varLambda }^{\theta \varrho }(w_N)\bigr ) \zeta _{\vartheta \varrho } \Vert _{L^2(\varGamma ,\gamma ;L^2(T))}^2} \\&= h_T^2 \int _\varGamma \int _T \Biggl (f + \sum _{\eta \in \varLambda } {{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(x,\eta ) H^{\tau _{\theta \varrho }}_\eta \Biggr )^2 \, \zeta _{\vartheta \varrho }^2 \,\mathrm {d}{x} \mathrm {d}\gamma (y) \\&= h_T^2 (f,f)_{L^2(T)}\int _\varGamma \zeta _{\vartheta \varrho }^2 \, \mathrm {d}\gamma (y) \\&\quad + 2 h_T^2 \sum _{\eta \in \varLambda } ( f, {{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(x,\eta ))_{L^2(T)} \int _\varGamma H^{\tau _{\theta \varrho }}_\eta \, \zeta _{\vartheta \varrho }^2 \mathrm {d}\gamma (y) \\&\quad + \sum _{\eta \in \varLambda } \sum _{\eta '\in \varLambda } ( {{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(x,\eta ), {{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(x,\eta ') )_{L^2(T)} \\&\quad \times \int _\varGamma H^{\tau _{\theta \varrho }}_\eta H^{\tau _{\theta \varrho }}_{\eta '} \, \zeta _{\vartheta \varrho }^2 \mathrm {d}\gamma (y). \end{aligned}$$
A complication caused by the change of the measure to \(\gamma \) and the involved weight \(\zeta _{\vartheta \varrho }^2\) is that the shifted Hermite polynomials \(H^{\tau _{\theta \varrho }}\) are no longer orthogonal with respect to this measure. However, orthogonality can be restored easily by computing the basis change integrals beforehand. This results in another tensor product operator, defined element-wise for \(\eta ,\eta ' \in \varXi \) by
$$\begin{aligned} \tilde{{\mathbf {H}}}(\eta ,\eta ')&:= {\tilde{Z}}_1(\eta _1,\eta _1') \cdots {\tilde{Z}}_L(\eta _L,\eta _L'), \\ {\tilde{Z}}_\ell (\eta _\ell ,\eta _\ell ')&:= \int _{{\mathbb {R}}} H^{\tau _{\theta \varrho }}_{\eta _\ell } H^{\tau _{\theta \varrho }}_{\eta '_\ell } \, \zeta _{\vartheta \varrho ,\ell }^2 \, \mathrm {d}\gamma _\ell (y_\ell ). \end{aligned}$$
This operator encodes the basis change to the squared measure and can be inserted in order to calculate the scalar product. With this, the estimator takes the form
$$\begin{aligned} {{\,\mathrm{est}\,}}_{\mathrm {det},T}(w_N)^2&= h_T^2 (f,f)_{L^2(T)} \int _\varGamma \zeta _{\vartheta \varrho }^2 \, \mathrm {d}\gamma (y) \\&\quad + 2 h_T^2 \sum _{\eta \in \varLambda } \tilde{{\mathbf {H}}}(\eta ,0) ( f, {{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(x,\eta ))_{L^2(T)} \\&\quad + \sum _{\eta \in \varLambda } \sum _{\eta '\in \varLambda } \tilde{{\mathbf {H}}}(\eta ,\eta ') ( {{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(x,\eta ), {{\,\mathrm{div}\,}}{{\,\mathrm{res}\,}}(x,\eta ') )_{L^2(T)}. \end{aligned}$$
Since \(\tilde{{\mathbf {H}}}\) is a tensor product operator, this summation can be performed component-wise, i.e., by a matrix-vector multiplication of every component of the operator \(\tilde{{\mathbf {H}}}\) with the corresponding component of the tensor function \({{\,\mathrm{res}\,}}\).
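This component-wise evaluation can be checked on a tiny dense example, where \(\tilde{{\mathbf {H}}} = {\tilde{Z}}_1 \otimes {\tilde{Z}}_2\) and the coefficient tensor are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
Zt1 = rng.standard_normal((4, 4)); Zt1 = Zt1 + Zt1.T   # symmetric factor Z~_1
Zt2 = rng.standard_normal((3, 3)); Zt2 = Zt2 + Zt2.T   # symmetric factor Z~_2
a = rng.standard_normal((4, 3))    # coefficient tensor (stand-in for L2(T) data)

# Mode-wise application: contract mode 1 with Zt1 and mode 2 with Zt2,
# then take the inner product with a -- the full Kronecker matrix is never formed.
val_modewise = np.einsum('ij,ik,jl,kl->', a, Zt1, Zt2, a)

# Dense reference: the same quadratic form with the assembled operator.
val_full = a.ravel() @ np.kron(Zt1, Zt2) @ a.ravel()
```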
Similarly, for the jump over the edge \(F\) we obtain the estimator
$$\begin{aligned} {{\,\mathrm{est}\,}}_{\mathrm {det},F}(w_N)^2 = h_F \sum _{\eta \in \varLambda } \sum _{\eta '\in \varLambda } \tilde{{\mathbf {H}}}(\eta ,\eta ') \bigl ( [{{\,\mathrm{res}\,}}(\cdot ,\eta )]_F, [{{\,\mathrm{res}\,}}(\cdot ,\eta ')]_F \bigr )_{L^2(F)}, \end{aligned}$$
where \([\cdot ]_F\) denotes the normal jump across \(F\).
Analogously to the affine case dealt with in [21], both of these estimators can then be computed efficiently in the TT format. The parametric error estimator \({{\,\mathrm{est}\,}}_{\mathrm {param}}(w_N)\) can be estimated in a similar way.
To gain additional information about the residual influence of certain stochastic dimensions, we sum over specific index sets. For every \(m\), let \(\varXi _m \subset \varXi \setminus \varLambda \) denote the set of indices \(\eta \) whose \(m\)-th component lies in the inactive range \(\eta _m = q_m,\ldots ,q_m+d_m-2\), while \(\eta _\ell = 0,\ldots ,q_\ell -1\) for all \(\ell \ne m\). For every \(m=1,2,\ldots \) and \(w_N \in {\mathcal {V}}_N\) we define
$$\begin{aligned} {{\,\mathrm{est}\,}}_{\mathrm {param},m}(w_N)^2&:= \int _\varGamma \int _D \zeta _{\vartheta \varrho }^2 \Bigl | \sum _{\eta \in \varXi _m} {{\,\mathrm{res}\,}}(x,\eta ) H^{\tau _{\theta \varrho }}_\eta \Bigr |^2 \, \,\mathrm {d}{x}\mathrm {d}\gamma (y). \end{aligned}$$
Using the same arguments and notation as above, we can simplify
$$\begin{aligned} {{\,\mathrm{est}\,}}_{\mathrm {param},m}(w_N)^2&= \int _D \sum _{\eta \in \varXi _m} \Bigl ( {{\,\mathrm{res}\,}}(x,\eta ) \cdot \sum _{\eta '\in \varXi _m} \tilde{{\mathbf {H}}}(\eta ,\eta ') {{\,\mathrm{res}\,}}(x,\eta ') \Bigr ) \, \,\mathrm {d}{x}. \end{aligned}$$
These operations, including the calculation of the discrete error estimator (5.1), can be executed efficiently in the TT format.