Adaptive stochastic Galerkin FEM for lognormal coefficients in hierarchical tensor representations

Stochastic Galerkin methods for non-affine coefficient representations are known to cause major difficulties from theoretical and numerical points of view. In this work, an adaptive Galerkin FE method for linear parametric PDEs with lognormal coefficients discretized in Hermite chaos polynomials is derived. It employs problem-adapted function spaces to ensure solvability of the variational formulation. The inherently high computational complexity of the parametric operator is made tractable by using hierarchical tensor representations. For this, a new tensor train format of the lognormal coefficient is derived and verified numerically. The central novelty is the derivation of a reliable residual-based a posteriori error estimator. This can be regarded as a unique feature of stochastic Galerkin methods. It allows for an adaptive algorithm to steer the refinements of the physical mesh and the anisotropic Wiener chaos polynomial degrees. For the evaluation of the error estimator to become feasible, a numerically efficient tensor format discretization is developed. Benchmark examples with unbounded lognormal coefficient fields illustrate the performance of the proposed Galerkin discretization and the fully adaptive algorithm.


Introduction
In the thriving field of Uncertainty Quantification (UQ), efficient numerical methods for the approximate solution of random PDEs have been a topic of vivid research. As a common benchmark problem, one often considers the Darcy equation (as a model for flow through a porous medium) with different types of random coefficients in order to assess the efficiency of a numerical approach. Two important properties are the length of the expansion of random fields, which often directly translates to the number of independent random variables describing the variability in the model, and the type of dependence on these random variables. The affine case with uniform random variables has been studied extensively, since it represents a rather simple model which can easily be treated with standard methods. In contrast, the lognormal case with Gaussian random variables is quite challenging, from the analytical as well as the numerical point of view. A theory for the solvability of linear elliptic PDEs with such unbounded coefficients (and hence a lack of uniform ellipticity) in a variational formulation was only developed recently in [49,25,41]. Computationally, the problems quickly become difficult or even intractable with many stochastic dimensions, which might be required to accurately represent the stochasticity in the random field expansion. This paper is concerned with the development of an efficient numerical method for this type of problem.
While popular sample-based Monte Carlo methods obtain dimension-independent convergence rates, these rates are rather low despite the often encountered higher regularity of the parameter dependence. Moreover, such methods can only be used to evaluate functionals of the solution (QoIs = quantities of interest), and reliable a posteriori error control is usually not feasible. Some recent developments in this field can e.g. be found in [33,32,36,38] for the model problem with a lognormal coefficient. Some ideas on a posteriori adaptivity for Monte Carlo methods can e.g. be found in [20,11].
An alternative are functional (often called spectral) approximations, which for instance are obtained by Stochastic Collocation (SC) [2,44,43], the related Multilevel Quadrature (MLQ) [31,30] and Stochastic Galerkin (SG) methods. The latter in particular are popular in the engineering sciences since they can be perceived as an extension of classical finite element methods (FEM). These approaches provide a complete parameter-to-solution map based on which e.g. statistical moments of the stochastic solution can be evaluated. Notably, the regularity of the solution can be exploited in order to obtain quasi-optimal convergence rates. However, the number of random variables and nonlinear parameter representations have a significant impact on the computational feasibility, and techniques for model order reduction are required. Collocation methods with pointwise evaluations in the parameter space are usually constructed either based on some a priori knowledge or by means of an iterative refinement algorithm which takes into account the hierarchical surplus on possible new discretization levels. While these approaches work reasonably well, methods for reliable error control do not seem immediate since the approximation relies only on interpolation properties. Nevertheless, for the affine case and under certain assumptions, first ideas were recently presented in [28].
The computationally more elaborate Stochastic Galerkin methods (we usually write SGFEM for Stochastic Galerkin FEM) carry out an orthogonal projection with respect to the energy norm onto a discrete space which is usually spanned by a tensor basis consisting of FE basis functions in physical space and polynomial chaos polynomials orthogonal with respect to the joint parameter measure in (stochastic) parameter space. The use of global polynomials is justified by the high (analytic) regularity of the solution map with respect to the parameters [8,34,9]. However, in particular the large computational cost of Galerkin methods makes adaptivity and model reduction techniques a necessity.
In order to achieve this, different paths have been pursued successfully. As a first approach, sparse approximations as in [15,16,19] or [4,10,5] with either residual-based or hierarchical a posteriori error estimators can be computed. Here, the aim is to restrict an exponentially large discrete basis to the most relevant functions explicitly by iteratively constructing a quasi-optimal subset. In [16], convergence of the adaptive algorithm could be shown. Moreover, adjoint-based error estimators are considered in [6,48].
As a second approach, an adaptive discretization in hierarchical tensor representations can be derived as described in [21]. These modern compression formats have lately been investigated intensively in the numerical analysis community [29,3,45,46]. It has been shown that, under appropriate assumptions, the curse of dimensionality can be alleviated, particularly when employed with typical random PDE problems in UQ, see [12,13] for examples with sample-based reconstruction strategies. Such representations can be understood as an implicit model order reduction technique, closely related to (but more general than) e.g. reduced basis methods.
In the mentioned adaptive approaches, the FE mesh for the physical space and the parametric polynomial chaos space are adapted automatically with respect to the considered problem. In the case of tensor approximations, the ranks of the representation are adjusted as well.
However, in all adaptive SGFEM research, so far only the affine case with a uniformly elliptic coefficient has been considered. In this paper, we extend the ASGFEM approach developed in [21] to the significantly more involved case of lognormal (non-affine) coefficients. This poses several severe complications, analytically and numerically. Analytical aspects have recently been tackled in [25,26,41,34]. Numerically, computationally efficient Galerkin methods are quite difficult to construct for this case and have not been devised before. Compression techniques and adaptivity are most certainly required in order to make these problems tractable with SGFEM, as described in this paper. Of particular interest is the construction of a computable a posteriori error estimator, which also greatly benefits from using tensor formats. In order to obtain a well-posed discretization, problem-adapted spaces according to the presentation in [50] are used.
The main contributions of this work are a representation of the coefficient in the tensor train (TT) format, the operator discretization in tensor format, and the derivation of a reliable residual-based error estimator. This then serves as the basis for an adaptive algorithm which steers all discretization parameters of the SGFEM. The performance of the proposed method is demonstrated with some benchmark problems. Notably, the random field models used are not artificially bounded or shifted away from zero.
We point out that, to our knowledge, an SGFEM for the lognormal case has so far only been practically computed in the weighted function space setting in [42] for a small 1d model as a proof of concept. Moreover, there has not been any adaptive numerical method with reliable a posteriori error estimation as derived in this work. However, we note that our approach relies on the assumption that the coefficient is discretized sufficiently accurately and hence the related discretization error can be neglected. In practice, this can be ensured with high probability by sampling the error of the discrete coefficient. Additionally, since constants in the error bound can become quite large, we interpret the error estimate as a refinement indicator.
It should be noted that a functional adaptive evaluation of the forward map allows for the derivation of an explicit adaptive Bayesian inversion with functional tensor representations as in [17]. The results of the present work lay the groundwork for a similar approach with a Gaussian prior assumption. This will be the topic of future research. Moreover, the described approach enables the construction of SGFEM with arbitrary densities (approximated in hierarchical tensor formats). This generalization should also be examined in more detail in further research. Lastly, while sparse discretizations seem infeasible for the lognormal coefficient, a transformation [52] yields a convection-diffusion formulation of the problem with affine parameter dependence, which then again is amenable to an adaptive sparse SGFEM. This direction is examined in [14].
The structure of this paper is as follows: We first introduce the setting of parametric PDEs with our model problem in Section 2. The variational formulation in problem-dependent weighted function spaces and the finite element (FE) setting are described in Section 3. The employed tensor formats and the tensor representations of the coefficient and the differential operator are examined in Section 4. As a central part, we derive the a posteriori error estimators in Section 5 and define a fully adaptive algorithm in Section 6, including efficient ways to compute error indicators for physical and stochastic refinement. Numerical examples in Section 7 illustrate the performance of the presented method and conclude the paper.

Setting and Discretization
In the following, we introduce the considered model problem formally, present its weak formulation and describe the employed discretizations in finite dimensional function spaces. We closely follow the presentations in [50,34,26] regarding the lognormal problem in problem-dependent function spaces. In [21], a related formulation for the solution and evaluation of a posteriori error estimators for parametric PDEs with affine coefficient fields in hierarchical tensor formats is derived.
2.1. Model problem. We assume some bounded domain D ⊂ R^d, d = 1, 2, 3, with Lipschitz boundary ∂D and a probability space (Ω, F, P). For P-almost all ω ∈ Ω, we consider the random elliptic problem

(2.1)  −div(a(x, ω)∇u(x, ω)) = f(x) for x ∈ D,  u(x, ω) = 0 for x ∈ ∂D.

The coefficient a : D × Ω → R denotes a lognormal, isotropic diffusion coefficient, i.e., log(a) is an isotropic Gaussian random field.
Remark 2.1. The source term f ∈ L²(D) is assumed deterministic. However, it would not introduce fundamental additional difficulties to also model f and the boundary conditions as stochastic fields not correlated to the coefficient a(x, ω), as long as appropriate integrability of the data is given.
For the coefficient a(x, ω) of (2.1), we assume a Karhunen-Loève type expansion of b := log(a) of the form

b(x, ω) = Σ_{m=1}^∞ b_m(x) Y_m(ω),  x ∈ D, P-almost all ω ∈ Ω.
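To make this concrete, the following minimal Python sketch evaluates a truncated version of such a lognormal field; the sine modes and the algebraic amplitude decay m^(-2) are illustrative assumptions of ours, not the covariance model used in the benchmarks later in the paper.

```python
import numpy as np

def lognormal_coeff(x, y, decay=2.0):
    """Evaluate a(x, y) = exp(b(x, y)) for a truncated KL-type expansion
    b(x, y) = sum_m b_m(x) y_m with hypothetical modes
    b_m(x) = m**(-decay) * sin(m * pi * x) on D = (0, 1)."""
    b = sum(m ** (-decay) * np.sin(m * np.pi * x) * y_m
            for m, y_m in enumerate(y, start=1))
    return np.exp(b)

# one realisation with M = 5 independent standard Gaussian parameters
rng = np.random.default_rng(7)
y = rng.standard_normal(5)
x = np.linspace(0.0, 1.0, 11)
a = lognormal_coeff(x, y)
```

By construction the field is positive but neither bounded away from zero nor bounded from above uniformly in y, which is exactly the analytical difficulty discussed above.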
Here, the parameter vector Y = (Y_m)_{m∈N} consists of independent standard Gaussian random variables in R. Then, by passing to the image space (R^N, B(R^N), γ) with the Borel σ-algebra B(R^N) of all open sets of R^N and the Gaussian product measure γ = ⊗_{m∈N} γ_m, we can consider the parameter vector y = (y_m)_{m∈N} instead of ω. For any positive sequence β ∈ ℓ¹(N), we define the set

Γ_β := { y ∈ R^N : Σ_{m∈N} β_m |y_m| < ∞ }.

The set Γ_β of admissible parameter vectors is γ-measurable and of full measure.
Lemma 2.2 ([34] Lem. 2.1). For any sequence β ∈ ℓ¹(N), there holds γ(Γ_β) = 1.

For any y ∈ Γ_β, we define the deterministic parametric coefficient a(x, y) := exp(Σ_{m∈N} b_m(x) y_m). Due to Lemmas 2.2 and 2.3, we consider Γ = Γ_β as the parameter space instead of R^N. By Lemma 2.3, the stochastic coefficient a(x, y) is well defined, bounded from above and admits a positive lower bound for almost all y ∈ Γ. Thus, the equations (2.1) and (2.2) have a unique solution u(y) ∈ X for almost all y ∈ Γ.
Let X := H¹₀(D) denote the closed subspace of functions in the Sobolev space H¹(D) with vanishing boundary values in the sense of trace and define the norm ‖v‖_X := ‖∇v‖_{L²(D)}. We denote by ⟨·, ·⟩ = ⟨·, ·⟩_{X*,X} the duality pairing of X* and X and consider f as an element of the dual X*.
For any y ∈ Γ, the variational formulation of (2.1) reads: find u(y) ∈ X such that

B(u(y), v; y) = ⟨f, v⟩ for all v ∈ X,

where B : X × X × Γ → R is defined by B(w, v; y) := ∫_D a(x, y) ∇w(x) · ∇v(x) dx. Hence, pathwise existence and uniqueness of the solution u(y) is obtained by the Lax-Milgram lemma due to uniform ellipticity for any fixed y ∈ Γ.
In particular, for all y ∈ Γ, it holds

(2.3)  ǎ(y) ‖v‖²_X ≤ B(v, v; y) for all v ∈ X

with some 0 < ǎ(y) ≤ a(x, y) on D. The integration of (2.3) over Γ with respect to the standard Gaussian measure γ does not lead to a well-defined problem, since the coefficient a(x, y) is not uniformly bounded in y ∈ Γ and not bounded away from zero. Hence, a more involved approach has to be pursued, which is elaborated in Section 3.2.
Alternative results for this equation were presented in [50,26,25]. The formulation of (2.1) as a parametric deterministic elliptic problem with solution u(y) ∈ X for each parameter y ∈ Γ reads

(2.2)  −div(a(x, y)∇u(x, y)) = f(x) for x ∈ D,  u(x, y) = 0 for x ∈ ∂D.

Variational Formulation and Discretization
This section is concerned with the introduction of appropriate function spaces required for the discretization of the model problem. In particular, a problem-adapted probability measure is introduced which allows for a well-defined formulation of the weak problem in a rescaled polynomial chaos basis.

3.1. Polynomial chaos basis. Let F denote the set of finitely supported multi-indices μ = (μ_1, μ_2, …) ∈ N₀^N. For any subset Λ ⊂ F, we define supp Λ := ⋃_{μ∈Λ} supp μ ⊂ N. We denote by (H_n)_{n=0}^∞ the orthonormal basis of L²(R, γ_m) = L²(R, γ_1) with respect to the standard Gaussian measure, consisting of Hermite polynomials H_n of degree n ∈ N₀ on R. An orthonormal basis of L²(Γ, γ) is obtained by tensorization of the univariate polynomials, see [50,51]. To reduce notation, we drop the explicit dependence on the σ-algebra, which is always assumed to be rich enough. For any multi-index μ ∈ F, the tensor product polynomial H_μ := ⊗_{m=1}^∞ H_{μ_m} in y ∈ Γ is expressed as the finite product H_μ(y) = Π_{m∈supp μ} H_{μ_m}(y_m). For practical computations, an analytic expression for the triple product of Hermite polynomials can be used.

Lemma 3.1 ([51, 40]). For μ, ν, ξ ∈ F and m ∈ N, it holds

κ_{ξ_m,ν_m,μ_m} := ∫_R H_{ξ_m} H_{ν_m} H_{μ_m} dγ_1 = (ξ_m! ν_m! μ_m!)^{1/2} / ((s−ξ_m)! (s−ν_m)! (s−μ_m)!)

if ξ_m + ν_m + μ_m = 2s is even with s ≥ max{ξ_m, ν_m, μ_m}, and κ_{ξ_m,ν_m,μ_m} = 0 otherwise.

Lemma 3.2 ([22, 41]).
The expansion of X = exp(tY) with Y ∼ N(0, 1) in Hermite polynomials is given by

exp(ty) = exp(t²/2) Σ_{n=0}^∞ (tⁿ/√(n!)) H_n(y),  y ∈ R.

We recall some results from [50] required in our setting. Let σ = (σ_m)_{m∈N} ∈ exp(ℓ¹(N)) and denote by γ_{σ_m} the centered Gaussian measure with standard deviation σ_m. Then ζ_m := dγ_{σ_m}/dγ_m is the one-dimensional Radon-Nikodym derivative of γ_{σ_m} with respect to γ_m, i.e., the respective probability density. We assume that the sequence σ depends exponentially on β = (β_m)_{m∈N} and some ϱ ∈ R, namely σ_m(ϱ) := exp(ϱ β_m), m ∈ N, and define the product measure γ_σ := ⊗_{m∈N} γ_{σ_m}. By multiplication, this yields the multivariate identity ζ := dγ_σ/dγ = Π_{m∈N} ζ_m. A basis of orthonormal polynomials with respect to the weighted measure γ_σ can be defined by the transformation τ(y) := (y_m/σ_m)_{m∈N}. We define the scaled Hermite polynomials H^τ_μ := H_μ ∘ τ.

Remark 3.3. Lemmas 3.1 and 3.2 are also valid with the transformed multivariate Hermite polynomials H^τ. In particular, κ_{ξ_m,ν_m,μ_m} does not change under the transformation, and the expansion in Lemma 3.2 holds by substituting t ∈ R with σ_m t in the corresponding dimension m ∈ N.
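Both Hermite identities can be checked numerically with probabilists' Hermite polynomials from numpy. The sketch below uses the orthonormal normalization h_n = He_n/√(n!) and the standard closed-form triple product, stated here under the assumption that it matches the formula of Lemma 3.1.

```python
import numpy as np
from math import exp, factorial, sqrt
from numpy.polynomial.hermite_e import hermegauss, hermeval

def h(n, x):
    """Orthonormal Hermite polynomial h_n = He_n / sqrt(n!) w.r.t. gamma_1."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    return hermeval(x, coef) / sqrt(factorial(n))

def kappa(a, b, c):
    """Closed-form triple product integral of h_a h_b h_c over gamma_1."""
    if (a + b + c) % 2 == 1 or 2 * max(a, b, c) > a + b + c:
        return 0.0
    s = (a + b + c) // 2
    return sqrt(factorial(a) * factorial(b) * factorial(c)) / (
        factorial(s - a) * factorial(s - b) * factorial(s - c))

# verify kappa against Gauss-Hermite quadrature (weight exp(-x^2/2))
x, w = hermegauss(40)
w = w / np.sqrt(2.0 * np.pi)          # normalize to the Gaussian measure
quad = float(np.sum(w * h(2, x) * h(3, x) * h(5, x)))
assert abs(quad - kappa(2, 3, 5)) < 1e-10

# verify Lemma 3.2: exp(t*y) = e^{t^2/2} sum_n t^n / sqrt(n!) h_n(y)
t, y = 0.7, 1.3
series = exp(t**2 / 2) * sum(t**n / sqrt(factorial(n)) * h(n, y)
                             for n in range(30))
assert abs(series - exp(t * y)) < 1e-10
```

The 40-point Gauss-Hermite rule is exact for the degree-10 polynomial product, so the quadrature comparison is a genuine test of the closed form.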

3.2. Weak formulation in problem-dependent spaces. In order to obtain a well-posed variational formulation of (2.1) on L²(Γ, γ; X), we follow the approach in [50] and introduce a measure γ_ϑ which is stronger than γ, and assume integrability of f with respect to this measure. For ϱ > 0 and 0 ≤ ϑ < 1, let the bilinear form B_ϑ : V_ϑ × V_ϑ → R be given as the γ_ϑ-weighted integral of the parametric forms B(·, ·; y) over Γ. The solution space V_ϑ is then defined as the corresponding Hilbert space of X-valued functions with finite energy norm. The different employed spaces are related as follows.

Lemma 3.4 ([50] Prop. 2.43). For
It can be shown that the bilinear form B_ϑ(·, ·) is V_ϑ-elliptic in the sense of the following lemma.
Moreover, we assume that f is such that the associated linear form is bounded on V_ϑ. The estimates (3.2) and (3.3) in particular lead to the unique solvability of the variational problem in V_ϑ, and u ∈ V_ϑ is the unique solution of (2.1).

3.3. Deterministic discretization.
We discretize the deterministic space X by a conforming finite element space X_p(T) := span{φ_i}_{i=1}^N ⊂ X of degree p on some simplicial regular mesh T of the domain D, with the set of faces S (i.e., edges for d = 2) and basis functions φ_i.
In order to circumvent complications due to an inexact approximation of boundary values, we assume that D is a polygon. We denote by P_p(T) the space of piecewise polynomials of degree p on the triangulation T. The assumed FE discretization with Lagrange elements of order p then satisfies X_p(T) ⊂ P_p(T) ∩ C(D). For any element T ∈ T and face F ∈ S, we set the entity sizes h_T := diam T and h_F := diam F. Let n_F denote the exterior unit normal on any face F. The jump of some w ∈ P_p(T) across a face F in direction n_F is denoted by [w]_F. By ω_T and ω_F we denote the element and facet patches defined by the union of all elements which share at least a vertex with T or F, respectively. On these patches, the Clément interpolation operator satisfies the usual local approximation and stability properties (3.4). The fully discrete approximation space V_N(Λ; T, p) with |Λ| < ∞ is spanned by products of FE basis functions φ_i and scaled Hermite polynomials H^τ_μ, μ ∈ Λ. We define a tensor product interpolation operator I on this space accordingly. For v ∈ V_ϑ(Λ) and all T ∈ T, this yields an interpolation estimate of Clément type, and likewise on the edges F ∈ S.

Remark 3.7. It should be noted that the constant ĉ_ϑ/č_ϑ tends to ∞ as ϱ → {0, ∞} and for ϑ → {0, 1}, see Remark 2.46 in [50]. This is expected, as the problem is ill-posed in these limits. In order to obtain reasonable upper error bounds, the parameters hence have to be chosen judiciously. A more detailed investigation of an optimal parameter choice is postponed to future research.

Decomposition of the Operator
In this section, we introduce the discretization of the operator in an appropriate tensor format.For this, an efficient representation of the non-linear coefficient is derived.We first introduce basic aspects of the employed Tensor Train (TT) format.
4.1. The Tensor Train format. We only provide a brief overview of the notation used regarding the tensor train representation. For further details, we refer the reader to [21,3] and the references therein.
Any function w_N ∈ V_N(Λ; T, p) can be written as

w_N(x, y) = Σ_{i=1}^N Σ_{μ∈Λ} W[i, μ] φ_i(x) H^τ_μ(y)

with a coefficient tensor W. Thus, the discretization space is isomorphic to the tensor space R^N ⊗ R^{d_1} ⊗ ⋯ ⊗ R^{d_M}, where d_m denotes the dimension of the univariate polynomial space in mode m. The tensor W grows exponentially with the order M, which constitutes the so-called curse of dimensionality. We employ a low-rank decomposition of the tensor for a dimension reduction. In this paper, we adhere to the Tensor Train (TT) format for tensor decomposition [46]. This seems reasonable, as the components (of the operator and hence the solution) are of decreasing importance due to the decay of the coefficient functions b_m, and therefore we can expect decreasing ranks in the tensor train format. Nevertheless, other tensor formats are also feasible in principle.
The TT representation of a tensor W ∈ R^{N×d_1×⋯×d_M} is a factorization

W[i, μ_1, …, μ_M] = Σ_{k_1=1}^{r_1} ⋯ Σ_{k_M=1}^{r_M} W_0[i, k_1] W_1[k_1, μ_1, k_2] ⋯ W_M[k_M, μ_M]

with component tensors W_m ∈ R^{r_m × d_m × r_{m+1}}. For simplicity of notation, we set r_0 = r_{M+1} = 1. If all dimensions r_m are minimal, then this is called the TT decomposition of W and rank_TT(W) := r = (1, r_1, …, r_M, 1) is called the TT rank of W. The TT decomposition always exists and can be computed in polynomial time using the hierarchical SVD (HSVD) [35]. A truncated HSVD yields a quasi-optimal approximation in the Frobenius norm [46,27,39,29]. Most algebraic operations can be performed efficiently in the TT format [3].
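As an illustration of the HSVD, the following minimal numpy sketch computes a TT decomposition by sequential SVDs of the matrix unfoldings and contracts it back; the function names (`tt_svd`, `tt_to_full`) are ours and not part of any cited implementation.

```python
import numpy as np

def tt_svd(W, eps=1e-12):
    """Decompose a full tensor W into TT cores by sequential SVDs,
    truncating singular values below eps times the largest one."""
    dims = W.shape
    cores, r_prev = [], 1
    C = W.reshape(dims[0], -1)
    for q in dims[:-1]:
        C = C.reshape(r_prev * q, -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(S > eps * S[0])))
        cores.append(U[:, :r].reshape(r_prev, q, r))
        C = S[:r, None] * Vt[:r]
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor."""
    W = cores[0]
    for G in cores[1:]:
        W = np.tensordot(W, G, axes=([W.ndim - 1], [0]))
    return W.reshape(W.shape[1:-1])

# exact reconstruction when no truncation occurs
A = np.random.default_rng(0).standard_normal((3, 4, 5))
B = tt_to_full(tt_svd(A))
assert np.allclose(A, B)
```

With a larger `eps`, the same sweep yields the quasi-optimal truncated approximation mentioned above.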
Once the function w_N has a low-rank TT decomposition, it is advisable to obtain a similar representation for the Galerkin operator on V_N(Λ; T, p) in order to allow for efficient tensor solvers. For the lognormal coefficient a(x, y), this can only be done approximately.
Later, it will be useful to express the storage complexity of a tensor train. We distinguish the degrees of freedom given by the tensor train representation and the full (uncompressed) degrees of freedom. For a tensor U ∈ R^{q_0×⋯×q_L} of TT-rank r = (1, r_1, …, r_L, 1), the dimension of the low-rank tensor manifold is given by

(4.1)  tt-dofs(U) := Σ_{m=0}^{L} r_m q_m r_{m+1}  (with r_0 = r_{L+1} = 1),

while the dimension of the full tensor space and hence of its representation is

full-dofs(U) := Π_{m=0}^{L} q_m.

One can conclude from (4.1) that the complexity of tensor trains depends only linearly on the order, i.e., we have to store O(L q r²) entries with r = max{r_1, …, r_L} and q = max{q_0, …, q_L} instead of O(q^L), which is much smaller for moderate TT-ranks r.
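The two dof counts can be spelled out directly; the mode sizes and ranks below are illustrative numbers of ours.

```python
def tt_dofs(dims, ranks):
    """Degrees of freedom of a TT representation with boundary ranks
    ranks[0] = ranks[-1] = 1: sum_m r_m * q_m * r_{m+1}."""
    assert len(ranks) == len(dims) + 1 and ranks[0] == ranks[-1] == 1
    return sum(ranks[m] * dims[m] * ranks[m + 1] for m in range(len(dims)))

def full_dofs(dims):
    """Size of the full (uncompressed) tensor space, prod_m q_m."""
    n = 1
    for q in dims:
        n *= q
    return n

dims = [10] * 20                      # L + 1 = 20 modes of size q = 10
ranks = [1] + [5] * 19 + [1]          # uniform TT rank r = 5
assert tt_dofs(dims, ranks) == 4600   # O(L q r^2)
assert full_dofs(dims) == 10 ** 20    # O(q^L)
```

The contrast between 4600 stored entries and 10^20 full entries is the whole point of the compression.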

4.2. TT representation of the non-linear coefficient. We approximate the coefficient using the coefficient splitting algorithm described in [47]. This results in a discretized coefficient on a tensor set ∆ with TT-rank s = (1, s_1, …, s_L, 1). Here, we exploit Lemma 3.2, i.e., the fact that every factor of (4.2) has a Hermite expansion of the form (4.3). The procedure is as follows: First, we fix an adequate quadrature rule for solving the involved integrals by choosing quadrature points χ_q ∈ D and weights w_q ∈ R for q = 1, …, P_quad. We begin the discretization at the rightmost side and define the correlation matrix of the final factor by quadrature. This means that we have truncated the expansion (4.3) according to the tensor set ∆, which yields an approximation of the factors. This matrix is symmetric and positive semidefinite and therefore admits an eigenvalue decomposition. This yields reduced basis functions ĉ_{k_L} for k_L = 1, …, s_L that we can store explicitly at the quadrature points of the integral. If we choose s_L = q_L, then this is just a transformation without any reduction.
Algorithm 1: Coefficient splitting.
input: Coefficient functions c^{(ℓ)}_{ν_ℓ}, ν_ℓ = 0, …, q_ℓ − 1, ℓ = 1, …, L; ranks s_1, …, s_L; quadrature rule (χ_q, w_q), q = 1, …, P_quad.
output: TT tensor components A_1, …, A_L.
For ℓ = L, …, 1:
  Arrange the correlation matrix C^{(ℓ)}.
  Compute its eigenvalue decomposition.
  Set the reduced basis functions.

We proceed successively for ℓ = L − 1, …, 1 by defining correlation matrices weighted with w_q at the points χ_q, with eigenvalue decompositions and the resulting reduced basis functions ĉ^{(ℓ)}_{k_ℓ} at the quadrature points. This results in a first component for k_1 = 1, …, s_1. Note that, on the one hand, it is possible to evaluate this component at any point x ∈ D by converting the reduced basis functions back into their original form by means of the tensor components A_ℓ and the coefficient functions c^{(ℓ)}_{ν_ℓ}. On the other hand, each original coefficient function is approximated by its reduced basis representation. This approximation is exact if the ranks s = (s_1, …, s_L) are full.
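A simplified version of this right-to-left sweep can be sketched in numpy. Here each factor is assumed to be given by its coefficient functions evaluated at the quadrature points, and the name `coefficient_splitting` as well as the test data are ours; the full algorithm of [47] additionally handles the Hermite expansions of the factors.

```python
import numpy as np

def coefficient_splitting(F, ranks, w):
    """Right-to-left splitting of T[q, n_1, ..., n_L] = prod_l F[l][q, n_l].
    F[l][q, n] holds the coefficient function c^(l)_n at quadrature point
    chi_q, w are the quadrature weights. Returns the x-part G (reduced
    basis at the quadrature points) and the TT components A_1, ..., A_L."""
    P = F[0].shape[0]
    comps = [None] * len(F)
    G = np.ones((P, 1))                     # trivial reduced basis
    for l in range(len(F) - 1, -1, -1):
        q_l, s_next = F[l].shape[1], G.shape[1]
        # combine the current factor with the reduced basis from the right
        H = (F[l][:, :, None] * G[:, None, :]).reshape(P, q_l * s_next)
        C = H.T @ (w[:, None] * H)          # quadrature correlation matrix
        lam, V = np.linalg.eigh(C)          # symmetric EVD
        V = V[:, np.argsort(lam)[::-1][:ranks[l]]]
        comps[l] = V.reshape(q_l, s_next, -1).transpose(2, 0, 1)
        G = H @ V                           # new reduced basis functions
    return G, comps

# exact reconstruction for full ranks (s_1 = q_1 * s_2, s_2 = q_2)
rng = np.random.default_rng(1)
F = [rng.standard_normal((40, 3)) for _ in range(2)]
w = np.full(40, 1.0 / 40)
G, comps = coefficient_splitting(F, [9, 3], w)
T = np.einsum('qa,qb->qab', F[0], F[1])
R = np.einsum('qi,iaj,jb->qab', G, comps[0], comps[1][:, :, 0])
assert np.allclose(T, R)
```

Truncating the ranks in the call above yields the quadrature-weighted low-rank approximation instead of the exact transformation.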
By the described procedure we obtain an approximate discretization a_{∆,s} ≈ a in a TT-like format that is continuous in the first component and that has the decomposition (4.4), see Figure 1.
On V_N(Λ; T, p), the linear Galerkin operator A is given in the TT matrix or matrix product operator (MPO) format. Since the integral over the triple product satisfies κ_{μ_m, μ'_m, ν_m} = 0 for all ν_m > 2 max(μ_m, μ'_m), it is sufficient to set q_ℓ = 2d_ℓ − 1 for all ℓ = 1, …, L. If the rank s of the decomposition of the coefficient is full, then the discretized coefficient a_{∆,s} is exact on the discrete space V_N (up to quadrature errors). However, this is generally infeasible as the rank would grow exponentially with M. Therefore, a truncation of the rank becomes necessary and the coefficient is only an approximation. We assume in the following that the error due to this approximation of the coefficient is small. A thorough estimation of this error is subject to future research.

Remark 4.1. A similar approach to decomposing the coefficient was chosen in [23], where knowledge of the eigenfunctions of the covariance operator was assumed a priori. This means that one has an orthogonal basis also for the deterministic part in x, and all that remains is to decompose the coefficient tensor for this basis representation. This is also done using some quadrature, and the L²-error of this approximation can be estimated.
According to the discretization of the coefficient, we introduce a splitting of the operator into the active and inactive operators A⁺ : V_N(Λ; T, p) → V_N(Λ; T, p)* and A⁻ : V_N(Λ; T, p) → V_N(Λ; T, p)*. The above considerations yield that the approximate operator A⁺ has a tensor product structure with operator components for all ℓ = 1, …, L.

Error Estimates
This section is concerned with the derivation of a reliable a posteriori error estimator based on the stochastic residual. In comparison to the derivations in [15,16,21], the lognormal coefficient requires a more involved approach, directly related to the employed weighted function spaces introduced in Section 3. In theory, an additional error occurs because of the discretization of the coefficient, which we assume to be negligible. The developed adaptive algorithm enables a computable a posteriori steering of the error components by a refinement of the FE mesh and of the anisotropic Hermite polynomial chaos of the solution. The efficient implementation is due to the formulation in the TT format, the ranks of which are also set adaptively.
The definition of the operators as in (4.6) leads to a decomposition of the residual into active and inactive parts. We assume that the operator is given in its approximate semi-discrete form A⁺ and aim to estimate the energy error ‖u − w_N‖_{A⁺}.

Remark 5.1. As stated before, we assume that the error resulting from the approximation of the coefficient is small. Estimation of this error is subject to future research; work in this direction has e.g. been carried out in [7,23]. Additionally, we require that the bounds (3.2) and (3.3) still hold, possibly with different constants ĉ⁺_ϑ and č⁺_ϑ. This is for example guaranteed if a_{∆,s} is positive. Then, since the approximated coefficient is polynomial in y, the arguments in Lemma 3.5 yield the same constants.

We recall Theorem 5.1 from [15] and also provide the proof for the sake of a complete presentation. Note that the result allows for non-orthogonal approximations w_N ∈ V_N.

Theorem 5.2. Let V_N ⊂ V_ϑ be a closed subspace, let w_N ∈ V_N, and let u_N denote the A⁺ Galerkin projection of u onto V_N. Then the energy error ‖u − w_N‖_{A⁺} is bounded by the residual-based estimate of [15, Thm. 5.1]. Here, I denotes the Clément interpolation operator in (3.4) and c_I is the operator norm of id − I with respect to the energy norm ‖·‖_{A⁺}. The constant č⁺_ϑ is derived from the assumed coercivity of the bilinear form induced by A⁺, similar to (3.2) and (3.3).

Proof. Due to the Galerkin orthogonality of u_N, the error decomposes as ‖u − w_N‖²_{A⁺} = ‖u − u_N‖²_{A⁺} + ‖u_N − w_N‖²_{A⁺}. By the Riesz representation theorem, the first part is given by the dual norm of the residual. We now utilize the Galerkin orthogonality and introduce the bounded linear map I. Since we do not have access to the Galerkin solution u_N, we reintroduce the computable approximation w_N. We apply the coercivity of the operator A⁺ to the denominator, which yields the desired result. For the last inequality, we used the boundedness of I in the energy norm by defining the constant c_I as the operator norm of id − I.

Since the product of the Hermite polynomials for each m = 1, …, M has degree at most q_m + d_m − 2, it is useful to define the index set

Ξ := ∆ + Λ := { η = (η_1, …, η_L, 0, …) : η = μ + ν, μ ∈ Λ, ν ∈ ∆ }.

Then, the residual can be split into an active and an inactive part by using the tensor sets Ξ and Λ, where div res(·, η) ∈ X* for all η ∈ Ξ.
For all η ∈ Ξ, the function res is given by its deterministic component and its stochastic components for m = 1, …, M. The function res is again a TT tensor with continuous first component and TT ranks r_m s_m for m = 1, …, M and s_ℓ for ℓ = M + 1, …, L.
The above considerations suggest that the error can be decomposed into errors that derive from the respective approximations in the deterministic domain, the parametric domain and in the ranks. This is indeed the case, as we will see in the following. In a nutshell, if u_N is the Galerkin solution in V_N and u_Λ is the Galerkin solution in the semi-discrete space V(Λ), then the deterministic error err_det = ‖u_Λ − u_N‖_{A⁺} corresponds to the error of the active residual R_{+,Λ}, the parametric error err_param = ‖u − u_Λ‖_{A⁺} corresponds to the inactive residual R_{+,Ξ∖Λ}, and the error made by restricting the ranks is the error in the discrete space err_disc(w_N) = ‖u_N − w_N‖_{A⁺}, see Figure 2 for an illustration.

5.1. Deterministic error estimation. We define the deterministic error estimator est_det(w_N), which estimates the active residual as follows.
Proposition 5.3. For any v ∈ V_ϑ and any w_N ∈ V_N, the dual norm of the active residual R_{+,Λ}(w_N) is bounded by the deterministic estimator est_det(w_N).

Figure 2. An illustration of the different errors.
Proof. By localization to the elements of the triangulation T and integration by parts, the residual splits into volume and jump contributions. The Cauchy-Schwarz inequality together with the interpolation properties (3.4) yields local bounds on the elements and faces. Since the overlaps of the patches ω_T and ω_F are bounded uniformly, a Cauchy-Schwarz estimate leads to the assertion. Here, the constant c_det depends on the properties of the interpolation operator (3.4).
Remark 5.4. Note that an L²-integration of the residual, which is an element of the dual space V*_ϑ, is possible since the solution consists of finite element functions. These are piecewise polynomial and thus smooth on each element T ∈ T.

5.2. Tail error estimation. The parametric or tail estimator est_param(w_N) bounds the parametric error as follows.
Proposition 5.5. For any v ∈ V_ϑ and any w_N ∈ V_N, the dual norm of the inactive residual R_{+,Ξ∖Λ}(w_N) is bounded by est_param(w_N). Instead of factorizing out the L∞-norm of the diffusion coefficient as in [15,16,21], we use the Cauchy-Schwarz inequality to obtain this bound.

5.3. Algebraic error estimation.
In order to define the algebraic error estimator, we need to state the linear basis change operator that translates integrals over two Hermite polynomials in the measure γ_ϑ to the measure γ. This yields the estimator est_disc(w_N) in (5.1).

Proposition 5.6. For any w_N ∈ V_N and the Galerkin solution u_N ∈ V_N, the discrete error ‖u_N − w_N‖_{A⁺} is bounded by est_disc(w_N). This follows by using the coercivity of A⁺.

5.4. Overall error estimation.
A combination of the above estimates yields an overall error estimator.
Corollary 5.7. For any w_N ∈ V_N, the energy error ‖u − w_N‖_{A⁺} can be bounded by the combined estimator est_all(w_N), consisting of est_det, est_param and est_disc.

Remark 5.8. In order to obtain suitable measures for the estimators, the squared density ζ²_ϑ appears, which upon rescaling again is a Gaussian measure with standard deviation σ̃ = (σ̃_ℓ)_{1≤ℓ≤L}. First, this adds a restriction on ϑ such that the argument in the square root is positive. This is fulfilled if exp(2ϑα_1) < 2, since (α_m)_{1≤m≤M} is a decreasing sequence, which can be ensured for ϑ small enough. Second, it is important to check whether the new measure is weaker or stronger than γ_ϑ [49], i.e., which space contains the other, since functions that are integrable with respect to the measure γ_ϑ are not necessarily integrable with respect to the squared measure. However, since f is independent of the parameters and A⁺(w_N) ∈ X* ⊗ Y(Ξ) has a polynomial chaos expansion of finite degree, the residual R⁺(w_N) is integrable over the parameters for any Gaussian measure and therefore also with respect to the squared measure.

5.5. Efficient computation of the different estimators.
The error estimators can be calculated efficiently in the TT format. For each element T ∈ T of the triangulation, the residual estimator is given by a weighted local norm of the residual. A complication of the change of the measure to γ and of the involved weight ζ²_ϑ is the fact that the shifted Hermite polynomials H^τ_ϑ are not orthogonal with respect to this measure. However, this property can easily be restored by calculating the basis change integrals beforehand. This results in another tensor product operator H that is defined element-wise for η, η' ∈ Ξ. This operator encodes the basis change to the squared measure and can be inserted in order to calculate the scalar product. Since H is a tensor product operator, the resulting summation can be carried out component-wise, i.e., by performing a matrix-vector multiplication of every component of the operator H with the corresponding component of the tensor function res.
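Such a component-wise application of a tensor-product operator to a TT tensor can be sketched as follows; this is a minimal numpy illustration with names of our own choosing, where the operator is given as one matrix per mode.

```python
import numpy as np

def apply_rank_one_operator(ops, cores):
    """Apply a tensor-product (rank-one) operator, given as one matrix
    per mode, to a TT tensor core-by-core; the TT ranks do not change."""
    return [np.einsum('ab,rbs->ras', A, G) for A, G in zip(ops, cores)]

def tt_full(cores):
    """Contract TT cores back into a full tensor (for verification only)."""
    W = cores[0]
    for G in cores[1:]:
        W = np.tensordot(W, G, axes=([W.ndim - 1], [0]))
    return W.reshape(W.shape[1:-1])

rng = np.random.default_rng(2)
cores = [rng.standard_normal(s) for s in [(1, 3, 2), (2, 4, 2), (2, 5, 1)]]
ops = [rng.standard_normal((n, n)) for n in (3, 4, 5)]
lhs = tt_full(apply_rank_one_operator(ops, cores))
rhs = np.einsum('ai,bj,ck,ijk->abc', ops[0], ops[1], ops[2], tt_full(cores))
assert np.allclose(lhs, rhs)
```

The cost is one small matrix-matrix product per mode, instead of one huge Kronecker-product multiplication on the full tensor.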
Similarly, for the jump over an edge F we obtain a corresponding estimator. Analogously to the affine case dealt with in [21], both of these estimators can then be computed efficiently in the TT format. The parametric error estimator est_param(w_N) can be evaluated in a similar way.
To gain additional information about the residual influence of individual stochastic dimensions, we sum over specific index sets, where the strike-through notation means that the respective index takes all values except m. For every m = 1, 2, …, and w_N ∈ V_N, we define a dimension-wise estimator est_param,m(w_N). Using the same arguments and notation as above, these expressions can be simplified. All of these operations, including the calculation of the discrete error estimator (5.1), can be executed efficiently in the TT format.

Fully Adaptive Algorithm
With the error estimators derived in the preceding sections, it is possible to refine all discretization parameters accordingly. As discussed before, the deterministic estimator assesses the error that arises from the finite element method. The discrete error estimator evaluates the error made by the low-rank approximation. The remaining error is estimated by the parametric error estimator.
The adaptive algorithm described in this section is similar to the algorithms presented in [15,16,21]. Given some mesh T, a fixed polynomial degree p, a finite tensor set Λ ⊂ F, and a start tensor W with TT rank r, we assume that a numerical approximation w_N ∈ V_N is obtained by a function w_N^+ ← Solve[Λ, T, r, W], where the superscript + always denotes the updated object. In our implementation, we used the preconditioned ALS algorithm, but other solution algorithms are feasible as well. The lognormal error indicators of Section 5, and thus the overall upper bound est_all(w_N) in Corollary 5.7, are computed by dedicated methods. Depending on which error contribution is largest, either the mesh is refined, or the index set Λ is enlarged, or the rank r of the solution is increased. This is done as follows:

• If the deterministic error est_det(w_N) outweighs the others, we mark the elements T ∈ T with the largest errors est_det,T(w_N) until the sum of errors on this marked subset T_mark ⊂ T exceeds a certain ratio 0 < θ < 1 of the total. This is called the Dörfler marking strategy. The elements in this subset are subsequently refined.
• In case the parametric error est_param(w_N) dominates, we use the estimators est_param,m(w_N) in order to determine which components need to be refined. Here, we also mark until the Dörfler property is satisfied, i.e., we obtain a subset I_mark ⊂ N.
• Finally, if est_disc(w_N) exceeds the other errors, we simply add a random tensor of rank 1 to the solution tensor W. This increases all TT ranks of W by 1 almost surely. It would also be possible to add an approximation of the discrete residual, which is also available in TT format. However, since the ALS algorithm is performed after the refinement, the advantage of this rather costly improvement has proven to be negligible [53].

A single iteration step of the adaptive algorithm returns either a refined mesh T^+, an enlarged index set Λ^+, or the tensor format solution W^+ with increased rank, and then solves the problem with these properties. This is done repeatedly until the overall error estimator est_all(w_N) falls below a defined error bound ε_TTASGFEM or a maximum problem size is reached. This procedure is given by the function TTASGFEM, displayed in Algorithm 2. The upper error bounds directly follow from Corollary 5.7.

Algorithm 2: The TTASGFEM algorithm.
input: Old solution w_N with solution tensor W and rank r; mesh T with degrees p; index set Λ; accuracy ε_TTASGFEM.
output: New solution w_N^+ with new solution tensor W^+; new mesh T^+, or new index set Λ^+, or new rank r + 1.
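The Dörfler marking used in the refinement steps can be sketched as follows. This is a minimal, illustrative version (the function name and the use of squared contributions are our assumptions; the source does not specify whether indicators enter squared): local indicators are sorted in decreasing order and marked until the marked contributions exceed the fraction θ of the total.

```python
def doerfler_mark(estimators, theta):
    """Return indices of a (near-)minimal set whose squared estimator
    contributions exceed the fraction theta of the total.
    estimators: sequence of nonnegative local error indicators eta_T."""
    total = sum(eta ** 2 for eta in estimators)
    # visit indices in order of decreasing indicator size
    order = sorted(range(len(estimators)), key=lambda i: -estimators[i])
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += estimators[i] ** 2
        if acc >= theta * total:  # Doerfler property satisfied
            break
    return marked
```

For example, with indicators (3, 1, 2) and θ = 0.5, only the first element is marked, since its squared contribution already exceeds half of the total.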

Numerical Experiments
In this section we examine the performance of the proposed adaptive algorithm with some benchmark problems. The computations are carried out with the open source framework ALEA [18]. The FE discretization is based on the open source package FEniCS [24]. The shown experiments are similar to those of the predecessor paper [21] in Section 7. As before, the model equation (2.1) is computed for different lognormal coefficients on the square domain. The derived error estimator is used as a refinement indicator. Of particular interest are the observed convergence of the true (sampled) expected energy error and the behaviour of the error indicator. Moreover, we comment on the complexity of the coefficient discretization.
7.1. Evaluation of the Error. The real error of the computed approximation is determined by a Monte Carlo estimation. For this, a set of N_MC independent samples (y^(i))_{i=1}^{N_MC} of the stochastic parameter vector is considered. By the tensor structure of the probability measure γ = ⊗_{m∈N} γ_1, each entry of the vector-valued sample y^(i) is drawn with respect to the standard Gaussian measure γ_1. The parametric solution w_N ∈ V_N(Λ; T, p) obtained by the adaptive algorithm, evaluated at a sample y^(i), is compared to the corresponding (deterministic) sampled solution u(y^(i)) ∈ X_p(T̂) on a much finer finite element space subject to a reference triangulation T̂, obtained from uniform refinements of the finest adaptively created mesh up to the point where at least 10^5 degrees of freedom are reached with Lagrange elements of order p = 3. For the computations, the truncation parameter applied to the infinite sum in (2.2), controlling the length of the affine field representation, is set to M = 100. The mean squared error is then determined by a Monte Carlo quadrature. A choice of N_MC = 250 proved to be sufficient to obtain a reliable estimate in our experiments.
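The sampled error evaluation can be sketched as follows. The functions `evaluate_surrogate`, `solve_reference`, and `energy_norm` are hypothetical placeholders for the parametric Galerkin solution, the fine deterministic FE solve, and the energy norm, respectively; only the sampling and averaging structure reflects the procedure described above.

```python
import numpy as np

def sampled_mse(evaluate_surrogate, solve_reference, energy_norm,
                num_params, n_mc=250, seed=0):
    """Monte Carlo estimate of the mean squared energy error.
    Each parameter sample is drawn entrywise from N(0, 1), reflecting
    the product structure of the Gaussian measure."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_mc):
        y = rng.standard_normal(num_params)          # sample y^(i)
        diff = evaluate_surrogate(y) - solve_reference(y)
        errors.append(energy_norm(diff) ** 2)        # squared energy error
    return float(np.mean(errors))
```

In the actual experiments the reference solve is carried out on the fine triangulation T̂; here any callable with the same interface can be substituted for testing.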
7.2. The stochastic model problem. In the numerical experiments, we consider the stationary diffusion problem (2.1) on the square domain D = (0,1)². As in [21,15,16,19], the expansion coefficients of the stochastic field (2.2) are chosen with the scaling γ = 0.9 and with k(m) = ⌊−1/2 + √(1/4 + 2m)⌋, i.e., the coefficient functions b_m enumerate all planar Fourier sine modes in increasing total order. To illustrate the influence which the stochastic coefficient has on the adaptive algorithm, we examine the expansion with varying decay, setting σ in (7.1) to different values. The computations are carried out with conforming FE spaces of polynomial degrees 1 and 3.
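The enumeration of the planar modes can be sketched as follows. Since the formula in the source is partly garbled, this assumes the common diagonal pairing used in comparable benchmarks, with k(m) = ⌊−1/2 + √(1/4 + 2m)⌋ as the total order of the m-th mode; the helper name `mode_indices` is illustrative.

```python
from math import floor, sqrt

def mode_indices(m):
    """Map m = 1, 2, ... to a pair (beta1, beta2) of planar mode indices
    with nondecreasing total order k(m) = beta1 + beta2, assuming a
    diagonal enumeration with k(m) = floor(-1/2 + sqrt(1/4 + 2m))."""
    k = floor(-0.5 + sqrt(0.25 + 2 * m))
    beta1 = m - k * (k + 1) // 2   # position on the k-th diagonal
    return beta1, k - beta1
```

With this pairing, m = 1, 2 yields the two modes of total order 1, m = 3, 4, 5 the three modes of total order 2, and so on, which realizes the "increasing total order" enumeration mentioned above.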
The fully discretized problem is solved in the TT format using the Alternating Least Squares (ALS) algorithm as introduced in [21]. Other algorithms like Riemannian optimization are also feasible [1,37]. The ALS uses a termination threshold of 10⁻¹², which has to be reached by the Frobenius norm of the difference of two successive iterates.
For our choice of the coefficient field, the introduced weights in (3.1) are set to = 1 and ϑ = 0.1.

7.3. Tensor train representation of the coefficient. Since the tensor approximation of the coefficient is the starting point for the discretization of the operator, we begin with an examination of the rank dependence of the coefficient approximation scheme given in Algorithm 1. For this, we fix the multi-index set such that the incorporated Hermite polynomials are of degree 15 in each stochastic dimension, i.e., ∆ = {ν ∈ F : ν_ℓ = 0, …, 16, ℓ = 1, …, L}.
As a benchmark, we chose a slow decay with σ = 2 and set the quadrature rule to exactly integrate polynomials of degree 7 on a grid such that the number of quadrature nodes is at least P_quad = 10⁴. In the following, the relative root mean squared (RRMS) error is compared to the growth in degrees of freedom with respect to various tensor ranks. Numerically, this expression is computed by a Monte Carlo approximation as described in Section 7.1 on the reference mesh T̂.

By denoting the discrete coefficient evaluations at every node x_k of T̂ accordingly, we can apply the notion of tt-dofs (4.1) to the discrete tensor train coefficient. To highlight the crucial choice of the rank parameter s = (1, s_1, …, s_L, 1) in a_{∆,s}, we let the maximal attainable rank s_max vary.
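Counting the tt-dofs for given mode sizes and ranks can be sketched as follows. This reproduces the usual TT storage complexity Σ_ℓ r_{ℓ−1} n_ℓ r_ℓ; whether (4.1) additionally subtracts redundancy terms is not reproduced here, and the function name is illustrative.

```python
def tt_dofs(mode_sizes, ranks):
    """Storage count of a TT tensor: sum over cores of r_{l-1} * n_l * r_l.
    ranks includes the boundary ranks, i.e. ranks = (1, r_1, ..., r_{L-1}, 1)."""
    assert len(ranks) == len(mode_sizes) + 1
    return sum(ranks[l] * mode_sizes[l] * ranks[l + 1]
               for l in range(len(mode_sizes)))
```

For bounded ranks and mode sizes, this count grows only linearly in the number of dimensions L, which is the linear scaling referred to in the discussion of Figure 3 below.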
It can be seen in Figure 3 that a higher rank yields a more accurate approximation of the coefficient, provided the stochastic expansion length L is large enough. For small values of L, the RRMS error does not decrease below 6 × 10⁻⁴, even when the maximal attainable rank is increased. Conversely, for up to L = 100 terms in the affine coefficient field parametrisation, a small rank parameter of s_max = 10 already gives a reasonable RRMS error of 1 × 10⁻³, which can be improved by increasing s_max. It should be pointed out that this approximation only becomes computationally feasible through the employed tensor format, since without a low-rank representation the number of degrees of freedom grows exponentially in the number of expansion dimensions. Furthermore, since the tensor complexity and the arithmetic operations depend only linearly on the number of expansion dimensions, the computational time scales linearly, as illustrated in the last column of Figure 3 showing the average run time of 10 runs.

7.4. Adaptivity in physical space. As a first example of an adaptive discretization, we examine the automatic refinement of the physical FE space only. For this, the stochastic expansion of the coefficient is fixed to the small value L = 5 in (4.4), which is also assumed for the corresponding reference solution. The polynomial (Hermite chaos) degree of the approximation in each dimension is fixed to d_1 = … = d_5 = 10, which can be considered sufficiently large for the respective problem, given an algebraic decay rate of σ = 2 in the coefficient.
Although this experiment does not illustrate the performance of the complete algorithm, it nevertheless depicts the varying convergence rates for different polynomial orders of the FE approximation. The rank of the tensor train approximation is fixed to r_max = 10, which is sufficient due to the low stochastic dimension. It can be observed in Figure 4 that the error estimator follows the rate of convergence of the sampled error. Moreover, the higher-order FE method exhibits a higher convergence rate, as expected. This could also be observed in the (much simpler) affine case scrutinized in the preceding works [21,15,16,19].

7.5. Fully adaptive algorithm. The fully adaptive algorithm described in Algorithm 2 is instantiated with a small initial tensor of full tensor rank r_1 = 2, consisting of a single (M = 1) stochastic component discretized with a linear polynomial (d_1 = 2), and a physical mesh with |T| = 32 elements for linear ansatz functions (p = 1) and |T| = 8 elements for p = 3. The marking parameter is set to θ = 0.5.
Figure 5 depicts the real (sampled) mean squared energy error and the corresponding overall error estimator for FE discretizations of degrees p = 1, 3. On the left-hand side of the figure, a slower decay rate (σ = 2) of the coefficient representation is used, resulting in more stochastic modes being relevant for an accurate approximation. The right-hand side shows results for a faster decay with σ = 4. As expected, the convergence rate for the p = 3 FE discretization is higher than for p = 1. Moreover, for the simpler problem with fast decay, we achieve a smaller error with a comparable number of degrees of freedom than in the harder slow-decay setting.

Remark 7.1. Note that we neglect the (large) factor č⁺_ϑ in Corollary 5.7, which depends on the choice of the weights ϑ and of the discrete space. A detailed analysis of how to optimally select these weights is outside the scope of this article.
We conclude the numerical observations with Figures 6 and 7, which display the behaviour of the fully adaptive algorithm. To allow for more insight, we denote the physical mesh resolution by m-dofs(W_N), the number of FE degrees of freedom of the respective parametric solution W_N. Moreover, we count the number of degrees of freedom incorporated in the operator (4.5) generated to obtain the current solution: for a tensor train operator A ∈ R^{(N×N)×(q_1×q_1)×…×(q_L×q_L)} with rank s = (s_1, …, s_L), we define op-dofs(A) := N²s_1 − s_1² + Σ_{ℓ=1}^{L−1} (…)
Note that, using a sparse representation of the operator in the first (physical) component, it is possible to reduce the number of op-dofs. Figures 6 and 7 highlight selected iteration steps and report the employed stochastic expansion length M, the maximal polynomial degree d_max (which was usually attained in the first stochastic component), the maximal train rank r_max (again, most of the time this is the separation rank of the physical space and the first stochastic dimension), and the corresponding degrees of freedom. Notably, for the fast coefficient decay σ = 4, the stochastic expansion is very short since most of the randomness is due to the first few stochastic parameters. It is plausible that these parameter dimensions then require higher polynomial degrees in the approximation. For the more involved slow-decay setting σ = 2, we observe a longer stochastic expansion of the solution and larger tensor ranks.

Figure 4. Relative sampled mean squared H¹(D) error for fixed stochastic dimension L = 5 and adaptive refinement of the physical mesh. Each setting is shown along its respective overall error estimator as defined in Corollary 5.7 and plotted against the total number of degrees of freedom in the TT representation. Considered are FE approximations of order p = 1 and p = 3.

Figure 5. Sampled mean squared H¹(D) error for the fully adaptive refinement. Each setting is shown along its respective overall error estimator as defined in Corollary 5.7 and plotted against the total number of degrees of freedom in the representation. Considered are FE approximations of order p = 1 and p = 3 with slow decay (σ = 2, left) and fast decay (σ = 4, right).

Figure 6. Stochastic expansion length, maximal polynomial chaos degrees, tensor ranks and degrees of freedom for the physical and (compressed) stochastic discretizations as well as for the operator in TT format for selected iteration steps of the fully adaptive algorithm. Finite elements of order p = 1 are used to solve the problem with coefficient decay rates of σ ∈ {2, 4}.

Figure 7. Stochastic expansion length, maximal polynomial chaos degrees, tensor ranks and degrees of freedom for the physical and (compressed) stochastic discretizations as well as for the operator in TT format for selected iteration steps of the fully adaptive algorithm. Finite elements of order p = 3 are used to solve the problem with coefficient decay rates of σ ∈ {2, 4}.