Efficient multivariate approximation on the cube

For the approximation of multivariate non-periodic functions $h$ on the high-dimensional cube $\left[-\frac{1}{2},\frac{1}{2}\right]^{d}$ we combine a periodization strategy for weighted $L_{2}$-integrands with efficient approximation methods. We prove sufficient conditions on $d$-variate torus-to-cube transformations ${\psi:\left[-\frac{1}{2},\frac{1}{2}\right]^{d}\to\left[-\frac{1}{2},\frac{1}{2}\right]^{d}}$ and on the non-negative weight function $\omega$ such that the composition of a possibly non-periodic function with a transformation $\psi$ yields a smooth function in the Sobolev space $H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$. In this framework we adapt certain $L_{\infty}(\mathbb{T}^{d})$- and $L_{2}(\mathbb{T}^{d})$-approximation error estimates for single rank-$1$ lattice approximation methods as well as algorithms for the evaluation and reconstruction of multivariate trigonometric polynomials on the torus to the non-periodic setting. Various numerical tests in up to dimension $d=5$ confirm the obtained theoretical results for the transformed approximation methods.


Introduction
In this paper we discuss a general framework for the approximation of non-periodic multivariate functions $h$ on the $d$-dimensional cube $\left[-\frac{1}{2},\frac{1}{2}\right]^{d}$, in which we combine a particular periodization strategy for weighted $L_{2}$-integrands with approximation methods based on rank-1 lattices. At first we consider univariate transformations $\psi:\left[-\frac{1}{2},\frac{1}{2}\right]\to\left[-\frac{1}{2},\frac{1}{2}\right]$ that are increasing, continuously differentiable and whose first $k\in\mathbb{N}$ derivatives $\psi^{(k)}$ vanish at the boundary points $\pm\frac{1}{2}$. In one dimension the application of such a change of variables $y=\psi(x)$ to any $h\in L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right],\omega\right)$ with a non-negative weight function $\omega$ yields the transformed function
$$f(x) = h(\psi(x))\sqrt{\omega(\psi(x))\,\psi'(x)}. \qquad (1.1)$$
For dimensions $d\geq 2$ a multivariate generalization yields similar $d$-variate functions $f$. We prove that if the non-periodic function $h$ has certain smoothness properties and we assume certain boundary conditions on both the weight function $\omega$ and the transformation $\psi$, then the transformed function $f$ is continuously extendable to the torus $\mathbb{T}^{d}$ and has some guaranteed minimal degree of Sobolev smoothness. This enables us to rewrite the involved objects, algorithms and approximation error bounds by means of the inverse transformation $\psi^{-1}$. The approximation of functions $f\in L_{2}(\mathbb{T}^{d})$ with respect to the Fourier system $\{\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}\}_{\mathbf{k}\in\mathbb{Z}^{d}}$ by a Fourier partial sum $S_{I}f:=\sum_{\mathbf{k}\in I}\hat{f}_{\mathbf{k}}\,\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}$ then translates into the approximation of functions $h\in L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right]^{d},\omega\right)$. The outlined periodization strategy furthermore allows us to apply existing approximation methods for smooth functions defined on the torus $\mathbb{T}^{d}$. We focus on approximation theory concerned with the Wiener algebra $A(\mathbb{T}^{d})$ containing all $L_{1}(\mathbb{T}^{d})$-functions with absolutely summable Fourier coefficients $\hat{f}_{\mathbf{k}}$, $\mathbf{k}=(k_{1},\ldots,k_{d})^{\top}\in\mathbb{Z}^{d}$, see [39,11]. Considering the weight function
$$\omega_{\mathrm{hc}}(\mathbf{k}):=\prod_{j=1}^{d}\max(1,|k_{j}|), \qquad (1.2)$$
we work with subspaces of the Wiener algebra whose norms contain information about the decay rate of the Fourier coefficients $\hat{f}_{\mathbf{k}}$ with respect to the weight function $\omega_{\mathrm{hc}}$.
For the hyperbolic crosses $I_{N}^{d}:=\{\mathbf{k}\in\mathbb{Z}^{d}:\omega_{\mathrm{hc}}(\mathbf{k})\leq N\}\subset\mathbb{Z}^{d}$ with $N\in\mathbb{N}$ and the approximated Fourier partial sums of the form $S_{I}^{\Lambda}f:=\sum_{\mathbf{k}\in I}\hat{f}_{\mathbf{k}}^{\Lambda}\,\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}$ with only approximated Fourier coefficients $\hat{f}_{\mathbf{k}}^{\Lambda}\approx\hat{f}_{\mathbf{k}}$, there are known approximation error bounds when using single rank-1 lattices. It was shown in [21, Theorem 3.3] that the error of approximating a continuous function $f\in A^{\beta}(\mathbb{T}^{d})$ by the approximated Fourier partial sum $S_{I_{N}^{d}}^{\Lambda}f$, measured in the $L_{\infty}(\mathbb{T}^{d})$-norm, is bounded above by $N^{-\beta}\|f\|_{A^{\beta}(\mathbb{T}^{d})}$. The error of approximating a continuous function $f\in H^{\beta}(\mathbb{T}^{d})$ by $S_{I_{N}^{d}}^{\Lambda}f$, measured in the $L_{2}(\mathbb{T}^{d})$-norm, is bounded above by $C_{d,\beta}\,N^{-\beta}(\log N)^{(d-1)/2}\|f\|_{H^{\beta}(\mathbb{T}^{d})}$ with some constant $C_{d,\beta}=C(d,\beta)>0$, as shown in [41, Theorem 2.30]. The approximation of functions in the Hilbert spaces $H^{\beta}(\mathbb{T}^{d})$ was also investigated by V. N. Temlyakov, see e.g. [38,21].
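For orientation, the two bounds just cited read, side by side and in the notation introduced above (with the constants of the cited theorems):

```latex
\|f - S^{\Lambda}_{I_N^d} f\|_{L_\infty(\mathbb{T}^d)}
  \lesssim N^{-\beta}\,\|f\|_{A^\beta(\mathbb{T}^d)},
\qquad
\|f - S^{\Lambda}_{I_N^d} f\|_{L_2(\mathbb{T}^d)}
  \le C_{d,\beta}\, N^{-\beta} (\log N)^{(d-1)/2}\,\|f\|_{H^\beta(\mathbb{T}^d)}.
```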
A major problem is that in general it is hard to calculate the Fourier coefficients $\hat{f}_{\mathbf{k}}$ in order to determine whether they are absolutely or square summable. Instead we utilize certain norm equivalences to get information about the decay rate of the Fourier coefficients $\hat{f}_{\mathbf{k}}$. Given a multi-index $\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{d})^{\top}\in\mathbb{N}_{0}^{d}$ with $\|\boldsymbol{\alpha}\|_{\infty}:=\max(|\alpha_{1}|,\ldots,|\alpha_{d}|)$, we define the Sobolev spaces $H_{\mathrm{mix}}^{m}(\Omega)$ of functions $f\in L_{2}(\Omega)$ with mixed natural smoothness $m\in\mathbb{N}_{0}$, which were discussed in [34,40,42]. As shown in [26, Lemma 2.3], the norms $\|\cdot\|_{H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})}$ and $\|\cdot\|_{H^{\beta}(\mathbb{T}^{d})}$ are equivalent for $\beta=m\in\mathbb{N}$. Furthermore, for all $\beta\geq 0$ and all $\lambda>\frac{1}{2}$ we have the continuous embedding $H^{\beta+\lambda}(\mathbb{T}^{d})\hookrightarrow A^{\beta}(\mathbb{T}^{d})$, as shown in [21, Lemma 2.2]. Hence, for $m\in\mathbb{N}$ we can simply check whether $f$ is an element of a Sobolev space $H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$ in order to determine whether $f$ is in $A^{m}(\mathbb{T}^{d})$ or $H^{m}(\mathbb{T}^{d})$, instead of calculating all its Fourier coefficients $\hat{f}_{\mathbf{k}}$.
However, it is generally rather difficult to verify whether an $f$ obtained by the change of variables (1.1) is in the Sobolev space $H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$ by calculating its norm and checking the various $L_{2}$-integrability conditions. Therefore we provide a set of sufficient $L_{\infty}$-conditions for $f$ being in $H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$. At first we prove these conditions for all possible transformations $\psi$ and weight functions $\omega$. Later on we consider families of parameterized transformations $\psi(\cdot)=\psi(\cdot,\boldsymbol{\eta})$ and families of weight functions $\omega(\cdot)=\omega(\cdot,\boldsymbol{\mu})$ with $\boldsymbol{\eta},\boldsymbol{\mu}\in\mathbb{R}_{+}^{d}$. Then we have parameterized transformed functions $f(\cdot)=f(\cdot,\boldsymbol{\eta},\boldsymbol{\mu})\in L_{2}(\mathbb{T}^{d})$, and both parameters may impact the smoothness of these functions. With the sufficient $L_{\infty}$-smoothness conditions we calculate lower bounds for $\boldsymbol{\eta}$ and $\boldsymbol{\mu}$ such that the smoothness degree $m$ of a function $h\in L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right]^{d},\omega\right)$ is preserved under the transformation. This gives us access to
• some good $L_{2}$- and $L_{\infty}$-approximation results as in [21,3],
• fast algorithms [21,17] based on rank-1 lattice approximation which are suitable for high-dimensional approximation.
Generally, the change of variables is a versatile and powerful tool in numerical analysis. An excellent overview is found in [1, Chapters 16 and 17], which contains many practical aspects of the mapped methods. In recent years such changes of variables were repeatedly used for the numerical integration and approximation of non-periodic functions in Chebyshev spaces [32] as well as in half-periodic cosine spaces and Korobov spaces by means of tent-transformed lattice rules [10,6,13,27]. In particular, for numerical integration certain strategies to periodize integrands have been discussed in [28]. For sampling purposes, besides single and multiple rank-1 lattice rules [21,17], there are sampling methods on sparse grids [14,2,15], randomized least squares sampling approaches [16,24] and also interlaced scrambled polynomial lattice rules [12,8]. An introduction to lattice rules can be found in [31,36,9]. These rules were also used for the approximation of functions on the torus, see [39]. Recently, efficient algorithms based on component-by-component methods [7,5] were presented in order to compute high-dimensional integrals. For the approximation of high-dimensional functions there are efficient algorithms using sampling schemes based on rank-1 lattices [21,17], and furthermore these schemes provide good approximation properties, see also [3,27]. We adapt these algorithms to the non-periodic setting and incorporate the outlined use of transformations. Furthermore, we present numerical examples.
The outline of the paper is as follows: In Section 2 we establish the basic notions from classical Fourier approximation theory on the torus $\mathbb{T}^{d}$, the corresponding function spaces and important convergence properties. We introduce the Sobolev spaces $H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$ of mixed natural smoothness order $m\in\mathbb{N}_{0}$ and the Wiener algebra $A(\mathbb{T}^{d})$ of functions with absolutely summable Fourier coefficients. Furthermore, we discuss certain properties of the subspaces $A^{\beta}(\mathbb{T}^{d})$ and $H^{\beta}(\mathbb{T}^{d})$ of the Wiener algebra; in particular we highlight the norm equivalence of $\|\cdot\|_{H^{m}(\mathbb{T}^{d})}$ and $\|\cdot\|_{H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})}$ for all $m\in\mathbb{N}$, see [26]. Then we define rank-1 lattices as introduced in [23], discuss their importance in the context of Fourier approximation and recall two important approximation error bounds on the torus in Theorems 2.4 and 2.5. In Section 3 we define the notion of a torus-to-cube transformation $\psi:\left[-\frac{1}{2},\frac{1}{2}\right]^{d}\to\left[-\frac{1}{2},\frac{1}{2}\right]^{d}$, provide examples, and derive sufficient $L_{\infty}$-conditions under which transformed functions lie in $H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$. In Section 4 we adapt the rank-1 lattice algorithms to the transformed setting and present numerical examples. With the sufficient $L_{\infty}$-conditions from Section 3 we calculate explicit bounds for $\boldsymbol{\eta}\in\mathbb{R}_{+}^{d}$ that determine the degree of smoothness $m\in\mathbb{N}$ of $h$ that is preserved under composition with the family of transformations $\psi(\cdot,\boldsymbol{\eta})$. Then we use the algorithms of the previous section, compare the decay of the discretized weighted $L_{\infty}$-approximation error given in (4.2), and observe the proposed approximation error decay caused by increasing the parameter $\boldsymbol{\eta}\in\mathbb{R}_{+}^{d}$, in up to dimension $d=5$.

Fourier approximation
At first we introduce weighted $L_{p}$-function spaces and Sobolev spaces of mixed smoothness, recall some definitions of classical Fourier approximation theory and define a space of functions that have absolutely square-summable Fourier coefficients. Finally, we recall the ideas of rank-1 lattices from [37,5,17], the corresponding Fourier approximation methods, and approximation error bounds that were discussed in e.g. [38,21,3].

Preliminaries
Let $\Omega\in\{\mathbb{T}^{d},\left[-\frac{1}{2},\frac{1}{2}\right]^{d}\}$. The space $(C(\Omega),\|\cdot\|_{L_{\infty}(\Omega)})$ denotes the collection of all continuous multivariate functions $f:\Omega\to\mathbb{C}$, and $(C^{k}(\Omega),\|\cdot\|_{L_{\infty}(\Omega)})$ with $k\in\mathbb{N}$ denotes the space of all $k$-times continuously differentiable multivariate functions. We define the weighted function spaces $L_{p}(\Omega,\omega)$ for $1\leq p<\infty$ with the weight function $\omega:\Omega\to[0,\infty)$ via the norm
$$\|f\|_{L_{p}(\Omega,\omega)}:=\left(\int_{\Omega}|f(\mathbf{x})|^{p}\,\omega(\mathbf{x})\,\mathrm{d}\mathbf{x}\right)^{1/p}. \qquad (2.1)$$
For functions $f$ and $g$ in the Hilbert space $L_{2}(\mathbb{T}^{d})$ we have the scalar product $(f,g)_{L_{2}(\mathbb{T}^{d})}:=\int_{\mathbb{T}^{d}}f(\mathbf{x})\,\overline{g(\mathbf{x})}\,\mathrm{d}\mathbf{x}$. For any frequency set $I\subset\mathbb{Z}^{d}$ of finite cardinality $|I|<\infty$ we denote the space of all multivariate trigonometric polynomials supported on $I$ by $\Pi_{I}:=\mathrm{span}\{\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}:\mathbf{k}\in I\}$. The functions $\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\mathbf{x}}=\prod_{j=1}^{d}\mathrm{e}^{2\pi\mathrm{i}k_{j}x_{j}}$ with $\mathbf{k}\in\mathbb{Z}^{d}$ and $\mathbf{x}\in\mathbb{T}^{d}$ are orthogonal with respect to the $L_{2}(\mathbb{T}^{d})$-scalar product. For all $\mathbf{k}\in\mathbb{Z}^{d}$ we denote the Fourier coefficients by $\hat{f}_{\mathbf{k}}:=(f,\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ})_{L_{2}(\mathbb{T}^{d})}$ and the corresponding Fourier partial sum by $S_{I}f:=\sum_{\mathbf{k}\in I}\hat{f}_{\mathbf{k}}\,\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}$. For multi-indices $\boldsymbol{\alpha}\in\mathbb{N}_{0}^{d}$ and the differential operator $D^{\boldsymbol{\alpha}}:=\frac{\partial^{\alpha_{1}+\cdots+\alpha_{d}}}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{d}^{\alpha_{d}}}$ we define the Sobolev spaces of mixed natural smoothness of $L_{2}(\Omega)$-functions with smoothness order $m\in\mathbb{N}_{0}$, see [34,40,42], as
$$H_{\mathrm{mix}}^{m}(\Omega):=\Big\{f\in L_{2}(\Omega):\|f\|_{H_{\mathrm{mix}}^{m}(\Omega)}:=\Big(\sum_{\|\boldsymbol{\alpha}\|_{\infty}\leq m}\|D^{\boldsymbol{\alpha}}f\|_{L_{2}(\Omega)}^{2}\Big)^{1/2}<\infty\Big\}.$$
For $\Omega=\mathbb{T}^{d}$ we recall some notation introduced in [26]. The $H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$-norm is expressible in terms of the Fourier coefficients $\hat{f}_{\mathbf{k}}$, which leads to an equivalent norm. In [26, Lemma 2.3] it is specified that for $m\in\mathbb{N}$ and all $f\in H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$ the norms $\|\cdot\|_{H^{m}(\mathbb{T}^{d})}$ and $\|\cdot\|_{H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})}$ are equivalent. Based on the weight function $\omega_{\mathrm{hc}}(\mathbf{k})$ given in (1.2) we define hyperbolic crosses as $I_{N}^{d}:=\{\mathbf{k}\in\mathbb{Z}^{d}:\omega_{\mathrm{hc}}(\mathbf{k})=\prod_{j=1}^{d}\max(1,|k_{j}|)\leq N\}$ for $N\in\mathbb{N}$ and all $k_{j}\in\mathbb{Z}$. In total, for $m\in\mathbb{N}$ we have the norm equivalences stated in (2.4).
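As a concrete illustration (our own sketch, not part of the paper's toolchain), the hyperbolic cross $I_{N}^{d}$ can be enumerated by brute force over the full grid $[-N,N]^{d}\cap\mathbb{Z}^{d}$, since $\omega_{\mathrm{hc}}(\mathbf{k})>N$ whenever any $|k_{j}|>N$:

```python
import itertools
from math import prod

def omega_hc(k):
    """Hyperbolic-cross weight function: prod_j max(1, |k_j|)."""
    return prod(max(1, abs(kj)) for kj in k)

def hyperbolic_cross(d, N):
    """All frequencies k in Z^d with omega_hc(k) <= N.

    Searching the full grid [-N, N]^d suffices, because the weight
    already exceeds N as soon as a single component exceeds N.
    """
    grid = range(-N, N + 1)
    return [k for k in itertools.product(grid, repeat=d) if omega_hc(k) <= N]
```

The cross contains far fewer frequencies than the full grid's $(2N+1)^{d}$ points, which is the reason it is the frequency set of choice for mixed-smoothness approximation.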

Rank-1 lattices and reconstructing rank-1 lattices
Before discussing the approximation of functions $f\in H^{\beta}(\mathbb{T}^{d})\cap C(\mathbb{T}^{d})$ we recollect some related objects and observations from [37,5,17]. For each frequency set $I\subset\mathbb{Z}^{d}$ there is the difference set $\mathcal{D}(I):=\{\mathbf{k}-\mathbf{l}:\mathbf{k},\mathbf{l}\in I\}$. For a generating vector $\mathbf{z}\in\mathbb{Z}^{d}$ and a lattice size $M\in\mathbb{N}$ we define the rank-1 lattice
$$\Lambda(\mathbf{z},M):=\Big\{\mathbf{x}_{j}:=\frac{j\mathbf{z}}{M}\bmod\boldsymbol{1}:j=0,\ldots,M-1\Big\}, \qquad (2.7)$$
with $\boldsymbol{1}=(1,\ldots,1)^{\top}$. We call $\Lambda(\mathbf{z},M,I)$ a reconstructing rank-1 lattice for $I$ if the condition $\mathbf{k}\cdot\mathbf{z}\not\equiv\mathbf{k}'\cdot\mathbf{z}\pmod M$ for all distinct $\mathbf{k},\mathbf{k}'\in I$ holds. Given a reconstructing rank-1 lattice $\Lambda(\mathbf{z},M,I)$, we have exact integration for all multivariate trigonometric polynomials $g\in\Pi_{\mathcal{D}(I)}$, see [37], so that
$$\int_{\mathbb{T}^{d}}g(\mathbf{x})\,\mathrm{d}\mathbf{x}=\frac{1}{M}\sum_{j=0}^{M-1}g(\mathbf{x}_{j}),\qquad\mathbf{x}_{j}\in\Lambda(\mathbf{z},M,I).$$
In particular, for $f\in\Pi_{I}$ and $\mathbf{k}\in I$ we have $f(\cdot)\,\mathrm{e}^{-2\pi\mathrm{i}\mathbf{k}\cdot\circ}\in\Pi_{\mathcal{D}(I)}$ and
$$\hat{f}_{\mathbf{k}}=\frac{1}{M}\sum_{j=0}^{M-1}f(\mathbf{x}_{j})\,\mathrm{e}^{-2\pi\mathrm{i}\mathbf{k}\cdot\mathbf{x}_{j}},\qquad\mathbf{x}_{j}\in\Lambda(\mathbf{z},M,I). \qquad (2.8)$$
For an arbitrary function $f\in H^{\beta}(\mathbb{T}^{d})\cap C(\mathbb{T}^{d})$ and lattice points $\mathbf{x}_{j}\in\Lambda(\mathbf{z},M,I)$ we lose the aforementioned exactness and get approximated Fourier coefficients $\hat{f}_{\mathbf{k}}^{\Lambda}:=\frac{1}{M}\sum_{j=0}^{M-1}f(\mathbf{x}_{j})\,\mathrm{e}^{-2\pi\mathrm{i}\mathbf{k}\cdot\mathbf{x}_{j}}$, leading to the approximated Fourier partial sum $S_{I}^{\Lambda}f:=\sum_{\mathbf{k}\in I}\hat{f}_{\mathbf{k}}^{\Lambda}\,\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}$.
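A numerical sketch of (2.8) with a hand-picked example (the generating vector and frequency set are ours, chosen small for illustration): for $I=\{-1,0,1\}^{2}$ the lattice with $\mathbf{z}=(1,3)$ and $M=9$ is reconstructing, so the lattice sums return the Fourier coefficients of any $f\in\Pi_{I}$ exactly, while for a general function they only yield the approximations $\hat{f}_{\mathbf{k}}^{\Lambda}$:

```python
import cmath
import itertools

M, z = 9, (1, 3)                       # rank-1 lattice: x_j = (j*z/M) mod 1
I = list(itertools.product((-1, 0, 1), repeat=2))
nodes = [tuple((j * zi / M) % 1.0 for zi in z) for j in range(M)]

# k.z mod M is injective on I, hence Lambda(z, M, I) is reconstructing.
assert len({(k[0]*z[0] + k[1]*z[1]) % M for k in I}) == len(I)

coeffs = {k: 1.0 / (1 + k[0]**2 + k[1]**2) for k in I}   # some fixed f in Pi_I

def f(x):
    """Trigonometric polynomial supported on I."""
    return sum(c * cmath.exp(2j*cmath.pi*(k[0]*x[0] + k[1]*x[1]))
               for k, c in coeffs.items())

def fhat_lattice(k):
    """Lattice-based coefficient: (1/M) sum_j f(x_j) e^{-2 pi i k.x_j}."""
    return sum(f(x) * cmath.exp(-2j*cmath.pi*(k[0]*x[0] + k[1]*x[1]))
               for x in nodes) / M
```

Because the lattice is reconstructing for $I$, `fhat_lattice(k)` matches `coeffs[k]` up to floating-point error for every $\mathbf{k}\in I$.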

Lattice based approximation on the torus
We discuss upper bounds for certain approximation errors $f-S_{I}^{\Lambda}f$. It was proposed in [20,19] to use multiple rank-1 lattices, which are obtained by taking a union of several single rank-1 lattices. This method overcomes the limitations of the single rank-1 lattice approach: for the reconstruction of multivariate trigonometric polynomials supported on an arbitrary frequency set $I$ of finite cardinality $|I|<\infty$ with a single reconstructing rank-1 lattice, the lattice size $M$ is bounded by $|I|\leq M\leq|I|^{2}$ under certain mild assumptions, whereas for multiple rank-1 lattices the upper bound improves to $M\leq C|I|\log|I|$ with high probability, see [20,22]. Remarkably, in both cases the upper bound is independent of the dimension $d$.
Furthermore, there are methods for the case where the support of the Fourier coefficients $\hat{f}_{\mathbf{k}}$ is unknown. We adapt the methods presented in [33] that describe a dimension-incremental construction of a frequency set $I\subset\mathbb{Z}^{d}$ containing only the non-zero or the approximately largest Fourier coefficients $\hat{h}_{\mathbf{k}}$, based on a component-by-component construction of rank-1 lattices. This is done with respect to a specific search space in form of a full integer grid $[-N,N]^{d}\cap\mathbb{Z}^{d}$ with refinement $N\in\mathbb{N}$ and a sparsity parameter that bounds the cardinality of the support. Finally, let us also note that instead of rank-1 lattice points one can use a dimension-incremental support identification technique based on randomly chosen sampling points, which was recently developed in [4]. Even though the transformation method is easily incorporated into both the multiple rank-1 lattice methods and the component-by-component construction method, they will not be discussed any further in this work.
Theorem 2.4. Let a function $f\in A^{\beta}(\mathbb{T}^{d})\cap C(\mathbb{T}^{d})$, a hyperbolic cross $I_{N}^{d}$ with $|I_{N}^{d}|<\infty$ and $N\in\mathbb{N}$, and a reconstructing rank-1 lattice $\Lambda(\mathbf{z},M,I_{N}^{d})$ be given. The approximation of $f$ by the approximated Fourier partial sum $S_{I_{N}^{d}}^{\Lambda}f$ leads to an approximation error that is estimated by
$$\|f-S_{I_{N}^{d}}^{\Lambda}f\|_{L_{\infty}(\mathbb{T}^{d})}\lesssim N^{-\beta}\,\|f\|_{A^{\beta}(\mathbb{T}^{d})}. \qquad (2.9)$$
The approximation of functions in the Hilbert spaces $H^{\beta}(\mathbb{T}^{d})$ was investigated by V. N. Temlyakov, see [38,21]. He showed that for $\beta>1$ there exists a reconstructing rank-1 lattice generated by a vector of Korobov form $\mathbf{z}:=(1,z,z^{2},\ldots,z^{d-1})^{\top}\in\mathbb{Z}^{d}$ such that the $L_{2}$-truncation error satisfies a bound analogous to the one in Theorem 2.5 below. A generalization of this estimate as well as an upper bound for the corresponding aliasing error can be found in [3, Theorem 2], where they are stated in terms of dyadic hyperbolic cross frequency sets and where a component-by-component approach is used to construct the generating vector $\mathbf{z}\in\mathbb{Z}^{d}$, which generally is not of Korobov form anymore. However, every dyadic hyperbolic cross is embedded in a non-dyadic one, see [41, Lemma 2.29]. Thus, the error estimates are easily translated in terms of non-dyadic hyperbolic crosses $I_{N}^{d}$, see [41, Theorem 2.30], and we are particularly interested in the following special case:

Theorem 2.5. Let a function $f\in H^{\beta}(\mathbb{T}^{d})\cap C(\mathbb{T}^{d})$, a hyperbolic cross $I_{N}^{d}$, and a reconstructing rank-1 lattice $\Lambda(\mathbf{z},M,I_{N}^{d})$ be given. Then we have
$$\|f-S_{I_{N}^{d}}^{\Lambda}f\|_{L_{2}(\mathbb{T}^{d})}\leq C_{d,\beta}\,N^{-\beta}(\log N)^{(d-1)/2}\,\|f\|_{H^{\beta}(\mathbb{T}^{d})}$$
with some constant $C_{d,\beta}:=C(d,\beta)>0$.
As highlighted earlier in (2.4), for $\beta=m\in\mathbb{N}$ the norms $\|\cdot\|_{H^{\beta}(\mathbb{T}^{d})}$ and $\|\cdot\|_{H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})}$ are equivalent. Eventually we utilize this norm equivalence in order to apply the above approximation error bounds to functions $f$ in the Sobolev space $H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$ that are characterized by their derivatives.

Torus-to-cube transformation mappings
Changes of variables were discussed for example in [1,35] and were used for high-dimensional integration in e.g. [29,25]. In this chapter we define torus-to-cube transformations $\psi:\left[-\frac{1}{2},\frac{1}{2}\right]^{d}\to\left[-\frac{1}{2},\frac{1}{2}\right]^{d}$ and provide examples that will reappear later in this paper. Furthermore, we discuss special parameterized families of such torus-to-interval transformations, some of which are induced by transformations $\tilde{\psi}$ to $\mathbb{R}$, see [1,35,30]. Afterwards we describe the weighted Hilbert spaces $L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right]^{d},\omega\right)$ on the cube.

Torus-to-cube transformations
We call a mapping $\psi:\left[-\frac{1}{2},\frac{1}{2}\right]\to\left[-\frac{1}{2},\frac{1}{2}\right]$ a torus-to-cube transformation if it is continuously differentiable, increasing and its first derivative $\psi'$ vanishes at the boundary points $x=\pm\frac{1}{2}$ (3.1). The respective inverse transformation is also continuously differentiable and increasing and is denoted by $\psi^{-1}:\left[-\frac{1}{2},\frac{1}{2}\right]\to\left[-\frac{1}{2},\frac{1}{2}\right]$ (3.2). We call the derivative of the inverse transformation the density function of $\psi$, which is a non-negative $L_{1}$-function on the interval $\left[-\frac{1}{2},\frac{1}{2}\right]$ and given by $\varrho(y):=(\psi^{-1})'(y)$. For multivariate transformations we put $\psi(\mathbf{x}):=(\psi_{1}(x_{1}),\ldots,\psi_{d}(x_{d}))^{\top}$ (3.3) with the product density $\varrho(\mathbf{y}):=\prod_{j=1}^{d}\varrho_{j}(y_{j})$ (3.4). Next we describe a particular family of parameterized torus-to-cube transformations as defined in (3.1) that are based on transformations $\tilde{\psi}$ to $\mathbb{R}$, whose definition is recalled from [30]. We call a continuously differentiable, increasing and odd mapping $\tilde{\psi}:\left(-\frac{1}{2},\frac{1}{2}\right)\to\mathbb{R}$ a transformation to $\mathbb{R}$ and define the parameterized family
$$\psi(x,\eta):=\tilde{\psi}^{-1}\big(\eta\,\tilde{\psi}(x)\big),\qquad\eta\in\mathbb{R}_{+}. \qquad (3.5)$$
These transformations form a subset of all torus-to-cube transformations and are in a natural way continuously differentiable and increasing. The respective first derivative and inverse torus-to-cube transformation are given by $\psi'(\cdot,\eta)$ and $\psi^{-1}(\cdot,\eta)=\psi(\cdot,\frac{1}{\eta})$. The corresponding density functions $\varrho(\cdot,\eta)$ as well as the multivariate torus-to-cube transformation $\psi(\cdot,\boldsymbol{\eta})$ and its inverse $\psi^{-1}(\cdot,\boldsymbol{\eta})$ with $\boldsymbol{\eta}\in\mathbb{R}_{+}^{d}$ are simply parameterized versions of (3.1), (3.2) and (3.3) and share the same properties.

Exemplary transformations
In e.g. [1, Section 17.6], [35, Section 7.5] and [30] we find various suggestions for transformations to $\mathbb{R}$. We are particularly interested in the transformation
$$\tilde{\psi}(x)=\log\frac{1+2x}{1-2x} \qquad (3.6)$$
based on the log-function, and the transformation
$$\tilde{\psi}(x)=\mathrm{erf}^{-1}(2x) \qquad (3.7)$$
based on the inverse of the error function
$$\mathrm{erf}(t)=\frac{2}{\sqrt{\pi}}\int_{0}^{t}\mathrm{e}^{-s^{2}}\,\mathrm{d}s. \qquad (3.8)$$
Both (3.6) and (3.7) induce a parameterized torus-to-cube transformation as defined in (3.5). A useful property is the fact that $\psi^{-1}(y,\eta)=\psi(y,\frac{1}{\eta})$. For $x,y\in\left[-\frac{1}{2},\frac{1}{2}\right]$ we have the following torus-to-cube transformations:
• logarithmic transformation:
$$\psi(x,\eta)=\frac{1}{2}\,\frac{(1+2x)^{\eta}-(1-2x)^{\eta}}{(1+2x)^{\eta}+(1-2x)^{\eta}}, \qquad (3.9)$$
and we observe that $\psi^{-1}(y,\eta)=\psi(y,\frac{1}{\eta})$;
• error function transformation:
$$\psi(x,\eta)=\frac{1}{2}\,\mathrm{erf}\big(\eta\,\mathrm{erf}^{-1}(2x)\big), \qquad (3.10)$$
with the error function $\mathrm{erf}$ as given in (3.8) and $\mathrm{erf}^{-1}$ denoting the inverse error function, and we observe that $\psi^{-1}(y,\eta)=\psi(y,\frac{1}{\eta})$.
Additionally, we list an example for a torus-to-cube transformation $\psi:\left[-\frac{1}{2},\frac{1}{2}\right]\to\left[-\frac{1}{2},\frac{1}{2}\right]$ as defined in (3.1) that is not induced by a transformation to $\mathbb{R}$:
• sine transformation:
$$\psi(x)=\frac{1}{2}\sin(\pi x). \qquad (3.11)$$
Later on we compare the highly limited smoothing effect of this particular transformation on a given test function with the logarithmic transformation (3.9), for which we can achieve much more smoothness if the parameter $\eta\in\mathbb{R}_{+}$ is large enough. In Figure 3.1 we compare the transformation mapping, the inverse and their derivatives of the logarithmic transformation (3.9) for $\eta\in\{2,4\}$ with the sine transformation (3.11).
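As an illustration (our own sketch, not the paper's code), one closed form consistent with the construction $\psi(x,\eta)=\tilde{\psi}^{-1}(\eta\,\tilde{\psi}(x))$ for the log-based $\tilde{\psi}(x)=\log\frac{1+2x}{1-2x}$ is implemented below, together with the sine transformation $\psi(x)=\frac{1}{2}\sin(\pi x)$; the code checks the inverse property $\psi^{-1}(\cdot,\eta)=\psi(\cdot,\frac{1}{\eta})$ via a round trip:

```python
import math

def psi_log(x, eta):
    """Parameterized logarithmic torus-to-cube transformation on [-1/2, 1/2].

    Assumed closed form of psitilde^{-1}(eta * psitilde(x)) for
    psitilde(x) = log((1+2x)/(1-2x)); eta = 1 gives the identity map,
    and the inverse is obtained by replacing eta with 1/eta.
    """
    a, b = (1 + 2*x)**eta, (1 - 2*x)**eta
    return 0.5 * (a - b) / (a + b)

def psi_sine(x):
    """Sine transformation psi(x) = sin(pi x)/2; not induced by a map to R."""
    return 0.5 * math.sin(math.pi * x)

def psi_sine_inv(y):
    """Inverse of the sine transformation: arcsin(2y)/pi."""
    return math.asin(2 * y) / math.pi
```

Both maps fix the boundary points $\pm\frac{1}{2}$ and the origin, and are increasing on the whole interval.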

Weighted Hilbert spaces on the cube
We describe the structure of the univariate weighted function spaces $L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right],\omega\right)$ as defined in (2.1). In this section the weight function $\omega:\left[-\frac{1}{2},\frac{1}{2}\right]\to[0,\infty)$ remains unspecified. Later on we may consider families of non-negative parameterized weight functions $\omega(\cdot,\mu)$ with $\mu\in\mathbb{R}_{+}$ for the purpose of controlling the smoothness of functions in $L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right],\omega\right)$ and of the corresponding transformed functions as in (1.1) on the torus $\mathbb{T}$. Families of multivariate parameterized weight functions are defined as the products $\omega(\mathbf{y},\boldsymbol{\mu}):=\prod_{j=1}^{d}\omega_{j}(y_{j},\mu_{j})$ (3.12). For now we simplify the notation of the transformation, the weight function, and all related functions by omitting any parameter and just writing $\psi(\cdot)$, $\omega(\cdot)$, etc.
We remain in the univariate setting. The system $\{\varphi_{k}\}_{k\in\mathbb{Z}}$ of weighted exponential functions
$$\varphi_{k}(y):=\mathrm{e}^{2\pi\mathrm{i}k\,\psi^{-1}(y)}\sqrt{\frac{\varrho(y)}{\omega(y)}} \qquad (3.13)$$
forms an orthogonal system with respect to the weighted scalar product
$$(h_{1},h_{2})_{L_{2}([-\frac{1}{2},\frac{1}{2}],\omega)}:=\int_{-1/2}^{1/2}h_{1}(y)\,\overline{h_{2}(y)}\,\omega(y)\,\mathrm{d}y, \qquad (3.14)$$
and for $k_{1},k_{2}\in\mathbb{Z}$ we have $(\varphi_{k_{1}},\varphi_{k_{2}})_{L_{2}([-\frac{1}{2},\frac{1}{2}],\omega)}=\delta_{k_{1},k_{2}}$. The weighted scalar product (3.14) induces the norm $\|\cdot\|_{L_{2}([-\frac{1}{2},\frac{1}{2}],\omega)}$, and in a natural way we have Fourier coefficients of the form $\hat{h}_{k}:=(h,\varphi_{k})_{L_{2}([-\frac{1}{2},\frac{1}{2}],\omega)}$ (3.15) as well as the respective Fourier partial sum for $I\subset\mathbb{Z}$ given by $S_{I}h:=\sum_{k\in I}\hat{h}_{k}\,\varphi_{k}$. For now we fix a constant weight function $\omega\equiv 1$ and compare some of the orthogonal systems induced by the previously listed exemplary transformations:
• For the logarithmic transformation (3.9) with the inverse transformation $\psi^{-1}(y,\eta)=\psi(y,\frac{1}{\eta})$ and the density function $\varrho(y,\eta)=(\psi^{-1})'(y,\eta)$, the orthogonal system functions $\varphi_{k}$ as in (3.13) are of the form $\varphi_{k}(y,\eta)=\mathrm{e}^{2\pi\mathrm{i}k\,\psi^{-1}(y,\eta)}\sqrt{\varrho(y,\eta)}$. The graphs of the real and imaginary parts of these $\varphi_{k}$ are shown for $\eta=2$ and $k=0,1,2,3$ in Figure 3.2.
• For the sine transformation (3.11) with the inverse transformation $\psi^{-1}(y)=\frac{1}{\pi}\arcsin(2y)$ and the density function $\varrho(y)=\frac{2}{\pi\sqrt{1-4y^{2}}}$, the orthogonal system functions $\varphi_{k}$ as in (3.13) are of the form $\varphi_{k}(y)=\mathrm{e}^{2\mathrm{i}k\arcsin(2y)}\sqrt{\varrho(y)}$. The graphs of the real and imaginary parts of these $\varphi_{k}$ are shown for $k=0,1,2,3$ in Figure 3.3.

Next we derive sufficient conditions such that transformed functions lie in the Sobolev spaces $H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$. These conditions are stated for both univariate and multivariate functions. Afterwards we utilize the norm equivalence of the Sobolev space $H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$ and the subspace $H^{\beta}(\mathbb{T}^{d})$ of the Wiener algebra $A(\mathbb{T}^{d})$ for $m=\beta$ as described in (2.4), and combine it with the embedding $H^{\beta+\lambda}(\mathbb{T}^{d})\hookrightarrow A^{\beta}(\mathbb{T}^{d})$ in (2.5) for all $\lambda>\frac{1}{2}$, in order to discuss high-dimensional approximation problems in which we apply rank-1 lattice based fast Fourier approximation methods. Throughout this section we still omit the parameters $\boldsymbol{\eta},\boldsymbol{\mu}\in\mathbb{R}_{+}^{d}$ in the notation of the torus-to-cube transformations $\psi$ and the weight functions $\omega$.
For now we consider univariate transformed functions $f\in L_{2}(\mathbb{T})$ of the form
$$f(x)=h(\psi(x))\sqrt{\omega(\psi(x))\,\psi'(x)} \qquad (3.17)$$
that are the result of applying a torus-to-cube transformation $y=\psi(x)$ as defined in (3.1) to a function $h\in L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right],\omega\right)$. The $L_{2}(\mathbb{T})$-norm of $f$ matches the $L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right],\omega\right)$-norm of the given function $h$, so that we have the identity $\|f\|_{L_{2}(\mathbb{T})}=\|h\|_{L_{2}([-\frac{1}{2},\frac{1}{2}],\omega)}$. It is generally rather difficult to check if such transformed functions $f$ are smooth and lie in $H^{m}\left(\left[-\frac{1}{2},\frac{1}{2}\right]\right)$ for some fixed $m\in\mathbb{N}_{0}$ by calculating the individual $L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right]\right)$-norms within the Sobolev norm $\|f\|_{H^{m}([-\frac{1}{2},\frac{1}{2}])}$. Therefore we propose a certain set of sufficient conditions such that $f\in H^{m}\left(\left[-\frac{1}{2},\frac{1}{2}\right]\right)$ with $m\in\mathbb{N}_{0}$, which eliminates the necessity to evaluate $L_{2}$-integrals of various derivatives of $f$ by utilizing the product structure of the functions $f$ in (3.17). Furthermore, once we consider particular parameterized families of torus-to-cube transformations $\psi(\cdot,\eta)$ and families of weight functions $\omega(\cdot,\mu)$, these conditions enable us for each smoothness order $m\in\mathbb{N}_{0}$ to explicitly calculate how large the parameters $\eta,\mu\in\mathbb{R}_{+}$ have to be in order to preserve the fixed degree of smoothness $m$ when transforming $h$. For $f$ to have a continuous periodic extension we at least need $f(-\frac{1}{2})=f(\frac{1}{2})$, which, after recalling that $\psi(\pm\frac{1}{2})=\pm\frac{1}{2}$, reads as
$$h(-\tfrac{1}{2})\sqrt{\omega(-\tfrac{1}{2})\,\psi'(-\tfrac{1}{2})}=h(\tfrac{1}{2})\sqrt{\omega(\tfrac{1}{2})\,\psi'(\tfrac{1}{2})}.$$
One approach to achieve this equality is to choose transformations $\psi$ whose first derivative $\psi'$ converges to $0$ at $x=\pm\frac{1}{2}$ fast enough that it is not counteracted by the function $h$ or the weight function $\omega$. Hence, we assume that $\omega(\psi(\cdot))\,\psi'(\cdot)\in C_{0}\left(\left[-\frac{1}{2},\frac{1}{2}\right]\right)$. We focus on this approach, even though there are obviously more ways to achieve the above equality. In higher dimensions we analogously assume the product $\omega(\psi(\cdot))\prod_{j=1}^{d}\psi_{j}'(\cdot)$ to vanish at the boundary. Later on we will repeatedly choose a constant weight function $\omega\equiv 1$ and make use of the logarithmic transformation (3.9) or the error function transformation (3.10) for the purpose of achieving this behavior of the transformed functions $f$ at the boundary points.
While their first derivatives $\psi'(\cdot,\eta)$ are always $0$ at the boundary points, we need to increase the parameter $\eta$ to achieve the same property for higher derivatives.
We proceed by proposing a set of univariate sufficient conditions such that we obtain smooth transformed functions $f\in H^{m}(\mathbb{T})$. For simplified notation we alternate between equivalent expressions for derivatives of the appearing functions, and for improved readability we write explicit arguments within certain norms. We denote the $k$-th derivative of a function $f(x)$ with respect to $x$ by one of the equivalent expressions $f^{(k)}(x)=\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}f(x)$, and for $k=1,2,3$ we sometimes use the notation $f'(x)$, $f''(x)$ and $f'''(x)$. We apply the generalized Leibniz rule for the $n$-th derivative of a product of functions to the Sobolev norm of $f=(h\circ\psi)\cdot\sqrt{(\omega\circ\psi)\,\psi'}$, which leads to a sum of terms of the form
$$\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}\big(h(\psi(x))\big)\cdot\frac{\mathrm{d}^{n-k}}{\mathrm{d}x^{n-k}}\sqrt{\omega(\psi(x))\,\psi'(x)},\qquad k=0,\ldots,n. \qquad (3.20)$$
We leave $h\circ\psi$ in the term corresponding to $k=0$ untouched for now. For $k=1,\ldots,m$ we use the Faà di Bruno formula to write the $k$-th derivative of the composition of the functions $h$ and $\psi$ as
$$\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}h(\psi(x))=\sum_{\ell=1}^{k}h^{(\ell)}(\psi(x))\,B_{k,\ell}\big(\psi'(x),\psi''(x),\ldots,\psi^{(k-\ell+1)}(x)\big), \qquad (3.21)$$
where the well-known Bell polynomials $B_{k,\ell}$ for $k,\ell\in\mathbb{N}_{0}$ are given by
$$B_{k,\ell}(\mathbf{z})=\sum_{\substack{j_{1}+j_{2}+\cdots+j_{k-\ell+1}=\ell\\ j_{1}+2j_{2}+\cdots+(k-\ell+1)j_{k-\ell+1}=k}}\frac{k!}{j_{1}!\cdots j_{k-\ell+1}!}\prod_{i=1}^{k-\ell+1}\Big(\frac{z_{i}}{i!}\Big)^{j_{i}}$$
with $\mathbf{z}=(z_{1},\ldots,z_{k-\ell+1})^{\top}$. By assumption all derivatives of $\psi$ are bounded on the interval $\left[-\frac{1}{2},\frac{1}{2}\right]$; hence, each Bell polynomial $B_{k,\ell}$ in (3.21) is bounded, too. To simplify the notation we write $B_{k,\ell}(\psi(x)):=B_{k,\ell}(\psi'(x),\psi''(x),\ldots,\psi^{(k-\ell+1)}(x))$. We insert (3.21) into (3.20), estimate the resulting sum termwise, and the appearing $L_{2}$-norms are estimated by their respective $L_{\infty}$-norms.
With the boundedness of all appearing Bell polynomials $B_{k,\ell}$ and the assumption that $h$ is $m$-times continuously differentiable, the norm $\|f\|_{H^{m}([-\frac{1}{2},\frac{1}{2}])}$ is finite if all $L_{\infty}$-norms of the first $m$ derivatives of $\sqrt{\omega(\psi(\cdot))\,\psi'(\cdot)}$ exist.
Finally, the assumption that the first $m$ derivatives of $\sqrt{\omega(\psi(\cdot))\,\psi'(\cdot)}$ also vanish at the boundary points implies that the first $m$ derivatives of the transformed function $f$ vanish at the boundary points, too. Hence, $f$ is in $H^{m}(\mathbb{T})$.
Next, we prove the multivariate version of Theorem 3.4. Again, we simplify the notation in the proof. For the first and $\ell$-th partial derivatives of univariate functions with $\ell\in\mathbb{N}$ we keep using the notation $\frac{\partial}{\partial x_{j}}$ and $\frac{\partial^{\ell}}{\partial x_{j}^{\ell}}$ introduced above.

Similar to (3.17) we consider multivariate transformed functions
$$f(\mathbf{x})=h(\psi(\mathbf{x}))\prod_{j=1}^{d}\sqrt{\omega_{j}(\psi_{j}(x_{j}))\,\psi_{j}'(x_{j})}. \qquad (3.23)$$
Again, we derive a set of sufficient $L_{\infty}$-conditions on the multivariate transformation $\psi$ and the product weight $\omega$ that determine when a function $h\in L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right]^{d},\omega\right)$ leads to a transformed function $f\in H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$. Based on the product weight function in the transformed function $f$ in (3.23), the mixed derivatives of $f$ factorize into univariate derivatives. By applying the Leibniz formula as in (3.20) in each variable we obtain, for all $\ell=1,\ldots,d$, an expansion of the $j_{\ell}$-th derivative with respect to $x_{\ell}$, and in total we rewrite the expression in (3.26) accordingly. Next, we apply the Faà di Bruno formula (3.21) to each univariate $j_{\ell}$-th derivative of $h\circ\psi$ in (3.27), so that for $\ell=1,\ldots,d$ the corresponding Bell polynomials appear, with the convention that empty Bell polynomials equal one, i.e., $B_{0,0}\big(\psi_{\ell}'(x_{\ell}),\psi_{\ell}''(x_{\ell}),\ldots\big)=1$.

Approximation of transformed functions
We establish two specific approximation error bounds for functions defined on the cube $\left[-\frac{1}{2},\frac{1}{2}\right]^{d}$. Beforehand, we fix some notation for certain multivariate objects. Based on the definition of a rank-1 lattice $\Lambda(\mathbf{z},M)$ in (2.7) we define a transformed rank-1 lattice as
$$\Lambda_{\psi}(\mathbf{z},M):=\left\{\mathbf{y}_{j}:=\psi(\mathbf{x}_{j}):\mathbf{x}_{j}\in\Lambda(\mathbf{z},M)\right\}. \qquad (3.30)$$
Accordingly, we denote the transformed reconstructing rank-1 lattice by $\Lambda_{\psi}(\mathbf{z},M,I)$.
Besides the weight function $\omega$, also the density of the transformation $\psi$ is of product form as defined in (3.4), i.e., it is the product of univariate densities $\varrho_{j}(y_{j})$, $j=1,\ldots,d$. Hence, based on the functions $\varphi_{k}$ given in (3.13) this product form extends to $\varphi_{\mathbf{k}}(\mathbf{y}):=\prod_{j=1}^{d}\varphi_{k_{j}}(y_{j})$. Similar to (3.14), the multivariate weighted $L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right]^{d},\omega\right)$-scalar product reads as $(h_{1},h_{2})_{L_{2}([-\frac{1}{2},\frac{1}{2}]^{d},\omega)}:=\int_{[-\frac{1}{2},\frac{1}{2}]^{d}}h_{1}(\mathbf{y})\,\overline{h_{2}(\mathbf{y})}\,\omega(\mathbf{y})\,\mathrm{d}\mathbf{y}$, and similar to (3.15) the multivariate Fourier coefficients $\hat{h}_{\mathbf{k}}$ are naturally given with respect to this scalar product as $\hat{h}_{\mathbf{k}}:=(h,\varphi_{\mathbf{k}})_{L_{2}([-\frac{1}{2},\frac{1}{2}]^{d},\omega)}$. Generally, the multivariate approximated Fourier coefficients $\hat{h}_{\mathbf{k}}^{\Lambda}$ only approximate the multivariate Fourier coefficients $\hat{h}_{\mathbf{k}}$. Finally, the multivariate version of the approximated Fourier partial sum is given by $S_{I}^{\Lambda}h:=\sum_{\mathbf{k}\in I}\hat{h}_{\mathbf{k}}^{\Lambda}\,\varphi_{\mathbf{k}}$. Similar to the Hilbert space $H^{\beta}(\mathbb{T}^{d})$ given in (1.4) we define such a space of $L_{2}$-functions on the cube with square-summable Fourier coefficients $\hat{h}_{\mathbf{k}}$ as in (3.32). The existence of the Fourier coefficients $\hat{h}_{\mathbf{k}}$ becomes apparent after applying the well-known Cauchy–Schwarz inequality. Then there is an approximation error estimate of the form stated in Theorem 2.4: with the inverse transformation $\mathbf{x}=\psi^{-1}(\mathbf{y})$, the Fourier coefficients, the approximated Fourier partial sum and the error norms all carry over from the torus to the cube.
• The weight function $\omega$ occurring in the error estimate in Theorem 3.6 is by definition non-negative. If the weight appearing in the estimate is bounded below by a value larger than zero, we can estimate the unweighted $L_{\infty}$-error by the weighted one. Unless we use a weight function $\omega$ that quickly diverges at the boundary points, such an estimate is not possible for the logarithmic transformation (3.9), the error function transformation (3.10), or the sine transformation (3.11). This is due to the property stated in the definition in (3.2) and the observation that the first derivatives $\psi'$ of the aforementioned transformations converge to $0$ at the boundary points.
• In general, if $h$ has no continuous periodization, then the approximation error $h-S_{I_{N}^{d}}h$ measured in an unweighted $L_{\infty}$-norm diverges, as there are singularities at the boundary points of the domain of $h$. Therefore we need the additional weight function to eliminate these singularities.
Conversely, a similar straightforward calculation as in (3.38) reveals an analogous approximation error estimate in the $L_{2}$-case. By assumption the criteria of Theorem 3.5 are fulfilled, and the transformed function $f$ of the form (3.23) is continuously extendable to the torus $\mathbb{T}^{d}$. Thus, we have $f\in H_{\mathrm{mix}}^{m}(\mathbb{T}^{d})$. These $f$ are also in $H^{m}(\mathbb{T}^{d})$ due to the norm equivalence (2.4), and they furthermore have a continuous representative because of the inclusion $H^{m}(\mathbb{T}^{d})\hookrightarrow C(\mathbb{T}^{d})$ as in (2.5). For $f\in H^{m}(\mathbb{T}^{d})\cap C(\mathbb{T}^{d})$ Theorem 2.5 yields the approximation error bound
$$\|f-S_{I_{N}^{d}}^{\Lambda}f\|_{L_{2}(\mathbb{T}^{d})}\leq C_{d,\beta}\,N^{-\beta}(\log N)^{(d-1)/2}\,\|f\|_{H^{\beta}(\mathbb{T}^{d})}$$
with some constant $C_{d,\beta}:=C(d,\beta)>0$. With the inverse transformation $\mathbf{x}=\psi^{-1}(\mathbf{y})$ we have $\hat{f}_{\mathbf{k}}=\hat{h}_{\mathbf{k}}$ as in (3.37), and the norms on the torus translate into the corresponding weighted norms on the cube.
In total, by combining (3.44), (3.43), and (3.37) we estimated for $f\in H^{m}(\mathbb{T}^{d})\cap C(\mathbb{T}^{d})$ the corresponding weighted $L_{2}$-approximation error of $h$.

Remark 3.9. If the weight function $\omega$ is bounded below by a value larger than $0$, we can estimate the unweighted $L_{2}$-error by the weighted one, which turns into an equality if we have a constant weight function $\omega\equiv 1$.

Algorithms
From this chapter on we explicitly denote the parameters $\boldsymbol{\eta},\boldsymbol{\mu}\in\mathbb{R}_{+}^{d}$. Hence, families of multivariate parameterized weight functions are denoted by $\omega(\cdot,\boldsymbol{\mu})$ as in (3.12), and for families of multivariate torus-to-cube transformations we use the notation $\psi(\cdot,\boldsymbol{\eta})$ to represent all possible torus-to-cube transformations in the sense of definition (3.3), not just the parameterized transformations in (3.5). Furthermore, all related functions and objects are now written with a parameter argument.
We adapt the algorithms described in [17, Algorithms 3.1 and 3.2] that are based on one-dimensional fast Fourier transforms (FFTs). They are used for the fast reconstruction of approximated Fourier coefficients $\hat{h}_{\mathbf{k}}^{\Lambda}$ and the evaluation of transformed multivariate trigonometric polynomials, in particular the approximated Fourier partial sum $S_{I}^{\Lambda}h$, both given in (3.35). This is denoted as matrix-vector products of the form $\mathbf{A}\hat{\mathbf{h}}$ and $\mathbf{A}^{*}\mathbf{h}$ for $\mathbf{y}_{j}\in\Lambda_{\psi(\cdot,\boldsymbol{\eta})}(\mathbf{z},M)$, $\hat{\mathbf{h}}:=(\hat{h}_{\mathbf{k}})_{\mathbf{k}\in I}$ and the transformed Fourier matrices $\mathbf{A}\in\mathbb{C}^{M\times|I|}$ and $\mathbf{A}^{*}\in\mathbb{C}^{|I|\times M}$ with entries built from the functions $\varphi_{\mathbf{k}}(\mathbf{y}_{j},\boldsymbol{\eta},\boldsymbol{\mu})$. We incorporate the previously described idea that a function $h\in L_{2}\left(\left[-\frac{1}{2},\frac{1}{2}\right]^{d},\omega(\cdot,\boldsymbol{\mu})\right)$ is transformed into a function $f$ with a certain guaranteed Sobolev smoothness, depending on the particular choices for $\boldsymbol{\eta},\boldsymbol{\mu}\in\mathbb{R}_{+}^{d}$.

Remark 4.1. We identify $\mathbb{T}^{d}$ with different cubes. On one hand, when defining rank-1 lattices $\Lambda(\mathbf{z},M)$ in (2.7) we identify it with $[0,1)^{d}$. On the other hand, in order to apply the transformations $\psi$ we need to consider $\mathbb{T}^{d}\simeq\left[-\frac{1}{2},\frac{1}{2}\right)^{d}$, which we achieve by reassigning all lattice points $\mathbf{x}_{j}\in\Lambda(\mathbf{z},M)$ via $\mathbf{x}_{j}\mapsto\left(\left(\mathbf{x}_{j}+\frac{1}{2}\right)\bmod 1\right)-\frac{1}{2}$ for all $j=0,\ldots,M-1$.

Evaluation of transformed multivariate trigonometric polynomials
Given a frequency set $I\subset\mathbb{Z}^{d}$ of finite cardinality $|I|<\infty$, we consider the multivariate trigonometric polynomial $h\in\Pi_{I,\psi(\cdot,\boldsymbol{\eta})}$ as in (3.33) with Fourier coefficients $\hat{h}_{\mathbf{k}}$. The evaluation of $h$ at the transformed lattice nodes $\mathbf{y}_{j}\in\Lambda_{\psi(\cdot,\boldsymbol{\eta})}(\mathbf{z},M)$ reduces to a one-dimensional problem: all coefficients $\hat{h}_{\mathbf{k}}$ whose frequencies fall into the same residue class $\ell\equiv\mathbf{k}\cdot\mathbf{z}\pmod M$ are aggregated into $\hat{g}_{\ell}$. In total, the evaluation of such a function is realized by simply pre-computing $(\hat{g}_{\ell})_{\ell=0}^{M-1}$ and applying a one-dimensional inverse fast Fourier transform, see Algorithm 4.1.
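The aggregation-plus-iFFT idea can be sketched as follows (an illustrative NumPy version of the principle, not the reference implementation of Algorithm 4.1):

```python
import numpy as np

def lattice_evaluate(coeffs, z, M):
    """Evaluate sum_k hk * exp(2 pi i j (k.z)/M) for j = 0..M-1 via one 1D iFFT.

    coeffs: dict mapping frequency tuples k to Fourier coefficients hk.
    All coefficients sharing the bin l = k.z mod M are aggregated first.
    """
    g = np.zeros(M, dtype=complex)
    for k, hk in coeffs.items():
        g[int(np.dot(k, z)) % M] += hk
    # numpy's ifft includes a 1/M factor, so multiply by M to undo it.
    return M * np.fft.ifft(g)
```

One inverse FFT of length $M$ replaces the naive $\mathcal{O}(M\,|I|\,d)$ evaluation, which is the source of the speedup.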

Reconstruction of transformed multivariate trigonometric polynomials
For the reconstruction of a multivariate trigonometric polynomial $h\in\Pi_{I,\psi(\cdot,\boldsymbol{\eta})}$ as in (3.33) from lattice points $\mathbf{y}_{j}\in\Lambda_{\psi(\cdot,\boldsymbol{\eta})}(\mathbf{z},M,I)$ we utilize the exact integration property (3.34), which yields $\mathbf{A}^{*}\mathbf{A}=M\,\mathbf{I}$ with $\mathbf{I}\in\mathbb{C}^{|I|\times|I|}$ being the identity matrix. For fixed parameters $\boldsymbol{\eta},\boldsymbol{\mu}\in\mathbb{R}_{+}^{d}$ we have input sample points $\mathbf{h}=(h(\mathbf{y}_{j}))_{j=0}^{M-1}$. For the reconstruction of the Fourier coefficients $\hat{h}_{\mathbf{k}}$ we use a single one-dimensional fast Fourier transform. The entries of the resulting vector $(\hat{g}_{\ell})_{\ell=0}^{M-1}$ are renumbered by means of the unique mapping $\mathbf{k}\mapsto\mathbf{k}\cdot\mathbf{z}\bmod M$, see Algorithm 4.2.
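Conversely, the reconstruction step — one forward FFT of the $M$ samples followed by the renumbering $\mathbf{k}\mapsto\mathbf{k}\cdot\mathbf{z}\bmod M$ — can be sketched as (again an illustrative version, not the reference implementation of Algorithm 4.2):

```python
import numpy as np

def lattice_reconstruct(samples, z, M, I):
    """Recover hk for k in I from samples on a reconstructing rank-1 lattice.

    Implements hk = (1/M) sum_j samples[j] * e^{-2 pi i j (k.z)/M}:
    np.fft.fft computes all M such sums at once, indexed by l = k.z mod M.
    """
    g = np.fft.fft(np.asarray(samples, dtype=complex)) / M
    return {k: g[int(np.dot(k, z)) % M] for k in I}
```

Since the lattice is reconstructing for $I$, the mapping $\mathbf{k}\mapsto\mathbf{k}\cdot\mathbf{z}\bmod M$ is injective on $I$ and each coefficient is read off from its own FFT bin.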

Discrete approximation error
In order to use Algorithms 4.1 and 4.2 to illustrate the proposed error bounds of Theorems 3.6 and 3.8, we sample both the test function $h$ and the approximated Fourier partial sum $S_{I}^{\Lambda}h$ in order to discretize, and thus approximate, the weighted $L_{\infty}$-approximation error. In [18, Corollary 1] and [21, Theorem 2.1] it was shown under mild assumptions that for each frequency set $I\subset\mathbb{Z}^{d}$ that induces a reconstructing rank-1 lattice, there is an $M\in\mathbb{N}$ such that $|I|\leq M\leq|I|^{2}$. The upper bound can be improved to $M\leq C|I|\log|I|$ with high probability by using multiple rank-1 lattices, as shown in [20,22]. Furthermore, in (4.1) we already observed that for a reconstructing rank-1 lattice $\Lambda_{\psi(\cdot,\boldsymbol{\eta})}(\mathbf{z},M,I)$ we have $\mathbf{A}^{*}\mathbf{A}=M\,\mathbf{I}$ with $\mathbf{I}\in\mathbb{C}^{|I|\times|I|}$ being the identity matrix. However, $\mathbf{A}\mathbf{A}^{*}\in\mathbb{C}^{M\times M}$ is generally not an identity matrix. Hence, there is a gap between the initially given values $\mathbf{h}$ and the resulting vector $\mathbf{h}_{\mathrm{approx}}$ that we quantify with the relative discrete approximation error
$$\varepsilon_{\mathrm{lattice}}:=\frac{\|\mathbf{h}-\mathbf{h}_{\mathrm{approx}}\|_{\ell_{\infty}}}{\|\mathbf{h}\|_{\ell_{\infty}}}. \qquad (4.2)$$
Thus, we have a discretization of the particular weighted L∞-norm appearing in Theorem 3.6, in particular for hyperbolic cross frequency sets and appropriately chosen parameters η, µ ∈ R^d_+. Hence, the theoretical results predict a certain decay rate of the discretized approximation error for increasing N ∈ N with fixed m ∈ N and suitably chosen parameters η and µ.
It is important to note that the particular discretization (4.2) is sampled exclusively at the rank-1 lattice nodes, so that we do not measure the quality of the approximation at any point outside the rank-1 lattice. This limitation can be overcome by oversampling.
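Combining the two algorithms gives the discrete error directly; the following Python/NumPy sketch computes a relative discrete error of the form (4.2) under the assumption that it is the relative ℓ∞-discrepancy between the given samples and their projection A ĥ with ĥ = (1/M) A* h. All concrete numbers are toy choices.

```python
import numpy as np

def eps_lattice(samples, I, z, M):
    """Relative discrete approximation error in the spirit of (4.2):
    reconstruct coefficients on I from the samples (one FFT), re-evaluate
    the resulting partial sum at the same lattice nodes (one inverse FFT),
    and compare in the maximum norm."""
    ells = (I @ z) % M
    hhat = np.fft.fft(samples)[ells] / M          # (1/M) A* h
    ghat = np.zeros(M, dtype=complex)
    np.add.at(ghat, ells, hhat)
    approx = M * np.fft.ifft(ghat)                # A hhat
    return np.max(np.abs(samples - approx)) / np.max(np.abs(samples))

M, z = 31, np.array([1, 7])
I = np.array([[0, 0], [1, 0], [0, 1], [-1, 2]])
x = (np.arange(M)[:, None] * z[None, :] / M) % 1.0
# samples of a function whose spectrum is NOT contained in I,
# so a nonzero residual is expected (AA* is not the identity)
samples = np.cos(2 * np.pi * (3 * x[:, 0] + 5 * x[:, 1])) + 2.0
err = eps_lattice(samples, I, z, M)   # strictly positive
```

For samples coming from a polynomial supported on I the error vanishes up to rounding, which reproduces the reconstruction property numerically.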
On the other hand, for the L2-approximation error we lack a similar discretization approach. However, by Theorem 3.8 we know that for fixed m ∈ N and suitably chosen parameters η and µ the error ‖h − S_Λ^I h‖_{L2} decays at a guaranteed rate. Hence, we can evaluate the L2-approximation error if we use Algorithm 4.2 to reconstruct the approximated Fourier coefficients f̂_k^Λ and if it is possible to calculate the exact Fourier coefficients f̂_k for all k ∈ I_N^d.
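When the exact coefficients are available, Parseval's identity reduces the L2-error to coefficient sums. The following sketch assumes the exact coefficients are known (or negligible) outside a large reference set; the decay model 1/max(1,|k|)^2 is a toy choice.

```python
import numpy as np

def rel_l2_error(ref, approx):
    """Relative L2(T^d) error of a Fourier partial sum via Parseval.

    ref and approx are dicts mapping frequencies to Fourier coefficients.
    Frequencies present in ref but absent from approx contribute their
    full coefficient (the truncation part); ref is assumed to carry
    essentially all of the L2 mass of f."""
    num = sum(abs(c - approx.get(k, 0.0)) ** 2 for k, c in ref.items())
    den = sum(abs(c) ** 2 for c in ref.values())
    return (num / den) ** 0.5

# toy 1D example: polynomially decaying exact coefficients on a large
# reference set, reproduced exactly on the small set {-8, ..., 8}
ref = {k: 1.0 / max(1, abs(k)) ** 2 for k in range(-50, 51)}
approx = {k: ref[k] for k in range(-8, 9)}
err = rel_l2_error(ref, approx)   # only the truncation tail remains
```

In the paper's setting, approx would hold the coefficients f̂_k^Λ reconstructed by Algorithm 4.2 rather than exact values, so both terms of the Parseval sum are generally nonzero.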

Examples
We always assume a constant weight function ω ≡ 1. In dimensions d ∈ {1, 2, 5} we consider certain choices of test functions h ∈ C^m([−1/2, 1/2]^d) in combination with the logarithmic transformation (3.9) and the sine transformation (3.11). For both transformations we discuss the proposed smoothness conditions (3.24) in Theorem 3.5 and when they are fulfilled. These smoothness conditions lead to ranges of the multivariate parameter η ∈ R^d_+ appearing in the logarithmic transformation (3.9) for which the transformed functions f of the form (3.23) have a guaranteed Sobolev smoothness degree m ∈ N, i.e. f ∈ H^m_mix(T^d). For such functions we have proven L∞-approximation error bounds in Theorem 3.6. Finally we compare the corresponding relative discrete approximation errors ε_lattice given in (4.2) for both the logarithmic transformation (3.9) and the sine transformation (3.11) with various values of η ∈ R^d_+. Throughout this section we repeatedly specify parameter vectors η = (η, …, η) that have the same number in each entry, for which we recall the short notation of just using a single bold number, which appeared earlier in the definition (2.7) of rank-1 lattices Λ(z, M) in the form 1 = (1, …, 1).

Univariate approximation
The univariate test function h is given in (5.1) and shown on the left of Figure 5.1. We choose a constant weight function ω(y, µ) ≡ 1 for all µ ∈ R_+ and the logarithmic transformation ψ(x, η) with x ∈ [−1/2, 1/2] and parameter η ∈ R_+, being in the form of (3.1).
Due to the choice of a constant weight function ω ≡ 1, the test function h lies in the unweighted space L2([−1/2, 1/2]) and is obviously in C^m([−1/2, 1/2]) for any m ∈ N_0. The test function h in (5.1), combined with the constant weight function ω ≡ 1 and the logarithmic transformation given in (3.9), leads to transformed functions f(·, η, 1) =: f(·, η) in the sense of (3.23). We proceed to check conditions (3.19) for a given m ∈ N_0. For a constant weight function ω ≡ 1 these conditions simplify to the task of determining the values η ∈ R_+ for which the derivatives ψ^(j)(·, η) exhibit the required boundary behavior for all j = 0, …, m. We obtain the following: • For m = 0 we already mentioned in (3.9) that the functions ψ'(·, η) are finite for η ≥ 1 but converge to 0 at the boundary points ±1/2 only for η > 1.
Switching to the sine transformation (3.11) leads to a transformed function f as given in (5.4). In Figure 5.2 we showcase that the approximation errors of the sine transformed and the logarithmically transformed functions for η = 2 behave similarly, because both are H^0(T)-functions and are thus not guaranteed to have any upper bound as in (4.3). Increasing the parameter to η = 4 smooths the logarithmically transformed function by one Sobolev smoothness degree, so that f ∈ H^1(T), causing a faster decaying upper bound (4.3) and thus a faster decay of the relative approximation error ε_lattice as in (4.2). Another parameter increase to η = 6 raises the Sobolev smoothness by another degree, so that f ∈ H^2(T), and for η = 8 we have f ∈ H^3(T), resulting in even faster decays of the respective relative approximation errors ε_lattice for large enough N ∈ N. The smoothness conditions are again fulfilled if η > 2m + 1. Due to the now exponential density functions ψ'(·, η) we obtain an overall faster decay of the discretized approximation error. However, as with the logarithmic transformation (3.9), the rate of decay is at first only as fast as the decay obtained with the sine transformation (3.11) and increases rapidly once we increase the parameter values to η ≥ 3.

High-dimensional approximation
Before discussing a particular multivariate approximation setup we again stress that we have the fast Algorithms 4.1 and 4.2, which are based on a single one-dimensional inverse FFT and a single one-dimensional FFT, respectively. We consider the test function h(y) = h(y_1, …, y_d) as given in (5.5). We choose a constant weight function ω(·, µ) ≡ 1 for all µ ∈ R^d_+ and the logarithmic transformation ψ(x, η) = (ψ_j(x_j, η_j))_{j=1}^d with x ∈ [−1/2, 1/2]^d, the parameter η ∈ R^d_+ and its univariate components ψ_j(x_j, η_j) in the form of (5.2).
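Generating the transformed lattice nodes is a componentwise operation; the following Python/NumPy sketch illustrates this for an assumed sine-type transformation ψ(x) = sin(πx)/2, which maps [−1/2, 1/2] onto itself with vanishing derivative at the boundary. This particular formula and all parameters are illustrative assumptions, not taken from (3.11) verbatim.

```python
import numpy as np

def transformed_lattice(psi, z, M):
    """Transformed rank-1 lattice nodes y_j = psi(x_j), where the
    untransformed nodes x_j = j*z/M mod 1 are shifted into the cube
    [-1/2, 1/2)^d and psi acts componentwise."""
    j = np.arange(M)[:, None]
    x = ((j * z / M + 0.5) % 1.0) - 0.5   # nodes in [-1/2, 1/2)^d
    return psi(x)

# assumed sine-type transformation (illustrative, see the hedge above)
psi_sine = lambda x: 0.5 * np.sin(np.pi * x)
Y = transformed_lattice(psi_sine, np.array([1, 7]), 31)
assert Y.shape == (31, 2)
assert np.all(np.abs(Y) <= 0.5)   # nodes stay inside the cube
```

Since ψ acts componentwise, the cost of generating the nodes is O(dM), negligible next to the FFT steps.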
Due to the choice of a constant weight function ω ≡ 1 the test function h again lies in the unweighted space L2([−1/2, 1/2]^d), and the logarithmic transformation leads to transformed functions f(·, η) of the form (5.6). In Figure 5.3 we have a side-by-side comparison of the graphs of these transformed functions f(x, η) for d = 2 with the parameter η ∈ {1, 2, 4, 6}.
We proceed to determine the values η ∈ R^d_+ for which f(·, η) as in (5.6) is an element of H^m_mix(T^d) by investigating conditions (3.24) in Theorem 3.5 for the derivatives of ψ'. First of all, we observe that for η_1, …, η_d > 1 the components ψ_1, …, ψ_d of the transformation ψ(·, η) in (5.2) are transformations as defined in (3.3). As the test function (5.5) is obviously in C^m([−1/2, 1/2]^d) for any m ∈ N_0, we proceed to check conditions (3.24) for a given m ∈ N_0.
For a constant weight function these conditions simplify to the task of determining the values η = (η_1, …, η_d) ∈ R^d_+ for which the componentwise boundary conditions hold. • For m = 0 we already mentioned in (3.9) that the functions ψ'_ℓ(·, η_ℓ) are finite for η_ℓ ≥ 1 but converge to 0 at the boundary points ±1/2 only for η_ℓ > 1. • For m ≥ 1 the conditions are fulfilled if η_ℓ > 2m + 1.
• For values 2m + 1 < η_ℓ < 2m + 3 the (m + 1)-th and all higher derivatives of ψ'_ℓ(·, η_ℓ) are unbounded, and in case of η_ℓ = 2m + 3 they are bounded but not in C^0([−1/2, 1/2]). Hence, these parameter choices are not covered by the conditions in Theorem 3.5.
Finally, for dimensions d = 2 and d = 5 we compare the relative discrete approximation errors ε_lattice as in (4.2) of the sine transformed function in (5.4) and of the logarithmically transformed functions in (5.6) with η = 2, η = 4 and, in case of d = 2, also with η = 6. For this matter we consider hyperbolic crosses I_N^d as defined in (2.3) for N ∈ {8, 9, …, 200} in d = 2 and for N ∈ {8, 9, …, 100} in d = 5. We again emphasize the major advantage of Algorithms 4.1 and 4.2 in having a complexity of just O(M log M + d|I_N^d|) due to being based on a single univariate inverse FFT and a single univariate FFT, respectively. Thus, their computation time remains small considering the fact that we are dealing with up to |I_100^5| = 665,145 frequencies. In Figure 5.4 we showcase that the approximation errors of the sine transformed and the logarithmically transformed functions for η = 2 behave similarly, because both are H^0(T^d)-functions and are thus not guaranteed to have any upper bound as in (4.3). Increasing the parameter to η = 4 smooths the logarithmically transformed function by one Sobolev smoothness degree, so that f ∈ H^1(T^d), causing a faster decaying upper bound (4.3) and thus a faster decay of the relative approximation error ε_lattice as in (4.2). Another parameter increase to η = 6 for d = 2 raises the Sobolev smoothness by another degree, so that f ∈ H^2(T^2), and the relative approximation error ε_lattice decays even faster for large enough N ∈ N.
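The hyperbolic cross frequency sets used above can be generated recursively; the following sketch assumes the common symmetric definition I_N^d = {k ∈ Z^d : ∏_j max(1, |k_j|) ≤ N}, which matches the role of (2.3) but is restated here from memory, not from the paper.

```python
from itertools import product

def hyperbolic_cross(d, N):
    """Symmetric hyperbolic cross {k in Z^d : prod_j max(1,|k_j|) <= N},
    built recursively dimension by dimension: fixing the last component
    k_d leaves the budget N // max(1, |k_d|) for the remaining d-1
    components."""
    if d == 1:
        return [(k,) for k in range(-N, N + 1)]
    out = []
    for kd in range(-N, N + 1):
        budget = N // max(1, abs(kd))
        out += [rest + (kd,) for rest in hyperbolic_cross(d - 1, budget)]
    return out

# sanity check against a brute-force enumeration for d = 2, N = 8
I = hyperbolic_cross(2, 8)
brute = [k for k in product(range(-8, 9), repeat=2)
         if max(1, abs(k[0])) * max(1, abs(k[1])) <= 8]
assert sorted(I) == sorted(brute)
```

The recursion visits only admissible frequencies, so its cost is proportional to |I_N^d|, which is far smaller than the full grid (2N+1)^d in high dimensions.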

Conclusion
In this paper we considered functions h ∈ L2([−1/2, 1/2]^d, ω) and applied a particular periodization strategy such that they are transformed into functions f that are continuously extendable on the torus T^d. The applied multivariate torus-to-cube transformations ψ: [−1/2, 1/2]^d → [−1/2, 1/2]^d