1 Introduction

We study reconstruction of an unknown function from its d-plane Radon transform on the flat torus \({\mathbb {T}}^n = {\mathbb {R}}^n /{\mathbb {Z}}^n\) when \(1\le d \le n-1\). The d-plane Radon transform of a function f on \({\mathbb {T}}^n\) encodes the integrals of f over all periodic d-planes. The usual d-plane Radon transform of compactly supported objects on \({\mathbb {R}}^n\) can be reduced to the periodic d-plane Radon transform, but not vice versa. This was demonstrated for the geodesic X-ray transform in the recent work of Ilmavirta et al. [11]. As general references on Radon transforms, we point to [5, 6, 14, 15].

Reconstruction formulas for integrable functions and a family of regularization strategies considered in this article were derived in [11] for the geodesic X-ray transform (\(d=1\)) on \({\mathbb {T}}^2\). We extend these methods to d-plane Radon transforms in higher dimensions, study new types of reconstruction formulas for distributions, and prove new stability estimates on the Bessel potential spaces. This article considers only the mathematical theory of Radon transforms on \({\mathbb {T}}^n\), whereas numerical algorithms (Torus CT) were implemented in [11, 13].

Injectivity, a reconstruction method and certain stability estimates of the d-plane Radon transform on \({\mathbb {T}}^n\) were proved for distributions by Ilmavirta in [7]. The reconstruction formulas and stability estimates in this article are different from the ones in [7]. The first injectivity result for the geodesic X-ray transform on \({\mathbb {T}}^2\) was obtained by Strichartz in [19], and it was generalized to \({\mathbb {T}}^n\) by Abouelaz and Rouvière in [2] under the assumption that the Fourier transform is in \(\ell ^1({\mathbb {Z}}^n)\). Abouelaz proved uniqueness under the same assumption for the d-plane Radon transform in [1].

The X-ray transform and tensor tomography on \({\mathbb {T}}^n\) have been applied to other integral geometry problems. These examples include the broken ray transform on boxes [7], the geodesic ray transform on Lie groups [8], tensor tomography on periodic slabs [10], and the ray transforms on Minkowski tori [9]. We expect that the d-plane Radon transform on \({\mathbb {T}}^n\) has applications to similar and more general geometric problems as well, but we have not studied this possibility further.

This article is organized as follows. The main results are stated in Sect. 1.1. We recall preliminaries and prove some basic properties in Sect. 2. We prove new inversion formulas in Sect. 3. We prove our stability estimates and theorems on Tikhonov regularization in Sect. 4.

1.1 Results

We describe our results next. Here we only briefly introduce the notation; more details are given in the subsequent sections. One can also find more details in [7, 11]. Let \(n,d \in {\mathbb {Z}}\) be such that \(n \ge 2\) and \(1 \le d \le n-1\). We define the d-plane Radon transform of \(f \in {\mathcal {T}} := C^\infty ({\mathbb {T}}^n)\) as

$$\begin{aligned} R_df(x,A) := \int _{[0,1]^d} f(x+t_1v_1+\cdots +t_dv_d)dt_1\cdots dt_d \end{aligned}$$
(1)

where \(A = \{v_1,\dots ,v_d\}\) is any set of linearly independent integer vectors \(v_i \in {\mathbb {Z}}^n\).
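For intuition, the integral (1) is easy to evaluate on a single character \(e^{2\pi i k \cdot x}\): the transform returns the character itself when every \(v_i \in A\) is orthogonal to k, and it vanishes otherwise. The following minimal sketch checks this numerically (the function `radon_d_plane` and its Riemann-sum discretization are illustrative choices, not from the article; NumPy assumed):

```python
import numpy as np

def radon_d_plane(f, x, A, M=64):
    """Riemann-sum approximation of (1): the average of f over the periodic
    d-plane through x spanned by the integer vectors in A."""
    d = len(A)
    ts = np.stack(np.meshgrid(*[np.arange(M) / M] * d, indexing="ij"), axis=-1)
    ts = ts.reshape(-1, d)                    # all sample points (t_1, ..., t_d)
    pts = x + ts @ np.array(A)                # x + t_1 v_1 + ... + t_d v_d
    return np.mean([f(p) for p in pts])

# Character f(x) = exp(2 pi i k . x): the transform returns f itself when
# every v in A is orthogonal to k, and 0 otherwise.
k = np.array([1, 2])
f = lambda x: np.exp(2j * np.pi * (k @ x))
x0 = np.array([0.3, 0.7])
assert np.isclose(radon_d_plane(f, x0, [(-2, 1)]), f(x0))  # (-2, 1) orthogonal to k
assert np.isclose(radon_d_plane(f, x0, [(1, 0)]), 0)       # (1, 0) not orthogonal to k
```

For characters the uniform Riemann sum is exact as soon as M exceeds every \(\left| k \cdot v_i \right| \) that occurs, so no numerical tolerance issues arise here.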

It can be shown that A spans a periodic d-plane on \({\mathbb {T}}^n\), and on the other hand, any periodic d-plane on \({\mathbb {T}}^n\) has a basis of integer vectors. We can identify all periodic d-planes on \({\mathbb {T}}^n\) with the elements of the Grassmannian \(\mathbf {Gr}(d,n)\), which is the collection of all d-dimensional subspaces of \({\mathbb {Q}}^n\). We redefine the d-plane Radon transform on \({\mathbb {T}}^n\) as \(R_df: \mathbf {Gr}(d,n) \rightarrow C^\infty ({\mathbb {T}}^n)\) without a loss of data. The definition of \(R_d\) extends to the periodic distributions \(f \in {\mathcal {T}}'\) such that \(R_df(\cdot ,A) \in {\mathcal {T}}'\) for any \(A \in \mathbf {Gr}(d,n)\). We use the shorter notations \(R_{d,A}f = R_df(\cdot ,A)\) and \(X_{d,n} = {\mathbb {T}}^n \times \mathbf {Gr}(d,n)\). More details are given in Sect. 2.1.

Let \(w: {\mathbb {Z}}^n \times \mathbf {Gr}(d,n) \rightarrow (0,\infty )\) be a weight function such that \(w(\cdot ,A)\) is at most of polynomial decay (20) for any fixed \(A \in \mathbf {Gr}(d,n)\). Unless stated otherwise, a weight w is always assumed to be of this form. The associated Fourier multipliers on distributions are denoted by \(F_w\). We denote the weighted Bessel potential space on the image side by \(L_s^{p,l}(X_{d,n};w)\) where \(s \in {\mathbb {R}}\) and \(p,l \in [1,\infty ]\). The usual Bessel potential spaces on \({\mathbb {T}}^n\) are denoted by \(L_s^p({\mathbb {T}}^n)\), and \(H^s({\mathbb {T}}^n) = L_s^2({\mathbb {T}}^n)\) is the fractional \(L^2\) Sobolev space. The \(L_s^{p,l}(X_{d,n};w)\) norms are \(\ell ^l\) norms over \(\mathbf {Gr}(d,n)\) of the weighted Bessel potential norms of \(L_s^p({\mathbb {T}}^n;w(\cdot ,A))\) with \(A \in \mathbf {Gr}(d,n)\). More details are given in Sect. 2.2.

We show in Lemma 2.1 that \(L_s^{p,l}(X_{d,n};w)\) are Banach spaces when \(p \in [1,\infty ]\). Many of our results concern the Hilbert spaces with \(p = l = 2\). Most of the theorems in this article would be out of reach for \(R_d\) when \(d < n-1\) if we did not include weights in the data spaces. We construct weights which satisfy the assumptions of our theorems in Sect. 2.3.

Remark 1.1

If \(d =n-1\), then weights are not that important for the analysis of \(R_d\) since \(R_d\) maps \(f \in H^s({\mathbb {T}}^n)\) with \({\hat{f}}(0) = 0\) continuously to the natural image space \(H^s(X_{d,n})\) without any weight. Therefore weights are only required at the origin on the Fourier side of the data space. This was demonstrated for \(n =2\) and \(d=1\) in [11], and it is visible for example in the special case (7) of Theorem 1.3.

Our first theorem concerns the adjoint and the normal operators of \(R_d: H^s({\mathbb {T}}^n) \rightarrow L_s^{2,2}(X_{d,n};w)\). This generalizes [11, Proposition 11] to higher dimensions. Theorem 1.1 and Corollary 1.2 are proved in Sect. 2.4.3.

Theorem 1.1

(Adjoint and normal operators) Let \(s \in {\mathbb {R}}\) and suppose that there exists \(C_w > 0\) such that

$$\begin{aligned} \sum _{A \in \Omega _k} w(k,A)^2 \le C_w^2, \quad \Omega _k := \{\, A \in \mathbf {Gr}(d,n) \,;\, k\bot A\,\} \end{aligned}$$
(2)

for any \(k \in {\mathbb {Z}}^n\). Then the adjoint of \(R_{d}: H^s({\mathbb {T}}^n) \rightarrow L_s^{2,2}(X_{d,n};w)\) is given by

$$\begin{aligned} \widehat{R_d^*g}(k) = \sum _{A \in \Omega _k} w(k,A)^2{\hat{g}}(k,A) \end{aligned}$$
(3)

and the normal operator \(R_d^*R_d: H^s({\mathbb {T}}^n) \rightarrow H^s({\mathbb {T}}^n)\) is the Fourier multiplier associated with \(W_k := \sum _{A \in \Omega _k} w(k,A)^2\). In particular, the mapping \(F_{W_k^{-1}}R_d^*: R_d({\mathcal {T}}') \rightarrow {\mathcal {T}}'\) is the inverse of \(R_d\).
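To make the formulas concrete, the following toy sketch applies the adjoint formula (3) and the multiplier \(W_k^{-1}\) on the Fourier side. The assumptions are illustrative: \(n = 2\), \(d = 1\), a constant weight, and finitely many frequencies, using the fact that for \(k \ne 0\) the set \(\Omega _k\) contains exactly one line (spanned by the primitive integer vector perpendicular to k) and that the Fourier coefficients of \(R_{d,A}f\) agree with those of f on frequencies perpendicular to A.

```python
import numpy as np
from math import gcd

def perp_line(k):
    """The unique element of Omega_k for k != 0, as a primitive vector (n = 2)."""
    g = gcd(abs(k[0]), abs(k[1]))
    return (-k[1] // g, k[0] // g)

rng = np.random.default_rng(0)
freqs = [(a, b) for a in range(-3, 4) for b in range(-3, 4) if (a, b) != (0, 0)]
fhat = {k: complex(rng.standard_normal(), rng.standard_normal()) for k in freqs}

w = lambda k, A: 0.7            # any admissible weight; constant for simplicity

# Data: ghat(k, A) = fhat(k) for A in Omega_k (the Fourier picture of R_d f).
ghat = {(k, perp_line(k)): fhat[k] for k in freqs}

# Adjoint formula (3) followed by the multiplier W_k^{-1} recovers fhat:
recovered = {}
for k in freqs:
    A = perp_line(k)
    W_k = w(k, A) ** 2                      # W_k = sum over Omega_k of w(k,A)^2
    recovered[k] = w(k, A) ** 2 * ghat[(k, A)] / W_k
assert all(np.isclose(recovered[k], fhat[k]) for k in freqs)
```

The sketch is a finite stand-in for the full inversion \(F_{W_k^{-1}}R_d^*\); the names `perp_line` and `ghat` are hypothetical.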

Theorem 1.1 gives a new inversion formula in terms of the adjoint and a Fourier multiplier. Its Corollary 1.2 gives new stability estimates on \(H^s({\mathbb {T}}^n)\). The stability estimates of \(R_1\) on \(H^s({\mathbb {T}}^2)\) were not explicitly written down in [11], but they are implicit in the arguments there. We denote by \(R_d^{*,w}\) the adjoint of \(R_d\) associated with the weight w when the weight needs to be specified.

Corollary 1.2

(Stability estimates) Suppose that the assumptions of Theorem 1.1 hold, and that there exists \(c_w > 0\) such that \(W_k \ge c_w^2\) for any \(k \in {\mathbb {Z}}^n\).

  (i)

    Then \(F_{W_k^{-1}}R_d^*: L_s^{2,2}(X_{d,n};w) \rightarrow H^s({\mathbb {T}}^n)\) is \(1/c_w\)-Lipschitz.

  (ii)

    Let \(f \in {\mathcal {T}}'\). Then

    $$\begin{aligned} \Vert f \Vert _{H^s({\mathbb {T}}^n)} \le \frac{1}{c_w}\Vert R_df \Vert _{L_s^{2,2}(X_{d,n};w)}. \end{aligned}$$
    (4)
  (iii)

    Let \({\tilde{w}}(k,A) = \frac{w(k,A)}{\sqrt{W_k}}\) and \(p \in [1,\infty ]\). Then \(R_d^{*,{\tilde{w}}}R_df = f\) and \(\Vert f \Vert _{L_s^p({\mathbb {T}}^n)} = \Vert R_d^{*,{\tilde{w}}}R_df \Vert _{L_s^p({\mathbb {T}}^n)}\) for any \(f \in {\mathcal {T}}'\).

In order to prove \(L_s^p \lesssim L_s^p\) type stability (iii) for more general weights in terms of the normal operator, one would have to show that \(F_{W_k^{-1}}\) is a bounded \(L^p\) multiplier. Other stability estimates on \(L_s^p({\mathbb {T}}^n)\) are given in terms of \(R_df\) in Proposition 4.3. These stability estimates follow from Corollary 1.2 and the Sobolev inequality on \({\mathbb {T}}^n\). This method requires additional smoothness of \(R_df\) in order to control the norm of f due to the use of the Sobolev inequality.

We have also proved three other new inversion formulas for \(R_d\). Two of them are given in Proposition 3.1 and its Corollary 3.3. Proposition 3.1 generalizes the inversion formula [11, Theorem 1] to higher dimensions. Its Corollary 3.3 extends the formula to all periodic distributions using the structure theorem. We state the third inversion formula here since we find it to be the most interesting one. Theorem 1.3 is proved at the end of Sect. 3.

Theorem 1.3

(Periodic filtered backprojections) Suppose that \(f \in {\mathcal {T}}'\). Let \(w: {\mathbb {Z}}^n \times \mathbf {Gr}(d,n) \rightarrow {\mathbb {R}}\) be a weight so that

$$\begin{aligned} \sum _{A \in \Omega _k} w(k,A) = 1, \quad \Omega _k := \{\, A \in \mathbf {Gr}(d,n) \,;\, k\bot A\,\} \end{aligned}$$
(5)

and the series is absolutely convergent for any \(k \in {\mathbb {Z}}^n\). (The weight does not have to generate a norm or be at most of polynomial decay.) Then

$$\begin{aligned} (f,h) = \sum _{A \in \mathbf {Gr}(d,n)}(F_{w(\cdot ,A)}R_{d,A}f,h), \quad \forall h \in C^\infty ({\mathbb {T}}^n). \end{aligned}$$
(6)

Moreover, if f has zero average and \(d = n-1\), then

$$\begin{aligned} f = \sum _{A \in \mathbf {Gr}(d,n)} R_{d,A}f. \end{aligned}$$
(7)

Remark 1.2

The author is not aware of a similar formula for the inverse Radon transform in earlier literature. We emphasize that this new result implies that a suitable sum of the \((n-1)\)-plane Radon transform data recovers the target function. If \(n=2\), this also holds for the X-ray transform of compactly supported functions on the plane \({\mathbb {R}}^2\). We further remark that it is easy to recover the average of a function and filter it out from \(R_{n-1}f\).
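Formula (7) can be sanity-checked numerically for \(n = 2\), \(d = 1\). A sketch under illustrative assumptions (a zero-average trigonometric polynomial, so only the finitely many lines perpendicular to an active frequency contribute to the sum; NumPy assumed):

```python
import numpy as np
from math import gcd

def radon_line(f, x, v, M=64):
    """Riemann-sum average of f over the periodic line through x spanned by v."""
    ts = np.arange(M) / M
    return np.mean([f(x + t * np.array(v)) for t in ts])

def perp(k):
    """Primitive integer vector perpendicular to k (n = 2)."""
    g = gcd(abs(k[0]), abs(k[1]))
    return (-k[1] // g, k[0] // g)

freqs = [(1, 0), (0, 1), (1, 2), (2, -1)]          # active frequencies
coef = {k: c for k, c in zip(freqs, [1.0, -0.5, 0.25, 0.3])}
f = lambda x: sum(2 * c * np.cos(2 * np.pi * np.dot(k, x))
                  for k, c in coef.items())        # real, zero average

lines = {perp(k) for k in freqs}                   # all contributing directions
x0 = np.array([0.13, 0.41])
fbp = sum(radon_line(f, x0, v) for v in lines)     # the sum in (7)
assert np.isclose(fbp, f(x0))
```

All remaining lines of \(\mathbf {Gr}(1,2)\) contribute zero for this phantom, so the finite sum equals the full series in (7).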

Finally, we state our results on regularization. These results generalize [11, Theorems 2 and 3] to higher dimensions. The proofs are given in Sect. 4. Let \(g \in L_r^{2,l}(X_{d,n};w)\). We consider the Tikhonov minimization problem

$$\begin{aligned} \underset{f \in H^t({\mathbb {T}}^n)}{\arg \min } \left( \Vert R_df-g \Vert _{L_r^{2,l}(X_{d,n};w)}^l + \alpha \Vert f \Vert _{H^s({\mathbb {T}}^n)}^2\right) \end{aligned}$$
(8)

for any \(n \ge 2\), \(1 \le d \le n-1\), \(\alpha > 0\), \(l = 2\), and \(r, s, t\in {\mathbb {R}}\). We do not fix the regularity of f a priori; the space \(H^t({\mathbb {T}}^n)\) is found after solving the minimization problem for general distributions.

Let w be a weight, \(z \in {\mathbb {R}}\), and \(\alpha >0\). We define the operator \(P_{w,z}^\alpha : {\mathcal {T}}' \rightarrow {\mathcal {T}}'\) to be the Fourier multiplier associated with

$$\begin{aligned} p_{w,z}^\alpha (k) := \frac{1}{W_k+\alpha \left\langle k\right\rangle ^{2z}}. \end{aligned}$$
(9)

Theorem 1.4

(Tikhonov minimization problem) Let w be a weight such that \(c_w^2 \le W_k \le C_w^2\) for some uniform constants \(c_w, C_w > 0\). Suppose that \(\alpha >0\), and \(s \ge r\). Then the unique minimizer of the Tikhonov minimization problem (8) with \(g \in L_r^{2,2}(X_{d,n};w)\) is given by \(f = P_{w,s-r}^\alpha R_d^*g \in H^{2s-r}({\mathbb {T}}^n)\).
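On the Fourier side, problem (8) with \(l = 2\) decouples over frequencies, which is how the closed-form minimizer arises. A per-frequency sketch with illustrative constants and random data (NumPy assumed) compares the closed form with the quadratic functional directly:

```python
import numpy as np

# After Parseval, (8) decouples over k into scalar least-squares problems
#   J(z) = sum_A w(k,A)^2 <k>^{2r} |z - ghat(k,A)|^2 + alpha <k>^{2s} |z|^2,
# whose minimizer matches p_{w,s-r}^alpha(k) applied to the adjoint (3).
rng = np.random.default_rng(1)
k2, r, s, alpha = 5.0, 0.0, 1.0, 0.1           # |k|^2 and parameters (s >= r)
bracket = 1.0 + k2                             # <k>^2
ws = rng.uniform(0.5, 1.5, size=4)             # weights w(k,A), A in Omega_k
gs = rng.standard_normal(4)                    # data ghat(k,A)

W = np.sum(ws ** 2)
z_star = np.sum(ws ** 2 * gs) / (W + alpha * bracket ** (s - r))  # closed form

def J(z):
    return (np.sum(ws ** 2 * bracket ** r * np.abs(z - gs) ** 2)
            + alpha * bracket ** s * np.abs(z) ** 2)

# The closed form beats nearby competitors:
assert all(J(z_star) <= J(z_star + h) for h in [-0.1, -0.01, 0.01, 0.1])
```

The names `bracket`, `ws`, and `gs` are hypothetical stand-ins for \(\left\langle k\right\rangle ^2\), the weights on \(\Omega _k\), and the data; the truncation of \(\Omega _k\) to four elements is an illustrative simplification.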

The last theorem we state in the introduction generalizes the result [11, Theorem 3] on regularization strategies to higher dimensions.

Theorem 1.5

(Regularization strategy) Let w be a weight such that \(c_w^2 \le W_k \le C_w^2\) for some uniform constants \(c_w, C_w > 0\). Suppose \(r,t,s,\delta \in {\mathbb {R}}\) are constants such that \(2s+t \ge r\), \(\delta \ge 0\), and \(s > 0\). Let \(g \in L_t^{2,2}(X_{d,n}; w)\) and \(f \in H^{r+\delta }({\mathbb {T}}^n)\).

Then the Tikhonov regularized reconstruction operator \(P_{w,s}^\alpha R_d^*\) is a regularization strategy in the sense that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \sup _{\Vert g \Vert _{L_t^{2,2}(X_{d,n}; w)}\le \epsilon } \Vert P_{w,s}^{\alpha (\epsilon )} R_d^*(R_df+g)-f \Vert _{H^r({\mathbb {T}}^n)} = 0 \end{aligned}$$
(10)

where \(\alpha (\epsilon ) =\sqrt{\epsilon }\) is an admissible choice of the regularization parameter.

Moreover, if \(\Vert g \Vert _{L_t^{2,2}(X_{d,n};w)} \le \epsilon \), \(0< \delta < 2s\), and \(0 < \alpha \le c_w^2(2s/\delta -1)\), we have a quantitative convergence rate

$$\begin{aligned} \begin{aligned}&\Vert P_{w,s}^\alpha R_d^*(R_d f +g) -f \Vert _{H^r({\mathbb {T}}^n)} \\&\le \alpha ^{\delta /2s}c_w^{-\delta /s}C(\delta /2s)\Vert f \Vert _{H^{r+\delta }({\mathbb {T}}^n)} + C_w^3c_w^{-2}\frac{\epsilon }{\alpha } \end{aligned} \end{aligned}$$
(11)

where \(C(x) = x(x^{-1}-1)^{1-x}\).

Remark 1.3

The optimal rate of convergence with respect to \(\epsilon >0\) can be found by choosing the regularization parameter \(\alpha (\epsilon )\) so that the terms on the right hand side of (11) are of the same order.
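In the setting of Theorem 1.5 this balancing can be made explicit up to constants (a sketch ignoring the explicit constants in (11)): the first term is of order \(\alpha ^{\delta /2s}\) and the second of order \(\epsilon /\alpha \), so

$$\begin{aligned} \alpha ^{\delta /2s} \sim \frac{\epsilon }{\alpha } \quad \Longleftrightarrow \quad \alpha ^{1+\delta /2s} \sim \epsilon \quad \Longleftrightarrow \quad \alpha (\epsilon ) \sim \epsilon ^{2s/(2s+\delta )}, \end{aligned}$$

which gives the reconstruction error the order \(\epsilon ^{\delta /(2s+\delta )}\) in \(H^r({\mathbb {T}}^n)\).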

2 Preliminaries

2.1 Periodic Radon Transforms and Grassmannians

We denote by \({\mathcal {T}}\) the set \(C^\infty ({\mathbb {T}}^n)\) and by \({\mathcal {T}}'\) its dual space, i.e. the space of periodic distributions. Denote by \(G_d^n\) the set of unordered d-tuples of linearly independent vectors in \({\mathbb {Z}}^n\setminus \{0\}\). We may write any element \(A \in G_d^n\) as \(A = \{v_1,\dots ,v_d\}\). The elements of \(G_d^n\) span all periodic d-planes on \({\mathbb {T}}^n\).

Suppose that \(f \in {\mathcal {T}}\). We define the d-plane Radon transform of f as

$$\begin{aligned} R_df(x,A) := \int _{[0,1]^d} f(x+t_1v_1+\cdots +t_dv_d)dt_1\cdots dt_d. \end{aligned}$$
(12)

We remark that \(R_d: {\mathcal {T}} \rightarrow {\mathcal {T}}^{G_d^n}\), \(R_df: {\mathbb {T}}^n \times G_d^n \rightarrow {\mathbb {C}}\) and \(R_df(\cdot ,A): {\mathbb {T}}^n \rightarrow {\mathbb {C}}\).

Denote the duality pairing between \({\mathcal {T}}'\) and \({\mathcal {T}}\) by \((\cdot ,\cdot )\). If \(f,g \in {\mathcal {T}}\), it follows easily from Fubini’s theorem that

$$\begin{aligned} (f,R_dg(\cdot ,A)) = (R_df(\cdot ,A),g). \end{aligned}$$
(13)

We define the d-plane Radon transform for any \(f \in {\mathcal {T}}'\) and \(A \in G_d^n\) simply as

$$\begin{aligned} (R_df(\cdot ,A))(g) = (f,R_dg(\cdot ,A)) \quad \forall g \in {\mathcal {T}}. \end{aligned}$$
(14)

This is the unique continuous extension of \(R_{d,A}\) to the periodic distributions. The Fourier series coefficients of \(R_df(\cdot ,A)\) are defined as usual.

We denote the Grassmannian of d-dimensional subspaces of \({\mathbb {Q}}^n\) by \(\mathbf {Gr}(d,n)\). If \(A, B \in G_d^n\) span the same subspace of \({\mathbb {Q}}^n\), then A and B represent the same element in \(\mathbf {Gr}(d,n)\), and \(R_df(\cdot ,A) = R_df(\cdot ,B)\) holds for any \(f \in {\mathcal {T}}'\) by Theorem 2.4. On the other hand, for every \(A \in \mathbf {Gr}(d,n)\) there exists \({\tilde{A}} \in G_d^n\) that spans A. This allows one to define the Radon transform as \(R_df: \mathbf {Gr}(d,n) \rightarrow {\mathcal {T}}'\) without data redundancy by setting \(R_df(\cdot ,A) := R_df(\cdot ,{\tilde{A}})\) where \({\tilde{A}} \in G_d^n\) spans \(A \in \mathbf {Gr}(d,n)\). This connection to the Grassmannians was mentioned earlier in [7] but was not directly used.

Remark 2.1

Let us denote the projective space \({\mathbb {P}}^{n-1} := \mathbf {Gr}(1,n)\). The height of \(P \in {\mathbb {P}}^{n-1}\) is defined by \(H(P) = \gcd (p)^{-1}\left| p \right| _{\ell ^\infty }\) using any representative p of P. The projective space \({\mathbb {P}}^1\) and the height were used in [11] to analyze the number of projection directions required to reconstruct the Fourier series coefficients of a phantom up to a fixed radius. This question reduces to Schanuel’s Theorem [17] in algebraic number theory. This analysis in [11] extends to higher dimensions when \(d = n-1\).

2.2 Bessel Potential Spaces and Data Spaces

Let \(f \in {\mathcal {T}}'\). We mean by the expression \(\sum _{k \in {\mathbb {Z}}^n} \left\langle k\right\rangle ^s {\hat{f}}(k)e^{2\pi i k \cdot x}\) the limit

$$\begin{aligned} {\tilde{f}}(x) := \lim _{r \rightarrow \infty } f_{r,s}(x),\quad f_{r,s}(x):= \sum _{\left| k \right| _{\ell ^\infty ({\mathbb {Z}}^n)} \le r} \left\langle k\right\rangle ^s {\hat{f}}(k)e^{2\pi i k \cdot x}, \end{aligned}$$
(15)

in the sense of distributions. If \(f \in L^p({\mathbb {T}}^n)\) with \(p \in (1,\infty )\), then \(f_{r,0} \rightarrow f\) in \(L^p({\mathbb {T}}^n)\) as \(r\rightarrow \infty \). Moreover, if \(p \in (1,\infty ]\), then \({\tilde{f}} = f\) almost everywhere as the pointwise limit by a higher dimensional Carleson–Hunt theorem. These facts are proved for example in [21, Theorems 4.2 and 4.3]. If \(p=1\), one can utilize the Cesàro sums to reconstruct a distribution in \(L^1({\mathbb {T}}^n)\) from its Fourier series.

For any Sobolev scale \(s \in {\mathbb {R}}\), we define the Bessel potential spaces \(L_s^p({\mathbb {T}}^n) \subset {\mathcal {T}}'\) by the relation \(f \in L_s^p({\mathbb {T}}^n)\) if and only if \((1-\Delta )^{s/2}f \in L^p({\mathbb {T}}^n)\) (see e.g. [3]). We define the Bessel potential norms by

$$\begin{aligned} \Vert f \Vert _{L_s^p({\mathbb {T}}^n)} := \Vert (1-\Delta )^{s/2}f \Vert _{L^p({\mathbb {T}}^n)}. \end{aligned}$$
(16)

Then the space \(L_s^p({\mathbb {T}}^n) \subset {\mathcal {T}}'\) consists of all \(f \in {\mathcal {T}}'\) with \(\Vert f \Vert _{L_s^p({\mathbb {T}}^n)} < \infty \). If \(p \in (1,\infty )\) and \(s \in {\mathbb {R}}\), then

$$\begin{aligned} \begin{aligned} \Vert f \Vert _{L_s^p({\mathbb {T}}^n)}&= \lim _{r\rightarrow \infty }\Vert \sum _{\left| k \right| _{\ell ^\infty ({\mathbb {Z}}^n)}\le r} \left\langle k\right\rangle ^s {\hat{f}}(k)e^{2\pi i k \cdot x} \Vert _{L^p({\mathbb {T}}^n)}, \\ \Vert f \Vert _{H^s({\mathbb {T}}^n)}&= \sqrt{\sum _{k \in {\mathbb {Z}}^n} \left\langle k\right\rangle ^{2s}\left| {\hat{f}}(k) \right| ^2} \end{aligned} \end{aligned}$$
(17)

where \(\left\langle k\right\rangle =(1+\left| k \right| ^2)^{1/2}\) as usual. When \(p \in (1,\infty )\), one has equivalently that \(f \in L_s^p({\mathbb {T}}^n)\) if and only if \(f \in {\mathcal {T}}'\) and the Fourier series of \((1-\Delta )^{s/2}f\) converges in \(L^p({\mathbb {T}}^n)\). Moreover, for any \(p \in (1,\infty ]\) and \(f \in L_s^p({\mathbb {T}}^n)\) it holds that

$$\begin{aligned} \Vert f \Vert _{L_s^p({\mathbb {T}}^n)} = \Vert \lim _{r\rightarrow \infty }\sum _{\left| k \right| _{\ell ^\infty ({\mathbb {Z}}^n)}\le r} \left\langle k\right\rangle ^s {\hat{f}}(k)e^{2\pi i k \cdot x} \Vert _{L^p({\mathbb {T}}^n)} \end{aligned}$$
(18)

where the limit is taken pointwise since the Fourier series converges almost everywhere. If \(p = 2\), then \(H^s({\mathbb {T}}^n) = L_s^2({\mathbb {T}}^n)\) is the fractional \(L^2\) Sobolev space. If \(p \in [1,\infty ]\) and \(s = 0\), then the \(L_0^p({\mathbb {T}}^n)\) and \(L^p({\mathbb {T}}^n)\) norms agree. The Bessel potential spaces are used as domains of \(R_d\) in this work, which extends the studies of the case \(p = 2\) in [7, 11].

If \(\omega : {\mathbb {Z}}^n \rightarrow (0,\infty )\) and \(f \in {\mathcal {T}}'\), then we define the \(\omega \)-weighted norms by

$$\begin{aligned} \Vert f \Vert _{L_s^p({\mathbb {T}}^n; \omega )} := \Vert F_\omega f \Vert _{L_s^p({\mathbb {T}}^n)} \end{aligned}$$
(19)

where \(F_\omega \) is the Fourier multiplier of \(\omega \). We say that a weight \(\omega : {\mathbb {Z}}^n \rightarrow (0,\infty )\) is at most of polynomial decay if there exist \(C, N > 0\) such that

$$\begin{aligned} \omega (k) \ge C\left\langle k\right\rangle ^{-N} \quad \forall k \in {\mathbb {Z}}^n. \end{aligned}$$
(20)

We next define suitable data spaces that contain ranges of \(R_d\) when its domains are restricted to the Bessel potential spaces. Let us denote \(X_{d,n} := {\mathbb {T}}^n \times \mathbf {Gr}(d,n)\) to keep our notation shorter. We generalize the data space given in [11] to all \(n \ge 2\), \(1 \le d\le n-1\), and \(p \in [1,\infty ]\), using the Grassmannians, the Bessel potential spaces and weights.

Let \(1 \le d \le n-1\) and \(w: {\mathbb {Z}}^n \times \mathbf {Gr}(d,n) \rightarrow (0,\infty )\) be a weight function such that \(w(\cdot ,A)\) is at most of polynomial decay for any fixed \(A \in \mathbf {Gr}(d,n)\). We always assume in this work that the weight is at most of polynomial decay. We say that a (generalized) function \(g: X_{d,n} \rightarrow {\mathbb {C}}\) belongs to \(L_{s}^{p,l}(X_{d,n}; w)\) with \(1 \le l < \infty \) if the norm

$$\begin{aligned} \Vert g \Vert _{L_{s}^{p,l}(X_{d,n}; w)}^l := \sum _{A \in \mathbf {Gr}(d,n)} \Vert g(\cdot ,A) \Vert _{L_s^p({\mathbb {T}}^n; w(\cdot ,A))}^l \end{aligned}$$
(21)

is finite and \(g(\cdot ,A) \in {\mathcal {T}}'\) for any fixed \(A \in \mathbf {Gr}(d,n)\). Similarly, if \(l = \infty \), we define

$$\begin{aligned} \Vert g \Vert _{L_{s}^{p,\infty }(X_{d,n}; w)} := \sup _{A \in \mathbf {Gr}(d,n)} \Vert g(\cdot ,A) \Vert _{L_s^p({\mathbb {T}}^n; w(\cdot ,A))}. \end{aligned}$$
(22)

In the above definition, one can replace \(\mathbf {Gr}(d,n)\) by any countable set Y (cf. Lemma 2.1).
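As a concrete reading of (21) for \(p = l = 2\): combining (17), (19) and (21) gives \(\Vert g \Vert ^2 = \sum _{A} \sum _{k} \left\langle k\right\rangle ^{2s} w(k,A)^2 \left| {\hat{g}}(k,A) \right| ^2\). A toy numerical sketch (truncated frequency set and finitely many directions, both illustrative stand-ins for \({\mathbb {Z}}^n\) and \(\mathbf {Gr}(d,n)\); NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
s = 1.0
ks = np.arange(-4, 5, dtype=float)             # truncated frequency set
ghat = rng.standard_normal((3, ks.size))       # |ghat(k, A)| for 3 directions
w = rng.uniform(0.5, 1.5, size=(3, ks.size))   # weights w(k, A)

bracket2 = 1.0 + ks ** 2                       # <k>^2

# Inner weighted H^s norms of g(., A), per direction (cf. (17) and (19)):
per_dir = np.sqrt(np.sum(bracket2 ** s * w ** 2 * ghat ** 2, axis=1))
# l = 2 norm over the directions (cf. (21)):
norm = np.sqrt(np.sum(per_dir ** 2))

# Same number computed in one pass over (k, A):
assert np.isclose(norm, np.sqrt(np.sum(bracket2 ** s * w ** 2 * ghat ** 2)))
```

The names `ghat`, `w`, and `bracket2` are hypothetical; the point is only the order of the two summations.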

If \(p,l = 2\), then the norm is generated by the inner product

$$\begin{aligned} (h,g)_{L_s^{2,2}(X_{d,n};w)} := \sum _{A \in \mathbf {Gr}(d,n)} (F_{w(\cdot ,A)}h,F_{w(\cdot ,A)}g)_{H^s({\mathbb {T}}^n)} \end{aligned}$$
(23)

which makes \(L_s^{2,2}(X_{d,n};w)\) a Hilbert space. We prove that the spaces \(L_s^{p,l}(X_{d,n};w)\) are Banach spaces when \(p \in [1,\infty ]\) in Lemma 2.1. We emphasize that the constants in the polynomial decay bound of a weight need not be uniform with respect to \(A \in \mathbf {Gr}(d,n)\).

There is a connection to the norms used in [11]. Let w be any weight such that \(\sum _{A \in \mathbf {Gr}(1,2)} w(0,A)^2 = 1\), and \(w(k,A) \equiv 1\) if \(k\ne 0\). Now the results in [11] follow from the results of this article using the norm \(L_{s}^{2,2}(X_{1,2}; w)\), since the image-side spaces in [11] are contained in \(L_{s}^{2,2}(X_{1,2}; w)\).

Yet another norm was used for the stability estimates in [7]. In the cases \(d = n-1\) and \(l = \infty \), our analysis of \(R_d\) would not require weights and could be performed similarly to [7, 11]. The analysis of \(R_d|_{L_s^p({\mathbb {T}}^n)}\) has not been done before for \(p \ne 2\). The Bessel potential norms on the domain side are used to better understand the mapping properties of \(R_{d}\).

We state and prove the following lemma for the sake of completeness. We remark that without the decay condition on weights these weighted spaces would not be complete as spaces of distributions (cf. Remark 2.2).

Lemma 2.1

Let Y be a countable set. Let \(w: {\mathbb {Z}}^n \times Y \rightarrow (0,\infty )\) be a weight that is at most of polynomial decay for any fixed \(y \in Y\). Suppose that \(s \in {\mathbb {R}}, p \in [1,\infty ], l \in [1,\infty ]\), and \(n \ge 1\). Then \(L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)\) is a Banach space. In particular, \(L_{s}^{2,2}({\mathbb {T}}^n \times Y; w)\) is a Hilbert space.

Proof

Suppose that \(1\le l < \infty \). (If \(l = \infty \), the proof is similar.) We first show that \(L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)\) is a vector space. Let \(c \in {\mathbb {C}}\) and \(f,g \in L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)\). We have trivially that

$$\begin{aligned} \Vert cf \Vert _{L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)}^l = \left| c \right| ^l\sum _{y \in Y} \Vert f(\cdot ,y) \Vert _{L_s^p({\mathbb {T}}^n;w(\cdot ,y))}^l. \end{aligned}$$
(24)

The Minkowski and triangle inequalities imply

$$\begin{aligned} \begin{aligned} \Vert f+g \Vert _{L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)}&= \left( \sum _{y \in Y} \Vert F_{w(\cdot ,y)}f(\cdot ,y)+F_{w(\cdot ,y)}g(\cdot ,y) \Vert _{L_s^p({\mathbb {T}}^n)}^l\right) ^{1/l} \\&\le \Vert f \Vert _{L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)} + \Vert g \Vert _{L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)}. \end{aligned} \end{aligned}$$
(25)

This shows that \(L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)\) is a vector subspace of all collections of distributions \(\{f(\cdot ,y)\}_{y \in Y}\) with \(f(\cdot ,y) \in {\mathcal {T}}'\).

We show next that \(L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)\) is a complete space. Let \(f_i \in L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)\) be a Cauchy sequence. It follows from the definition of the norm in \(L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)\) that \(f_i(\cdot ,y) \in L_s^p({\mathbb {T}}^n; w(\cdot ,y))\) is a Cauchy sequence for any \(y \in Y\). Assume for the moment that each \(L_s^p({\mathbb {T}}^n; w(\cdot ,y))\) is complete; this is proved below. It follows that \(f_i(\cdot ,y) \rightarrow f_y \in L_s^p({\mathbb {T}}^n; w(\cdot ,y))\) as \(i \rightarrow \infty \). This implies that there exists a limit of \(f_i\) in \(L_{s}^{p,l}({\mathbb {T}}^n \times Y; w)\) by standard arguments.

Let us prove that \(L_s^p({\mathbb {T}}^n; w(\cdot ,y))\) is complete for any \(y \in Y\). Take a Cauchy sequence \(f_i \in L_s^p({\mathbb {T}}^n; w(\cdot ,y))\). Now it follows that the distributions

$$\begin{aligned} g_i = (1-\Delta )^{s/2}F_{w(\cdot ,y)}f_i \end{aligned}$$
(26)

are in \(L^p({\mathbb {T}}^n)\) and form a Cauchy sequence. Therefore \(\lim _{i \rightarrow \infty } g_i =: g\) exists. We claim that the distribution defined on the Fourier side as \({\hat{f}}(k) := \frac{{\hat{g}}(k)}{\left\langle k\right\rangle ^s w(k,y)}\) is the limit of \(f_i\) in \(L_s^p({\mathbb {T}}^n; w(\cdot ,y))\).

We need to show two things: that \(f \in {\mathcal {T}}'\), and that \(\Vert f_i-f \Vert _{L_s^p({\mathbb {T}}^n; w(\cdot ,y))} \rightarrow 0\) as \(i \rightarrow \infty \). We first notice that \((1-\Delta )^{s/2}F_{w(\cdot ,y)}f = g\) belongs to \(L^p({\mathbb {T}}^n)\). We can now calculate that

$$\begin{aligned} \begin{aligned} \Vert f_i-f \Vert _{L_s^p({\mathbb {T}}^n; w(\cdot ,y))}&= \Vert (1-\Delta )^{s/2}F_{w(\cdot ,y)}(f_i-f) \Vert _{L^p({\mathbb {T}}^n)} \\&=\Vert g_i -g \Vert _{L^p({\mathbb {T}}^n)} \end{aligned} \end{aligned}$$
(27)

for any \(i \in {\mathbb {N}}\). Therefore, \(\Vert f_i-f \Vert _{L_s^p({\mathbb {T}}^n; w(\cdot ,y))} \rightarrow 0\) as \(i \rightarrow \infty \).

It remains to prove that \(f \in {\mathcal {T}}'\). By the structure theorem of periodic distributions [18, Chapter 3.2.3], it is enough that the Fourier coefficients of f have polynomial growth. We have \(\left| {\hat{g}}(k) \right| \le C_1 \left\langle k\right\rangle ^\alpha \) for some \(\alpha , C_1 > 0\) since \(g \in L^p({\mathbb {T}}^n) \subset {\mathcal {T}}'\). On the other hand, we assumed that \(w(k,y) \ge C_2\left\langle k\right\rangle ^{-N}\) for some \(C_2, N > 0\). Hence, we obtain that

$$\begin{aligned} \left| {\hat{f}}(k) \right| = \left| \frac{{\hat{g}}(k)}{\left\langle k\right\rangle ^s w(k,y)} \right| \le (C_1/C_2)\left\langle k\right\rangle ^{\alpha +N-s}. \end{aligned}$$
(28)

This shows that \(f \in {\mathcal {T}}'\). \(\square \)

Remark 2.2

The fact that the weights are at most of polynomial decay is used only to show that the limits of Cauchy sequences are in \({\mathcal {T}}'\). One could also allow more rapid decay for the weights, but in that case the objects of the completion would not be distributions but ultra-distributions [18]. In the analysis of \(R_d\) such generality seems unnecessary, and our assumptions avoid it.

2.3 On Constructions of Weights

In this section, we discuss how to construct weights that satisfy the assumptions of our theorems. The weights of this paper are of the form \(w: {\mathbb {Z}}^n \times \mathbf {Gr}(d,n) \rightarrow (0,\infty )\) with the following properties.

  (i)

    For any \(A \in \mathbf {Gr}(d,n)\) there exist \(C, N > 0\) such that \(w(k,A) \ge C\left\langle k\right\rangle ^{-N}\) for every \(k \in {\mathbb {Z}}^n\).

  (ii)

    There exists \(C > 0\) such that \(W_k \le C\) for every \(k \in {\mathbb {Z}}^n\) where \(W_k = \sum _{A \in \Omega _k} w(k,A)^2\) and \(\Omega _k = \{\, A \in \mathbf {Gr}(d,n) \,;\, k \bot A\,\}\).

  (iii)

    There exists \(c > 0\) such that \(c \le W_k\) for every \(k \in {\mathbb {Z}}^n\).

The property (i) is assumed for any weight in this article to guarantee that \(L_s^{p,l}(X_{d,n};w)\) are Banach spaces. The property (ii) is assumed for most of the weights to guarantee that \(R_d: L_s^p({\mathbb {T}}^n) \rightarrow L_s^{p,l}(X_{d,n};w)\) is continuous (with some restrictions if \(p,l \ne 2\)). The property (iii) is additionally assumed to prove the stability estimates and the theorems on regularization.

First of all, it is very easy to construct weights that satisfy (i) alone, and it is not hard to construct weights that satisfy both (i) and (ii). Since the set \(\mathbf {Gr}(d,n)\) is countable, we may fix an enumeration \(\varphi : \mathbf {Gr}(d,n) \rightarrow {\mathbb {N}}\). For example, consider the weight \(w(k,A) = 2^{-\varphi (A)}\left\langle k\right\rangle ^{-N}\) with \(N > 0\) large enough that \(\sum _{k \in {\mathbb {Z}}^n} \left\langle k\right\rangle ^{-2N} < \infty \). Then \(\sum _{A\in \mathbf {Gr}(d,n)}\sum _{k \in {\mathbb {Z}}^n} w(k,A)^2 < C\) for some \(C > 0\). This shows that both conditions (i) and (ii) hold.

We give next a nontrivial example of a weight satisfying (ii) and (iii) but not (necessarily) (i). Let \(\varphi _k: \Omega _k \rightarrow {\mathbb {N}}\) be an enumeration. Let \(Q := \{\,(k,A) \in {\mathbb {Z}}^n \times \mathbf {Gr}(d,n)\,;\, A \in \Omega _k\,\}\). For any \((k,A) \in Q\), we define the weight \(w(k,A) := \frac{h(k)}{\varphi _k(A)^{1/2 +\epsilon }}\) with some mapping \(h: {\mathbb {Z}}^n \rightarrow (a,b)\) with \(0<a\le b <\infty \) and \(\epsilon > 0\). If \((k,A) \notin Q\), we set \(w(k,A) = 1\). One has that \(\left| \Omega _k \right| = \infty \) if \(1\le d < n-1\) or \(k = 0\), and \(\left| \Omega _k \right| = 1\) if \(d = n-1\) and \(k \ne 0\). Now

$$\begin{aligned} \sum _{A\in \Omega _k} w(k,A)^2 = h^2(k)\sum _{i=1}^{\left| \Omega _k \right| } i^{-1-2\epsilon }. \end{aligned}$$
(29)

Hence, we get that \(a^2 \le W_k \le Cb^2\) where \(C= \sum _{i=1}^\infty i^{-1-2\epsilon }\).

The problem gets more difficult if all three conditions must be satisfied at the same time. We now solve this problem by combining ideas from both constructions above. We make a proposition about a concrete example, and more general methods are summarized in Remarks 2.3 and 2.4.

Proposition 2.2

Let \(\varphi _k: \Omega _k \rightarrow {\mathbb {N}}\) be an enumeration for any \(k \in {\mathbb {Z}}^n\), and let \(\varphi : \mathbf {Gr}(d,n) \rightarrow {\mathbb {N}}\) be an enumeration. Let \(h: {\mathbb {Z}}^n \rightarrow (a,b)\) with \(0<a\le b < \infty \) and \(g(k) = \left\langle k\right\rangle ^{-N}\) for some \(N \ge 0\). Then the weight

$$\begin{aligned} w(k,A) := {\left\{ \begin{array}{ll} \frac{h(k)}{\varphi _k(A)} + \frac{g(k)}{\varphi (A)} \quad &{} (k,A) \in Q \\ 1 \quad &{} (k,A) \in Q^c\end{array}\right. } \end{aligned}$$
(30)

satisfies the properties (i), (ii) and (iii).

Proof

Using the definition (30) and the positivity of the involved functions, we have that

$$\begin{aligned} W_k \ge h^2(k)\sum _{A \in \Omega _k} \varphi _k(A)^{-2} = h^2(k)\sum _{i=1}^{\left| \Omega _k \right| } i^{-2} \ge a^2. \end{aligned}$$
(31)

This shows (iii).

Suppose that \((k,A) \in Q\). We use

$$\begin{aligned} \frac{1}{2}w(k,A)^2 \le \frac{h^2(k)}{\varphi _k(A)^{2}}+\frac{g^2(k)}{\varphi (A)^{2}} \end{aligned}$$
(32)

to estimate \(W_k\) from above. The formula (32) gives

$$\begin{aligned} \frac{1}{2}W_k \le \sum _{A \in \Omega _k}\left( \frac{h^2(k)}{\varphi _k(A)^{2}}+\frac{g^2(k)}{\varphi (A)^{2}}\right) \le h^2(k)\sum _{i=1}^{\left| \Omega _k \right| } i^{-2} + \left\langle k\right\rangle ^{-2N}\sum _{i=1}^{\left| \Omega _k \right| } i^{-2}. \end{aligned}$$
(33)

Since \(\left\langle k\right\rangle ^{-2N} \le 1\) and \(h(k) \le b\) for any \(k \in {\mathbb {Z}}^n\), we obtain that \(W_k \le 2C(1+b^2)\) where \(C = \sum _{i=1}^{\infty } i^{-2} < \infty \). This shows (ii).

Using the definition (30) and the positivity of the involved functions, we can directly estimate that

$$\begin{aligned} \left| w(k,A) \right| \ge \min \{1,\frac{1}{\varphi (A)}\left\langle k\right\rangle ^{-N}\} =\frac{1}{\varphi (A)}\left\langle k\right\rangle ^{-N}. \end{aligned}$$
(34)

This shows that \(w(\cdot ,A)\) is at most of polynomial decay, i.e. property (i) holds. \(\square \)

Remark 2.3

Proposition 2.2 generalizes to \(w(k,A)|_Q = h(k)\psi (k,A) + g(k)\omega (A)\) under the conditions that h(k) is bounded from above and below, g(k) is at most of polynomial decay and bounded above, the sums of \(\omega (A)^2\) over \(\Omega _k\) are uniformly bounded from above, and the sums of \(\psi (k,A)^2\) over \(\Omega _k\) are uniformly bounded from below and above.

Remark 2.4

If a weight w satisfies the conditions (i) and (ii), then it can be normalized as \({\tilde{w}}(k,A) := \frac{w(k,A)}{\sqrt{W_k}}\). The normalized weight \({\tilde{w}}\) has the property that \({\tilde{W}}_k = 1\) for any \(k \in {\mathbb {Z}}^n\). Moreover, since \(w(k,A)\) is at most of polynomial decay and \(\sqrt{W_k} \le C\) for some \(C>0\), it follows that \({\tilde{w}}\) is at most of polynomial decay.
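The normalization of Remark 2.4 can be illustrated with a small discrete model. In the sketch below (the finite index sets and weight values are hypothetical stand-ins for \(\Omega _k\) and \(w(k,\cdot )\)), dividing by \(\sqrt{W_k}\) produces a weight with \({\tilde{W}}_k = 1\) for every k:

```python
import math

# Hypothetical finite stand-ins for the sets Omega_k and the values w(k, .).
weights = {
    0: [1.0, 0.5, 0.25],
    1: [0.3, 0.3],
    2: [2.0],
}

def W(ws):
    """W_k = sum of the squared weights over Omega_k."""
    return sum(w * w for w in ws)

# Normalized weight w~(k, A) = w(k, A) / sqrt(W_k), as in Remark 2.4.
normalized = {k: [w / math.sqrt(W(ws)) for w in ws] for k, ws in weights.items()}

for ws in normalized.values():
    assert abs(W(ws) - 1.0) < 1e-12   # W~_k = 1 for every k
```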

We can construct weights that satisfy the assumptions of Theorem 1.3 by defining \(w(k,A) = 2^{-\varphi _k(A)}\) for any \((k,A) \in Q\) and \(w(k,A) = 1\) if \((k,A) \notin Q\). If \(d < n-1\), then \(\sum _{A \in \Omega _k} w(k,A) = 1\) for any \(k \in {\mathbb {Z}}^n\), and each of the series \(\sum _{A \in \Omega _k} w(k,A)\) converges absolutely.

2.4 Basic Properties of Periodic Radon Transforms

In this section, we state and prove some basic properties of \(R_d\). Some of these properties were used earlier in the special cases in [7, 11]. We have chosen to include most of the proofs here for completeness.

2.4.1 Periodic Radon Transforms for Integrable Functions

Let \(T = (t_1,\dots ,t_d) \in {\mathbb {R}}^d\) and \(A = \{v_1,\dots ,v_d\} \in G_d^n\). We can define \(R_df(\cdot ,A)\) for \(L^1({\mathbb {T}}^n)\) functions simply as

$$\begin{aligned} R_{d,A}f(x) := \int _{[0,1]^d} f(x+t_1v_1+\cdots +t_dv_d)dt_1\cdots dt_d \end{aligned}$$
(35)

where the formula is defined for a.e. \(x \in {\mathbb {T}}^n\). We lighten our notation by denoting the corresponding linear combinations by \(T\cdot A = t_1v_1+\cdots +t_dv_d\) with respect to the enumeration of A. The following basic properties are valid.

Lemma 2.3

Suppose that \(f \in L^1({\mathbb {T}}^n)\) and \(A \in G_d^n\). Then \(R_{d,A}f\) can be defined by the formula (35) for a.e. \(x \in {\mathbb {T}}^n\). Moreover,

  1. (i)

    this definition coincides with the distributional definition: for every \(f \in L^1({\mathbb {T}}^n)\) and \(g \in L^\infty ({\mathbb {T}}^n)\) it holds that \((R_{d,A} f,g) = (f,R_{d,A} g)\);

  2. (ii)

    \(R_{d,A}: L^p({\mathbb {T}}^n) \rightarrow L^p({\mathbb {T}}^n)\) is 1-Lipschitz for any \(p \in [1,\infty ]\).

  3. (iii)

    Suppose that \(f \in {\mathcal {T}}'\), \(A \in G_d^n\) and \(R_df(\cdot ,A) \in L^1({\mathbb {T}}^n)\). Then \(R_{d,A}f(x+S\cdot A) = R_{d,A}f(x)\) for a.e. \(x\in {\mathbb {T}}^n\) and every \(S \in {\mathbb {R}}^d\).

We postpone the proof of Lemma 2.3 for a moment. We remark that Lemma 2.3 is a simple generalization of [11, Lemma 7], which was stated in [11] without a proof. We first need to introduce some useful notation.

Let \(q = n-d\) and V be the linear subspace of \({\mathbb {R}}^n\) spanned by A. Now there exist distinct unit vectors \(e_{1_A},\dots ,e_{q_A} \in {\mathbb {R}}^n\) along the positive coordinate axes, \(\{e_1,\dots ,e_n\}\), such that \(e_{i_A} \notin V\) and \(E_A := \{v_1,\dots ,v_d,e_{1_A},\dots ,e_{q_A}\}\) spans \({\mathbb {R}}^n\). We define \(\varphi _A: [0,1]^n \rightarrow {\mathbb {R}}^n\) by the formula

$$\begin{aligned} \varphi _A(t_1,\dots ,t_q,s_1,\dots ,s_d) = t_1e_{1_A}+\cdots +t_q e_{q_A} + s_1v_1+\cdots + s_dv_d. \end{aligned}$$
(36)

We may write \(T = (t_1,\dots ,t_q)\), \(S = (s_1,\dots ,s_d)\) and \(dx = dSdT = dTdS\) to shorten notation.

Remark 2.5

These coordinates are not unique, but we suppose that we have fixed some \(e_{1_A},\dots ,e_{q_A}\) for every \(A \in G_d^n\). The specific choice is not important in our method.

Next we discuss some elementary properties of the coordinates \(\varphi _A\). The image of \(\varphi _A\) is an n-parallelepiped when interpreted in \({\mathbb {R}}^n\). A simple calculation shows that \(\left| \det (D\varphi _A) \right| = \left| \det (v_1,\dots ,v_d,e_{1_A},\dots ,e_{q_A}) \right| \in {\mathbb {Z}}_+\), which is also equal to the volume of the n-parallelepiped spanned by \(E_A\). The corners of the parallelepiped, \(\varphi _A(T,S)\) with \(T \in \{0,1\}^q, S \in \{0,1\}^d\), have integer coordinates as well. It can be argued that the coordinates (36) wrap around the torus \(\left| \det (D\varphi _A) \right| \) times when projected into \({\mathbb {T}}^n\), i.e. \(\left| \det (D\varphi _A) \right| = \left| \varphi _A^{-1}(x) \right| \) for any \(x \in {\mathbb {T}}^n\).

Let us denote the Lebesgue measure on \({\mathbb {T}}^n\) by dm and on \([0,1]^n\) by dx. We thus have the change of coordinates formula for integrals of measurable functions in the form of

$$\begin{aligned} \begin{aligned} \int _{{\mathbb {T}}^n} fdm&= \frac{1}{\left| \det (D\varphi _A) \right| }\int _{[0,1]^n} f \circ \varphi _A \left| \det (D\varphi _A) \right| dx \\&= \int _{[0,1]^n} f \circ \varphi _A dx. \end{aligned} \end{aligned}$$
(37)

The formula (37), in a slightly different form, was used in the proofs given in [11]. The connection to [11] is explained with more details in Remark 2.6.

Remark 2.6

Let \(n = 2, d = 1\), \(v =(v^1,v^2) \in {\mathbb {Z}}^2 \setminus \{0\}\) and \(A = \{v\}\). Suppose that v is not parallel to \(e_1\), which in turn implies that \(v^2 \ne 0\). If we choose \(E_A = \{e_1\}\), then the formula \(\left| \det (D\varphi _A) \right| = \left| v^2 \right| \) holds and it is easy to check that the coordinates wrap \(\left| v^2 \right| \) times around \({\mathbb {T}}^2\). If v is parallel to \(e_1\), then one chooses \(E_A = \{e_2\}\) instead of \(\{e_1\}\). This is in line with the formulas derived in [11], but there the coordinates were scaled so that they wrap around \({\mathbb {T}}^2\) exactly once.
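In the situation of Remark 2.6 the identity \(\left| \det (D\varphi _A) \right| = \left| v^2 \right| \) can be checked directly, since the Jacobian of \(\varphi _A(t,s) = te_1 + sv\) has columns \(e_1\) and v. A minimal sketch (the sample direction vectors are arbitrary):

```python
# |det(D(phi_A))| = |v^2| for n = 2, d = 1, A = {v} with v^2 != 0 and E_A = {e1}:
# the Jacobian of phi_A(t, s) = t*e1 + s*v has columns e1 and v.
def det2(u, v):
    """Determinant of the 2x2 matrix with columns u and v."""
    return u[0] * v[1] - u[1] * v[0]

e1 = (1, 0)
for v in ((1, 1), (2, 3), (1, -2)):   # sample directions with v^2 != 0
    assert abs(det2(e1, v)) == abs(v[1])
```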

Now we are ready to prove Lemma 2.3.

Proof of Lemma 2.3

The properties (i) and (iii) follow easily from the definitions, and the proofs are thus omitted.

We show first that the mapping \(R_{d,A}\) is well defined by the formula (35). Let \({\tilde{0}} = (0,\dots ,0) \in {\mathbb {R}}^d\). We get from Fubini’s theorem and the formula (37) that

$$\begin{aligned} \int _{{\mathbb {T}}^n} f dm = \int _{[0,1]^q} R_{d,A}f(\varphi _A(T,{\tilde{0}})) dT \end{aligned}$$
(38)

and \(R_{d,A}f(\varphi _A(T,{\tilde{0}})) \in L^1([0,1]^q)\). It follows from the definition (35) of \(R_{d,A}f\) that

$$\begin{aligned} R_{d,A}f(\varphi _A(T,{\tilde{0}})) = R_{d,A}f(\varphi _A(T,S)) \end{aligned}$$
(39)

for all \(S \in {\mathbb {R}}^d\).

We show that \(R_{d,A}f\) is a measurable function. Suppose for simplicity that f is real valued. Let \(\alpha > 0\) and define the sets

$$\begin{aligned} X_\alpha = \{\,T \in [0,1]^q \,;\, R_{d,A}f(\varphi _A(T,{\tilde{0}})) > \alpha \,\}. \end{aligned}$$
(40)

We have already proved that the set \(X_\alpha \) is measurable for any \(\alpha >0\). Now we get from the formula (39) that

$$\begin{aligned} \{\,p \in [0,1]^n \,;\, R_{d,A}f(\varphi _A(p)) > \alpha \,\} = X_\alpha \times [0,1]^d. \end{aligned}$$
(41)

The set \(X_\alpha \times [0,1]^d\) is measurable as a product of measurable sets. Since \(\varphi _A\) is a smooth change of coordinates, we first find that \(\varphi _A(X_\alpha \times [0,1]^d)\) is measurable, and thus \(R_{d,A}f\) is measurable. If f is complex valued, then the above argument can be done separately for the real and imaginary parts as \(R_{d,A}\) is linear.

Now we are ready to prove the property (ii). Suppose that \(f \in L^p({\mathbb {T}}^n)\) and \(p \in [1,\infty )\). The formulas (37) and (39), and Hölder’s inequality give

$$\begin{aligned} \begin{aligned} \int _{{\mathbb {T}}^n} \left| R_{d,A}f \right| ^p dm&= \int _{[0,1]^{q}}\int _{[0,1]^{d}} \left| R_{d,A}f \circ \varphi _A \right| ^{p} dSdT \\&= \int _{[0,1]^{q}} \left| (R_{d,A}f)(\varphi _A(T,{\tilde{0}})) \right| ^{p} dT \\&\le \int _{[0,1]^{q}} (R_{d,A}\left| f \right| ^p)(\varphi _A(T,{\tilde{0}}))dT \\&=\Vert f \Vert _{L^{p}({\mathbb {T}}^{n})}^{p} < \infty . \end{aligned} \end{aligned}$$
(42)

Hence Tonelli’s theorem implies that \(R_{d,A}f \in L^p({\mathbb {T}}^n)\). If \(p = \infty \), then trivially \(\Vert R_{d,A}f \Vert _{L^\infty ({\mathbb {T}}^n)} \le \Vert f \Vert _{L^\infty ({\mathbb {T}}^n)}\). \(\square \)

2.4.2 Mapping Properties of Periodic Radon Transforms

We first recall the inversion formula in [7]. If one writes the formula [7, Eq. (2)] in terms of the periodic subspaces, it gives the following theorem.

Theorem 2.4

(Eq. (2) in [7]) Let \(f \in {\mathcal {T}}'\), \(k \in {\mathbb {Z}}^n\) and \(A \in \mathbf {Gr}(d,n)\). Then \(\widehat{R_df}(k,A) = {\hat{f}}(k) \delta _{k\bot A}\), where

$$\begin{aligned} \delta _{k\bot A} = {\left\{ \begin{array}{ll} 1 &{} \text {if }k \bot A\\ 0 &{} \text {otherwise}.\end{array}\right. } \end{aligned}$$
(43)

It is evident that for every \(k \in {\mathbb {Z}}^n\) there exists \(A \in \mathbf {Gr}(d,n)\) such that \(k \bot A\), see [1, p. 11] and [7, Lemma 9]. This directly gives a reconstructive inversion procedure for \(R_d\). In Sect. 3, we derive new inversion formulas which might provide computational advantage in practice (cf. [11] when \(n = 2\) and \(d=1\)).
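Theorem 2.4 can be tested numerically for \(n = 2\), \(d = 1\) by computing both sides by quadrature. The sketch below is our own illustration with a single-frequency test function (an assumption, not taken from the paper): it approximates \(R_{1,A}f\) by a Riemann sum and then computes its Fourier coefficient, which should equal \({\hat{f}}(k)\) when \(k \bot A\) and vanish otherwise:

```python
import cmath

PI2 = 2 * cmath.pi

def f(x, y):
    # Test function f(x, y) = exp(2*pi*i*(x - y)); its only nonzero Fourier
    # coefficient is f_hat((1, -1)) = 1.
    return cmath.exp(1j * PI2 * (x - y))

def R(v, x, y, M=200):
    """Riemann-sum approximation of R_{1,A}f(x, y) for A = {v}, cf. (35)."""
    return sum(f(x + m / M * v[0], y + m / M * v[1]) for m in range(M)) / M

def radon_fourier(k, v, N=20):
    """Fourier coefficient of R_{1,A}f at frequency k, by a 2D Riemann sum."""
    s = 0j
    for i in range(N):
        for j in range(N):
            x, y = i / N, j / N
            s += R(v, x, y) * cmath.exp(-1j * PI2 * (k[0] * x + k[1] * y))
    return s / N ** 2

k = (1, -1)
c_perp = radon_fourier(k, (1, 1))    # k . (1, 1) = 0, so k is orthogonal to A
c_other = radon_fourier(k, (1, 0))   # k . (1, 0) = 1 != 0
assert abs(c_perp - 1.0) < 1e-6      # equals f_hat(k)
assert abs(c_other) < 1e-6           # vanishes
```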

Lemma 2.5

Let \(A \in \mathbf {Gr}(d,n)\).

  1. (i)

    If \(P: {\mathcal {T}}' \rightarrow {\mathcal {T}}'\) acts as a Fourier multiplier \((p_k)_{k \in {\mathbb {Z}}^n}\), then \([P,R_{d,A}] = 0\).

  2. (ii)

    \(R_{d,A}: L_s^p({\mathbb {T}}^n) \rightarrow L_s^p({\mathbb {T}}^n)\) is 1-Lipschitz for any \(p \in [1,\infty ]\).

Proof

(i) This is a simple application of Theorem 2.4. We calculate that

$$\begin{aligned} \widehat{R_d(Pf)}(k,A) = \widehat{Pf}(k)\delta _{k\bot A} = p_k{\hat{f}}(k)\delta _{k\bot A} = \widehat{P(R_df)}(k,A). \end{aligned}$$
(44)

(ii) Suppose that \(f \in L_s^p({\mathbb {T}}^n)\). Now \( h := (1-\Delta )^{s/2}f \in L^p({\mathbb {T}}^n)\). Notice that \(R_{d,A}h \in L^p({\mathbb {T}}^n)\) by Lemma 2.3. We have by the property (i) that \((1-\Delta )^{s/2}R_{d,A}f = R_{d,A}h \in L^p({\mathbb {T}}^n).\) Hence \(R_{d,A}f \in L_s^p({\mathbb {T}}^n)\). We can conclude that

$$\begin{aligned} \Vert R_{d,A}f \Vert _{L_s^p({\mathbb {T}}^n)} = \Vert R_{d,A}h \Vert _{L^p({\mathbb {T}}^n)} \le \Vert h \Vert _{L^p({\mathbb {T}}^n)} = \Vert f \Vert _{L_s^p({\mathbb {T}}^n)} \end{aligned}$$
(45)

by Lemma 2.3. \(\square \)

The next lemma generalizes [11, Proposition 11] in several directions.

Lemma 2.6

Let \(p \in [1,\infty ]\).

  1. (i)

    Let \(l \in [1,\infty )\). Suppose that for any \(A \in \mathbf {Gr}(d,n)\) there exists \(C_A > 0\) such that \(w(k,A) = C_A\) for every \(k \bot A\). Moreover, suppose that

    $$\begin{aligned} C_w^l := \sum _{A\in \mathbf {Gr}(d,n)} C_A^l < \infty . \end{aligned}$$
    (46)

    Then the Radon transform \(R_{d}: L_s^p({\mathbb {T}}^n) \rightarrow L_s^{p,l}(X_{d,n};w)\) is \(C_w\)-Lipschitz.

  2. (ii)

    Suppose that for any \(A \in \mathbf {Gr}(d,n)\) there exists \(C_A > 0\) such that \(w(k,A) = C_A\) for every \(k \bot A\). Moreover, suppose that

    $$\begin{aligned} C_w = \sup _{A \in \mathbf {Gr}(d,n)} C_A < \infty . \end{aligned}$$
    (47)

    Then the Radon transform \(R_{d}: L_s^p({\mathbb {T}}^n) \rightarrow L_s^{p,\infty }(X_{d,n};w)\) is \(C_w\)-Lipschitz.

  3. (iii)

    Suppose that there exists \(C_w > 0\) such that

    $$\begin{aligned} \sum _{A \in \Omega _k} w(k,A)^2 \le C_w^2, \quad \Omega _k := \{\, A \in \mathbf {Gr}(d,n) \,;\, k\bot A\,\} \end{aligned}$$
    (48)

    for any \(k \in {\mathbb {Z}}^n\). Then the Radon transform \(R_{d}: H^s({\mathbb {T}}^n) \rightarrow L_s^{2,2}(X_{d,n};w)\) is \(C_w\)-Lipschitz.

Proof

(i) We have that

$$\begin{aligned} \Vert R_{d,A}f \Vert _{L_s^p({\mathbb {T}}^n)} \le \Vert f \Vert _{L_s^p({\mathbb {T}}^n)} \end{aligned}$$
(49)

for any \(A \in \mathbf {Gr}(d,n)\) by Lemma 2.5. Theorem 2.4 implies that

$$\begin{aligned} F_{w(\cdot ,A)}R_{d,A}f(x) = \sum _{k \bot A} w(k,A){\hat{f}}(k)e^{2\pi i k\cdot x}. \end{aligned}$$
(50)

This gives that \(F_{w(\cdot ,A)}R_{d,A}f = C_AR_{d,A}f\). Now it follows from (49) and the definition of \(C_w^l\) that

$$\begin{aligned} \begin{aligned} \Vert R_df \Vert _{L_s^{p,l}(X_{d,n};w)}^l&= \sum _{A \in \mathbf {Gr}(d,n)}C_A^l \Vert R_{d,A}f \Vert _{L_s^p({\mathbb {T}}^n)}^l \\&\le C_w^l\Vert f \Vert _{L_s^p({\mathbb {T}}^n)}^l. \end{aligned} \end{aligned}$$
(51)

(ii) A calculation similar to the proof of (i) shows that

$$\begin{aligned} \Vert R_df \Vert _{L_s^{p,\infty }(X_{d,n};w)} \le \Vert f \Vert _{L_s^p({\mathbb {T}}^n)} \sup _{A \in \mathbf {Gr}(d,n)} C_A. \end{aligned}$$
(52)

(iii) We have that

$$\begin{aligned} \begin{aligned} \Vert R_df \Vert _{L_s^{2,2}(X_{d,n};w)}^2&= \sum _{A \in \mathbf {Gr}(d,n)}\Vert \sum _{k\bot A} w(k,A) \left\langle k\right\rangle ^s {\hat{f}}(k)e^{2\pi i k \cdot x} \Vert _{L^2({\mathbb {T}}^n)}^2 \\&= \sum _{A \in \mathbf {Gr}(d,n)} \sum _{k\bot A} w(k,A)^2 \left| \left\langle k\right\rangle ^s{\hat{f}}(k) \right| ^2\\&= \sum _{k \in {\mathbb {Z}}^n} \sum _{A \in \Omega _k} w(k,A)^2 \left\langle k\right\rangle ^{2s} \left| {\hat{f}}(k) \right| ^2 \\&\le C_w^2 \Vert f \Vert _{L_s^2({\mathbb {T}}^n)}^2 \end{aligned} \end{aligned}$$
(53)

where the order of summation can be interchanged by non-negativity of the terms. \(\square \)

Remark 2.7

If \(d = n-1\), then the only restriction on w in the case of (iii) is \(\sum _{A \in \mathbf {Gr}(n-1,n)} w(0,A)^2 < \infty \). This follows since each \(A \in \mathbf {Gr}(n-1,n)\) has a unique normal direction.

2.4.3 Adjoint and Normal Operators

Next, we study the adjoint and normal operators of \(R_d\) when the image side is equipped with the Hilbert space \(L_s^{2,2}(X_{d,n};w)\) satisfying the assumptions (iii) of Lemma 2.6. This generalizes the considerations in [11, Sect. 2.4] to higher dimensions and to any \(1 \le d \le n-1\).

Proof of Theorem 1.1

Let \(f \in H^s({\mathbb {T}}^n)\) and \(g \in L_s^{2,2}(X_{d,n};w)\). Using the definition of the inner product (23), we get

$$\begin{aligned} \begin{aligned} (R_df,g)_{L_s^{2,2}(X_{d,n};w)}&= \sum _{A \in \mathbf {Gr}(d,n)} (F_{w(\cdot ,A)}R_d f,F_{w(\cdot ,A)}g)_{H^s({\mathbb {T}}^n)}\\&= \sum _{A \in \mathbf {Gr}(d,n)} \sum _{k \bot A} w(k,A)^2\left\langle k\right\rangle ^{2s} {\hat{f}}(k){\hat{g}}(k,A)^* \\&= \sum _{k \in {\mathbb {Z}}^n} \sum _{A \in \Omega _k} w(k,A)^2\left\langle k\right\rangle ^{2s} {\hat{f}}(k){\hat{g}}(k,A)^*\\&= \sum _{k \in {\mathbb {Z}}^n} \left\langle k\right\rangle ^{2s} {\hat{f}}(k) \left( \sum _{A \in \Omega _k} w(k,A)^2{\hat{g}}(k,A)\right) ^*\\&=: (f,R_d^*g)_{H^s({\mathbb {T}}^n)} \end{aligned} \end{aligned}$$
(54)

where we may interchange the order of summation since the Cauchy–Schwarz inequality implies that the series converges absolutely.

We have that

$$\begin{aligned} \begin{aligned} \widehat{R_d^*R_df}(k)&= \sum _{A \in \Omega _k} w(k,A)^2 \widehat{R_df}(k,A) \\&= \sum _{A \in \Omega _k} w(k,A)^2 {\hat{f}}(k)\delta _{k\bot A}\\&= {\hat{f}}(k)\sum _{A \in \Omega _k} w(k,A)^2 \end{aligned} \end{aligned}$$
(55)

by the formula for the adjoint and Theorem 2.4. \(\square \)
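The Fourier-side action of the normal operator, \(\widehat{R_d^*R_df}(k) = W_k {\hat{f}}(k)\), can be illustrated with a small discrete model. In the sketch below all data (the index sets standing in for \(\Omega _k\), the weights and the Fourier coefficients) are hypothetical:

```python
# Toy Fourier-side model of Theorem 1.1: R_d^* R_d acts as the multiplier
# W_k = sum_{A in Omega_k} w(k, A)^2.  All data below are hypothetical.
Omega = {0: ["A1", "A2", "A3"], 1: ["A1"], 2: ["A2"]}
w = {(k, A): 1.0 / (i + 1) for k, As in Omega.items() for i, A in enumerate(As)}
f_hat = {0: 1.0 + 2.0j, 1: -0.5j, 2: 3.0 + 0j}

# Forward transform on the Fourier side (Theorem 2.4): the datum at (k, A)
# with A in Omega_k is just f_hat(k).
Rf_hat = {(k, A): f_hat[k] for k, As in Omega.items() for A in As}

# Adjoint applied to the data, followed by comparison with W_k, cf. (55).
normal_hat = {k: sum(w[(k, A)] ** 2 * Rf_hat[(k, A)] for A in As)
              for k, As in Omega.items()}
W = {k: sum(w[(k, A)] ** 2 for A in As) for k, As in Omega.items()}

for k in Omega:
    assert abs(normal_hat[k] - W[k] * f_hat[k]) < 1e-12
```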

We prove Corollary 1.2 on inversion formulas and stability estimates next.

Proof of Corollary 1.2

(i) We first calculate that

$$\begin{aligned} \Vert F_{W_k^{-1}} R_d^* g \Vert _{H^s({\mathbb {T}}^n)}^2 = \sum _{k \in {\mathbb {Z}}^n} \left\langle k\right\rangle ^{2s}\frac{1}{W_k^2}\left| \sum _{A \in \Omega _k} w(k,A)^2 {\hat{g}}(k,A) \right| ^2 \end{aligned}$$
(56)

for any \(g \in L_s^{2,2}(X_{d,n};w)\). The triangle inequality and Hölder’s inequality for the sequences \(w(k,A)\) and \(w(k,A)\left| {\hat{g}}(k,A) \right| \) over \(A \in \Omega _k\) give that

$$\begin{aligned} \left| \sum _{A \in \Omega _k} w(k,A)^2 {\hat{g}}(k,A) \right| ^2 \le W_k \left( \sum _{A \in \Omega _k} w(k,A)^2 \left| {\hat{g}}(k,A) \right| ^2\right) . \end{aligned}$$
(57)

Recall that

$$\begin{aligned} \Vert g \Vert _{L_s^{2,2}(X_{d,n};w)}^2 = \sum _{k \in {\mathbb {Z}}^n} \left\langle k\right\rangle ^{2s} \sum _{A \in \mathbf {Gr}(d,n)} w(k,A)^2 \left| {\hat{g}}(k,A) \right| ^2 \end{aligned}$$
(58)

after a rearrangement of the series. We can conclude from the formulas (56), (57) and (58) that \(\Vert F_{W_k^{-1}}R_d^*g \Vert _{H^s({\mathbb {T}}^n)} \le \frac{1}{c_w}\Vert g \Vert _{L_s^{2,2}(X_{d,n};w)}\).

(ii) This is a simple calculation using the formula for the normal operator:

$$\begin{aligned} (R_df,R_df)_{L_s^{2,2}(X_{d,n};w)} =(f,F_{W_k}f)_{H^s({\mathbb {T}}^n)} \ge \inf _{k \in {\mathbb {Z}}^n}{W_k} \Vert f \Vert _{H^s({\mathbb {T}}^n)}^2 \end{aligned}$$
(59)

if \(f \in H^s({\mathbb {T}}^n)\).

(iii) We have by Remark 2.4 that \({\tilde{w}}\) is a weight that satisfies the assumptions of Theorem 1.1 and \({\tilde{W}}_k = 1\) for any \(k \in {\mathbb {Z}}^n\). Therefore, the corresponding adjoint \(R_d^{*,{\tilde{w}}}\) is well-defined, and \(R_d^{*,{\tilde{w}}}R_df = f\) for any \(f \in {\mathcal {T}}'\) by Theorem 1.1. \(\square \)

3 Inversion Formulas

We have already proved one new inversion formula in Corollary 1.2 for \(H^s({\mathbb {T}}^n)\) functions. In this section, we prove three other inversion formulas. One of the formulas generalizes the inversion formula for \(R_1\) on \(L^1({\mathbb {T}}^2)\) proved in [11, Theorems 1 and 8]. The second inversion formula is a corollary of the first one and remains valid for any distribution. The third inversion formula takes a slightly different approach and shows that a distribution \(f \in {\mathcal {T}}'\) is a weighted sum of the data \(R_{d,A}f\) over the set \(\mathbf {Gr}(d,n)\). These formulas might have practical value.

Proposition 3.1

(The first inversion formula) Let \(A\in \mathbf {Gr}(d,n)\) and \(k \in {\mathbb {Z}}^n\). Suppose that \(f \in {\mathcal {T}}'\) and \(R_{d,A}f \in L^1({\mathbb {T}}^n)\). If \(k\bot A\), then

$$\begin{aligned} {\hat{f}}(k) = \int _{[0,1]^q} R_{d,A}f(\varphi _A(T,0))\exp (-2\pi i (k_{1_A}t_{1_A}+\cdots +k_{q_A}t_{q_A}))dT. \end{aligned}$$
(60)

Proof

Fubini’s theorem, Theorem 2.4 and the formula (37) imply that

$$\begin{aligned} \begin{aligned}&\widehat{R_{d,A}f}(k) \\&\,\,= \int _{[0,1]^q}\int _{[0,1]^d} R_{d,A}f(\varphi _A(T,S))\exp (-2\pi i k \cdot \varphi _A(T,S))dSdT. \end{aligned} \end{aligned}$$
(61)

Since \(k\bot A\), a simple calculation shows that

$$\begin{aligned} k \cdot \varphi _A(T,S) = k_{1_A}t_{1_A}+\cdots +k_{q_A}t_{q_A}, \end{aligned}$$
(62)

and Lemma 2.3 implies that

$$\begin{aligned} R_{d,A}f(\varphi _A(T,S)) = R_{d,A}f(\varphi _A(T,0)) \end{aligned}$$
(63)

for a.e. \(T \in [0,1]^q\).

Hence, using the formulas (62) and (63), we may simplify the formula (61) into the form

$$\begin{aligned} \begin{aligned}&\widehat{R_{d,A}f}(k) \\&\,\,= \int _{[0,1]^q} R_{d,A}f(\varphi _A(T,0))\exp (-2\pi i (k_{1_A}t_{1_A}+\cdots +k_{q_A}t_{q_A}))dT. \end{aligned} \end{aligned}$$
(64)

\(\square \)

Remark 3.1

The proof shows that instead of choosing \(S =0\), we may choose any other values for the S-coordinates as well.

We immediately get the following corollary from Proposition 3.1 and Lemma 2.3.

Corollary 3.2

Suppose that \(f \in L^1({\mathbb {T}}^n)\). Then the inversion formula (60) is valid.

Remark 3.2

One could prove Corollary 3.2 directly without using Lemma 2.3 and Theorem 2.4 (or Proposition 3.1). This proof is given for the geodesic X-ray transform in [11] and it could be adapted to this setting as well.
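The inversion formula (60) can also be tested numerically for \(n = 2\), \(d = 1\). The sketch below is our own illustration (the test function is an arbitrary choice): with \(A = \{(1,1)\}\), \(E_A = \{e_1\}\) and \(k = (1,-1) \bot A\), the Fourier coefficient \({\hat{f}}(k) = 2\) is recovered from the data restricted to the slice \(S = 0\):

```python
import cmath

PI2 = 2 * cmath.pi

def f(x, y):
    # Test function f(x, y) = 2*exp(2*pi*i*(x - y)), so f_hat((1, -1)) = 2.
    return 2 * cmath.exp(1j * PI2 * (x - y))

def Rf(x, y, M=400):
    """R_{1,A}f(x, y) for A = {(1, 1)}, approximated by a Riemann sum in t."""
    return sum(f(x + m / M, y + m / M) for m in range(M)) / M

# Formula (60): with E_A = {e1} and k = (1, -1) orthogonal to (1, 1),
#   f_hat(k) = int_0^1 R_{1,A}f(t*e1) * exp(-2*pi*i*k_1*t) dt,  k_1 = 1.
N, k1 = 400, 1
fk = sum(Rf(m / N, 0.0) * cmath.exp(-1j * PI2 * k1 * m / N) for m in range(N)) / N
assert abs(fk - 2.0) < 1e-6
```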

Recall that the structure theorem of periodic distributions [16, Theorem 2.4.5] states that for any \(f \in {\mathcal {T}}'\) there exist \(h \in C({\mathbb {T}}^n)\) and \(s \ge 0\) such that

$$\begin{aligned} f=(1-\Delta )^sh. \end{aligned}$$
(65)

We obtain another corollary of Proposition 3.1 and Lemma 2.5.

Corollary 3.3

(The second inversion formula) Let \(A\in \mathbf {Gr}(d,n)\) and \(k \in {\mathbb {Z}}^n\). Suppose that \(f \in {\mathcal {T}}'\) and \(f=(1-\Delta )^sh\), \(h \in C({\mathbb {T}}^n)\). If \(k \bot A\), then

$$\begin{aligned} {\hat{f}}(k) = \left\langle k\right\rangle ^{2s}\widehat{R_{d,A}h}(k) = \widehat{R_{d,A}f}(k) \end{aligned}$$
(66)

where \(\widehat{R_{d,A}h}(k)\) can be calculated by the formula (60).

We now prove our third inversion formula stated in the introduction.

Proof of Theorem 1.3

Using Theorem 2.4, we calculate that

$$\begin{aligned} {\mathcal {F}}(F_{w(\cdot ,A)}R_{d,A}f)(k) = w(k,A){\hat{f}}(k)\delta _{k\bot A}. \end{aligned}$$
(67)

Hence, we get

$$\begin{aligned} \begin{aligned} {\mathcal {F}}\left( \sum _{A \in \mathbf {Gr}(d,n)} F_{w(\cdot ,A)}R_{d,A}f\right) (k)&= \sum _{A \in \mathbf {Gr}(d,n)} w(k,A){\hat{f}}(k)\delta _{k\bot A} \\&={\hat{f}}(k) \sum _{A \in \Omega _k} w(k,A) \\&={\hat{f}}(k). \end{aligned} \end{aligned}$$
(68)

Suppose now that \(d = n-1\) and \({\hat{f}}(0) = 0\). Notice that \(\left| \Omega _k \right| = 1\) if \(k\ne 0\) and \(\Omega _0 = \mathbf {Gr}(n-1,n)\). Hence, the formula (7) follows by choosing any weight w such that

$$\begin{aligned} \sum _{A \in \mathbf {Gr}(n-1, n)} w(0,A) = 1, w(0,A) \ge 0, \end{aligned}$$
(69)

and \(w(k,A) = 1\) for any \(A \in \mathbf {Gr}(n-1,n)\) and \(k\ne 0\). \(\square \)

4 Stability Estimates and Regularization Methods

In this section, we look at stability estimates for functions in the Bessel potential spaces when \(p \ne \infty \). We also generalize the Tikhonov regularization methods developed in [11]. In the Tikhonov regularization part, we restrict our study to the functions in \(H^s({\mathbb {T}}^n)\), as done in [11]. Our results on regularization are new for any \(1 \le d \le n-1\) when \(n \ge 3\), and the stability estimates are new in any dimension.

4.1 Stability Estimates and the Sobolev Inequality

Recall that in Corollary 1.2 we obtained the estimate

$$\begin{aligned} \Vert f \Vert _{H^s({\mathbb {T}}^n)}^2 \le \frac{1}{c_w^2}\Vert R_df \Vert _{L_s^{2,2}(X_{d,n};w)}^2 \end{aligned}$$
(70)

if the weight w is such that the normal operator \(R_d^*R_d\) has a uniform lower bound \(\frac{1}{c_w^2}\) as a Fourier multiplier. The condition on the weight w is that \(c_w^2 \le W_k = \sum _{A \in \Omega _k} w(k,A)^2 \le C_w^2\) for some uniform \(c_w, C_w > 0\). This implies stability on \(L_s^p({\mathbb {T}}^n)\) if \(p \le 2\), as we will show later. We can reach stability estimates for \(p > 2\) using the Sobolev inequality on \({\mathbb {T}}^n\).

Theorem 4.1

(Sobolev inequality [20]) Let \(f \in {\mathcal {T}}'\). Suppose that \(s > 0\) and \(1< q< p < \infty \) satisfy \(s/n \ge q^{-1} - p^{-1}\). Then

$$\begin{aligned} \Vert f \Vert _{L^p({\mathbb {T}}^n)} \le C \Vert f \Vert _{L_s^q({\mathbb {T}}^n)} \end{aligned}$$
(71)

for some \(C > 0\) that does not depend on f.

A proof of the Sobolev inequality on \({\mathbb {T}}^n\) is given in [3, Corollary 1.2].

Lemma 4.2

Let \(l \in [1,\infty ]\) and \(g: \mathbf {Gr}(d,n) \rightarrow {\mathcal {T}}'\).

  1. (i)

    If \(t \in {\mathbb {R}}\), \(s > 0\), and \(1< q< p < \infty \) satisfy \(s/n \ge q^{-1} -p^{-1}\), then

    $$\begin{aligned} \Vert g \Vert _{L_{t}^{p,l}(X_{d,n};w)} \le C\Vert g \Vert _{L_{t+s}^{q,l}(X_{d,n};w)} \end{aligned}$$
    (72)

    for some \(C > 0\) that does not depend on g.

  2. (ii)

If \(1\le p < q \le \infty \), then for any \(s \in {\mathbb {R}}\) it holds that

    $$\begin{aligned} \Vert g \Vert _{L_s^{p,l}(X_{d,n};w)} \le \Vert g \Vert _{L_s^{q,l}(X_{d,n};w)}. \end{aligned}$$
    (73)

Proof

(i) We have

$$\begin{aligned} \Vert g(\cdot ,A) \Vert _{L^p({\mathbb {T}}^n;w(\cdot ,A))} \le C\Vert g(\cdot ,A) \Vert _{L_s^q({\mathbb {T}}^n;w(\cdot ,A))} \end{aligned}$$
(74)

for any \(A \in \mathbf {Gr}(d,n)\) by the Sobolev inequality, where \(C > 0\) does not depend on g, A or w. Now (72) with \(t=0\) follows from the definition of the norms \(\Vert \cdot \Vert _{L_s^{q,l}(X_{d,n};w)}\) and the inequality (74).

Fix any \(z \in {\mathbb {R}}\) and define the function \({\tilde{g}}: \mathbf {Gr}(d,n) \rightarrow {\mathcal {T}}'\) by the formula \({\tilde{g}}(\cdot ,A) = (1-\Delta )^{z/2}g(\cdot ,A)\). Now (72) with \(t=0\) implies

$$\begin{aligned} \Vert g \Vert _{L_{z}^{p,l}(X_{d,n};w)} = \Vert {\tilde{g}} \Vert _{L_{0}^{p,l}(X_{d,n};w)} \le C\Vert {\tilde{g}} \Vert _{L_{s}^{q,l}(X_{d,n};w)} = C\Vert g \Vert _{L_{z+s}^{q,l}(X_{d,n};w)}. \end{aligned}$$
(75)

(ii) The inequality (73) can be proved similarly. Now the Sobolev inequality is replaced by the inequality \(\Vert f \Vert _{L_s^p({\mathbb {T}}^n)} \le \Vert f \Vert _{L_s^q({\mathbb {T}}^n)}\), which holds since \(m({\mathbb {T}}^n) = 1\) and \(p \le q\). \(\square \)
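The embedding used in part (ii), \(\Vert f \Vert _{L^p} \le \Vert f \Vert _{L^q}\) for \(p \le q\) on a probability space, is the monotonicity of power means. A discrete illustration (with arbitrary sample values and the uniform probability measure on N points):

```python
# ||f||_p <= ||f||_q for p <= q on a probability space (here: the uniform
# probability measure on N points).  The sample values are arbitrary.
vals = [0.3, 1.7, 2.2, 0.9, 1.1]
N = len(vals)

def lp_norm(p):
    """L^p norm with respect to the uniform probability measure."""
    return (sum(abs(v) ** p for v in vals) / N) ** (1.0 / p)

assert lp_norm(1) <= lp_norm(2) <= lp_norm(4) <= lp_norm(8)
```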

Theorem 1.1 and Lemma 4.2 imply the following, slightly more general, shifted stability estimates.

Proposition 4.3

(Shifted stability estimates) Let w be a weight such that \(c_w^2 \le W_k \le C_w^2\) for some uniform constants \(c_w, C_w > 0\). Let \(f \in {\mathcal {T}}'\), \(s \in {\mathbb {R}}\), and \(s(p,n) := n\left| \frac{p-2}{2p} \right| \).

  1. (i)

    If \(1 < p \le 2\), then

    $$\begin{aligned} \Vert f \Vert _{L_s^p({\mathbb {T}}^n)} \le C_1\Vert R_df \Vert _{L_s^{2,2}(X_{d,n};w)} \le C_2\Vert R_df \Vert _{L_{s+s(p,n)}^{p,2}(X_{d,n};w)}, \end{aligned}$$
    (76)

    where \(C_1,C_2 > 0\) do not depend on f. If \(p = 1\), then the first inequality of (76) holds.

  2. (ii)

    If \(2 \le p < \infty \), then

    $$\begin{aligned} \Vert f \Vert _{L_s^p({\mathbb {T}}^n)} \le C_1\Vert R_df \Vert _{L_{s+s(p,n)}^{2,2}(X_{d,n};w)} \le C_2\Vert R_df \Vert _{L_{s+s(p,n)}^{p,2}(X_{d,n};w)}, \end{aligned}$$
    (77)

    where \(C_1,C_2 > 0\) do not depend on f.

Proof

(i) Suppose that \(f \in {\mathcal {T}}'\) and \(1 \le p \le 2\). Let \(h = (1-\Delta )^{s/2}f\). We have that \(\Vert h \Vert _{L^p({\mathbb {T}}^n)} \le \Vert h \Vert _{L^2({\mathbb {T}}^n)}\) since \(p \le 2\) and \(m({\mathbb {T}}^n) = 1\). This implies that \(\Vert f \Vert _{L_s^p({\mathbb {T}}^n)} \le \Vert f \Vert _{L_s^2({\mathbb {T}}^n)}\). Now the first inequality follows from Corollary 1.2.

Suppose additionally that \(1< p < 2\). Choose \(s^p = n\frac{2-p}{2p} >0\) in part (i) of Lemma 4.2. Now it holds that

$$\begin{aligned} \Vert R_df \Vert _{L_s^{2,2}(X_{d,n};w)} \le C\Vert R_df \Vert _{L_{s+s^p}^{p,2}(X_{d,n};w)} \end{aligned}$$
(78)

for any \(s \in {\mathbb {R}}\).

(ii) Suppose that \(f \in {\mathcal {T}}'\) and \(p > 2\). Choose \(q = 2\) in the Sobolev inequality (71). A short calculation shows that the Sobolev inequality is then valid if \(s \ge n\frac{p-2}{2p}\). Define \(s_p := n\frac{p-2}{2p} > 0\). Hence, \(\Vert f \Vert _{L^p({\mathbb {T}}^n)} \le C \Vert f \Vert _{H^{s_p}({\mathbb {T}}^n)}\).

Let now \(s \in {\mathbb {R}}\) and \(f \in L_s^p({\mathbb {T}}^n)\). We then have that

$$\begin{aligned} \begin{aligned} \Vert f \Vert _{L_s^p({\mathbb {T}}^n)}&=\Vert (1-\Delta )^{s/2}f \Vert _{L^p({\mathbb {T}}^n)} \\&\le C\Vert (1-\Delta )^{s/2}f \Vert _{H^{s_p}({\mathbb {T}}^n)} = C\Vert f \Vert _{H^{s+s_p}({\mathbb {T}}^n)}. \end{aligned} \end{aligned}$$
(79)

Now the first inequality follows from part (i) of the proposition. The second inequality follows from part (ii) of Lemma 4.2 since \(p > 2\). \(\square \)

Remark 4.1

For any \(f \in {\mathcal {T}}'\) there exists \(s \ge 0\) such that \(f \in L_{-s}^p({\mathbb {T}}^n)\) for any \(p \in [1,\infty ]\) by the structure theorem of periodic distributions.

4.2 Tikhonov Minimization Problem

We will show that \(P_{w,s-r}^\alpha R_d^*g\) is the unique minimizer of (8) when \(l = 2\). We first analyze the regularity properties of \(P_{w,z}^\alpha \) and \(P_{w,s-r}^\alpha R_d^*\), which tell us in which space the regularized reconstruction \(P_{w,s-r}^\alpha R_d^*g\) lies when \(g \in L_r^{2,2}(X_{d,n};w)\). First of all, \(R_d^*: L_r^{2,2}(X_{d,n};w) \rightarrow H^r({\mathbb {T}}^n)\). On the other hand, \(P_{w,z}^\alpha : H^r({\mathbb {T}}^n) \rightarrow H^{r+2z}({\mathbb {T}}^n)\) for any \(r, z \in {\mathbb {R}}\) since \(W_k\) is uniformly bounded from below. We conclude that \(P_{w,s-r}^\alpha R_d^*: L_r^{2,2}(X_{d,n};w) \rightarrow H^{2s-r}({\mathbb {T}}^n)\).

We are now ready to prove Theorem 1.4. The proof uses the same ideas as the proof of [11, Theorem 2]. The proof presented here also fills in some missing details about the splitting of the minimization problem into the real and imaginary parts in (84), (85) and (86). This splitting is one of the crucial parts of the proof of [11, Theorem 2], though it is not mentioned at all in [11].

Proof of Theorem 1.4

We have that

$$\begin{aligned} \begin{aligned}&\Vert R_df -g \Vert _{L_r^{2,2}(X_{d,n};w)}^2 \\&\,\,= \sum _{A \in \mathbf {Gr}(d,n)} \sum _{k \bot A} \left\langle k\right\rangle ^{2r}w(k,A)^2\left| {\hat{f}}(k) - {\hat{g}}(k,A) \right| ^2 \\&\quad + \sum _{A \in \mathbf {Gr}(d,n)}\sum _{k \not \bot A} \left\langle k\right\rangle ^{2r}w(k,A)^2\left| {\hat{g}}(k,A) \right| ^2. \end{aligned} \end{aligned}$$
(80)

Since the second term of (80) is independent of f, it can be neglected in the minimization problem (8). On the other hand,

$$\begin{aligned} \begin{aligned}&\sum _{A \in \mathbf {Gr}(d,n)} \sum _{k \bot A} \left\langle k\right\rangle ^{2r}w(k,A)^2 \left| {\hat{f}}(k) - {\hat{g}}(k,A) \right| ^2 \\&\,\,= \sum _{k \in {\mathbb {Z}}^n} \left\langle k\right\rangle ^{2r}\sum _{A \in \Omega _k} w(k,A)^2\left| {\hat{f}}(k) - {\hat{g}}(k,A) \right| ^2. \end{aligned} \end{aligned}$$
(81)

We next expand the term

$$\begin{aligned} \alpha \Vert f \Vert _{H^s({\mathbb {T}}^n)}^2 = \alpha \sum _{k \in {\mathbb {Z}}^n} \left\langle k\right\rangle ^{2s} \left| {\hat{f}}(k) \right| ^2. \end{aligned}$$
(82)

We can conclude that a solution to the minimization problem (8) is a minimizer of

$$\begin{aligned} \sum _{k \in {\mathbb {Z}}^n} \left\langle k\right\rangle ^{2r} \left( \alpha \left\langle k\right\rangle ^{2s-2r}\left| {\hat{f}}(k) \right| ^2 + \sum _{A \in \Omega _k} w(k,A)^2\left| {\hat{f}}(k)-{\hat{g}}(k,A) \right| ^2\right) . \end{aligned}$$
(83)

Hence, a minimizer of (83) must minimize

$$\begin{aligned} H_k(f) :=\alpha \left\langle k\right\rangle ^{2s-2r}\left| {\hat{f}}(k) \right| ^2 + \sum _{A \in \Omega _k} w(k,A)^2\left| {\hat{f}}(k)-{\hat{g}}(k,A) \right| ^2 \end{aligned}$$
(84)

for each \(k \in {\mathbb {Z}}^n\).

To proceed, we need to minimize the real part and the imaginary part of (84) separately. Let us write the real and imaginary parts of the involved terms simply as \(f_r(k) := \mathfrak {R}({\hat{f}}(k))\), \(f_i(k) := \mathfrak {I}({\hat{f}}(k))\), \(g_r(k,A) := \mathfrak {R}({\hat{g}}(k,A))\) and \(g_i(k,A) := \mathfrak {I}({\hat{g}}(k,A))\) to keep our notation shorter. Now, we define the operators

$$\begin{aligned} R_k(f) := \alpha \left\langle k\right\rangle ^{2s-2r} f_r(k)^2 + \sum _{A \in \Omega _k} w(k,A)^2(f_r(k)-g_r(k,A))^2 \end{aligned}$$
(85)

and

$$\begin{aligned} I_k(f) := \alpha \left\langle k\right\rangle ^{2s-2r} f_i(k)^2 + \sum _{A \in \Omega _k} w(k,A)^2(f_i(k)-g_i(k,A))^2. \end{aligned}$$
(86)

These functions have the property that \(R_k(f) + I_k(f) = H_k(f)\). Moreover, since \(R_k\) depends only on \(f_r(k)\) and \(I_k\) only on \(f_i(k)\), minimizing \(H_k\) is equivalent to minimizing \(R_k\) and \(I_k\) separately.

We show how the minimization is done for the real part; the minimization for the imaginary part is similar, so we do not repeat the calculations. We expand the second term of (85) and get

$$\begin{aligned} \begin{aligned}&\sum _{A \in \Omega _k} w(k,A)^2(f_r(k)-g_r(k,A))^2 \\&\,\,= W_k f_r(k)^2 -2f_r(k)\sum _{A \in \Omega _k} w(k,A)^2 g_r(k,A) + \sum _{A \in \Omega _k}w(k,A)^2 g_r(k,A)^2. \end{aligned} \end{aligned}$$
(87)

The last term of (87) does not depend on f, so it can be neglected in the minimization. Thus, we have arrived at the minimization problem

$$\begin{aligned} -2f_r(k)\sum _{A \in \Omega _k} w(k,A)^2 g_r(k,A) + (W_k+\alpha \left\langle k\right\rangle ^{2s-2r})f_r(k)^2. \end{aligned}$$
(88)

Simple calculus shows that the minimizer of (88) is

$$\begin{aligned} f_r(k) = \frac{\sum _{A \in \Omega _k} w(k,A)^2 g_r(k,A)}{W_k+\alpha \left\langle k\right\rangle ^{2s-2r}} = \mathfrak {R}({\mathcal {F}}(P_{w,s-r}^\alpha R_d^*g)(k)). \end{aligned}$$
(89)

We can similarly calculate that the unique minimizer of the minimization problem associated to the imaginary part (86) is \(f_i(k) = \mathfrak {I}({\mathcal {F}}(P_{w,s-r}^\alpha R_d^*g)(k))\). This shows that the unique minimizer of (84) satisfies \({\hat{f}}(k) = {\mathcal {F}}(P_{w,s-r}^\alpha R_d^*g)(k)\).

Hence, the unique minimizer of (8) is \(f = P_{w,s-r}^\alpha R_d^*g\). The claimed regularity of f follows from the discussion preceding the proof. \(\square \)
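As a sanity check on the closed-form minimizer (89), the following sketch (with hypothetical weights, data and parameter values) compares \(b/c\), where \(b = \sum _A w(k,A)^2 g_r(k,A)\) and \(c = W_k + \alpha \left\langle k\right\rangle ^{2s-2r}\), against a brute-force grid search over the quadratic (88):

```python
import numpy as np

# The quadratic (88) has the form c*x^2 - 2*b*x, minimized at x = b/c,
# which is exactly (89). All values below are hypothetical.
w = np.array([0.8, 1.1, 0.9, 1.3])      # w(k, A) for A in Omega_k
g = np.array([0.4, -0.7, 1.2, 0.1])     # g_r(k, A)
alpha, bracket = 0.1, 2.0               # alpha and <k>^{2s-2r}

W = np.sum(w**2)                        # W_k
b = np.sum(w**2 * g)
c = W + alpha * bracket
closed_form = b / c                     # the minimizer from (89)

# Brute-force check on a fine grid.
xs = np.linspace(-5.0, 5.0, 400_001)
grid_min = xs[np.argmin(c * xs**2 - 2 * b * xs)]
```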

Remark 4.2

If \(l \ne 2\), the analysis of the Tikhonov minimization problem becomes more difficult, but it might still be possible to adapt the method in that case (when \(p = 2\)).

4.3 Regularization Strategies

Let X and Y be subsets of Banach spaces and \(F: X \rightarrow Y\) a continuous mapping. A family of continuous maps \({\mathcal {R}}_\alpha : Y \rightarrow X\) with \(\alpha \in (0,\alpha _0]\), \(\alpha _0 > 0\), is called a regularization strategy if \(\lim _{\alpha \rightarrow 0} {\mathcal {R}}_\alpha (F(x)) = x\) for any \(x \in X\). A choice of regularization parameter \(\alpha (\epsilon )\) with \(\lim _{\epsilon \rightarrow 0}\alpha (\epsilon ) = 0\) is called admissible if

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \sup _{y \in Y} \left\{ \Vert {\mathcal {R}}_{\alpha (\epsilon )}y -x \Vert _X \,;\, \Vert y-F(x) \Vert _Y \le \epsilon \,\right\} = 0 \end{aligned}$$
(90)

holds for any \(x \in X\) [4, 12].
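To illustrate the definition, consider a toy scalar example: take \(F = \text {Id}\) on \({\mathbb {R}}\) and the damped family \({\mathcal {R}}_\alpha (y) = y/(1+\alpha )\). For fixed x, the worst-case error over \(|y - x| \le \epsilon \) tends to 0 for any choice \(\alpha (\epsilon ) \rightarrow 0\); the sketch below checks \(\alpha (\epsilon ) = \sqrt{\epsilon }\) numerically (all values hypothetical):

```python
import numpy as np

# Toy admissibility check: F = Id on R, R_alpha(y) = y / (1 + alpha).
# The supremum over |y - x| <= eps of |y/(1+alpha) - x| is attained at
# an endpoint of the interval, since the map is affine in y.
def worst_case_error(x, eps, alpha):
    ys = np.array([x - eps, x + eps])
    return np.max(np.abs(ys / (1 + alpha) - x))

x = 3.0  # a fixed, hypothetical "true" value
errs = [worst_case_error(x, eps, np.sqrt(eps)) for eps in (1e-1, 1e-3, 1e-5)]
```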

We will show that the solution found in Theorem 1.4 to the Tikhonov minimization problem (8) is an admissible regularization strategy with a quantitative stability estimate. Our proof follows that of [11, Theorem 3].

Proof of Theorem 1.5

Let \(\alpha > 0\). Theorem 1.1 implies that

$$\begin{aligned} P_{w,s}^{\alpha } R_d^*(R_df+g)-f = (P_{w,s}^{\alpha }F_{W_k} -\text {Id})f + P_{w,s}^{\alpha } R_d^*g. \end{aligned}$$
(91)

To estimate the first term on the right hand side of (91), we calculate that

$$\begin{aligned} P_{w,s}^{\alpha }F_{W_k} -\text {Id} = -\frac{\alpha W_k^{-1}\left\langle k\right\rangle ^{2s}}{1+\alpha W_k^{-1}\left\langle k\right\rangle ^{2s}} \end{aligned}$$
(92)

as a Fourier multiplier. This shows that \(\Vert P_{w,s}^{\alpha }F_{W_k} -\text {Id} \Vert _{H^r({\mathbb {T}}^n)\rightarrow H^r({\mathbb {T}}^n)} = 1\), as \(W_k\) is bounded from below and above. It follows from the dominated convergence theorem that \(\Vert (P_{w,s}^{\alpha }F_{W_k} -\text {Id})f \Vert _{H^r({\mathbb {T}}^n)}^2 \rightarrow 0\) as \(\alpha \rightarrow 0\) whenever \(f \in H^r({\mathbb {T}}^n)\).
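Both halves of this argument — the operator norm stays at 1, yet each fixed frequency is damped to 0 — can be seen numerically. A sketch of the multiplier (92) with the hypothetical normalization \(W_k \equiv 1\), \(s = 1\):

```python
import numpy as np

# Multiplier m_alpha(k) from (92), with hypothetical W_k = 1 and s = 1,
# so that alpha * W_k^{-1} * <k>^{2s} = alpha * (1 + k^2).
def m(alpha, k):
    t = alpha * (1.0 + k**2)
    return -t / (1.0 + t)

ks = np.arange(0, 100_000)
# The sup over k stays close to 1 (the norm does not improve) ...
sup_norm = np.abs(m(1e-4, ks)).max()
# ... but each fixed frequency is damped to 0 as alpha -> 0, which is
# what the dominated convergence argument upgrades to norm convergence.
fixed_k = [abs(m(a, 5)) for a in (1.0, 1e-2, 1e-4)]
```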

Suppose that \(\Vert g \Vert _{L_t^{2,2}(X_{d,n};w)} \le \epsilon \). We have that \(\Vert R_d^* \Vert = \Vert R_d \Vert = C_w\) by Lemma 2.6. Hence \(\Vert R_d^*g \Vert _{H^t({\mathbb {T}}^n)}^2 \le C_w^2\epsilon ^2\). This implies that

$$\begin{aligned} \begin{aligned} \Vert P_{w,s}^{\alpha } R_d^*g \Vert _{H^r({\mathbb {T}}^n)}^2&\le C_w^2\epsilon ^2\sup _{k \in {\mathbb {Z}}^n} \left( \frac{W_k^{-1}}{1+\alpha W_k^{-1}\left\langle k\right\rangle ^{2s}}\right) ^{2}\left\langle k\right\rangle ^{2r-2t} \\&\le C_w^2\epsilon ^2c_w^{-4}\sup _{k \in {\mathbb {Z}}^n} \left( \frac{1}{1+\alpha C_w^{-2}\left\langle k\right\rangle ^{2s}}\right) ^{2}\left\langle k\right\rangle ^{2r-2t} \\&\le C_w^6c_w^{-4}\alpha ^{-2}\epsilon ^2 \end{aligned} \end{aligned}$$
(93)

where the last inequality follows since \(-4s+2r-2t \le 0\). We can conclude that

$$\begin{aligned} \Vert P_{w,s}^{\alpha } R_d^*g \Vert _{H^r({\mathbb {T}}^n)} \le C_w^3c_w^{-2}\frac{\epsilon }{\alpha }. \end{aligned}$$
(94)

Combining this with the convergence of the first term on the right-hand side of (91), we see that choosing \(\alpha (\epsilon ) = \sqrt{\epsilon }\) gives an admissible regularization strategy.
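A minimal numeric sketch of this parameter choice, with a hypothetical constant standing in for \(C_w^3c_w^{-2}\): with \(\alpha = \sqrt{\epsilon }\), the noise bound (94) becomes \(C\sqrt{\epsilon } \rightarrow 0\).

```python
# The propagated-noise term is bounded by C * eps / alpha, cf. (94).
# With alpha = sqrt(eps) this equals C * sqrt(eps) and vanishes as eps -> 0.
C = 2.0  # hypothetical stand-in for C_w^3 * c_w^{-2}

def noise_bound(eps):
    alpha = eps ** 0.5
    return C * eps / alpha

bounds = [noise_bound(e) for e in (1e-2, 1e-4, 1e-6)]
```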

Suppose now that \(\delta > 0\). The proof of the estimate (11) is similar to that of [11]. Using the formula (92), we get that

$$\begin{aligned} \Vert P_{w,s}^{\alpha }F_{W_k} -\text {Id} \Vert _{H^{r+\delta }({\mathbb {T}}^n)\rightarrow H^r({\mathbb {T}}^n)} = \sup _{k \in {\mathbb {Z}}^n} \frac{\alpha W_k^{-1}\left\langle k\right\rangle ^{2s-\delta }}{1+\alpha W_k^{-1}\left\langle k\right\rangle ^{2s}}. \end{aligned}$$
(95)

We can estimate the norm by defining the functions

$$\begin{aligned} F_k(x) := \frac{\alpha W_k^{-1} x^{2s-\delta }}{1+\alpha W_k^{-1}x^{2s}}. \end{aligned}$$
(96)

The formula [11, Eq. (38)] implies that the maximum value of \(F_k\) is \((W_k^{-1}\alpha )^{\delta /2s}C(\delta /2s)\) whenever \(\alpha \le W_k(2s/\delta -1)\). This condition holds since we assumed that \(\alpha \le c_w^2(2s/\delta -1)\) and \(c_w^2 \le W_k\).
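Assuming the explicit form \(C(t) = t^t(1-t)^{1-t}\) with \(t = \delta /2s\), which a direct calculus computation gives for this family, the sketch below (hypothetical parameter values) checks the maximum of \(F_k\) numerically:

```python
import numpy as np

# Maximize F(x) = a*x^(2s - delta) / (1 + a*x^(2s)) over x >= 0 and compare
# against the closed form a^t * t^t * (1-t)^(1-t), where a = alpha / W_k and
# t = delta / (2s). All parameter values are hypothetical; the condition
# alpha <= W * (2s/delta - 1) from the text is satisfied here.
s, delta, alpha, W = 1.0, 0.5, 0.3, 1.2
a, t = alpha / W, delta / (2 * s)

def F(x):
    return a * x**(2 * s - delta) / (1 + a * x**(2 * s))

xs = np.linspace(0.0, 100.0, 2_000_001)   # fine grid covering the maximizer
numeric_max = F(xs).max()
closed_form = a**t * t**t * (1 - t)**(1 - t)
```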

We obtain that

$$\begin{aligned} \begin{aligned}&\Vert P_{w,s}^\alpha F_{W_k} -\text {Id} \Vert _{H^{r+\delta }({\mathbb {T}}^n)\rightarrow H^r({\mathbb {T}}^n)} \\&\,\,\le \sup _{k \in {\mathbb {Z}}^n,\, x \ge 0} F_k(x) \le (c_w^{-2}\alpha )^{\delta /2s}C(\delta /2s). \end{aligned} \end{aligned}$$
(97)

Hence

$$\begin{aligned} \Vert (P_{w,s}^\alpha F_{W_k} -\text {Id})f \Vert _{H^r({\mathbb {T}}^n)} \le (c_w^{-2}\alpha )^{\delta /2s}C(\delta /2s)\Vert f \Vert _{H^{r+\delta }({\mathbb {T}}^n)}. \end{aligned}$$
(98)

Now the formulas (94) and (98) imply the quantitative estimate (11). \(\square \)