Dynamical Sampling in \(PW_c\)
In this section, we recall some of the results on dynamical sampling from [4, 5] and adapt them to the problems studied in this paper.
For \(\phi \in L^1\), consider the function
$$\begin{aligned} {\widehat{\phi }}_p(x)=\sum _{k\in {{\mathbb {Z}}}}{\widehat{\phi }}(x-2ck)\mathbf {1}_{[-c,c)}(x-2ck), \end{aligned}$$
that is, the 2c-periodization of the piece of \({\widehat{\phi }}\) supported in \([-c,c)\). Recall that we consider kernels from the set \(\Phi \) given by (1.1). Hence,
$$\begin{aligned} \kappa _\phi \le {\widehat{\phi }}_p(\xi )\le 1, \qquad \xi \in {\mathbb {R}}. \end{aligned}$$
We also write
$$\begin{aligned} \widehat{f_t}(\xi ) := \widehat{f}(\xi ) \widehat{\phi }^t(\xi ), \ f\in PW_c. \end{aligned}$$
Next, we introduce the sampled diffusion matrix, which is the \(m\times m\) matrix-valued function given by
$$\begin{aligned} {\mathcal {B}}_m(\xi )&=\left( \int _0^1\overline{({\widehat{\phi }})_p^t\left( \frac{2c}{m}(\xi +j)\right) }\,({\widehat{\phi }})_p^t\left( \frac{2c}{m}(\xi +k)\right) \,\mathrm {d}t\right) _{0\le j,k\le m-1} \\ &=\int _0^1{\mathcal {A}}_m^* (\xi ,t){\mathcal {A}}_m(\xi ,t)\,\mathrm {d}t, \end{aligned}$$
(2.12)
where
$$\begin{aligned} {\mathcal {A}}_{m}(\xi ,t)&=\begin{pmatrix}\displaystyle ({\widehat{\phi }})_p^t\left( \frac{2c}{m}(\xi +k)\right) \end{pmatrix}_{k=0,\ldots ,m-1}\\ &=\begin{pmatrix}\displaystyle ({\widehat{\phi }})_p^t\left( \frac{2c}{m}\xi \right)&\cdots&\displaystyle ({\widehat{\phi }})_p^t\left( \frac{2c}{m}(\xi +m-1)\right) \end{pmatrix} \in {\mathcal {M}}_{1,m}(\mathbb {C}). \end{aligned}$$
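Numerically, \({\mathcal {B}}_m(\xi )\) is straightforward to assemble directly from (2.12). The following sketch (Python with NumPy; the Gaussian kernel and all parameter values are illustrative choices of ours, not fixed by the text) discretizes the \(t\)-integral with a midpoint rule:

```python
import numpy as np

def phi_hat_p(u, c=0.5, sigma=1.0):
    # 2c-periodization of a Gaussian kernel: fold u into [-c, c)
    return np.exp(-(sigma * (((u + c) % (2 * c)) - c)) ** 2)

def A_row(xi, t, m, c=0.5, sigma=1.0):
    # the 1 x m row A_m(xi, t) with entries phi_hat_p^t(2c/m (xi + k))
    return phi_hat_p(2 * c / m * (xi + np.arange(m)), c, sigma) ** t

def B_m(xi, m, c=0.5, sigma=1.0, n_quad=2000):
    # sampled diffusion matrix (2.12): midpoint rule for the t-integral
    ts = (np.arange(n_quad) + 0.5) / n_quad
    rows = np.array([A_row(xi, t, m, c, sigma) for t in ts])  # (n_quad, m)
    return rows.T @ rows / n_quad                             # int_0^1 A* A dt

B = B_m(0.45, 3, sigma=2.0)
# B is a Gram matrix: symmetric, positive semi-definite, entries in (0, 1]
assert np.allclose(B, B.T) and B.max() <= 1.0
```

Since \({\mathcal {B}}_m(\xi )\) is a Gram matrix of the functions \(t\mapsto ({\widehat{\phi }})_p^t\), its eigenvalues are non-negative, and they are at most m because every entry lies in (0, 1].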
Remark 2.1
Observe that the matrix function \({\mathcal {B}}_m\) is m-periodic. Its eigenvalues, however, are 1-periodic because the matrices \({\mathcal {B}}_m(\xi )\) and \({\mathcal {B}}_m(\xi +k)\), \(k\in {{\mathbb {Z}}}\), are similar via a circular shift matrix.
The following lemma explains the role of the sampled diffusion matrix. In the lemma, we let
$$\begin{aligned} \mathbf{f}(\xi )=\begin{pmatrix}\displaystyle (\widehat{f})_p\left( \frac{2c}{m}(\xi +j)\right) \end{pmatrix}_{j=0,\ldots ,m-1} =\begin{pmatrix}\displaystyle (\widehat{f})_p\left( \frac{2c}{m}\xi \right) \\ \vdots \\ \displaystyle (\widehat{f})_p\left( \frac{2c}{m}(\xi +m-1)\right) \end{pmatrix} \in {\mathcal {M}}_{m,1}(\mathbb {C}). \end{aligned}$$
(2.13)
Note that if we recover \(\mathbf{f}(\xi )\) for \(\xi \in [0,1]\) then we can recover \((\widehat{f})_p\), and hence \(f\) itself. Observe also that
$$\begin{aligned} \begin{aligned} \int _0^1\Vert \mathbf{f}(\xi )\Vert ^2\,\mathrm {d}\xi&=\sum _{j=0}^{m-1}\int _0^1\left| (\widehat{f})_p\left( \frac{2c}{m}(\xi +j)\right) \right| ^2\,\mathrm {d}\xi =\frac{m}{2c}\sum _{j=0}^{m-1}\int _{2cj/m}^{2c(j+1)/m}|(\widehat{f})_p(u)|^2\,\mathrm {d}u\\&=\frac{m}{2c}\int _{0}^{2c}|(\widehat{f})_p(s)|^2\,\mathrm {d}s =\frac{m}{2c}\int _{-c}^{c}|\widehat{f}(s)|^2\,\mathrm {d}s. \end{aligned} \end{aligned}$$
(2.14)
In other words, \(f\mapsto \sqrt{\frac{2c}{m}}\mathbf{f}: PW_c\rightarrow L^2([0,1],{\mathcal {M}}_{m,1}(\mathbb {C}))\) is an isometric isomorphism.
Lemma 2.2
For \(f \in PW_c\),
$$\begin{aligned} \int _0^1 \sum _{k\in {{\mathbb {Z}}}}\left| f_t\left( \frac{m \pi }{c}k\right) \right| ^2\,\mathrm {d}t =\left( \frac{c}{m\pi }\right) ^2\int _0^1\mathbf{f}(\xi )^*{\mathcal {B}}_m(\xi )\mathbf{f}(\xi )\,\mathrm {d}\xi . \end{aligned}$$
(2.15)
Proof
Observe that it suffices to prove the result in \(PW_c\cap {\mathcal S}({{\mathbb {R}}})\) (the Schwartz class). Consider the function
$$\begin{aligned} b(\xi ,t)=\sum _{k\in {{\mathbb {Z}}}}f_t\left( \frac{m \pi }{c}k\right) e^{-2i\pi k\xi }. \end{aligned}$$
Using the Poisson summation formula and the definition of \(f_t\), we get
$$\begin{aligned} b(\xi ,t)&=\frac{c}{m\pi }\sum _{j\in {{\mathbb {Z}}}}\widehat{f_t}\left( \frac{2c}{m}(\xi +j)\right) =\frac{c}{m\pi }\sum _{-\frac{m}{2}-\xi \le j< \frac{m}{2}-\xi } {\widehat{\phi }}^t\left( \frac{2c}{m}(\xi +j)\right) \widehat{f}\left( \frac{2c}{m}(\xi +j)\right) \\ &=\frac{c}{m\pi }\sum _{j=0}^{m-1} ({\widehat{\phi }})_p^t\left( \frac{2c}{m}(\xi +j)\right) (\widehat{f})_p\left( \frac{2c}{m}(\xi +j)\right) . \end{aligned}$$
Note that the functions \(b(\cdot , t)\) are 1-periodic,
$$\begin{aligned} b(\xi ,t)=\frac{c}{m\pi }{\mathcal {A}}_m(\xi ,t)\mathbf{f}(\xi ), \end{aligned}$$
(2.16)
and thus
$$\begin{aligned} \int _0^1|b(\xi ,t)|^2\,\mathrm {d}t=\left( \frac{c}{m\pi }\right) ^2\mathbf{f}(\xi )^*{\mathcal {B}}_m(\xi )\mathbf{f}(\xi ),\ \xi \in {{\mathbb {R}}}. \end{aligned}$$
Combining the last equation with Parseval’s relation
$$\begin{aligned} \int _0^1|b(\xi ,t)|^2d\xi =\sum _{k\in {{\mathbb {Z}}}}\left| f_t\left( \frac{m \pi }{c}k\right) \right| ^2. \end{aligned}$$
(2.17)
and integrating over \(t\in [0,1]\) yields the desired conclusion. \(\square \)
Remark 2.3
Lemma 2.2 shows that the stability of reconstruction from spatio-temporal samples is controlled by the condition number of the self-adjoint matrices \({\mathcal {B}}_m(\xi )\) in (2.12). For symmetric \(\phi \in \Phi \) and \(m \ge 2\), however,
$$\begin{aligned} \inf _{\xi \in [0,1]} \lambda _{\mathrm {min}} \big ({\mathcal {B}}_m(\xi ) \big ) = \lambda _{\mathrm {min}} \big ({\mathcal {B}}_m(0) \big ) = 0, \end{aligned}$$
which precludes the stable reconstruction of all \(f \in PW_c\), see, e.g., [4]. This adds to our explanation of the phenomenon of blind spots in Sect. 1.2. We can nonetheless hope to find a large set \(\widetilde{E} \subseteq [0,1]\) such that \(\lambda _{\mathrm {min}} \big ({\mathcal {B}}_m(\xi ) \big )\ge \kappa \) for \(\xi \in \widetilde{E}\). Then, repeating the computation in (2.14), we get
$$\begin{aligned} \begin{aligned} \int _0^1 \sum _{k\in {{\mathbb {Z}}}}\left| f_t\left( \frac{m \pi }{c}k\right) \right| ^2\,\mathrm {d}t&=\left( \frac{c}{m\pi }\right) ^2\int _0^1\mathbf{f}(\xi )^*{\mathcal {B}}_m(\xi )\mathbf{f}(\xi )\,\mathrm {d}\xi \ge \kappa \left( \frac{c }{m\pi }\right) ^2\int _{\widetilde{E}}\Vert \mathbf{f}(\xi )\Vert ^2\,\mathrm {d}\xi \\&=\frac{c\kappa }{2m\pi ^2}\int _{E}|\widehat{f}(\xi )|^2\,\mathrm {d}\xi , \end{aligned} \end{aligned}$$
(2.18)
where \(E=\displaystyle \left( \frac{2c}{m}(\tilde{E}+{{\mathbb {Z}}})\right) \cap [-c,c]\).
In the following example, we offer some numerics. To simplify the computations, we represent \({\mathcal {B}}_m(\xi )\) in (2.12) as a Pick matrix (see, e.g., [7, 10]). For \(\xi \in [-c,c)\), we write \(\widehat{\phi }(\xi ) = e^{-\psi (\xi )},\) so that \(\psi \ge 0\) and \(\psi (0)= 0,\) and obtain for \(j, k = 0,\ldots , m-1\),
$$\begin{aligned} ({\mathcal {B}}_m)_{jk}(\xi ) = \int _0^1 \widehat{\phi }^t\left( \frac{2c}{m}(\xi +j^\prime )\right) \, \widehat{\phi }^t \left( \frac{2c}{m}(\xi +k^\prime )\right) \, \mathrm {d}t \end{aligned}$$
where the indices \(j^\prime ,k^\prime \) are in the set
$$\begin{aligned} I_\xi =\left\{ n\in {\mathbb {Z}}:\frac{\xi + n}{m} \in [-1/2, 1/2)\right\} , \end{aligned}$$
(2.19)
\(m\) divides \(|j-j^\prime |\) and \(|k-k^\prime |\), and we assume that j, k, and \(\xi \) are not 0 simultaneously (so that the denominator in (2.20) below does not vanish). Thus
$$\begin{aligned} \begin{aligned} ({\mathcal {B}}_m)_{jk}(\xi )&= \int _0^1\ e^{-t\left( \psi \left( \frac{2c}{m}(\xi +j^\prime )\right) + \psi \left( \frac{2c}{m}(\xi +k^\prime )\right) \right) } \,\mathrm {d}t \\&= \left( \psi \left( \frac{2c}{m}(\xi +j^\prime )\right) + \psi \left( \frac{2c}{m}(\xi +k^\prime )\right) \right) ^{-1}\,\left( 1-e^{-\left( \psi \left( \frac{2c}{m}(\xi +j^\prime )\right) + \psi \left( \frac{2c}{m}(\xi +k^\prime )\right) \right) } \right) \end{aligned} \end{aligned}$$
(2.20)
Observe that \(({\mathcal {B}}_m)_{00}(0) = 1\).
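As a sanity check, the closed form (2.20) can be compared with direct numerical integration of (2.12). In the sketch below (our illustrative parameters; the choice \(\psi (u)=\sigma ^2u^2\) anticipates the Gaussian of Example 2.4), the two evaluations agree to quadrature accuracy:

```python
import numpy as np

def fold(xi, j, m):
    # index j' as in (2.19): (xi + j')/m must lie in [-1/2, 1/2)
    return j if (xi + j) / m < 0.5 else j - m

def entry_closed_form(xi, j, k, m, c=0.5, sigma=1.0):
    # (2.20) specialized to psi(u) = sigma^2 u^2
    jp, kp = fold(xi, j, m), fold(xi, k, m)
    x = (sigma * 2 * c / m) ** 2 * ((xi + jp) ** 2 + (xi + kp) ** 2)
    return 1.0 if x == 0 else (1 - np.exp(-x)) / x

def entry_quadrature(xi, j, k, m, c=0.5, sigma=1.0, n=4000):
    # midpoint rule for the defining t-integral in (2.12)
    def pp(u):  # 2c-periodization of the Gaussian
        return np.exp(-(sigma * (((u + c) % (2 * c)) - c)) ** 2)
    ts = (np.arange(n) + 0.5) / n
    uj, uk = 2 * c / m * (xi + j), 2 * c / m * (xi + k)
    return np.mean(pp(uj) ** ts * pp(uk) ** ts)

cf = entry_closed_form(0.3, 1, 2, 3, sigma=2.0)
qd = entry_quadrature(0.3, 1, 2, 3, sigma=2.0)
assert abs(cf - qd) < 1e-6
assert entry_closed_form(0.0, 0, 0, 3) == 1.0   # the observation above
```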
Example 2.4
Here, we choose \(\phi \) to be the Gaussian function, i.e.,
$$\begin{aligned} {\widehat{\phi }}(\xi ) = {\widehat{\phi }}_1(\xi ) = e^{-\sigma ^2 \xi ^2} \end{aligned}$$
for various values of \(\sigma \not = 0\). Hence, \(\psi (\xi ) = \sigma ^2\xi ^2\), and we get
$$\begin{aligned} ({\mathcal {B}}_m)_{jk}(\xi ) = \frac{m^2}{4c^2\sigma ^2}\cdot \frac{1-e^{ - \frac{4c^2\sigma ^2}{m^2} \left( (\xi +j^\prime )^2 + (\xi +k^\prime )^2\right) }}{(\xi +j^\prime )^2+(\xi +k^\prime )^2} \end{aligned}$$
with \(j^\prime \), \(k^\prime \), and \(({\mathcal {B}}_m)_{00}(0)\) as above.
In Figure 1, we show the condition numbers of the matrices \({\mathcal {B}}_m(\xi )\) with \(\xi =0.45\), \(c=1/2\), \(m\in \{2,3,5\}\), and \(\sigma \) varying from 1 to 200.
In Figure 2, we also show the condition numbers of the matrices \({\mathcal {B}}_m(\xi )\). This time we keep \(c=1/2\) and \(m\in \{2,3,5\}\), fix \(\sigma =200\), and let the point \(\xi \) vary from 0.35 to 0.49.
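A minimal reproduction of such numerics (Python with NumPy; the values of \(\sigma \) below are our own choices, so the numbers are recomputed rather than read off the figures) also exhibits the blind spot of Remark 2.3 at \(\xi =0\):

```python
import numpy as np

def B_m_gauss(xi, m, c=0.5, sigma=1.0):
    # B_m(xi) for the Gaussian kernel via the closed Pick-matrix form (2.20)
    j = np.arange(m)
    jp = np.where((xi + j) / m < 0.5, j, j - m)     # folding as in (2.19)
    x = (2 * c * sigma / m) ** 2 * (xi + jp) ** 2   # psi(2c/m (xi + j'))
    S = x[:, None] + x[None, :]
    B = np.ones((m, m))
    nz = S > 0
    B[nz] = (1 - np.exp(-S[nz])) / S[nz]
    return B

# blind spot: two columns of A_m coincide at xi = 0, so B_3(0) is singular
ev0 = np.linalg.eigvalsh(B_m_gauss(0.0, 3, sigma=2.0))
assert ev0[0] < 1e-12

# away from the blind spots B_m is invertible but ill-conditioned; these
# condition numbers are illustrative, not the figure values
conds = {m: np.linalg.cond(B_m_gauss(0.45, m, sigma=50.0)) for m in (2, 3, 5)}
assert all(cv > 1.0 for cv in conds.values())
```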
Estimating the Minimal Eigenvalue of the Sampled Diffusion Matrix
In this subsection, we use Vandermonde matrices to obtain a lower estimate for the minimal eigenvalue \(\lambda ^{(m)}_{\min }(\xi )\) of the matrices \({\mathcal {B}}_m(\xi )\) in (2.12). We also present an upper estimate for \(\lambda ^{(m)}_{\min }(\xi )\), which follows from the general theory of Pick matrices [7, 10].
We begin with the following auxiliary result.
Lemma 2.5
Let \(v_0, v_1, \ldots , v_{m-1}\) be m distinct non-zero real numbers and let \(\mathbf {v}=(v_0,\ldots ,v_{m-1})\). For \(k\in {\mathbb {N}}\), define a function \(\Psi _k:{{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) by \(\Psi _k(t)=\tfrac{1-t^2}{1-t^{2/k}}\) if \(t\ne 1\) and \(\Psi _k(1)=k\). For \(j=0,\ldots ,m-1\), define
$$\begin{aligned} \sigma _j^2=\sum _{k=0}^{m-1}v_j^{2k}= \Psi _m(v_j^m) = {\left\{ \begin{array}{ll}m&\text{ if } v_j=1\\ \frac{1-v_j^{2m}}{1-v_j^2}&\text{ otherwise }\end{array}\right. }. \end{aligned}$$
Let \(\sigma =\left( \sum _{j=0}^{m-1}\sigma _j^2\right) ^{1/2}\), \(\gamma _- = \min _{j} |v_j| >0\), \(\gamma _+=\max _{j} |v_j|\) and let
$$\begin{aligned} \alpha = \left( \frac{m-1}{\sigma ^2}\right) ^{\frac{m-1}{2}}\prod _{0\le j< k\le m-1}|v_j-v_k|. \end{aligned}$$
For \(N\in \mathbb N\), let \(W_N\) be the \((mN)\times m\) Vandermonde matrix associated to \(\mathbf {v}_N = (v_0^{\frac{1}{N}}, v_1^{\frac{1}{N}}, \ldots v_{m-1}^{\frac{1}{N}})\), i.e.,
$$\begin{aligned} W_N=\left[ v_j^{\frac{i-1}{N}}\right] _{1\le i\le mN,0\le j\le m-1}. \end{aligned}$$
Then for each \(x\in \mathbb {C}^m\), we have
$$\begin{aligned} \alpha ^2 \Psi _N(\gamma _-)\Vert x\Vert ^2 \le \Vert W_N x\Vert ^2 \le \sigma ^2\Psi _N(\gamma _+)\Vert x\Vert ^2. \end{aligned}$$
Proof
Let \(V\) be the \(m\times m\) Vandermonde matrix associated to \(\mathbf {v}\):
$$\begin{aligned} V=[v_j^{i}]_{0\le i\le m-1,0\le j\le m-1}. \end{aligned}$$
Note that the Frobenius norm of V and its determinant are given by
$$\begin{aligned} \Vert V\Vert _F=\sigma \quad \text{ and }\quad |\det V|=\prod _{0\le j< k\le m-1}|v_j-v_k|. \end{aligned}$$
Recall from [23] an estimate for the minimal singular value of an \(m\times m\) matrix A:
$$\begin{aligned} \sigma _{\min }(A) \ge \left( \frac{m-1}{\Vert A\Vert ^2_F}\right) ^{(m-1)/2}|\det A|. \end{aligned}$$
(2.21)
Specializing this estimate to \(V\), we get \(\sigma _{\min }(V)\ge \alpha \). As \(\Vert V\Vert \le \Vert V\Vert _F\), it follows that, for all \(x\in \mathbb {C}^m\),
$$\begin{aligned} \alpha ^2\Vert x\Vert ^2\le \Vert Vx\Vert ^2\le \sigma ^2\Vert x\Vert ^2. \end{aligned}$$
(2.22)
Let \(D_N\) be the diagonal matrix with \(\mathbf{v}_{N}\) on the main diagonal. Since
$$\begin{aligned} \Vert W_N x\Vert ^2 = \langle W_N^* W_N x, x\rangle = \sum _{\ell =0}^{N-1} \langle (D_N^\ell )^* V^*VD_N^\ell x, x\rangle = \sum _{\ell =0}^{N-1}\Vert VD_N^\ell x\Vert ^2, \end{aligned}$$
we deduce from (2.22) that
$$\begin{aligned} \sum _{\ell =0}^{N-1}\alpha ^2 \Vert D_N^\ell x\Vert ^2\le \Vert W_N x\Vert ^2 \le \sum _{\ell =0}^{N-1}\sigma ^2 \Vert D_N^\ell x\Vert ^2. \end{aligned}$$
Moreover, we have \(\gamma _-^{\frac{2\ell }{N}}\Vert x\Vert ^2\le \Vert D_N^\ell x\Vert ^2 \le \gamma _+^{\frac{2\ell }{N}}\Vert x\Vert ^2\) by definition of \(D_N\). The conclusion now follows by summing the two geometric sequences. \(\square \)
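The two-sided bound of Lemma 2.5 is easy to test numerically. In the sketch below (the node vector v and the value of N are arbitrary test choices of ours), the extreme eigenvalues of \(W_N^*W_N\) are compared against \(\alpha ^2\Psi _N(\gamma _-)\) and \(\sigma ^2\Psi _N(\gamma _+)\):

```python
import numpy as np

def Psi(N, t):
    # Psi_N(t) = (1 - t^2) / (1 - t^(2/N)), extended by Psi_N(1) = N
    return float(N) if t == 1 else (1 - t ** 2) / (1 - t ** (2.0 / N))

# distinct nonzero nodes; these particular values are an arbitrary test case
v = np.array([0.3, 0.55, 0.8, 1.0])
m, N = len(v), 40

# the (mN) x m matrix W_N = [ v_j^{(i-1)/N} ], i = 1, ..., mN
W = v[None, :] ** (np.arange(m * N)[:, None] / N)

sigma2 = sum(Psi(m, float(vj) ** m) for vj in v)   # = sum_j sigma_j^2
vander = np.prod([abs(v[j] - v[k]) for j in range(m) for k in range(j + 1, m)])
alpha = ((m - 1) / sigma2) ** ((m - 1) / 2) * vander
gm, gp = np.abs(v).min(), np.abs(v).max()          # gamma_-, gamma_+

ev = np.linalg.eigvalsh(W.T @ W)   # extreme values of ||W_N x||^2 / ||x||^2
assert alpha ** 2 * Psi(N, gm) <= ev[0] + 1e-9
assert ev[-1] <= sigma2 * Psi(N, gp) + 1e-9
```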
Note that the function \(\Psi _N\) is increasing on \((0,+\infty )\) and that, for \(t\ne 1\), \(t>0\)
$$\begin{aligned} \lim _{N\rightarrow \infty }\frac{1}{N}\Psi _N(t)=\frac{1-t^2}{\lim _{N\rightarrow \infty }N(1-e^{2\ln t/N})}=\left| \frac{1-t^2}{2\ln t} \right| . \end{aligned}$$
(2.23)
Corollary 2.6
With the notation of Lemma 2.5, assume further that \(0<\nu \le v_j\le 1\) and \(m\ge 2\). Let
$$\begin{aligned} {\widetilde{\alpha }}=e^{-1/2}m^{-\frac{m-1}{2}}\prod _{0\le j< k\le m-1}|v_j-v_k|. \end{aligned}$$
(2.24)
Then for each \(x\in \mathbb {C}^m\), we have
$$\begin{aligned} {\widetilde{\alpha }}^2 \Psi _N(\nu )\Vert x\Vert ^2 \le \Vert W_N x\Vert ^2 \le m^2N\Vert x\Vert ^2. \end{aligned}$$
Proof
Indeed, \(\nu \le \gamma _-\le \gamma _+\le 1\) so \(\Psi _N(\nu )\le \Psi _N(\gamma _-)\) and \(\Psi _N(\gamma _+)\le \Psi _N(1)=N\).
Further, since \(v_j\le 1\), \(\sigma ^2\le m^2\). Moreover, the derivative of \(\left( \frac{t-1}{t}\right) ^{(t-1)/2}=\left( 1-\frac{1}{t}\right) ^{(t-1)/2}\) is
$$\begin{aligned} \frac{1}{2}\left( 1-\frac{1}{t}\right) ^{(t-1)/2}\left( \frac{1}{t}+\ln \left( 1-\frac{1}{t}\right) \right) \le 0 \end{aligned}$$
for \(t\ge 1\). Thus,
$$\begin{aligned} \left( \frac{m-1}{m}\right) ^{(m-1)/2}\ge \lim _{t\rightarrow +\infty }\exp \left[ \frac{t-1}{2}\ln \left( 1-\frac{1}{t}\right) \right] =e^{-1/2}. \end{aligned}$$
It follows that \(\alpha \) in the statement of Lemma 2.5 satisfies
$$\begin{aligned} \alpha \ge \frac{\prod _{0\le j< k\le m-1}|v_j-v_k|}{\sqrt{e}m^{(m-1)/2}}, \end{aligned}$$
and the result is established. \(\square \)
Proposition 2.7
Let \( \phi \in \Phi \). Define
$$\begin{aligned} \Delta _m(\xi )= \prod _{0\le j< k\le m-1}\left| {\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +j)\right) -{\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +k)\right) \right| . \end{aligned}$$
Then, for each \(x\in \mathbb {C}^m\), we have
$$\begin{aligned} \frac{1}{2e m^{m^2}}\Delta _m(\xi )^2 \cdot \frac{1-\kappa _\phi ^{2/m}}{|\ln \kappa _\phi |}\Vert x\Vert ^2\le \langle {\mathcal {B}}_m(\xi ) x, x \rangle \le m\Vert x\Vert ^2. \end{aligned}$$
Proof
We fix \(\xi \) and apply Corollary 2.6 to \(v_j =\displaystyle ({\widehat{\phi }})_p\left( \frac{2c}{m}(\xi +j)\right) ^{\frac{1}{m}}\). With \({\widetilde{\alpha }}\) given by (2.24),
$$\begin{aligned} {\widetilde{\alpha }}= e^{-1/2}m^{-\frac{m-1}{2}} \prod _{0\le j< k\le m-1}\left| {\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +j)\right) ^{1/m} -{\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +k)\right) ^{1/m} \right| , \end{aligned}$$
we get
$$\begin{aligned} {\widetilde{\alpha }}^2\Psi _N(\kappa _\phi ^{1/m})\Vert x\Vert ^2\le \Vert W_Nx\Vert ^2\le m^2 N \Vert x\Vert ^2. \end{aligned}$$
On the other hand, \(\frac{1}{mN} W_N^*W_N\) equals the left-end mN-term Riemann sum for the integral defining \({\mathcal {B}}_m(\xi )\). It follows that
$$\begin{aligned} \langle {\mathcal {B}}_m(\xi ) x, x \rangle = \lim _{N\rightarrow \infty }\frac{1}{mN} \langle W_N^*W_Nx, x \rangle =\lim _{N\rightarrow \infty }\frac{1}{mN} \Vert W_Nx\Vert ^2. \end{aligned}$$
Using (2.23), we get
$$\begin{aligned} {\widetilde{\alpha }}^2\frac{1-\kappa _\phi ^{2/m}}{2m|\ln \kappa _\phi |}\Vert x\Vert ^2\le \langle {\mathcal {B}}_m(\xi ) x, x \rangle \le m\Vert x\Vert ^2. \end{aligned}$$
Finally, note that if \(0<a,b\le 1\), using the mean value theorem, there is an \(\eta \in (a,b)\) such that
$$\begin{aligned} |a^{1/m}-b^{1/m}|=\frac{1}{m}|a-b|\eta ^{-1+1/m}\ge \frac{1}{m}|a-b|. \end{aligned}$$
Therefore
$$\begin{aligned} {\widetilde{\alpha }}&= e^{-1/2}m^{-\frac{m-1}{2}} \prod _{0\le j< k\le m-1}\left| {\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +j)\right) ^{1/m} -{\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +k)\right) ^{1/m} \right| \\ &\ge e^{-1/2}m^{-\frac{m-1}{2}-\frac{m(m-1)}{2}}\Delta _m(\xi ) =e^{-1/2}m^{-\frac{m^2-1}{2}}\Delta _m(\xi ), \end{aligned}$$
establishing the postulated estimates. \(\square \)
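The resulting two-sided estimate of Proposition 2.7 can be checked numerically for the Gaussian kernel (the parameters below are our illustrative choices):

```python
import numpy as np

sigma, c, m, xi = 1.0, 0.5, 3, 0.3

j = np.arange(m)
jp = np.where((xi + j) / m < 0.5, j, j - m)            # folding as in (2.19)
vals = np.exp(-(sigma * 2 * c / m * (xi + jp)) ** 2)   # phi_hat_p(2c/m (xi+j))
x = -np.log(vals)                                      # psi at the same nodes
S = x[:, None] + x[None, :]
B = (1 - np.exp(-S)) / S                               # (2.20); S > 0 here

Delta = np.prod([abs(vals[a] - vals[b])
                 for a in range(m) for b in range(a + 1, m)])   # Delta_m(xi)
kappa = np.exp(-(sigma * c) ** 2)                      # kappa_phi for this kernel
lower = (Delta ** 2 * (1 - kappa ** (2 / m))
         / (2 * np.e * m ** (m * m) * abs(np.log(kappa))))

ev = np.linalg.eigvalsh(B)
assert lower <= ev[0] + 1e-15 and ev[-1] <= m + 1e-12
```

As the proposition predicts, the lower bound is valid, but it is several orders of magnitude below the actual minimal eigenvalue even for m = 3.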
For an upper estimate of the minimal eigenvalue \(\lambda ^{(m)}_{\min }(\xi )\) we use the estimates of the singular values of Pick matrices by Beckermann-Townsend [7]. For \(p_j\in \mathbb {C}\), \( j = 1 , \dots , m\), and \(0<a\le x_1< x_2< \dots < x_m \le b\) let
$$\begin{aligned} (P_m)_{jk} = \frac{p_j+ p_k}{x_j+ x_k}, \qquad j,k = 1 , \dots , m, \end{aligned}$$
(2.25)
be the corresponding Pick matrix. Then the smallest singular value \(s_{\min }\) of \(P_m\) is bounded above by
$$\begin{aligned} s_{\min } \le \min \left\{ 1, 4 \left[ \exp \left( \frac{\pi ^2}{2\ln \left( \frac{4b}{a}\right) } \right) \right] ^{-2\lfloor m/2 \rfloor }\right\} s_{\max }, \end{aligned}$$
(2.26)
where \(s_{\max }\) is the largest singular value.
If \((\widetilde{ P}_m)_{jk} = \frac{1-c_jc_k}{x_j+x_k}\), then \(\widetilde{P}_m\) is related to a Pick matrix of the form (2.25) via the diagonal matrix \(D = \mathrm {diag} (1+c_j)\):
$$\begin{aligned} D ^{-1}\widetilde{P}_m D ^{-1}= \frac{1}{2} P_m \end{aligned}$$
with \(p_j = \frac{1-c_j}{1+c_j}\), \(c_j\ne -1\).
In our case, see (2.20), \(x_j= \psi \left( \frac{2c}{m}(\xi +j^\prime )\right) \) and \(c_j = e^{-x_j} \in (0,1]\), so \(\mathrm {Id}\le D \le 2\mathrm {Id}\) and the singular values of \(\mathcal {B}_m (\xi )\) and the corresponding Pick matrix \(P_m\) differ at most by a factor 4. Therefore, (2.26) holds with \(a (\xi ) = \min \left\{ \psi \left( \frac{2c}{m}(\xi +k)\right) : k\in I_\xi \right\} \) and \(b (\xi ) = \max \left\{ \psi \left( \frac{2c}{m}(\xi +k)\right) : k\in I_\xi \right\} \), \(I_\xi \) defined in (2.19), and an additional factor 4 provided that \(a(\xi ) \ne 0\).
For our main examples, we have \(\psi (\xi ) = |\xi |^\alpha \), \(\alpha > 0\). This yields
$$\begin{aligned} b(\xi ) \le c^\alpha \quad \text{ and }\quad a(\xi ) \!=\! \min \left\{ \left| \frac{2c}{m}(\xi -k)\right| ^\alpha : \frac{2c}{m}|\xi \!-\! k|\le c,\ |\xi |\le \frac{1}{2}\right\} = \left( \frac{2c}{m}|\xi |\right) ^\alpha \end{aligned}$$
So for the smallest singular value of \(\mathcal {B}_m(\xi )\) we obtain the estimate
$$\begin{aligned} \begin{aligned} \lambda ^{(m)}_{\min }(\xi )&\le 4^2\left[ \exp \left( \frac{\pi ^2}{2\ln 4\left( \frac{m}{2|\xi |}\right) ^\alpha } \right) \right] ^{-2\lfloor m/2 \rfloor } m\\&\le 16 m \, \exp \left( -\frac{(m-1)\pi ^2}{\ln 16 + 2\alpha \ln \frac{m}{2|\xi |}} \right) . \end{aligned} \end{aligned}$$
(2.27)
Observe that the Beckermann-Townsend estimate (2.26) holds for all Pick matrices with the same values for \(a=\min x_j\) and \(b=\max x_j\) and is completely independent of the particular distribution of the \(x_j\). Regardless, it shows that the condition number grows nearly exponentially with m, establishing limitations on how well the space–time trade-off can work numerically. Of course, the condition number may be much worse if two values \(x_j\) and \(x_{j+1}\) are close together (if \(x_j = x_{j+1}\), then \(P_m\) is singular). Thus, (2.27) is an optimistic upper estimate for \(\lambda ^{(m)}_{\min }(\xi )\). By comparison, our lower estimate in Proposition 2.7 depends crucially on the distribution of the parameters \(x_j\) and is much harder to obtain. It does, however, establish an upper bound on the condition number and, thus, shows that the space–time trade-off may be useful. A precise result is formulated in the following subsection.
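For \(\psi (u)=|u|^\alpha \) the bound (2.27) can be evaluated directly and compared with the actual minimal eigenvalue. Below is a sketch with our illustrative choices \(\alpha =2\), \(c=1/2\), \(\xi =0.45\):

```python
import numpy as np

c, xi, alpha = 0.5, 0.45, 2

def lam_min(m):
    # minimal eigenvalue of B_m(xi) for psi(u) = u^2, via (2.20)
    j = np.arange(m)
    jp = np.where((xi + j) / m < 0.5, j, j - m)
    x = (2 * c / m * (xi + jp)) ** 2          # psi(2c/m (xi + j'))
    S = x[:, None] + x[None, :]
    return np.linalg.eigvalsh((1 - np.exp(-S)) / S)[0]

def bt_bound(m):
    # the right-hand side of (2.27)
    return 16 * m * np.exp(-(m - 1) * np.pi ** 2 /
                           (np.log(16) + 2 * alpha * np.log(m / (2 * abs(xi)))))

for m in (4, 6, 8):
    assert lam_min(m) <= bt_bound(m) + 1e-12
```

The bound decays quickly in m, consistent with the near-exponential growth of the condition number noted above.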
Partial Recoverability
Theorem 2.8
Let \( \phi \in \Phi \), let \(m\ge 2\) be an integer, and let \(\widetilde{E}\subseteq I = [0,1] \) be a compact set. Assume that there exists \(\delta >0\) such that, for every \(0\le j<k\le m-1\) and every \(\xi \in \widetilde{E}\),
$$\begin{aligned} \left| {\widehat{\phi }}_p\left( \frac{2c}{m}(\xi + j)\right) -{\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +k)\right) \right| \ge \delta . \end{aligned}$$
Let \(E=\displaystyle \left( \frac{2c}{m}(\tilde{E}+{{\mathbb {Z}}})\right) \cap [-c,c]\). Then for any \(f\in PW_c\), the function \(\widehat{f}\mathbf {1}_{ E}\) can be recovered from the samples
$$\begin{aligned} {\mathcal {M}} = \left\{ f_t\left( \frac{m \pi }{c}k\right) : k \in {\mathbb {Z}}, 0 \le t \le 1\right\} \end{aligned}$$
(2.28)
in a stable way. Moreover, we have
$$\begin{aligned} A\Vert \widehat{f}\mathbf {1}_{ E}\Vert ^2 \le \int _0^1 \sum _{k\in {\mathbb {Z}}}\left| f_t\left( \frac{m \pi }{c}k\right) \right| ^2\,\mathrm {d}t \le \frac{c}{2\pi ^2}\Vert \widehat{f}\Vert ^2, \end{aligned}$$
(2.29)
where
$$\begin{aligned}A=\frac{c }{4e\pi ^2}\frac{\delta ^{m(m-1)}}{m^{1+m^2}} \frac{\kappa _\phi ^{2/m}-1}{\ln \kappa _\phi }. \end{aligned}$$
Proof
Recall from (2.15) that we need to estimate
$$\begin{aligned} \int _0^1 \sum _{k\in {\mathbb {Z}}}\left| f_t\left( \frac{m \pi }{c}k\right) \right| ^2\,\mathrm {d}t=\left( \frac{c}{m\pi }\right) ^2\int _0^1\mathbf{f}(\xi )^*{\mathcal {B}}_m(\xi )\mathbf{f}(\xi )\,\mathrm {d}\xi . \end{aligned}$$
The upper bound follows directly from Proposition 2.7, and (2.14):
$$\begin{aligned} \int _0^1\mathbf{f}(\xi )^*{\mathcal {B}}_m(\xi )\mathbf{f}(\xi )\,\mathrm {d}\xi \le m\int _0^1\Vert \mathbf{f}(\xi )\Vert ^2\,\mathrm {d}\xi =\frac{m^2}{2c}\Vert \widehat{f}\Vert ^2. \end{aligned}$$
Let us now prove the lower bound using (2.18). First, \(\Delta _m(\xi )\ge \delta ^{\frac{m(m-1)}{2}}\). It follows from Proposition 2.7 that, if \(\xi \in \widetilde{E}\) then
$$\begin{aligned} \mathbf{f}(\xi )^*{\mathcal {B}}_m(\xi )\mathbf{f}(\xi ) \ge \frac{\kappa _\phi ^{2/m}-1}{2e m^{m^2}\ln \kappa _\phi } \delta ^{m(m-1)} \Vert \mathbf{f}(\xi )\Vert ^2. \end{aligned}$$
Taking \(\displaystyle \kappa =\frac{\kappa _\phi ^{2/m}-1}{2e m^{m^2}\ln \kappa _\phi } \delta ^{m(m-1)}\) in (2.18) gives the result. \(\square \)
Remark 2.9
The condition number implied by the above theorem is not the best possible one can obtain through this method. For instance, a better estimate for the smallest singular value \(\sigma _{\min }\) of a Vandermonde matrix may be used in place of (2.21).
However, the method will always lead to a deteriorating estimate of the condition number as m increases. This follows from the Beckermann-Townsend estimate (2.26) discussed in the previous subsection.
Corollary 2.10
Assume that \(\phi \in \Phi \), \({\widehat{\phi }}\) is even and strictly decreasing on \({{\mathbb {R}}}_+\), and \(m\ge 2\) is an integer. Given \(\eta \in (0,\frac{1}{4})\), let \(\widetilde{E} = [-\frac{1}{2} + \eta , -\eta ]\cup [\eta , \frac{1}{2} -\eta ]\) and \(E=\displaystyle \left( \frac{2c}{m}(\tilde{E}+{{\mathbb {Z}}})\right) \cap [-c,c]\). Then there exists \(A>0\) such that, for any \(f\in PW_c\),
$$\begin{aligned} A\Vert \widehat{f}\mathbf {1}_{ E}\Vert ^2 \le \int _0^1 \sum _{k\in {\mathbb {Z}}}\left| f_t\left( \frac{m \pi }{c}k\right) \right| ^2\,\mathrm {d}t \le \Vert \widehat{f}\Vert ^2. \end{aligned}$$
Proof
We verify the main condition of Theorem 2.8: there exists \(\delta >0\) such that, for every \(0\le j<k\le m-1\) and every \(\xi \in \widetilde{E}\),
$$\begin{aligned} \left| {\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +j)\right) - {\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +k)\right) \right| \ge \delta . \end{aligned}$$
(2.30)
For a general \(\phi \in \Phi \), the function \({\widehat{\phi }}\) is continuous and, therefore, \({\widehat{\phi }}_p\) is continuous, except possibly on \(\displaystyle c+2c{{\mathbb {Z}}}\) where a jump discontinuity occurs if \({\widehat{\phi }}(-c)\ne {\widehat{\phi }}(c)\). Under current assumptions, however, \({\widehat{\phi }}\) is even and, therefore \({\widehat{\phi }}_p\) is continuous everywhere.
For \(0\le \ell \le m-1\) and \(\xi \in I\), we have \(\displaystyle -\frac{1}{2m} \le \frac{\xi + \ell }{m}\le 1- \frac{1}{2m} \) and
$$\begin{aligned} {\widehat{\phi }}_p\left( \frac{2c}{m}(\xi + \ell )\right) ={\left\{ \begin{array}{ll}\displaystyle {\widehat{\phi }}\left( \frac{2c}{m}(\xi + \ell )\right) &\text{ if } \displaystyle \frac{\xi + \ell }{m}< 1/2\\ \displaystyle {\widehat{\phi }}\left( \frac{2c}{m}(\xi + \ell -m)\right) &\text{ if } \displaystyle \frac{\xi + \ell }{m}\ge 1/2 \end{array}\right. }. \end{aligned}$$
Thus, the condition of Theorem 2.8 would be satisfied with \(\widetilde{E}=I\) if \(|{\widehat{\phi }}|\) were one-to-one on I, that is, either strictly decreasing or strictly increasing. However, \({\widehat{\phi }}\) is even and strictly decreasing on \({{\mathbb {R}}}_+\), so that \({\widehat{\phi }}_p\) is continuous, strictly decreasing on [0, c] and strictly increasing on \([-c,0]\). It follows that (2.30) may only fail in small intervals around the points \(\xi \in I\) where \(\displaystyle {\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +j)\right) -{\widehat{\phi }}_p\left( \frac{2c}{m}(\xi + k)\right) = 0\) for some \(j,k\in {{\mathbb {Z}}}\). Such points must satisfy
$$\begin{aligned} \frac{\xi + j}{m} = 1-\frac{\xi +k}{m},\qquad 0\le j \le \frac{m-1}{2} < k \le m-1. \end{aligned}$$
Thus, we need \(\xi = \frac{1}{2}(m-j-k)\), i.e. \(\xi \in \{0,\pm \frac{1}{2} \}\). In view of the continuity of \({\widehat{\phi }}_p\), it follows that there exists \(\eta > 0\) such that (2.30) holds for \(\xi \in \widetilde{E} = [-\frac{1}{2} + \eta , -\eta ]\cup [\eta , \frac{1}{2} -\eta ]\). It remains to observe that, for any given \(\eta \in (0,\frac{1}{4})\), inequality (2.30) will hold for \(\delta \) sufficiently small. \(\square \)
Explicit Quantitative Estimates for the Gaussian
To obtain explicit estimates, we need to establish a precise relation between \(\eta \) and \(\delta \) in the proof of Corollary 2.10. In other words, we need to estimate \(\min \limits _{\xi \in \widetilde{E}} \omega (\xi )\), where, as above, \(\widetilde{E} = [-\frac{1}{2} + \eta , -\eta ]\cup [\eta , \frac{1}{2} -\eta ]\), \(\eta \in (0, \frac{1}{4})\), and the function \(\omega \) is given by
$$\begin{aligned} \omega (\xi ) = \min _{0\le j<k\le m-1} \left| {\widehat{\phi }}_p\left( \frac{2c}{m}(\xi + j)\right) -{\widehat{\phi }}_p\left( \frac{2c}{m}(\xi +k)\right) \right| . \end{aligned}$$
Lemma 2.11
Let E and \(\widetilde{E}\) be as in Corollary 2.10. Assume that the kernel \(\phi \in \Phi \) is such that \({\widehat{\phi }}\) is differentiable on E and
$$\begin{aligned} \min _{\xi \in E} \left| {\widehat{\phi }}^\prime (\xi ) \right| \ \ge R. \end{aligned}$$
Then
$$\begin{aligned} \min \limits _{\xi \in \widetilde{E}}\omega (\xi ) \ge \frac{4cR\eta }{m}. \end{aligned}$$
Proof
Observe that
$$\begin{aligned} \min _{\xi \in \widetilde{E}}\,\min _{\begin{array}{c} j,k\in {{\mathbb {Z}}}\\ j\ne k \end{array}} \left| \left| \frac{2c}{m}(\xi + j)\right| -\left| \frac{2c}{m}(\xi + k)\right| \right| = \frac{2c}{m}2\eta . \end{aligned}$$
With this, the assertion of the lemma follows immediately from the mean value theorem. \(\square \)
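A numerical check of the lemma for the Gaussian kernel (the helper names and all parameter values below are our own choices):

```python
import numpy as np

sigma, c, m, eta = 2.0, 0.5, 3, 0.1

def phi_hat_p(u):
    # 2c-periodization of the Gaussian: fold u into [-c, c)
    return np.exp(-(sigma * (((u + c) % (2 * c)) - c)) ** 2)

def omega(xi):
    # omega(xi): smallest gap between the m distinct periodized values
    vals = phi_hat_p(2 * c / m * (xi + np.arange(m)))
    return min(abs(vals[a] - vals[b]) for a in range(m) for b in range(a + 1, m))

# R: lower bound for |phi_hat'| on E; the radii of points of E lie in
# [2 c eta / m, c], so we minimize 2 sigma^2 u exp(-(sigma u)^2) there
u = np.linspace(2 * c * eta / m, c, 2001)
R = (2 * sigma ** 2 * u * np.exp(-(sigma * u) ** 2)).min()

grid = np.concatenate([np.linspace(-0.5 + eta, -eta, 500),
                       np.linspace(eta, 0.5 - eta, 500)])
assert min(omega(x) for x in grid) >= 4 * c * R * eta / m - 1e-9
```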
The above observation leads to the following explicit estimate for the Gaussian kernel.
Proposition 2.12
Let \({\widehat{\phi }}(\xi )=e^{-\sigma ^2\xi ^2}\), \(\sigma \not =0\), and \(m\ge 2\) be an integer. Given \(\eta \in (0,\frac{1}{4})\), let \(\widetilde{E} = [-\frac{1}{2} + \eta , -\eta ]\cup [\eta , \frac{1}{2} -\eta ]\) and \(E=\displaystyle \left( \frac{2c}{m}(\tilde{E}+{{\mathbb {Z}}})\right) \cap [-c,c]\). Then, for any \(f\in PW_c\), we have
$$\begin{aligned} A\Vert \widehat{f}\mathbf {1}_{E}\Vert ^2 \le \int _0^1 \sum _{k\in {\mathbb {Z}}}\left| f_t\left( \frac{m\pi }{c}k\right) \right| ^2\,\mathrm {d}t \le \Vert \widehat{f}\Vert ^2, \end{aligned}$$
(2.31)
where
$$\begin{aligned} A= \frac{c}{2e\pi ^2(2(\sigma c)^2+m)}\frac{(4cR\eta )^{m(m-1)}}{{m^{1-m+2m^2}}} \quad \text{ with }\quad R = 2\sigma ^2\min \left\{ \eta e^{-(\sigma \eta )^2}, ce^{-(\sigma c)^2}\right\} . \end{aligned}$$
(2.32)
Proof
Observe that Lemma 2.11 applies with R given by (2.32). It remains to apply Theorem 2.8 with \(\kappa _\phi = e^{-(\sigma c)^2}\) and \(\delta = {4cR\eta /m}\). We deduce that (2.31) holds with
$$\begin{aligned} A&=\frac{c}{4e\pi ^2}\frac{\delta ^{m(m-1)}}{ m^{1+m^2}} \frac{\kappa _\phi ^{2/m}-1}{\ln \kappa _\phi } =\frac{c}{2e\pi ^2}\frac{1-e^{-\tfrac{2(\sigma c)^2}{m}}}{\tfrac{2(\sigma c)^2}{m}}\cdot \frac{(4cR\eta )^{m(m-1)}}{{m^{2-m+2m^2}}}. \end{aligned}$$
Using \(\displaystyle \frac{1-e^{-t}}{t}\ge \frac{1}{t+1}\), we obtain the claimed bound. \(\square \)
We remark that the estimate in the above proposition is quite pessimistic. Our numerical experiments showed that the true bound may be much better.
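To illustrate how pessimistic the constant is, one can compare the \(A\) of Theorem 2.8 (with \(\delta \) taken as the numerically observed minimal gap) against the sharper "spectral" constant \(\frac{c}{2m\pi ^2}\min _{\xi \in \widetilde{E}}\lambda ^{(m)}_{\min }(\xi )\) suggested by (2.18). A sketch with our illustrative parameters:

```python
import numpy as np

sigma, c, m, eta = 1.0, 0.5, 3, 0.1

def B_and_gap(xi):
    # minimal eigenvalue of B_m(xi) via (2.20) and the minimal value gap
    j = np.arange(m)
    jp = np.where((xi + j) / m < 0.5, j, j - m)
    vals = np.exp(-(sigma * 2 * c / m * (xi + jp)) ** 2)
    x = -np.log(vals)
    S = x[:, None] + x[None, :]
    B = (1 - np.exp(-S)) / S
    gap = min(abs(vals[a] - vals[b]) for a in range(m) for b in range(a + 1, m))
    return np.linalg.eigvalsh(B)[0], gap

grid = np.concatenate([np.linspace(-0.5 + eta, -eta, 300),
                       np.linspace(eta, 0.5 - eta, 300)])
lam, delta = zip(*[B_and_gap(x) for x in grid])
kappa = np.exp(-(sigma * c) ** 2)

# constant A of Theorem 2.8 with delta = observed minimal gap over E~
A_thm = (c / (4 * np.e * np.pi ** 2) * min(delta) ** (m * (m - 1))
         / m ** (1 + m * m) * (kappa ** (2 / m) - 1) / np.log(kappa))
# sharper constant obtained directly from the eigenvalues, as in (2.18)
A_spec = c * min(lam) / (2 * m * np.pi ** 2)
assert 0 < A_thm < A_spec
```

In this experiment the explicit constant falls far short of the spectral one, consistent with the remark above.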