1 Introduction

Sensing and information retrieval in highly non-stationary environments are challenging inverse problems in radar and sonar applications, and their fundamental understanding is also required for future wireless communication in very rapidly time-varying mobile scenarios. In such problems, the task is to identify or estimate channel parameters in a robust manner by probing the channel with a particular identifier signal w of finite duration, also called a pilot signal. In radar, for example, a known radar waveform is transmitted and, from the received reflections, distance and relative velocity of a target can be obtained by estimating delay and Doppler shifts. Several reflections superimpose at the receiver, hence the core task consists in estimating the multiple time-frequency shifts from finitely many samples of the received signal:

$$\begin{aligned} y(t) = \sum _{s=1}^S \eta _s w(t - \tau _s) \mathrm {e}^{2 \pi \mathrm {i}\nu _s t} \end{aligned}$$

taken within a finite observation interval. Here each triplet \((\eta _s, \tau _s, \nu _s)\) can be interpreted as a particular transmission path with a delay \(\tau _s\) and Doppler shift \(\nu _s\) due to relative distance and velocity, respectively, and with a complex-valued attenuation factor \(\eta _s\). This so-called tapped delay-line model is a special case of a doubly-dispersive (or linear time-variant) channel, where the spreading function is a (finite) point measure. For more details on this terminology, see for example the classical works [1, 25]. Intuitively, it is clear that simultaneous accuracy in time and frequency is governed by the uncertainty relation and that the shape of the waveform should fit the time and frequency dispersion of the channel. However, often only a few scatterers affect the wave propagation, and therefore the number of time-frequency shifts is rather small compared to the number of samples one may acquire at the receiver.
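To make the model concrete, the following sketch simulates such a superposition of delayed and Doppler-shifted copies of a pilot. The Gaussian pulse, all parameter values, and all variable names are illustrative assumptions, not taken from any specific system.

```python
import numpy as np

# Toy simulation of y(t) = sum_s eta_s w(t - tau_s) exp(2*pi*i*nu_s*t).
# Pulse shape and parameters are illustrative only.
rng = np.random.default_rng(0)
S = 3
eta = rng.standard_normal(S) + 1j * rng.standard_normal(S)  # complex attenuations
tau = np.array([0.10, 0.35, 0.70])                          # delays
nu = np.array([-2.0, 0.5, 3.0])                             # Doppler shifts

def w(t):
    """A known pilot waveform; here a short Gaussian pulse."""
    return np.exp(-50.0 * np.asarray(t, dtype=float) ** 2)

def received(t):
    """Superposition of S delayed and Doppler-modulated copies of w."""
    t = np.asarray(t, dtype=float)
    return sum(e * w(t - d) * np.exp(2j * np.pi * f * t)
               for e, d, f in zip(eta, tau, nu))

t_grid = np.linspace(0.0, 1.0, 256)  # finite observation interval
y = received(t_grid)
```

Each path contributes one delayed, modulated copy of the pilot, and the received samples superimpose all paths linearly.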

In so-called coherent communication the wireless channel needs to be estimated in order to equalize unknown data signals transmitted consecutively or simultaneously with the pilot signal. This principle is used, for example, in the orthogonal frequency-division multiplexing (OFDM) modulation scheme [7], which is implemented in many of today’s communication technologies like WiFi, LTE and 5G, as well as in broadcasting systems like DAB and certain DVB standards [28]. Thus, the first goal here is to estimate the action of the channel operator on a particular restricted class of data signals. A channel which is exclusively time- or frequency-selective reduces to a convolution or multiplication operator, and equalization (inverting the action of the channel) is then often possible via conventional deconvolution techniques. In the doubly-selective case, however, more advanced equalization approaches are necessary to deal with self-interference effects. For this purpose the delay-Doppler shifts are usually approximated to lie on a-priori fixed lattices, leading to leakage effects [10]. In essence, the intrinsic sparsity of the channel does not carry over to the approximated model, rendering compressed sensing methods like [34, 40] much less effective.

In radar, by contrast, it is important to achieve high resolution of the time-frequency shift parameters themselves. However, in future high-mobility vehicular communication [29] and automotive applications both aspects will become relevant, i.e., discovering the instantaneous neighborhood using radar while simultaneously communicating with other vehicles or roadside units. In particular, combined radar and communication transceivers, which shall simultaneously use the same hardware and frequency band for both tasks, have recently been proposed and investigated in the literature, see, e.g., [32]. However, since the propagation environment in such vehicular applications may also change on a short time-scale, and usually in an almost unpredictable manner, it is important to perform channel estimation and radar in short time cycles with short signals. The traffic type in automotive applications also enforces strict latency requirements on the communication for decoding the equalized data signals.

Besides the practical need for advanced signal processing algorithms in this challenging engineering field, the estimation problem itself has attracted researchers working in harmonic analysis. First works in this field, from the perspective of channel identification, are due to Pfander et al. [35]. Identifying a linear operator with restricted spreading, i.e., with bandlimited symbol, has been investigated in [27].

Finally, we would like to mention that there exist other methods for super-resolution, such as Prony-like methods [13, 30, 31, 37, 38]. These are spectral methods which perform spike localization from low-frequency measurements. They do not need any discretization and recover the initial signal as long as there are enough observations. So far we have not examined if and how such methods could be applied to our specific modulation-translation setting.

Main contribution The main contribution of this paper is twofold. First, we establish an exact sampling formula for operators which are sparse complex linear combinations of modulation and translation operators

$$\begin{aligned} H = \sum _{s=1}^S \eta _s M_{\nu _s}T_{\tau _s} \end{aligned}$$

applied to (truncated) trigonometric polynomials w as identifiers. The basic resampling idea goes back to the work of Heckel et al. [23], where the problem of identifying the parameters \(\eta _s\), \(\nu _s\), \(\tau _s\) of the unknown operator H is approximated by a discrete formulation, without explicitly accounting for the employed function spaces and by applying an approximate sampling formula. Using trigonometric polynomials as identifiers, we derive an explicit resampling formula for the continuous problem, such that we can completely avoid the approximation errors of [23]. By this, we also overcome particular parameter limitations in the original proof, since we do not directly couple the time-bandwidth limitation of the operator and the identifier.

As a second main result we provide explicit algorithmic reconstruction approaches. Our sampling reformulation allows the straightforward application of standard modifications of the conditional gradient method, also known as the Frank-Wolfe algorithm, to determine the amplitudes \(\eta _s \in \mathbb {C}\) and the two-dimensional positions \((\tau _s,\nu _s)\). Here we focus on the alternating direction conditional gradient (ADCG) algorithm proposed by Boyd et al. [2]. The corresponding optimization problem takes noise into account and penalizes the sparsity of the above linear combination by the \(\ell _1\)-norm of the amplitudes. The optimization problem can be rephrased in terms of atomic measures, where the \(\ell _1\)-norm is directly related to the total variation norm of the measure, resp. to the atomic norm of a certain set of atoms. Such problems are known as BLASSO [16]. Besides Frank-Wolfe-like algorithms, which minimize over the location parameters in a continuous domain, a common approach consists in constraining the locations to lie on a grid. This leads to finite-dimensional convex optimization problems, known as LASSO [41] or basis pursuit [8], for which there exist numerous solvers [12, 14, 20, 43]. We will compare the ADCG applied to our resampled problem with a grid method, where we incorporate an adaptive grid refinement. As a third group of methods, we would like to mention the reformulation of the optimization problem via its dual into an equivalent finite-dimensional semi-definite program (SDP). This technique was first proposed in [5] and then adapted by many other authors. However, the equivalence of the formulations only holds in the one-dimensional setting, and in higher dimensions one needs to use, e.g., the so-called Lasserre hierarchy [16]. An SDP approach for our two-dimensional setting, based on results of [18], was also proposed in the paper of Heckel et al. [23]. Since this approach appears to be highly expensive in both time and memory requirements and, moreover, has to contend with many nonspecific local maxima related to the so-called dual certificate, it is not appropriate for our setting.

This paper is organized as follows: In Sect. 2, we collect the basic notation and results from Fourier analysis and measure theory which are needed in the following sections. At the end of that section we establish a theorem which relates trigonometric polynomials to periodic functions arising as Fourier transforms of compactly supported measures. The proof of the theorem is given in Appendix A. In Sect. 3, we formulate our super-resolution problem for doubly-dispersive channel estimation. More precisely, we are interested in the two-dimensional parameter detection of sparse linear combinations of translation-modulation operators. Instead of treating the original problem, we give a sampling reformulation of the involved translation-modulation operators for identifiers which are trigonometric polynomials. Here the relation between these polynomials and Fourier transforms of measures will play a role. Since the identifiers only have to be evaluated at points lying in a compact interval, our choice implies no restriction for practical purposes. In Sect. 4, we prove the sampling theorem for translation-modulation operators applied to trigonometric polynomials. Then, in Sect. 5, we show how an alternating descent conditional gradient algorithm can be applied to solve the reformulated problem. Finally, we demonstrate the performance of this algorithm in comparison with a simple adaptive grid refinement algorithm and an orthogonal matching pursuit method in Sect. 6.

2 Preliminaries

Function spaces Let I be an open finite interval of \(\mathbb {R}\) or \(\mathbb {R}\) itself. By C(I) we denote the space of complex-valued, continuous functions on I, by \(C_b(I)\) the Banach space of bounded, complex-valued, continuous functions endowed with the norm \(\Vert f\Vert _\infty = \sup _{x \in I} |f(x)|\). Further, let \(C_0(\mathbb {R}) \subset C_b(\mathbb {R})\) be the closed subspace of complex-valued, continuous functions vanishing at infinity. Let \(L^r(I)\), \(r \in [1,\infty ]\) be the Banach space of (equivalence classes) of complex-valued Borel measurable functions with finite norm

$$\begin{aligned} \Vert f\Vert _r = {\left\{ \begin{array}{ll} \left( \int _{I} |f(x)|^r \,\mathrm {d}x \right) ^\frac{1}{r}, &{} 1 \le r < \infty ,\\ \mathrm {ess} \sup _{x \in I} |f(x)|, &{} r = \infty . \end{array}\right. } \end{aligned}$$

For compact I, it holds \(L^1(I) \supset L^r(I) \supset L^s(I) \supset L^\infty (I)\), \(r < s\). The r-norms \(\Vert \cdot \Vert _r\) on higher-dimensional domains, sequences and vectors are defined analogously.

An entire (holomorphic) function \(f : \mathbb {C}\rightarrow \mathbb {C}\) is of exponential type if there exist positive constants \(a, b > 0\) such that

$$\begin{aligned} | f(z) | \le a \mathrm {e}^{b |z|} \quad \text {for all }z \in \mathbb {C}. \end{aligned}$$

The exponential type of f is then defined as the number

$$\begin{aligned} \sigma :=\limsup _{t \rightarrow \infty } \frac{\log M(t)}{t}, \qquad M(t) :=\sup _{|z| = t} |f(z)|. \end{aligned}$$

The Bernstein space \(B_\sigma ^r\), \(r \in [1,\infty ]\), consists of all entire functions f of exponential type \(\sigma \) whose restriction to \(\mathbb {R}\) belongs to \(L^r(\mathbb {R})\). Endowed with the \(L^r\) norm, \(B_\sigma ^r\) becomes a Banach space, too. We will need the following sampling result of Nikol’skiĭ [33].

Theorem 1

(Nikol’skiĭ’s Inequality [33, Thm 3.3.1]) Let \(r \in [1, \infty ]\). Then, for every \(f \in B_\sigma ^r\) and \(a > 0\), we have

$$\begin{aligned} \Vert f\Vert _r^r \le \sup _{x \in \mathbb {R}} \left\{ a \sum _{k \in \mathbb {Z}} | f(x - a k)|^r \right\} \le (1 + a \sigma )^r \Vert f\Vert _r^r. \end{aligned}$$

Fourier transform of functions The Fourier transform \({\mathscr {F}}: L^1(\mathbb {R}) \rightarrow C_0 (\mathbb {R}) \subset L^\infty (\mathbb {R})\) defined by

$$\begin{aligned} {\mathscr {F}}f(\xi ) := \int _\mathbb {R}f(x) \mathrm {e}^{-2 \pi \mathrm {i}\xi x} \,\mathrm {d}x = \lim _{R \rightarrow \infty } \int _{-R}^R f(x) \mathrm {e}^{-2 \pi \mathrm {i}\xi x} \,\mathrm {d}x \end{aligned}$$

is a bounded linear operator. For \(1 < r \le 2\), this operator can be extended as \({\mathscr {F}}: L^r(\mathbb {R}) \rightarrow L^s (\mathbb {R})\), \(\frac{1}{r} + \frac{1}{s} = 1\) via the limit in the norm of \(L^s(\mathbb {R})\) of

$$\begin{aligned} {\mathscr {F}}f (\xi ) = {\hat{f}}(\xi ) = \lim _{R \rightarrow \infty } \int _{-R}^R f(x) \mathrm {e}^{-2 \pi \mathrm {i}\xi x} \,\mathrm {d}x. \end{aligned}$$

By Plancherel’s equality, the Fourier transform is an isometry on \(L^2(\mathbb {R})\). Note that the Fourier transform of a function \(f \in L^r(\mathbb {R})\) with \(r > 2\) can be defined in terms of tempered distributions. However, the distributional Fourier transform \({\hat{f}}\) does in general not correspond to a function. A special role is played by the sinus cardinalis, defined as

$$\begin{aligned} {{\,\mathrm{\mathrm {sinc}}\,}}(x) := \left\{ \begin{array}{ll} \frac{\sin (\pi x)}{\pi x}&{} \mathrm {for} \; x \ne 0 ,\\ 1&{} \mathrm {for} \; x = 0. \end{array} \right. \end{aligned}$$

The sinc function is in \(L^2(\mathbb {R})\) but not in \(L^1(\mathbb {R})\). Further, we have

$$\begin{aligned} {\hat{\chi }}_{[-L,L]} (\xi ) = 2L {{\,\mathrm{\mathrm {sinc}}\,}}(2L \xi ), \end{aligned}$$

where \(\chi _{C}\) denotes the characteristic function of a set \(C \subseteq \mathbb {R}\), i.e., \(\chi _{C}(x) = 1\) if \(x \in C\) and \(\chi _{C}(x) = 0\) if \(x \notin C\). The counterparts of scaled sinc functions in the periodic setting are the Nth Dirichlet kernels given by

$$\begin{aligned} D_N(x) = \sum _{n = -N}^N \mathrm {e}^{2 \pi \mathrm {i}n x} = \frac{\sin \left( (2N+1) \pi x\right) }{\sin (\pi x)}, \qquad x \in \mathbb {R}. \end{aligned}$$
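The equality between the exponential sum and the closed form can be verified numerically; the following sketch is minimal, and the grids and the value of N are arbitrary. The closed form has removable singularities at integer x, where \(D_N\) takes the value \(2N+1\).

```python
import numpy as np

def dirichlet_sum(N, x):
    """D_N(x) evaluated directly as the exponential sum."""
    n = np.arange(-N, N + 1)
    return np.sum(np.exp(2j * np.pi * np.outer(np.atleast_1d(x), n)), axis=1)

def dirichlet_closed(N, x):
    """D_N(x) via the closed form, handling the removable singularities."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    near_int = np.isclose(np.sin(np.pi * x), 0.0)
    out[near_int] = 2 * N + 1                    # D_N(integer) = 2N + 1
    xs = x[~near_int]
    out[~near_int] = np.sin((2 * N + 1) * np.pi * xs) / np.sin(np.pi * xs)
    return out
```

Both evaluations agree up to rounding, and the exponential sum is real-valued by the symmetry of the index range.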

For arbitrary \(f\in L^1(\mathbb {R})\) with \({\hat{f}}\in L^1(\mathbb {R})\), the Fourier inversion formula

$$\begin{aligned} f(x) = ({\hat{f}})^\vee (x) := \int _\mathbb {R}{\hat{f}}(\xi ) \, \mathrm {e}^{2 \pi \mathrm {i}\xi x} \,\mathrm {d}\xi \end{aligned}$$

holds true almost everywhere and, moreover, pointwise if the function f is continuous. For two functions \(f \in L^1(\mathbb {R})\) and \(g \in L^r(\mathbb {R})\), \(r\in [1,\infty ]\), the convolution \(f*g\) is defined almost everywhere by

$$\begin{aligned} (f*g)(x) = \int _{\mathbb {R}} f(y) \, g(x-y) \, \,\mathrm {d}y \end{aligned}$$

and is contained in \(L^r(\mathbb {R})\). For \(r \in [1,2]\), the relation between convolution and Fourier transform is given by \(\widehat{f*g} = {\hat{f}} \, {\hat{g}}\).

For \(\sigma >0\) and \(r \in [1,\infty ]\), we denote by \(\mathrm {PW}_\sigma ^r\) the Paley-Wiener class of functions \(f : \mathbb {C} \rightarrow \mathbb {C}\) of the form

$$\begin{aligned} f(z) = \int _{-\sigma }^\sigma g(\xi ) \mathrm {e}^{2 \pi \mathrm {i}z \xi } \,\mathrm {d}\xi , \quad z \in \mathbb {C}, \end{aligned}$$

for some \(g \in L^r(-\sigma ,\sigma )\). We have the inclusion \(\mathrm {PW}_\sigma ^r \subset \mathrm {PW}_\sigma ^s\) for \(1 \le s < r\). Functions of the class \(\mathrm {PW}_\sigma ^r\) are holomorphic and of exponential type \(2 \pi \sigma \) by

$$\begin{aligned} |f(z)| \le \int _{-\sigma }^\sigma |g(\xi )| \mathrm {e}^{2 \pi |\xi | |z|} \,\mathrm {d}\xi \le \Vert g\Vert _1 \mathrm {e}^{2 \pi \sigma |z|}, \qquad z \in \mathbb {C}. \end{aligned}$$

For \(r \in [1,2]\), we further have \(\mathrm {PW}_{\sigma }^r\subset B^s_{2 \pi \sigma }\) with \(\frac{1}{r} + \frac{1}{s} = 1\), see [24].

Measure spaces Let X be a compact subset of \(\mathbb {R}^d\) or \(\mathbb {R}^d\) itself. By \({\mathscr {M}}(X)\) we denote the set of all regular, finite, complex-valued measures, i.e., all mappings \(\mu : {\mathscr {B}}(X) \rightarrow \mathbb {C}\) from the Borel \(\sigma \)-algebra of X to \(\mathbb {C}\) with \(|\mu (X)| < \infty \) and

$$\begin{aligned} \mu \left( \bigcup _{k=1}^\infty B_k \right) = \sum _{k=1}^\infty \mu (B_k) \end{aligned}$$

for any sequence \(\{B_k\}_{k \in \mathbb {N}} \subset {\mathscr {B}}(X)\) of pairwise disjoint sets. We suppose that the series on the right-hand side converges absolutely, so that the indices of the sets \(B_k\) can be arbitrarily reordered. The support of a complex measure \(\mu \in {\mathscr {M}}(X)\) is defined by

$$\begin{aligned} {{\,\mathrm{\mathrm {supp}}\,}}(\mu ) = {{\,\mathrm{\mathrm {supp}}\,}}(\rho ^+) \cup {{\,\mathrm{\mathrm {supp}}\,}}(\rho ^-) \cup {{\,\mathrm{\mathrm {supp}}\,}}(\iota ^+) \cup {{\,\mathrm{\mathrm {supp}}\,}}(\iota ^-), \end{aligned}$$

where \(\rho ^+ - \rho ^- = \mathfrak {R}(\mu )\) and \(\iota ^+ - \iota ^- = \mathfrak {I}(\mu )\) are the Hahn decompositions of the real and imaginary part into non-negative measures. The support of a non-negative measure \(\nu \) is the closed set

$$\begin{aligned} {{\,\mathrm{\mathrm {supp}}\,}}(\nu ) :=\bigl \{ x \in X: B \subset X \text { open, } x \in B \implies \nu (B) >0\bigr \}. \end{aligned}$$

The total variation of a measure \(\mu \in \mathscr {M}(X)\) is defined by

$$\begin{aligned} |\mu |(B) :=\sup \Bigl \{ \sum _{k=1}^\infty |\mu (B_k)|: \bigcup \limits _{k=1}^\infty B_k = B, \, B_k \; \text{ pairwise } \text{ disjoint }\Bigr \}. \end{aligned}$$

With the norm \(\Vert \mu \Vert _{{\mathscr {M}}(X)} :=|\mu |(X)\) the space \({\mathscr {M}}(X)\) becomes a Banach space. The space \({\mathscr {M}}(X)\) can be identified via Riesz’s representation theorem with the dual space of \(C_0(X)\) and the weak-\(*\) topology on \({\mathscr {M}}(X)\) gives rise to the weak convergence of measures.

We will need that, for a bounded Borel-measurable function g, the measure \(g \mu \) defined by \(g \mu (B) := \int _B g(x) \,\mathrm {d}\mu (x)\) for open \(B \subset \mathbb {R}^d\) is again in \({\mathscr {M}}(\mathbb {R}^d)\) and \(\Vert g \mu \Vert _{{\mathscr {M}}(X)} \le \Vert g\Vert _\infty \Vert \mu \Vert _{{\mathscr {M}}(X)}\).

Fourier transform of measures For our purposes, it is enough to consider the Fourier transform of measures on \(X = \mathbb {R}\). If we consider the open balls \(B_R :=\{x : |x| < R\}\) of radius \(R > 0\), then

$$\begin{aligned} |\mu |(B_R) \rightarrow \Vert \mu \Vert _{{\mathscr {M}}(\mathbb {R})} \qquad \text {and}\qquad |\mu |(\mathbb {R} \setminus B_R) \rightarrow 0 \qquad \mathrm {as} \quad R\rightarrow \infty . \end{aligned}$$

Indeed, the integral with respect to a measure \(\mu \in {\mathscr {M}}(\mathbb {R})\) is also well defined for every \(\varphi \in C_b(\mathbb {R})\) and

$$\begin{aligned} |\langle \mu ,\varphi \rangle | \le \Vert \mu \Vert _{{\mathscr {M}}(\mathbb {R})} \, \Vert \varphi \Vert _{\infty }. \end{aligned}$$

Consequently, we can define the Fourier transform \({\mathscr {F}} :{\mathscr {M}}(\mathbb {R}) \rightarrow C_{b}(\mathbb {R})\) by

$$\begin{aligned} {\mathscr {F}}\mu ( \xi ) :=\hat{\mu } ( \xi ) :=\langle \mu , \mathrm {e}^{-2 \pi \mathrm {i}x \xi }\rangle = \int _{\mathbb {R}} \mathrm {e}^{-2 \pi \mathrm {i}x \xi } d\mu (x). \end{aligned}$$

The Fourier transform is a linear, bounded operator from \({\mathscr {M}}(\mathbb {R})\) into \(C_b(\mathbb {R})\) with operator norm one. Moreover, it is unique in the sense that \(\mu \in {\mathscr {M}}(\mathbb {R})\) with \({\hat{\mu }} \equiv 0\) implies that \(\mu \) is the zero measure. We are especially interested in the Fourier transform of atomic measures \(\mu :=\sum _{k\in \mathbb {Z}} c_k \delta (\cdot - t_k)\) with \(c_k \in \mathbb {C}\), \(t_k \in \mathbb {R}\) given by

$$\begin{aligned} {\hat{\mu }}(\xi ) = \sum _{k \in \mathbb {Z}} c_k\, \mathrm {e}^{-2\pi \mathrm {i}\xi t_k}. \end{aligned}$$

If the point masses are equispaced, located at \(t_k = \frac{k}{K}\) with \(K \in \mathbb {N}\), the Fourier transform becomes a K-periodic Fourier series. Moreover, restricting the support of \(\mu \) to \([-\sigma , \sigma ]\), we obtain the K-periodic trigonometric polynomial

$$\begin{aligned} {\hat{\mu }}(\xi ) = \sum _{k=-N}^N c_k\, \mathrm {e}^{-2\pi \mathrm {i}\frac{\xi k}{K}}, \end{aligned}$$

where \(N = \lfloor \sigma K \rfloor \) and \(t_k = \frac{k}{K}\), \(k=-N,\ldots ,N\). The following theorem shows that the reverse direction also holds, i.e., every periodic function given as the Fourier transform of a compactly supported measure is a finite trigonometric polynomial.
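Before stating the converse, here is a quick numerical check of the forward direction: point masses at \(t_k = k/K\) produce a K-periodic Fourier transform. The weights and sizes below are arbitrary toy values.

```python
import numpy as np

# Point masses at t_k = k/K with arbitrary complex weights (toy values).
K, N = 4, 5
rng = np.random.default_rng(1)
c = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)
k = np.arange(-N, N + 1)

def mu_hat(xi):
    """Fourier transform of mu = sum_k c_k delta(. - k/K)."""
    xi = np.atleast_1d(np.asarray(xi, dtype=float))
    return np.exp(-2j * np.pi * np.outer(xi, k / K)) @ c
```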

Theorem 2

Let \(f = {\hat{\mu }}_f\) with \(\mu _f \in {\mathscr {M}}(\mathbb {R})\) fulfill \({{\,\mathrm{\mathrm {supp}}\,}}\mu _f \subseteq [-\sigma , \sigma ]\) for some \(\sigma > 0\). Suppose that f is K-periodic for \(K \in \mathbb {N}\). Then f is a trigonometric polynomial of the form

$$\begin{aligned} f(\xi ) = \sum _{k = -N}^N {\hat{f}}(k) \mathrm {e}^{2 \pi \mathrm {i}\frac{k \xi }{K}}, \qquad {\hat{f}}(k) :=\frac{1}{K} \int _0^K f(t) \mathrm {e}^{-2 \pi \mathrm {i}\frac{k t}{K}} \,\mathrm {d}t, \end{aligned}$$
(1)

where \(N = \lfloor \sigma K \rfloor \).

The proof of the theorem is given in Appendix A.
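The coefficient formula in (1) can also be checked numerically: for a trigonometric polynomial, an equispaced M-point rectangle rule on one period reproduces the coefficients exactly as soon as M exceeds the number of frequencies. The sketch below uses arbitrary toy data.

```python
import numpy as np

K, N = 2, 3
rng = np.random.default_rng(2)
coeff = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)
ks = np.arange(-N, N + 1)

def f(t):
    """K-periodic trigonometric polynomial with coefficients `coeff`."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.exp(2j * np.pi * np.outer(t, ks) / K) @ coeff

M = 64                        # quadrature nodes; M > 2N suffices for exactness
t = np.arange(M) * K / M      # equispaced nodes on [0, K)
fhat = np.array([np.mean(f(t) * np.exp(-2j * np.pi * kk * t / K)) for kk in ks])
```

Up to rounding, `fhat` coincides with `coeff`, i.e., the quadrature realizes the integral in (1) exactly.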

3 Super-resolution in doubly-dispersive channel estimation

In doubly-dispersive channel estimation we are interested in the detection of both shifts and modulations of signals. Recall that the shift operator \(T_\tau \) and the modulation operator \(M_\nu \) are defined for \(x, \tau , \nu \in \mathbb {R}\) by

$$\begin{aligned} T_\tau f(x) := f(x - \tau ) \quad \mathrm {and} \quad M_\nu f(x) := f(x) \mathrm {e}^{2 \pi \mathrm {i}\nu x}, \end{aligned}$$

respectively. Their concatenation is given by

$$\begin{aligned} M_\nu T_\tau f(x) = \mathrm {e}^{2\pi \mathrm {i}\nu x} f(x-\tau ) \quad \text {and} \quad T_\tau M_\nu f(x) = \mathrm {e}^{2\pi \mathrm {i}\nu (x-\tau )} f(x-\tau ). \end{aligned}$$
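A small numerical sanity check of these rules, with an arbitrary test function: the two orders of composition differ only by the constant phase factor \(\mathrm {e}^{-2\pi \mathrm {i}\nu \tau }\).

```python
import numpy as np

f = lambda x: np.exp(-x**2) * np.cos(3 * x)   # any test function
tau, nu = 0.7, 1.3
x = np.linspace(-2.0, 2.0, 201)

MT = np.exp(2j * np.pi * nu * x) * f(x - tau)          # (M_nu T_tau f)(x)
TM = np.exp(2j * np.pi * nu * (x - tau)) * f(x - tau)  # (T_tau M_nu f)(x)
```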

Moreover, for \(f \in L^r(\mathbb {R})\) with \(r \in [1,2]\), it holds

$$\begin{aligned} \widehat{T_\tau f} = M_{-\tau } {\hat{f}} \quad \text {and} \quad \widehat{M_\nu f} = T_\nu {\hat{f}}. \end{aligned}$$

Both operators are unitary on \(L^2(\mathbb {R})\). Note that a similar definition of shifts and modulations can be given for tempered distributions, see, e.g., [36, Section 4.3.1]. For \(S \in \mathbb {N}\) and \({\mathscr {T}}, \varOmega >0\), we consider the operator

$$\begin{aligned} H := \sum _{s=1}^{S} \eta _s M_{\nu _s} T_{\tau _s}, \qquad \eta _s \in \mathbb {C}_*, \tau _s \in \left[ -\tfrac{{\mathscr {T}}}{2},\tfrac{{\mathscr {T}}}{2} \right] , \nu _s \in \left[ -\tfrac{\varOmega }{2},\tfrac{\varOmega }{2} \right] \end{aligned}$$
(2)

with \(\mathbb {C}_* := \mathbb {C}\setminus \{0\}\). We are interested in the following super-resolution problem: for a known function \(w \in C_b(\mathbb {R})\), determine the amplitudes \(\eta _s \in \mathbb {C}_*\) and positions \(\tau _s , \nu _s \in \mathbb {R}\), \(s=1,\ldots ,S\) from certain samples of

$$\begin{aligned} H w = \sum _{s=1}^{S} \eta _s M_{\nu _s} T_{\tau _s} w. \end{aligned}$$
(3)

In this context, the function w is often called identifier.

Our solution will be based on an exact sampling formula for Hw which contains a sparse linear combination of certain real-valued “atoms”. The idea to use such a reformulation for later computations originates from a paper of Heckel et al. [23]. However, the approach of those authors uses only an approximate sampling formula without a given error bound, not an exact one, see Remark 2. The main sampling result is given in the following theorem.

Theorem 3

(Sampling Formula for Translation-Modulation Operators) Choose \({\mathscr {T}}, \varOmega > 0\), \(N_1, N_2 \in \mathbb {N}\) and set \(L_1 :=2N_1 + 1\), \(L_2 :=2N_2 + 1\). Let

$$\begin{aligned} w(x) = \sum _{n=-N_1}^{N_1} w_n \mathrm {e}^{2\pi \mathrm {i}\frac{\varOmega n x}{L_1}}, \qquad x\in \mathbb {R}, \end{aligned}$$
(4)

be an \(\frac{L_1}{\varOmega }\)-periodic trigonometric polynomial. Then, we have for \(\tau , \nu \in \mathbb {R}\) and \(x_j = \frac{{\mathscr {T}}j}{L_2}\), \(j = -N_2, \dots , N_2\) that

$$\begin{aligned} M_\nu T_\tau w (x_j )&= \sum _{n_1 = -N_1}^{N_1} \sum _{n_2 = -N_2}^{N_2} \mathrm {e}^{2 \pi \mathrm {i}\frac{x_j n_2}{{\mathscr {T}}}} w \left( x_j - \frac{n_1}{\varOmega }\right) [A(\tau ,\nu )]_{(n_1,n_2)}, \end{aligned}$$
(5)

with so-called atoms \(A : \mathbb {R}^2 \rightarrow \mathbb {C}^{L_1 L_2}\) given by

$$\begin{aligned}{}[A(\tau ,\nu )]_{(n_1,n_2)} :=\frac{1}{L_1 L_2} D_{N_1} \left( \frac{n_1 - \varOmega \tau }{L_1} \right) D_{N_2} \left( \frac{n_2 - {\mathscr {T}}\nu }{L_2} \right) , \end{aligned}$$
(6)

where \((n_1,n_2)\) denotes the corresponding, unique index in \(\mathbb {C}^{L_1 L_2}\) for \(n_1 = -N_1,\ldots ,N_1\), \(n_2 = -N_2, \ldots ,N_2\).

Figuratively, an atom \(A(\tau ,\nu )\) may be interpreted as a vectorized \(L_1\times L_2\) matrix. The proof of Theorem 3 is the content of the next section.
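Theorem 3 can be verified numerically before reading the proof: the following sketch evaluates both sides of (5) for random coefficients and off-grid \((\tau ,\nu )\). All concrete sizes and parameter values are arbitrary choices, and `T_len` stands for \({\mathscr {T}}\).

```python
import numpy as np

rng = np.random.default_rng(3)
T_len, Om = 2.0, 3.0                       # the parameters T and Omega
N1, N2 = 4, 5
L1, L2 = 2 * N1 + 1, 2 * N2 + 1
wn = rng.standard_normal(L1) + 1j * rng.standard_normal(L1)
n1 = np.arange(-N1, N1 + 1)
n2 = np.arange(-N2, N2 + 1)

def w(x):
    """The (L1/Om)-periodic identifier (4)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.exp(2j * np.pi * Om * np.outer(x, n1) / L1) @ wn

def dirichlet(N, x):
    k = np.arange(-N, N + 1)
    return np.sum(np.exp(2j * np.pi * np.multiply.outer(x, k)), axis=-1)

def atom(tau, nu):
    """[A(tau, nu)]_{(n1, n2)} as an L1 x L2 array, cf. (6)."""
    return (dirichlet(N1, (n1 - Om * tau) / L1)[:, None]
            * dirichlet(N2, (n2 - T_len * nu) / L2)[None, :]) / (L1 * L2)

tau, nu = 0.37, -1.21                      # arbitrary off-grid parameters
xj = T_len * n2 / L2                       # sampling points x_j, j = -N2..N2

lhs = np.exp(2j * np.pi * nu * xj) * w(xj - tau)   # (M_nu T_tau w)(x_j)
A = atom(tau, nu)
rhs = np.array([w(x - n1 / Om) @ A @ np.exp(2j * np.pi * x * n2 / T_len)
                for x in xj])
```

Both sides agree up to rounding, for arbitrary real \(\tau , \nu \); no grid assumption on the shift parameters is needed.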

By Theorem 3, we can rewrite the super-resolution problem (3) with an identifier of the form (4) for given samples \(y_j = H w(x_j)\), \( x_j = \frac{{\mathscr {T}}j}{L_2}\), \(j=-N_2,\ldots ,N_2\) as

$$\begin{aligned} y_j = H w (x_j) = \sum _{n_1 = -N_1}^{N_1} \sum _{n_2 = -N_2}^{N_2} \mathrm {e}^{2 \pi \mathrm {i}\frac{x_j n_2}{{\mathscr {T}}}} w \left( x_j - \frac{n_1}{\varOmega }\right) \sum _{s=1}^{S} \eta _s \, [A(\tau _s,\nu _s)]_{(n_1,n_2)}. \end{aligned}$$
(7)

By the periodicity of the atoms (6), it indeed makes sense to restrict ourselves to

$$\begin{aligned} (\tau , \nu ) \in X :=\left[ -\tfrac{{\mathscr {T}}}{2},\tfrac{{\mathscr {T}}}{2} \right] \times \left[ -\tfrac{\varOmega }{2},\tfrac{\varOmega }{2} \right] , \end{aligned}$$

and to choose \(L_k \ge {\mathscr {T}} \varOmega \), \(k=1,2\). In this case, all points \(x_j - \frac{n_1}{\varOmega }\) at which the periodic identifier w in (7) must be evaluated belong to the interval \(I = (-{\mathscr {T}},{\mathscr {T}})\).

In practice, we would like to use a compactly supported identifier whereas our theory is based on periodic identifiers. Since only the function values w(x) with \(x \in (-{\mathscr {T}},{\mathscr {T}})\) are involved in the sampling process of Hw, we may theoretically replace the periodic identifier w by the compactly supported and partially periodic function \(\chi _I\, w\) without changing the obtained samples. Consequently, we may apply the resampling formula to identify a doubly-dispersive channel H using compactly supported and partially periodic signals like \(\chi _I \, w\), which links our theory to the real-world setting.

Setting \( y :=(y_j)_{j=-N_2}^{N_2}\) and introducing the operator

$$\begin{aligned} G = \left[ G_{j,(n_1,n_2)} \right] _{j,(n_1,n_2)}: \mathbb {C}^{L_1 L_2} \rightarrow \mathbb {C}^{L_2} \end{aligned}$$

with entries

$$\begin{aligned} G_{j,(n_1,n_2)} :=\mathrm {e}^{2 \pi \mathrm {i}\frac{x_j n_2}{{\mathscr {T}}}} w \left( x_j - \frac{n_1}{\varOmega }\right) , \end{aligned}$$

where \((n_1,n_2)\) is again the corresponding index in \(\mathbb {C}^{L_1L_2}\) as for the atoms, we can rewrite the super-resolution problem (7) as

$$\begin{aligned} y = G \sum _{s=1}^{S} \eta _s A(\tau _s,\nu _s). \end{aligned}$$
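Putting the pieces together, the following sketch assembles G entry-wise and confirms that \(y = G \sum _s \eta _s A(\tau _s,\nu _s)\) reproduces direct samples of Hw for a toy two-path channel. All sizes and parameter values are arbitrary, and `T_len` again stands for \({\mathscr {T}}\).

```python
import numpy as np

rng = np.random.default_rng(4)
T_len, Om = 2.0, 3.0
N1, N2 = 3, 4
L1, L2 = 2 * N1 + 1, 2 * N2 + 1
wn = rng.standard_normal(L1) + 1j * rng.standard_normal(L1)
n1 = np.arange(-N1, N1 + 1)
n2 = np.arange(-N2, N2 + 1)
xj = T_len * n2 / L2                                 # sampling points x_j

def w(x):
    """Trigonometric-polynomial identifier (4), toy coefficients."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.exp(2j * np.pi * Om * np.outer(x, n1) / L1) @ wn

def dirichlet(N, x):
    k = np.arange(-N, N + 1)
    return np.sum(np.exp(2j * np.pi * np.multiply.outer(x, k)), axis=-1)

def atom(tau, nu):
    """Vectorized atom A(tau, nu) in C^{L1 L2}, cf. (6)."""
    a = (dirichlet(N1, (n1 - Om * tau) / L1)[:, None]
         * dirichlet(N2, (n2 - T_len * nu) / L2)[None, :]) / (L1 * L2)
    return a.ravel()

# G_{j,(n1,n2)} = exp(2 pi i x_j n2 / T) * w(x_j - n1 / Om), rows indexed by j.
G = np.stack([(w(x - n1 / Om)[:, None]
               * np.exp(2j * np.pi * x * n2 / T_len)[None, :]).ravel()
              for x in xj])

# A channel with S = 2 paths: compare against direct samples of Hw.
eta = np.array([1.0 - 0.5j, 0.3 + 0.8j])
taus = np.array([0.25, -0.6])
nus = np.array([0.9, -1.4])
y = G @ sum(e * atom(t, v) for e, t, v in zip(eta, taus, nus))
direct = sum(e * np.exp(2j * np.pi * v * xj) * w(xj - t)
             for e, t, v in zip(eta, taus, nus))
```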

In practical applications, the measurements y are often corrupted by noise so that we finally intend to solve the regularized problem

$$\begin{aligned} {{\,\mathrm{\mathrm {argmin}}\,}}_{\eta \in \mathbb {C}_*^S, (\tau ,\nu ) \in X^S} \Vert G \sum _{s=1}^{S} \eta _s A(\tau _s,\nu _s) - y\Vert _2^2 + \lambda \Vert \eta \Vert _{1}, \qquad \lambda > 0, \end{aligned}$$
(8)

where \(\eta = (\eta _s)_{s=1}^S\) and \(\tau = (\tau _s)_{s=1}^S\), \(\nu = (\nu _s)_{s=1}^S\). Indeed, we may choose S larger than the number of expected translation-modulations, minimize over \(\eta \in \mathbb {C}^S\), and hope that the regularization term enforces the sparsest solution. Especially in the numerics, we allow \(\eta _s\) to become zero; the captured triple \((\eta _s, \tau _s, \nu _s)\) with \(\eta _s = 0\) may then be neglected.
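The objective in (8) is straightforward to evaluate for a candidate configuration. The sketch below is generic; `G_toy` and `atom_toy` are runnable placeholders only, not the operators G and A defined above.

```python
import numpy as np

def objective(eta, tau, nu, y, G, atom, lam):
    """Value of (8): ||G sum_s eta_s A(tau_s, nu_s) - y||_2^2 + lam ||eta||_1."""
    model = G @ sum(e * atom(t, v) for e, t, v in zip(eta, tau, nu))
    return np.linalg.norm(model - y) ** 2 + lam * np.sum(np.abs(eta))

# Placeholder forward map so the snippet runs: 2 samples, "atoms" in C^2.
G_toy = np.eye(2, dtype=complex)
atom_toy = lambda t, v: np.array([t, v], dtype=complex)
val = objective(np.array([1.0 + 0j]), [1.0], [2.0],
                np.array([1.0, 2.0], dtype=complex), G_toy, atom_toy, lam=0.5)
```

In the alternating descent conditional gradient method discussed later, such an objective is decreased by alternating between adding a candidate atom and locally adjusting \((\eta ,\tau ,\nu )\).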

Remark 1

The above problem is closely related to an inverse problem in the space of measures. To this end, we consider the linear, continuous operator \({\mathbf {A}}: \mathbb {C}^{L_1L_2} \rightarrow C(X)\) defined by \({\mathbf {A}} y := \{ (\tau ,\nu ) \mapsto \langle A(\tau ,\nu ) , y \rangle \}\) for \(y \in \mathbb {C}^{L_1L_2}\). Its adjoint \({\mathbf {A}}^*: {\mathscr {M}}(X) \rightarrow \mathbb {C}^{L_1L_2}\) is given by

$$\begin{aligned} {\mathbf {A}}^* \mu :=\int _X A(\tau ,\nu ) \, d \mu (\tau ,\nu ). \end{aligned}$$

Then, we may consider the inverse problem

$$\begin{aligned} \min _{\mu \in {\mathscr {M}}(X)} \frac{1}{2} \Vert G {\mathbf {A}}^* \mu - y \Vert _2^2 + \lambda \Vert \mu \Vert _{{\mathscr {M}}(X)}. \end{aligned}$$
(9)

Problems of this kind are also known as BLASSO [5, 16] and were studied in several papers, e.g., by Bredies and Pikkarainen [3] and Denoyelle et al. [17]. In particular, it was shown that the problem has a solution. Since \(G {\mathbf {A}}^*\) is not injective, the solution is in general not unique. Restricted to atomic measures in \({\mathscr {M}}(X)\), i.e., \(\mu = \sum _{s=1}^S \eta _s \delta \left( \cdot - (\tau _s,\nu _s) \right) \), problem (9) takes the form (8).

The super-resolution problem may also be seen from the point of view of the so-called atomic norm formulation addressed in a couple of papers [6, 9, 17, 19, 39]. Since \(\eta _s = |\eta _s| \mathrm {e}^{2\pi \mathrm {i}\phi _s}\) is complex-valued, the set of atoms must be redefined as \(\{ \mathrm {e}^{2\pi \mathrm {i}\phi } A(\tau ,\nu ): \phi \in [0,1), (\tau ,\nu ) \in X\}\) in order to take real linear combinations of atoms.

As mentioned above, the super-resolution problem (3) has already been considered by Heckel et al. [23]. However, these authors proposed to use a different identifier, an issue addressed in the next remark.

Remark 2

(Relation to the work of Heckel et al. [23]) The authors of [23] considered the case \(N_1=N_2=N\) and \(L_1 = L_2 = L := {\mathscr {T}}\varOmega \), so that the resampling formula (7) becomes

$$\begin{aligned} \begin{aligned} H w \left( \frac{j}{\varOmega } \right)&= \frac{1}{L^2} \sum _{s=1}^S \eta _s \sum _{n_1 = -N}^{N} \sum _{n_2 = -N}^{N} \mathrm {e}^{2 \pi \mathrm {i}\frac{jn_2}{L}} w \left( \frac{j - n_1}{\varOmega }\right) \\&\times D_{N} \left( \frac{n_1 - \varOmega \tau _s}{L} \right) D_{N} \left( \frac{n_2 - {\mathscr {T}}\nu _s}{L} \right) . \end{aligned} \end{aligned}$$
(10)

However, as identifier they propose

$$\begin{aligned} w(x) = \sum _{k=-K}^K \sum _{n=-N}^N w_n {{\,\mathrm{\mathrm {sinc}}\,}}\left( \varOmega \left( x- \frac{kL+n}{\varOmega } \right) \right) \end{aligned}$$

with some \(K \in \mathbb {N}\). Actually, \(K=1\) was used in [23]. Since the sinc function is not periodic, the resampling formula (10) does not hold exactly and gives only an approximation.

4 Resampling results for translation-modulation operators

In this section, we prove Theorem 3. The basis is the Sampling Theorem 4 for \(L^1\)-functions. Then we prove certain sampling formulas which are of interest in their own right. First, in Lemma 2, we show a sampling formula for \(p\,H ({\hat{q}} * w)\), where p, q are compactly supported functions with Fourier transforms in \(L^1(\mathbb {R})\), for general \(w \in L^\infty (\mathbb {R})\), using certain compactly supported helper functions \(\phi \) and \(\psi \). Restricting to identifiers w which are Fourier transforms of measures, we will see in Theorem 5 that the helper functions can be avoided. Finally, we will use this theorem together with approximation arguments involving sequences of compactly supported Schwartz functions \(\{p_n\}_n\) and \(\{q_n\}_n\) to prove Theorem 3. We start by recalling a sampling theorem for \(L^1\)-functions, which extends the classical sampling theorem of Shannon, Whittaker, and Kotelnikov, see for instance [36, Thm 2.29], by the \(L^1\)-convergence of the interpolation formula for \(L^1\)-sampling functions.

Theorem 4

(Sampling Theorem for \(L^1\)-functions) Let \(f \in L^1(\mathbb {R}) \cap C_0(\mathbb {R})\) be a band-limited function with \({{\,\mathrm{\mathrm {supp}}\,}}{\hat{f}} \subseteq [-\frac{\varOmega }{2}, \frac{\varOmega }{2}]\). Choose \(0< a < 1/\varOmega \). Then for any low-pass kernel \(\phi \in L^1(\mathbb {R}) \cap C_0(\mathbb {R})\) satisfying

$$\begin{aligned} {\hat{\phi }}(\xi ) = {\left\{ \begin{array}{ll} a, &{} |\xi | \le \frac{\varOmega }{2},\\ 0, &{} |\xi | \ge \frac{1}{2a}, \end{array}\right. } \end{aligned}$$

we have

$$\begin{aligned} f(x) = \sum _{k \in \mathbb {Z}} f(a k) \phi (x - ak) \end{aligned}$$

for all \(x \in \mathbb {R}\) with absolute and uniform convergence on \(\mathbb {R}\) and convergence in \(L^1(\mathbb {R})\).

For convenience, the proof is given in Appendix B. In the classical sampling theorem of Shannon, Whittaker, and Kotelnikov, the function \(\phi \) is the sinus cardinalis, which, however, is not integrable and therefore prevents convergence in \(L^1\). We will further need the following auxiliary lemma.
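For illustration, Theorem 4 can be checked numerically with a trapezoidal low-pass filter: if \({\hat{\phi }}\) is a trapezoid with flat value a on \([-\frac{\varOmega }{2}, \frac{\varOmega }{2}]\) and support \([-\frac{1}{2a}, \frac{1}{2a}]\), then \(\phi \) is a product of two sinc factors and decays quadratically, so the interpolation series converges absolutely. A minimal sketch (Python; the test function \(f = \mathrm {sinc}^2\) and all parameter values are our own illustrative choices):

```python
import numpy as np

# Band-limited test function: f = sinc^2 has supp(f^) = [-1, 1], so Omega = 2.
Omega = 2.0
a = 0.4                      # oversampled step size, a < 1/Omega = 0.5
A = (Omega + 1.0 / a) / 2.0  # trapezoid = rect_A * rect_B (convolution in frequency)
B = (1.0 / a - Omega) / 2.0

def f(x):
    return np.sinc(x) ** 2   # np.sinc(x) = sin(pi*x)/(pi*x)

def phi(x):
    # inverse Fourier transform of the trapezoid: hat(phi) = a on [-1, 1],
    # 0 outside [-1/(2a), 1/(2a)]; decays like 1/x^2
    return a * A * np.sinc(A * x) * np.sinc(B * x)

x = np.linspace(-3.0, 3.0, 41)
k = np.arange(-300, 301)
recon = (f(a * k)[None, :] * phi(x[:, None] - a * k[None, :])).sum(axis=1)
print(np.max(np.abs(recon - f(x))))   # only the truncation of the series contributes
```

With the sinus cardinalis as kernel, \(\phi \notin L^1(\mathbb {R})\) and the \(L^1\)-convergence of the interpolation formula is lost.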

Lemma 1

Let \(w \in L^\infty (\mathbb {R})\) and \(p, q \in L^1(\mathbb {R})\) with \({\hat{p}}, {\hat{q}} \in L^1(\mathbb {R})\). For \(F \in L^1(\mathbb {R}^2)\), we define the linear operator \({\mathscr {D}}_w : L^1(\mathbb {R}^2) \rightarrow L^\infty (\mathbb {R})\) by

$$\begin{aligned} ({\mathscr {D}}_w F)(x) := \iint _{\mathbb {R}^2} F(t, \xi ) w(x - t) \mathrm {e}^{2 \pi \mathrm {i}\xi x} \,\mathrm {d}t \,\mathrm {d}\xi . \end{aligned}$$

Then \({\mathscr {D}}_w\) is continuous and for all \(\tau , \nu \in \mathbb {R}\) we have

$$\begin{aligned} p(x) M_\nu T_\tau ({\hat{q}} *w)(x) = {\mathscr {D}}_w (T_\tau {\hat{q}} \otimes T_\nu {\hat{p}})(x) \qquad \text {a.e.} \end{aligned}$$
(11)

Proof

For any \(x \in \mathbb {R}\), we have

$$\begin{aligned} |({\mathscr {D}}_w F)(x)| \le \iint _{\mathbb {R}^2} |F(t, \xi )| |w(x - t)| \,\mathrm {d}t \,\mathrm {d}\xi \le \Vert w\Vert _{\infty }\, \Vert F\Vert _{1}. \end{aligned}$$

Thus \(\Vert {\mathscr {D}}_w\Vert _{L^1(\mathbb {R}^2) \rightarrow L^\infty (\mathbb {R})} \le \Vert w\Vert _{\infty }\) and the first claim follows.

For the left-hand side of (11) we have by Young’s convolution inequality, see [36], that

$$\begin{aligned} \Vert {\hat{q}} *w\Vert _\infty \le \Vert {\hat{q}}\Vert _1 \Vert w\Vert _\infty . \end{aligned}$$

Since \({\hat{p}} \in L^1(\mathbb {R})\), we know that \(p \in L^\infty (\mathbb {R})\). This implies \(p \, M_\nu T_\tau ({\hat{q}} *w) \in L^\infty (\mathbb {R})\). Using that \(p(x) = \int _\mathbb {R}{\hat{p}}(\xi ) \mathrm {e}^{2 \pi \mathrm {i}\xi x} \,\mathrm {d}\xi \) a.e., we obtain for almost every \(x \in \mathbb {R}\) that

$$\begin{aligned} p(x) M_\nu T_\tau ({\hat{q}} *w) (x)&= p(x) \mathrm {e}^{2 \pi \mathrm {i}x \nu } \int _{\mathbb {R}} {\hat{q}}(t) w(x-\tau - t) \,\mathrm {d}t\\&= \int _\mathbb {R}{\hat{p}}(\xi ) \mathrm {e}^{2 \pi \mathrm {i}\xi x} \,\mathrm {d}\xi \, \mathrm {e}^{2 \pi \mathrm {i}x \nu } \, \int _{\mathbb {R}} {\hat{q}}(t) w(x-\tau - t) \,\mathrm {d}t\\&= \int _\mathbb {R}\int _\mathbb {R}{\hat{p}}(\xi ) \mathrm {e}^{2 \pi \mathrm {i}x (\xi + \nu )} {\hat{q}}(t) w(x-\tau - t) \,\mathrm {d}t \,\mathrm {d}\xi \\&= \int _\mathbb {R}\int _\mathbb {R}{\hat{q}}(t - \tau ) {\hat{p}}(\xi - \nu ) w(x - t) \mathrm {e}^{2 \pi \mathrm {i}\xi x} \,\mathrm {d}t \,\mathrm {d}\xi \\&= {\mathscr {D}}_w (T_\tau {\hat{q}} \otimes T_\nu {\hat{p}}) (x). \end{aligned}$$

\(\square \)

We use the above lemma to show the following intermediate sampling formula.

Lemma 2

Let H be given by (2). Let \(w \in L^\infty (\mathbb {R})\) and \(p, q \in L^1(\mathbb {R}) \cap C_0(\mathbb {R})\) with \({\hat{p}}, {\hat{q}} \in L^1(\mathbb {R})\) and \({{\,\mathrm{\mathrm {supp}}\,}}p \subseteq [-\frac{{\mathscr {T}}_p}{2}, \frac{{\mathscr {T}}_p}{2}]\) as well as \({{\,\mathrm{\mathrm {supp}}\,}}q \subseteq [-\frac{\varOmega _q}{2}, \frac{\varOmega _q}{2}]\). Choose step-sizes \(0< a < 1 / \varOmega _q\) and \(0< b < 1 / {\mathscr {T}}_p\). Then for any \(\phi , \psi \in L^1(\mathbb {R}) \cap C_0(\mathbb {R})\) with \({\hat{\phi }}, {\hat{\psi }} \in L^1(\mathbb {R})\) obeying

$$\begin{aligned} \psi (x) = {\left\{ \begin{array}{ll} b, &{} \mathrm {for} \; |x| \le \tfrac{{\mathscr {T}}_p}{2},\\ 0, &{} \mathrm {for} \; |x| \ge \tfrac{1}{2b}, \end{array}\right. } \qquad \phi (x) = {\left\{ \begin{array}{ll} a, &{} \mathrm {for} \; |x| \le \tfrac{\varOmega _q}{2},\\ 0, &{} \mathrm {for} \; |x| \ge \tfrac{1}{2a}, \end{array}\right. } \end{aligned}$$

we have

$$\begin{aligned} p(x) H ({\hat{q}} *w)(x) = \psi (x) \sum _{k_1 \in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} c_{k_1, k_2} M_{b k_2} T_{a k_1} ({\hat{\phi }} *w)(x) \end{aligned}$$
(12)

for all \(x \in \mathbb {R}\), where

$$\begin{aligned} c_{k_1, k_2} := \sum _{s=1}^S \eta _s {\hat{q}}(a k_1 - \tau _s) {\hat{p}}(b k_2 - \nu _s), \qquad k_1, k_2 \in \mathbb {Z}. \end{aligned}$$

The series on the right side of (12) converges uniformly on \(\mathbb {R}\).

Proof

By linearity it suffices to consider the case \(H = M_\nu T_\tau \). Since \(p,q \in L^1(\mathbb {R})\), we have \({\hat{p}},{\hat{q}} \in C_0(\mathbb {R})\) so that \(F :=T_\tau {\hat{q}} \otimes T_\nu {\hat{p}} \in L^1(\mathbb {R}^2) \cap C_0(\mathbb {R}^2)\). Moreover, by the support properties of p and q, we get \({{\,\mathrm{\mathrm {supp}}\,}}{\hat{F}} \subset [-\tfrac{\varOmega _q}{2}, \tfrac{\varOmega _q}{2}] \times [-\tfrac{{\mathscr {T}}_p}{2}, \tfrac{{\mathscr {T}}_p}{2}]\). Consequently, we can apply Theorem 4 to F along each dimension w.r.t. the step-sizes a and b and low-pass kernels \({\hat{\phi }}\) and \( {\hat{\psi }}\) to obtain

$$\begin{aligned} F(x, y) = \sum _{k_1\in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} F(ak_1, b k_2) {\hat{\phi }}(x - ak_1) {\hat{\psi }}(y - b k_2), \end{aligned}$$

which converges absolutely and uniformly. For the \(L^1\)-convergence, we have to show that

$$\begin{aligned}&\int _{\mathbb {R}} \int _{\mathbb {R}} \biggl | \sum _{|k_1|\ge K} \sum _{|k_2| \ge K} F(ak_1, b k_2) {\hat{\phi }}(x - ak_1) {\hat{\psi }}(y - b k_2) \biggr | \,\mathrm {d}y \,\mathrm {d}x \\&\qquad \le \int _{\mathbb {R}} \biggl | \sum _{|k_1|\ge K} {\hat{q}}(ak_1 - \tau ) {\hat{\phi }}(x - ak_1) \biggr | \,\mathrm {d}x \cdot \int _{\mathbb {R}} \biggl | \sum _{|k_2| \ge K} {\hat{p}}( b k_2 - \nu ) {\hat{\psi }}(y - b k_2) \biggr | \,\mathrm {d}y \end{aligned}$$

vanishes for \(K \rightarrow \infty \), which follows for both integrals as discussed in the proof of Theorem 4.

As the operator \({\mathscr {D}}_w : L^1(\mathbb {R}^2) \rightarrow L^\infty (\mathbb {R})\) defined in Lemma 1 is continuous we conclude

$$\begin{aligned} p(x) \, H({\hat{q}} *w)(x) = {\mathscr {D}}_w (F)(x) = \sum _{k_1\in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} F(ak_1, b k_2) {\mathscr {D}}_w ( T_{ak_1} {\hat{\phi }} \otimes T_{bk_2} {\hat{\psi }} )(x) \quad \text {a.e.} \end{aligned}$$

By applying Lemma 1 once again, we obtain

$$\begin{aligned} {\mathscr {D}}_w \bigl ( T_{ak_1} {\hat{\phi }} \otimes T_{bk_2} {\hat{\psi }} \bigr )(x) = \psi (x) \, M_{bk_2} T_{ak_1} ({\hat{\phi }} *w)(x) \qquad \text {a.e.} \end{aligned}$$

Consequently we get for almost every \(x \in \mathbb {R}\) that

$$\begin{aligned} p(x)\, H ({\hat{q}} *w)(x)&= \sum _{k_1\in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} F(ak_1, b k_2) \psi (x)\, M_{bk_2} T_{ak_1} ({\hat{\phi }} *w)(x)\nonumber \\&= \psi (x) \, \sum _{k_1\in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} \bigl [ {\hat{q}}(ak_1 - \tau ) {\hat{p}}(b k_2 - \nu ) \bigr ] M_{bk_2} T_{ak_1} ({\hat{\phi }} *w)(x). \end{aligned}$$
(13)

Note that by Theorem 1 the sequences \(\bigl ({\hat{q}}(ak_1 - \tau )\bigr )_{k_1 \in \mathbb {Z}}\) and \(\bigl ({\hat{p}}(b k_2 - \nu )\bigr )_{k_2 \in \mathbb {Z}}\) are absolutely summable. The functions \(M_{bk_2}T_{ak_1} ({\hat{\phi }} *w)\) are bounded by

$$\begin{aligned} \Vert M_{bk_2}T_{ak_1} ({\hat{\phi }} *w)\Vert _\infty \le \Vert {\hat{\phi }} *w \Vert _\infty \le \Vert {\hat{\phi }}\Vert _1 \Vert w\Vert _\infty . \end{aligned}$$

Thus, the series (13) converges uniformly on \(\mathbb {R}\) and, since the partial sums in (13) are continuous functions, we conclude that the series converges to a continuous bounded function. As p and \({\hat{q}} *w\) are also continuous and bounded, we see that (12) holds for all \(x \in \mathbb {R}\). \(\square \)

Although Lemma 2 holds for arbitrary bounded identifiers \(w \in L^\infty (\mathbb {R})\), the fact that the left side of (12) does not depend on \(\phi \) and \(\psi \) suggests that there might be a way to avoid these helper functions. For this purpose, we restrict our attention to a subset of \(L^\infty (\mathbb {R})\), namely functions \(f = {\hat{\mu }}_f\) with \(\mu _f \in {\mathscr {M}}(\mathbb {R})\). With the Fourier convolution theorem in mind, for a Borel-measurable, bounded function \(\phi \), we define the convolution

$$\begin{aligned} (\phi \star _{{\mathscr {F}}} f) (x) :=\widehat{(\phi \mu _f)} (x) = \int _{\mathbb {R}} \phi (\xi ) \mathrm {e}^{-2\pi \mathrm {i}x \xi } \,\mathrm {d}\mu _f(\xi ), \end{aligned}$$

which yields a continuous and bounded function. If \(\phi \in L^1(\mathbb {R}) \cap C_0(\mathbb {R})\) such that \({\hat{\phi }} \in L^1(\mathbb {R})\), then our convolution may be expressed by the Fourier convolution as

$$\begin{aligned} \phi \star _{{\mathscr {F}}} f = {\hat{\phi }} * f. \end{aligned}$$
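For a discrete measure \(\mu _f = \sum _n c_n \delta _{\xi _n}\), i.e. \(f(x) = \sum _n c_n \mathrm {e}^{-2\pi \mathrm {i}x \xi _n}\), the convolution \(\phi \star _{{\mathscr {F}}} f\) simply reweights each frequency component by \(\phi (\xi _n)\); an indicator function \(\phi \) thus acts as an ideal band-pass on the measure. A minimal sketch of this special case (Python; function names and data are ours):

```python
import numpy as np

def star_F(phi, coeffs, freqs, x):
    """(phi ⋆_F f)(x) for a discrete measure mu_f = sum_n c_n delta_{xi_n},
    i.e. f(x) = sum_n c_n exp(-2*pi*i*x*xi_n): each frequency component
    is reweighted by phi(xi_n)."""
    x = np.asarray(x, dtype=float)
    return (phi(freqs) * coeffs) @ np.exp(-2j * np.pi * np.outer(freqs, x))

# example: phi = indicator of (-1/2, 1/2) removes the out-of-band component
coeffs = np.array([1.0, 2.0, 0.5])
freqs = np.array([0.1, -0.3, 0.7])
phi = lambda xi: (np.abs(xi) < 0.5).astype(float)
x = np.linspace(0.0, 1.0, 5)
inband = coeffs[:2] @ np.exp(-2j * np.pi * np.outer(freqs[:2], x))
assert np.allclose(star_F(phi, coeffs, freqs, x), inband)
```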

We have the following convergence result.

Lemma 3

Let \(f = {\hat{\mu }}_f\) with \(\mu _f \in {\mathscr {M}}(\mathbb {R})\). Assume that the uniformly bounded, Borel-measurable functions \(g_m : \mathbb {R}\rightarrow \mathbb {C}\) converge pointwise to the bounded, Borel-measurable function \(g : \mathbb {R}\rightarrow \mathbb {C}\). Then \(g_m \star _{{\mathscr {F}}} f\) converges uniformly to \(g \star _{{\mathscr {F}}} f\), i.e.,

$$\begin{aligned} \Vert g_m \star _{{\mathscr {F}}} f - g \star _{{\mathscr {F}}} f \Vert _\infty \rightarrow 0, \qquad \text {as } m \rightarrow \infty . \end{aligned}$$

Proof

Applying Fatou’s lemma, we obtain

$$\begin{aligned} \limsup _{m \rightarrow \infty } \Vert g_m \star _{{\mathscr {F}}} f - g \star _{{\mathscr {F}}} f \Vert _\infty&= \limsup _{m \rightarrow \infty } \left\{ \sup _{x \in \mathbb {R}} \bigr | \int _\mathbb {R}(g_m(\xi ) - g(\xi )) \mathrm {e}^{-2\pi \mathrm {i}x \xi } \,\mathrm {d}\mu _f(\xi ) \bigr | \right\} \\&\le \limsup _{m \rightarrow \infty } \left\{ \int _\mathbb {R}|g_m(\xi ) - g(\xi )| \,\mathrm {d}|\mu _f|(\xi ) \right\} \\&\le \int _\mathbb {R}\underbrace{\limsup _{m \rightarrow \infty } |g_m(\xi ) - g(\xi )|}_{= 0} \,\mathrm {d}|\mu _f|(\xi ) = 0. \end{aligned}$$

The lemma of Fatou is applicable since \(\Vert g - g_m\Vert _\infty \le 2M\) for some \(M > 0\) and constant functions are integrable w.r.t. \(\mu _f \in {\mathscr {M}}(\mathbb {R})\). \(\square \)

Theorem 5

Let H be given by (2). Let \(w = {\hat{\mu }}_w\) with \(\mu _w \in {\mathscr {M}}(\mathbb {R})\) and \(p, q \in L^1(\mathbb {R}) \cap C_0(\mathbb {R})\) with \({\hat{p}}, {\hat{q}} \in L^1(\mathbb {R})\) and \({{\,\mathrm{\mathrm {supp}}\,}}p \subseteq [-\frac{{\mathscr {T}}_p}{2}, \frac{{\mathscr {T}}_p}{2}]\) and \({{\,\mathrm{\mathrm {supp}}\,}}q \subseteq [-\frac{\varOmega _q}{2}, \frac{\varOmega _q}{2}]\). Choose \(0< a < 1 /\varOmega _q\) and \(0< b < 1 / {\mathscr {T}}_p\). Then, for all \(x \in \mathbb {R}\), we have

$$\begin{aligned} p(x) H({\hat{q}} *w)(x) = ab\, \chi _{(-\frac{1}{2b}, \frac{1}{2b})}(x)\, \sum _{k_1 \in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} c_{k_1, k_2} M_{b k_2} T_{a k_1} \bigl (\chi _{(-\frac{1}{2a}, \frac{1}{2a})} \star _{{\mathscr {F}}} w \bigr )(x),\nonumber \\ \end{aligned}$$
(14)

where

$$\begin{aligned} c_{k_1, k_2} = \sum _{s=1}^S \eta _s {\hat{q}}(a k_1 - \tau _s) {\hat{p}}(b k_2 - \nu _s), \qquad k_1, k_2 \in \mathbb {Z}. \end{aligned}$$

The series on the right-hand side of (14) converges uniformly on \(\mathbb {R}\).

Proof

Let \((\psi _m)_{m \in \mathbb {N}}\) and \((\phi _m)_{m\in \mathbb {N}}\) be uniformly bounded sequences of Schwartz functions with

$$\begin{aligned} \psi _m(x) = {\left\{ \begin{array}{ll} b, &{} \hbox { for}\ |x| \le \tfrac{{\mathscr {T}}_p}{2},\\ 0, &{} \hbox { for}\ |x| \ge \tfrac{1}{2b}, \end{array}\right. } \qquad \phi _m(x) = {\left\{ \begin{array}{ll} a, &{} \hbox { for}\ |x| \le \tfrac{\varOmega _q}{2},\\ 0, &{} \hbox { for}\ |x| \ge \tfrac{1}{2a} \end{array}\right. } \end{aligned}$$

for all \(m \in \mathbb {N}\), which converge pointwise as \(m\rightarrow \infty \):

$$\begin{aligned} \psi _m(x) \rightarrow {\left\{ \begin{array}{ll} b, &{} \hbox { for}\ |x|< \tfrac{1}{2b},\\ 0, &{} \hbox { for}\ |x| \ge \tfrac{1}{2b}, \end{array}\right. } \qquad \phi _m(x) \rightarrow {\left\{ \begin{array}{ll} a, &{} \hbox { for}\ |x| < \tfrac{1}{2a},\\ 0, &{} \hbox { for}\ |x| \ge \tfrac{1}{2a}. \end{array}\right. } \end{aligned}$$

Abbreviating \(y :=p H ({\hat{q}} *w)\), we obtain by Lemma 2 that

$$\begin{aligned} y(x) = \psi _{m_1} (x) \sum _{k_1 \in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} c_{k_1, k_2} M_{b k_2} T_{a k_1} ({\hat{\phi }}_{m_2} *w)(x) \quad \text {for all }x\in \mathbb {R}\text { and } m_1,m_2 \in \mathbb {N}. \end{aligned}$$

Note that neither y(x) nor \(c_{k_1,k_2}\) depend on \(m_1\) or \(m_2\). Letting \(m_1 \rightarrow \infty \), we immediately obtain the pointwise limit

$$\begin{aligned} y(x) = b\, \chi _{(-\frac{1}{2b}, \frac{1}{2b})} (x) \sum _{k_1 \in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} c_{k_1, k_2} M_{b k_2} T_{a k_1} ({\hat{\phi }}_{m_2} *w)(x) \quad \text {for all }x\in \mathbb {R}\text { and }m_2 \in \mathbb {N}. \end{aligned}$$

Now consider the series. We already used in the proof of Lemma 2 that by Theorem 1 the coefficients \((c_{k_1,k_2})_{k_1,k_2 \in \mathbb {Z}} \in \ell ^1(\mathbb {Z}^2)\) are absolutely summable. Moreover, writing \(\phi :=a \chi _{(-\frac{1}{2a}, \frac{1}{2a})}\) we know by construction that \(\phi _{m_2}(x) \rightarrow \phi (x)\) as \(m_2 \rightarrow \infty \) for every \(x \in \mathbb {R}\) and \((\phi _{m_2})_{m_2 \in \mathbb {N}}\) is uniformly bounded. We can therefore apply Lemma 3 to obtain

$$\begin{aligned} \Vert \phi _{m_2} \star _{{\mathscr {F}}} w - \phi \star _{{\mathscr {F}}} w\Vert _\infty \rightarrow 0, \qquad \text {as} \qquad m_2 \rightarrow \infty . \end{aligned}$$

Since we have \({\hat{\phi }}_{m_2} *w = \phi _{m_2} \star _{{\mathscr {F}}} w\) for all \(m_2 \in \mathbb {N}\), we estimate

$$\begin{aligned}&\bigr \Vert \sum _{k_1 \in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} c_{k_1, k_2} M_{bk_2} T_{ak_1} ( {\hat{\phi }}_{m_2} *w - \phi \star _{{\mathscr {F}}} w ) \bigr \Vert _\infty \\&\quad \le \sum _{k_1 \in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} |c_{k_1,k_2}| \Vert {\hat{\phi }}_{m_2} *w - \phi \star _{{\mathscr {F}}} w \Vert _\infty \\&\quad = \Vert (c_{k_1,k_2})_{k_1,k_2\in \mathbb {Z}} \Vert _1 \Vert \phi _{m_2} \star _{{\mathscr {F}}} w - \phi \star _{{\mathscr {F}}} w \Vert _\infty . \end{aligned}$$

Letting \(m_2 \rightarrow \infty \), the right-hand side converges to 0, which proves that

$$\begin{aligned} y(x) = b\, \chi _{(-\frac{1}{2b}, \frac{1}{2b})} (x) \sum _{k_1 \in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} c_{k_1, k_2} M_{b k_2} T_{a k_1} (\phi \star _{{\mathscr {F}}} w)(x) \end{aligned}$$

for all \(x\in \mathbb {R}\), which is equivalent to (14).

The uniform convergence of the series follows immediately from \((c_{k_1,k_2})_{k_1,k_2 \in \mathbb {Z}} \in \ell ^1(\mathbb {Z}^2)\) and \(\chi _{(-\frac{1}{2a}, \frac{1}{2a})} \star _{{\mathscr {F}}} w \in C_b(\mathbb {R})\).\(\square \)

Now we can prove our main theorem.

Proof (Theorem 3)

1. Since \(\frac{|n|\varOmega }{L_1} \le \frac{(L_1-1)\varOmega }{2L_1}\) for \(n = -N_1, \dots , N_1\) in the representation (4) of the identifier w, we see that \({{\,\mathrm{\mathrm {supp}}\,}}\mu _w \subset [-\frac{L_1-1}{2L_1}\varOmega , \frac{L_1-1}{2L_1}\varOmega ]\). Choose \(\max \{\frac{L_1-1}{L_1}, \frac{L_2-1}{L_2} \}< \beta < 1\) and let \((\gamma _m)_{m\in \mathbb {N}}\) and \((\lambda _m)_{m \in \mathbb {N}}\) be sequences of positive numbers such that \(1 < \gamma _m\) and \(\beta< \lambda _m < 1\) and \(\gamma _m \lambda _m < 1\) for all \(m \in \mathbb {N}\) that converge to 1 as \(m \rightarrow \infty \). Then, for \(m \in \mathbb {N}\), define

$$\begin{aligned} {\mathscr {T}}_m :=\gamma _m {\mathscr {T}}, \qquad \varOmega _m :=\gamma _m \varOmega , \qquad a_m :=\frac{\lambda _m}{\varOmega }, \qquad b_m :=\frac{\lambda _m}{{\mathscr {T}}}, \end{aligned}$$

as well as the functions

$$\begin{aligned} w_m(x) := w \left( \frac{x}{\lambda _m}\right) , \qquad x \in \mathbb {R}. \end{aligned}$$

Clearly, we have for all \(m \in \mathbb {N}\) that \(w_m= {\hat{\mu }}_{w_m}\), where \(\mu _{w_m} \in {\mathscr {M}}(\mathbb {R})\) fulfills

$$\begin{aligned} {{\,\mathrm{\mathrm {supp}}\,}}\mu _{w_m} \subset \left[ - \frac{(L_1-1)\varOmega }{2L_1 \lambda _m}, \frac{(L_1-1)\varOmega }{2L_1 \lambda _m} \right] \subset \left( -\frac{\varOmega }{2}, \frac{\varOmega }{2}\right) . \end{aligned}$$

Further, the function \(w_m\) is \(a_m L_1\)-periodic. Let \((p_m)_{m \in \mathbb {N}}, (q_m)_{m \in \mathbb {N}}\) be sequences of Schwartz functions with

$$\begin{aligned} p_m(x) = {\left\{ \begin{array}{ll} 1, &{} \hbox { for}\ |x| \le \tfrac{{\mathscr {T}}}{2},\\ 0, &{} \hbox { for}\ |x| \ge \tfrac{{\mathscr {T}}_m}{2}, \end{array}\right. } \qquad q_m(x) = {\left\{ \begin{array}{ll} 1, &{} \hbox { for}\ |x| \le \tfrac{\varOmega }{2},\\ 0, &{} \hbox { for}\ |x| \ge \tfrac{\varOmega _m}{2}. \end{array}\right. } \end{aligned}$$

We consider the signal

$$\begin{aligned} y_m(x) := p_m(x) M_\nu T_\tau ({\hat{q}}_m *w_m) (x), \qquad x\in \mathbb {R}. \end{aligned}$$

Now \(p_m, q_m\) as well as \(a_m = \frac{ \lambda _m}{\varOmega } < \frac{1}{\varOmega _m}\) and \(b_m = \frac{ \lambda _m}{ {\mathscr {T}}} < \frac{1}{{\mathscr {T}}_m}\) satisfy the assumptions of Theorem 5. Hence we get

$$\begin{aligned} y_m(x) = a_m b_m \chi _{(-\frac{1}{2b_m}, \frac{1}{2b_m})}(x) \sum _{k_1 \in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} c_{m, k_1, k_2} M_{b_m k_2} T_{a_m k_1} \left( \chi _{(-\frac{1}{2a_m}, \frac{1}{2a_m})} \star _{{\mathscr {F}}} w_m \right) (x) \end{aligned}$$
(15)

with \(c_{m, k_1, k_2} :={\hat{q}}_m(a_m k_1 - \tau ) {\hat{p}}_m(b_m k_2 - \nu )\) for \(k_1, k_2 \in \mathbb {Z}\).

Since \(\frac{1}{a_m} = \frac{\varOmega }{\lambda _m} > \varOmega \) it follows that \({{\,\mathrm{\mathrm {supp}}\,}}\mu _{w_m} \subset (-\frac{\varOmega }{2}, \frac{\varOmega }{2}) \subset (-\frac{1}{2a_m}, \frac{1}{2a_m})\). Therefore we have for all \(x \in \mathbb {R}\) and \(m \in \mathbb {N}\) that

$$\begin{aligned} \chi _{(-\frac{1}{2a_m}, \frac{1}{2a_m})} \star _{{\mathscr {F}}} w_m(x) = \int _{(-\frac{1}{2a_m}, \frac{1}{2a_m})} \mathrm {e}^{-2\pi \mathrm {i}\xi x} \,\mathrm {d}\mu _{w_m}(\xi ) = w_m (x). \end{aligned}$$

Thus for \(|x| < \frac{1}{2b_m}\) we can simplify (15) to

$$\begin{aligned} y_m(x) = a_m b_m \sum _{k_1 \in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} c_{m, k_1, k_2} M_{b_m k_2} T_{a_m k_1} w_m (x). \end{aligned}$$

2. For \(j = -N_2, \dots , N_2\), we consider

$$\begin{aligned} y_m\left( \frac{j}{b_m L_2}\right) = a_m b_m \sum _{k_1 \in \mathbb {Z}} \sum _{k_2 \in \mathbb {Z}} c_{m, k_1, k_2} w_m\left( \frac{j}{b_m L_2} - a_m k_1\right) \mathrm {e}^{2 \pi \mathrm {i}\frac{k_2 j}{L_2}}. \end{aligned}$$
(16)

Since \({\hat{p}}_m, {\hat{q}}_m\) are Schwartz functions, we know that \((c_{m,k_1,k_2})_{k_1,k_2\in \mathbb {Z}} \in \ell ^1(\mathbb {Z}^2)\). Further \(w_m\) is bounded, so that the series in (16) converges absolutely. Consequently we can rearrange the summation and use the substitution \(k_1 = \ell _1 L_1 + n_1\) and \(k_2 = \ell _2 L_2 + n_2\) for \(\ell _1, \ell _2 \in \mathbb {Z}\) and \(n_1 = -N_1, \dots , N_1\) as well as \(n_2 = -N_2, \dots , N_2\) to obtain

$$\begin{aligned}&y_m \Bigl (\frac{j}{b_m L_2}\Bigr )\nonumber \\&\quad = a_m b_m \sum _{\ell _1 \in \mathbb {Z}} \sum _{\ell _2 \in \mathbb {Z}} \sum _{n_1 = -N_1}^{N_1} \sum _{n_2= -N_2}^{N_2} c_{m, \ell _1L_1+n_1, \ell _2 L_2+n_2} \nonumber \\&\quad \times w_m \Bigl ( \frac{j}{b_m L_2} - a_m n_1 - a_m L_1 \ell _1 \Bigr ) \mathrm {e}^{2 \pi \mathrm {i}(\ell _2 j + \frac{n_2 j}{L_2})}\nonumber \\&\quad = a_m b_m \sum _{n_1 = -N_1}^{N_1} \sum _{n_2= -N_2}^{N_2} w_m \Bigl (\frac{j}{b_m L_2} - a_m n_1 \Bigr ) \mathrm {e}^{2 \pi \mathrm {i}\frac{n_2 j}{L_2}} \sum _{\ell _1 \in \mathbb {Z}} \sum _{\ell _2 \in \mathbb {Z}} c_{m, \ell _1 L_1+n_1, \ell _2 L_2+n_2} \nonumber \\&\quad = a_m b_m \sum _{n_1 = -N_1}^{N_1} \sum _{n_2= -N_2}^{N_2} w_m \Bigl (\frac{j}{b_m L_2} - a_m n_1 \Bigr ) \mathrm {e}^{2 \pi \mathrm {i}\frac{n_2 j}{L_2}} Q_{m, n_1}(\nu ) P_{m, n_2}(\tau ), \end{aligned}$$
(17)

where in the last line we abbreviate

$$\begin{aligned}&\sum _{\ell _1 \in \mathbb {Z}} \sum _{\ell _2 \in \mathbb {Z}} c_{m, \ell _1 L_1+n_1, \ell _2 L_2+ n_2} \nonumber \\&\qquad = \underbrace{\sum _{\ell _1 \in \mathbb {Z}} {\hat{q}}_m \bigl (a_m (\ell _1 L_1 + n_1) - \tau \bigr ) }_{=:Q_{m, n_1} (\tau )} \underbrace{\sum _{\ell _2 \in \mathbb {Z}} {\hat{p}}_m \bigl (b_m(\ell _2 L_2 + n_2) - \nu \bigr ) }_{=:P_{m, n_2} (\nu )}. \end{aligned}$$
(18)

We can significantly simplify (18) via Poisson’s summation formula: Indeed, \({\hat{q}}_m, {\hat{p}}_m\) are band-limited, integrable functions, so by Lemma 4 we obtain

$$\begin{aligned} Q_{m, n_1}(\tau ) = \sum _{\ell _1 \in \mathbb {Z}} {\hat{q}}_m \bigl (a_m (\ell _1 L_1 + n_1) - \tau \bigr ) = \frac{1}{a_m L_1} \sum _{\ell _1 = -N_1}^{N_1} q_m \Bigl (\frac{-\ell _1}{a_mL_1} \Bigr ) \mathrm {e}^{2 \pi \mathrm {i}\frac{\ell _1 (a_m n_1 - \tau )}{a_m L_1} } \end{aligned}$$

and

$$\begin{aligned} P_{m, n_2}(\nu ) = \sum _{\ell _2 \in \mathbb {Z}} {\hat{p}}_m \bigl (b_m(\ell _2 L_2 + n_2) - \nu \bigr ) = \frac{1}{b_m L_2} \sum _{\ell _2 =-N_2}^{N_2} {p}_m \Bigl (\frac{-\ell _2}{b_mL_2} \Bigr ) \mathrm {e}^{2 \pi \mathrm {i}\frac{\ell _2(b_m n_2 - \nu )}{b_m L_2}}. \end{aligned}$$

We used that \(q_m(\frac{-\ell _1}{a_m L_1}) = 0\) if \(|\ell _1| \ge \frac{L_1}{2}\) since this implies \(\frac{|\ell _1|}{a_mL_1} \ge \frac{1}{2a_m} > \frac{\varOmega _m}{2}\), and also \(p_m(\frac{-\ell _2}{b_mL_2}) = 0\) if \(|\ell _2| \ge \frac{L_2}{2}\) because then \(\frac{|\ell _2|}{b_mL_2} \ge \frac{1}{2b_m} > \frac{{\mathscr {T}}_m}{2}\).

3. Finally, we take limits. By continuity of w, it is easy to compute

$$\begin{aligned} \lim _{m \rightarrow \infty } w_m \Bigl (\frac{j}{b_m L_2} - a_m n_1 \Bigr ) = \lim _{m \rightarrow \infty } w \Bigl (\frac{j}{\lambda _m b_m L_2} - \frac{a_m}{\lambda _m} n_1 \Bigr ) = w \Bigl (\frac{{\mathscr {T}}j}{L_2} - \frac{n_1}{\varOmega } \Bigr ). \end{aligned}$$

Now consider the limits of \(Q_{m, n_1}(\tau )\) and \(P_{m, n_2}(\nu )\). It follows from \(a_m \varOmega = \lambda _m> \beta > \frac{2N_1}{L_1}\) that \(\frac{|\ell _1|}{a_mL_1} \le \frac{N_1}{a_mL_1} < \frac{\varOmega }{2}\) for \(\ell _1 = -N_1, \dots , N_1\), which in turn implies \(q_m(\frac{-\ell _1}{a_mL_1}) = 1\). Similarly, since \(b_m {\mathscr {T}}= \lambda _m> \beta > \frac{2N_2}{L_2}\) we have \(\frac{|\ell _2|}{b_mL_2} \le \frac{N_2}{b_mL_2} < \frac{{\mathscr {T}}}{2}\) and thus \(p_m(\frac{-\ell _2}{b_mL_2}) = 1\) for all \(\ell _2 = -N_2, \dots , N_2\). Consequently, it follows

$$\begin{aligned} \lim _{m\rightarrow \infty } Q_{m, n_1}(\tau )&= \lim _{m\rightarrow \infty } \frac{1}{a_m L_1} \sum _{\ell _1 = -N_1}^{N_1} \mathrm {e}^{2 \pi \mathrm {i}\frac{\ell _1 (a_m n_1 - \tau )}{a_m L_1} }\\&= \frac{\varOmega }{L_1} \sum _{\ell _1 = -N_1}^{N_1} \mathrm {e}^{2 \pi \mathrm {i}\frac{\ell _1 (n_1 - \varOmega \tau )}{L_1}} = \frac{\varOmega }{L_1}D_{N_1}\left( \frac{n_1 - \varOmega \tau }{L_1}\right) \end{aligned}$$

and by an analogous computation,

$$\begin{aligned} \lim _{m \rightarrow \infty } P_{m, n_2}(\nu ) = \frac{{\mathscr {T}}}{L_2} D_{N_2}\left( \frac{n_2 - {\mathscr {T}}\nu }{ L_2}\right) . \end{aligned}$$

Therefore taking the limit of (17) yields

$$\begin{aligned}&\lim _{m \rightarrow \infty } y_m \Bigl (\frac{j}{b_m L_2} \Bigr )\nonumber \\&= \frac{1}{{\mathscr {T}}\varOmega } \sum _{n_1 = -N_1}^{N_1} \sum _{n_2= -N_2}^{N_2} w \Bigl (\frac{{\mathscr {T}}j}{L_2} - \frac{n_1}{\varOmega } \Bigr ) \mathrm {e}^{2 \pi \mathrm {i}\frac{n_2 j}{L_2}} \frac{\varOmega }{L_1} D_{N_1} \Bigl (\frac{n_1 - \varOmega \tau }{L_1} \Bigr ) \frac{{\mathscr {T}}}{L_2} D_{N_2} \Bigl (\frac{n_2 - {\mathscr {T}}\nu }{ L_2} \Bigr )\nonumber \\&= \frac{1}{L_1 L_2} \sum _{n_1 = -N_1}^{N_1} \sum _{n_2= -N_2}^{N_2} w \Bigl (\frac{{\mathscr {T}}\varOmega j - n_1 L_2}{\varOmega L_2} \Bigr ) \mathrm {e}^{2 \pi \mathrm {i}\frac{n_2 j}{L_2}} D_{N_1} \Bigl (\frac{n_1 - \varOmega \tau }{L_1} \Bigr ) D_{N_2} \Bigl (\frac{n_2 - {\mathscr {T}}\nu }{ L_2} \Bigr ).\qquad \end{aligned}$$
(19)

Next we consider the limit of the definition of \(y_m(\frac{j}{b_m L_2})\), i.e.,

$$\begin{aligned} \lim _{m \rightarrow \infty } y_m \Bigl (\frac{j}{b_m L_2} \Bigr ) = \lim _{m \rightarrow \infty } p_m \Bigl (\frac{j}{b_m L_2} \Bigr ) M_\nu T_\tau ({\hat{q}}_m*w_m) \Bigl (\frac{j}{b_mL_2} \Bigr ). \end{aligned}$$
(20)

Using the assumptions on \(q_m\) and the support of \(\mu _{w_m}\), we obtain

$$\begin{aligned} {\hat{q}}_m *w_m = \widehat{(q_m \mu _{w_m})} = {\hat{\mu }}_{w_m} = w_m \end{aligned}$$

for all \(m\in \mathbb {N}\), so that (20) can be written as

$$\begin{aligned} \lim _{m \rightarrow \infty } y_m \Bigl (\frac{j}{b_m L_2} \Bigr )&= \lim _{m \rightarrow \infty } p_m \Bigl (\frac{j}{b_m L_2} \Bigr ) w_m \Bigl (\frac{j}{ b_m L_2} - \tau \Bigr ) \mathrm {e}^{2 \pi \mathrm {i}\frac{j \nu }{b_m L_2}}. \end{aligned}$$

We already showed in a previous argument that \(p_m(\frac{j}{b_m L_2}) = 1\) for \(j = -N_2, \dots , N_2\) for all \(m\in \mathbb {N}\). Then it follows from continuity that

$$\begin{aligned} \lim _{m \rightarrow \infty } y_m \Bigl (\frac{j}{b_m L_2} \Bigr )&= \lim _{m \rightarrow \infty } w \Bigl (\frac{j}{\lambda _m b_m L_2} - \frac{\tau }{\lambda _m} \Bigr ) \mathrm {e}^{2 \pi \mathrm {i}\frac{j \nu }{b_m L_2}}\nonumber \\&= w \Bigl (\frac{{\mathscr {T}}j}{L_2} - \tau \Bigr ) \mathrm {e}^{2 \pi \mathrm {i}\frac{{\mathscr {T}}j \nu }{L_2}} = M_\nu T_\tau w \Bigl (\frac{{\mathscr {T}}j}{L_2} \Bigr ). \end{aligned}$$
(21)

Combining (19) with (21) then proves (5).\(\square \)
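For identifiers of the form (4), i.e. trigonometric polynomials with frequencies \(n\varOmega /L_1\), the resampling formula (5) can be verified numerically: the samples \(M_\nu T_\tau w({\mathscr {T}}j/L_2)\) coincide with the Dirichlet-kernel double sum in (19) up to rounding errors. A sketch under the assumption \(L_i = 2N_i+1\) (Python; all parameter values are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N1, N2 = 3, 4
L1, L2 = 2 * N1 + 1, 2 * N2 + 1
T, Om = 1.5, 4.0                 # observation length and bandwidth
tau, nu = 0.23, -0.71            # a single time-frequency shift
wn = rng.standard_normal(2 * N1 + 1) + 1j * rng.standard_normal(2 * N1 + 1)

def w(x):
    # identifier of the form (4): trigonometric polynomial, (L1/Om)-periodic
    n = np.arange(-N1, N1 + 1)
    return np.tensordot(wn, np.exp(2j * np.pi * np.multiply.outer(n * Om / L1, x)),
                        axes=(0, 0))

def D(u, N):
    # Dirichlet kernel D_N(u) = sum_{l=-N}^{N} exp(2*pi*i*l*u), evaluated directly
    l = np.arange(-N, N + 1)
    return np.exp(2j * np.pi * np.multiply.outer(l, u)).sum(axis=0)

j = np.arange(-N2, N2 + 1)
t = T * j / L2
lhs = np.exp(2j * np.pi * nu * t) * w(t - tau)      # samples of M_nu T_tau w

rhs = np.zeros(len(j), dtype=complex)
for i1 in np.arange(-N1, N1 + 1):
    for i2 in np.arange(-N2, N2 + 1):
        rhs += (w(t - i1 / Om) * np.exp(2j * np.pi * i2 * j / L2)
                * D((i1 - Om * tau) / L1, N1) * D((i2 - T * nu) / L2, N2))
rhs /= L1 * L2
print(np.max(np.abs(lhs - rhs)))                    # numerically zero (up to rounding)
```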

5 Numerical algorithms

In this section, we propose to solve problem (8), i.e.,

$$\begin{aligned} {{\,\mathrm{\mathrm {argmin}}\,}}_{\eta \in \mathbb {C}^S, (\tau ,\nu ) \in X^S} \Vert G \sum _{s=1}^{S} \eta _s A(\tau _s,\nu _s) - y\Vert _2^2 + \lambda \Vert \eta \Vert _{1}, \qquad \lambda > 0 \end{aligned}$$

by two kinds of algorithms. We adapt the alternating descent conditional gradient method from [2] to our setting in Sect. 5.2. We will address the theoretical convergence behaviour in a forthcoming manuscript and refer only to the literature here. For numerical comparisons, we start with a simple grid refinement algorithm in the next Sect. 5.1.

5.1 Multi-level time–frequency refinement algorithm

Instead of solving the optimization problem over the continuous set \(X= [-{\mathscr {T}}/2, {\mathscr {T}}/2] \times [-\varOmega /2, \varOmega /2]\), we may discretize X on a grid \({\mathscr {J}}\) of cardinality J. For instance we could choose an equidistant grid. Then we consider the atoms on the grid points \((\tau _j, \nu _j)\), \(j \in {\mathscr {J}}\). Setting

$$\begin{aligned} Z_{{\mathscr {J}}} :=[A(\tau _1,\nu _1), \dots , A(\tau _J, \nu _J)] \in \mathbb {C}^{L_1 L_2 \times J} \end{aligned}$$

and \(\eta \in \mathbb {C}^J\), we reduce (9) to the convex minimization problem

$$\begin{aligned} \min _{\eta \in \mathbb {C}^{J}} \Vert G Z_{{\mathscr {J}}} \eta - y \Vert _2^2 + \lambda \, \Vert \eta \Vert _1. \end{aligned}$$
(22)

The sparsity of the discrete measure is here promoted by the 1-norm. In other words, we hope that \(\eta \) has only \(S \ll J\) entries which are not near zero. For one-dimensional problems on the torus, Duval and Peyré [19] showed under certain assumptions that the discretized problem \(\varGamma \)-converges to the continuous problem in the sense of Remark 1 as the regular grid becomes finer and finer; so if the grid is fine enough, we should obtain a sufficiently precise solution. On the other hand, a fine grid blows up the problem dimension and makes the problem numerically intractable. Further, as described in [17] and the references therein for general total variation minimization problems, the true point masses are usually approximated by several point masses of the grid in a small neighbourhood. These clusters may be detected and replaced by an averaged point mass. Finally, the minimization problem (22) is a basis pursuit denoising problem often encountered in compressed sensing and can be solved using toolboxes like CVX [22] or, approximately, by greedy methods like matching pursuits [4, 15, 42].
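For illustration, (22) can also be attacked by simple proximal-gradient (ISTA-type) iterations, where the proximal map of the 1-norm is a complex soft-thresholding of the magnitudes. A minimal sketch on a toy dictionary (Python; the unitary Fourier matrix below is only a stand-in for \(G Z_{{\mathscr {J}}}\), not the actual channel model):

```python
import numpy as np

def ista(A, y, lam, steps=200):
    """Proximal gradient iterations for min_eta ||A eta - y||_2^2 + lam*||eta||_1
    with complex eta; the prox of the 1-norm shrinks the magnitudes."""
    t = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # step size from the Lipschitz constant
    eta = np.zeros(A.shape[1], dtype=complex)
    for _ in range(steps):
        g = eta - t * 2 * A.conj().T @ (A @ eta - y)    # gradient step
        mag = np.abs(g)
        # complex soft-thresholding: keep the phase, shrink the magnitude
        eta = np.where(mag > 0, g / np.maximum(mag, 1e-15), 0) * np.maximum(mag - t * lam, 0)
    return eta

# toy dictionary: normalized Fourier atoms (stand-in for G Z_J)
J = 32
k = np.arange(J)
A = np.exp(2j * np.pi * np.outer(k, k) / J) / np.sqrt(J)
eta_true = np.zeros(J, dtype=complex)
eta_true[[5, 20]] = [2.0, -1.5j]
y = A @ eta_true
eta = ista(A, y, lam=0.05)
print(np.argsort(np.abs(eta))[-2:])   # the two dominant atoms
```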

(Algorithm 1: orthogonal matching pursuit on a fine regular grid)

Instead of choosing a fine grid on the entire domain, we would like to solve the \(\ell ^1\) minimization problem (22) on a small set \({\mathscr {J}}\) that, in the ideal case, only covers the neighbourhoods of the unknown true parameters in X, in order to reduce the numerical effort. For this purpose, we initially apply the orthogonal matching pursuit in Algorithm 1 on a fine regular grid until the residuum r becomes small or a certain number of atoms has been determined. Although the performance of the greedy method strongly depends on the current instance, the computed atoms are usually located near the true point masses. Surrounding the computed atoms with fine local grids, we obtain a good starting set \({{\mathscr {J}}}_0\) for (22). Next, we refine the local grids step by step to improve the solution while keeping the number of atoms nearly the same. Given an optimal solution \(\eta ^*\) of (22) for \({{\mathscr {J}}}_r\), we may choose a new finer grid \({{\mathscr {J}}}_{r+1}\) around the interesting features by one of the following refinement strategies:

  1. 1.

    Determine the dominant atoms corresponding to \((\tau _j, \nu _j) \in {{\mathscr {J}}}_r\) with \(|\eta _j^*| \ge \epsilon \). Discretize the neighbourhood around these atoms by a finer grid. Choose \({{\mathscr {J}}}_{r+1}\) as the union of these finer grids.

  2. 2.

    Determine the importance \(\gamma _j\) of the atom corresponding to \((\tau _j, \nu _j) \in {{\mathscr {J}}}_r\) by

    $$\begin{aligned} \gamma _j := \sum _{(\tau _k, \nu _k) \in {{\mathscr {J}}}_r \cap U_j} |\eta _k^*|, \end{aligned}$$

    where the coefficients of all atoms with parameters in a neighbourhood \(U_j\) around \((\tau _j, \nu _j)\) are summed up. For the most important neighbourhood \(U_j\), compute the barycenter by

    $$\begin{aligned} ({\tilde{\tau }}_j, {\tilde{\nu }}_j) := \sum _{(\tau _k, \nu _k) \in {{\mathscr {J}}}_r \cap U_j} \tfrac{|\eta _k^*|}{\gamma _j} \, (\tau _k, \nu _k). \end{aligned}$$

    Add a finer grid around \(({\tilde{\tau }}_j, {\tilde{\nu }}_j)\) to \({{\mathscr {J}}}_{r+1}\), remove the atoms in \(U_j\) from \({{\mathscr {J}}}_r\), and repeat the procedure as long as there are important points with \(\gamma _j \ge \epsilon \).
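The second refinement strategy may be sketched as follows; the greedy neighbourhood clustering, the radius, and all data are our own illustrative choices:

```python
import numpy as np

def barycenter_clusters(points, coeffs, radius=0.1, eps=1e-3):
    """Second refinement strategy: greedily group grid atoms into
    neighbourhoods, weight them by |eta|, and return the barycenters of all
    clusters whose total importance gamma_j exceeds eps."""
    pts = np.asarray(points, dtype=float)
    mag = np.abs(np.asarray(coeffs))
    active = np.ones(len(pts), dtype=bool)
    centers = []
    while active.any():
        j = np.argmax(mag * active)                      # most important remaining atom
        near = active & (np.linalg.norm(pts - pts[j], axis=1) <= radius)
        gamma = mag[near].sum()                          # importance of the neighbourhood
        if gamma < eps:
            break
        centers.append((mag[near] / gamma) @ pts[near])  # weighted barycenter
        active &= ~near                                  # remove the clustered atoms
    return np.array(centers)

# two well-separated clusters of grid atoms around (0.2, 0.5) and (-0.4, -0.1)
pts = np.array([[0.19, 0.5], [0.21, 0.5], [0.2, 0.52], [-0.4, -0.1], [-0.41, -0.09]])
eta = np.array([0.5, 0.3, 0.2, 0.8, 0.2])
print(barycenter_clusters(pts, eta))
```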

The new local grids should cover a smaller neighbourhood. For instance, these grids could again be regular with step size decreasing in r. Notice that the numerical effort of the first refinement strategy is smaller than that of the second one. On the other hand, the second strategy can move outside the local grids via the barycenters. After determining a final atomic set \({{\mathscr {J}}}^*\) containing the most dominant atoms or barycenters, the corresponding coefficients can be computed by solving the least squares problem

$$\begin{aligned} \min _{\eta \in \mathbb {C}^{|{{\mathscr {J}}}^*|}} \Vert G Z_{{{\mathscr {J}}}^*} \eta - y \Vert _2^2. \end{aligned}$$
(23)

In summary, we obtain Algorithm 2.

(Algorithm 2: multi-level time–frequency refinement)

5.2 Alternating descent conditional gradient algorithm

Next, we adapt the ADCG from [2] to our setting. This algorithm minimizes over the continuous domain X. The ADCG is a modification of the conditional gradient method (CGM) – also known as the Frank-Wolfe algorithm introduced in [21] – for total variation regularization. The original Frank-Wolfe algorithm on \(\mathbb {R}^d\) solves optimization problems of the form \({{\,\mathrm{\mathrm {argmin}}\,}}_{x \in {\mathscr {V}}} f(x)\), where the feasible set \({\mathscr {V}} \subset \mathbb {R}^d\) is compact and convex and the function f is differentiable and convex. Given the kth iterate \(x_k\), each iteration basically consists of two steps, namely

  1. (i)

    minimizing the linearization of f at \(x_k\) over the feasible set

    $$\begin{aligned} v_k = {{\,\mathrm{\mathrm {argmin}}\,}}_{v \in {\mathscr {V}}} f(x_k) + \langle \nabla f(x_k), v - x_k\rangle , \end{aligned}$$
  2. (ii)

    updating with

    $$\begin{aligned} x_{k+1} = x_k + \gamma (v_k - x_k) \end{aligned}$$

    with a step size \(\gamma \in [0, 1]\).

In super-resolution, the first step always consists in an update of the support of the measure as it is also done in the first step of our Algorithm 3.
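The two steps above can be sketched on a concrete toy instance; the choice of a quadratic objective over the probability simplex and the step size \(\gamma = 2/(k+2)\) are illustrative assumptions, chosen because the linearized subproblem over the simplex is solved by a vertex:

```python
import numpy as np

def frank_wolfe_simplex(A, b, iters=500):
    """Classical Frank-Wolfe for f(x) = ||Ax - b||_2^2 over the probability
    simplex (a compact convex feasible set).

    Step (i): over the simplex, the linearized problem is minimized by the
              vertex e_i with the smallest gradient entry.
    Step (ii): convex combination with the standard step size 2/(k+2).
    """
    n = A.shape[1]
    x = np.full(n, 1.0 / n)                 # feasible starting point
    for k in range(iters):
        grad = 2.0 * A.T @ (A @ x - b)      # gradient of f at x_k
        v = np.zeros(n)
        v[np.argmin(grad)] = 1.0            # minimizing vertex of the simplex
        x = x + 2.0 / (k + 2.0) * (v - x)   # update step (ii)
    return x
```

Since the iterate is always a convex combination of simplex points, it remains feasible throughout.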

Concerning the second step, all important convergence guarantees of the algorithms are still valid, if we replace \(x_{k+1}\) in the second step by any feasible \({\tilde{x}}_{k+1}\) that fulfills \(f({\tilde{x}}_{k+1}) \le f(x_{k+1})\). This flexibility has led to several successful variations of the classical Frank-Wolfe algorithm. ADCG related algorithms which differ in the second step are for example the algorithm in [3] and the so-called sliding Frank-Wolfe in [17]. While the first one uses soft shrinkage to update the amplitudes and a discrete gradient flow over the locations, the second one uses a non-convex solver to jointly minimize over the amplitudes and positions with a suitable starting values for the amplitudes.

Adapting the ADCG to our setting results in Algorithm 3, whose details are discussed in the following. For convergence results we refer to [2]. The expansion step of the ADCG algorithm is very similar to the greedy matching pursuit in Algorithm 1 without normalization of the atoms. To find a solution

$$\begin{aligned} (\tau _{{J_k}+1}, \nu _{{J_k}+1}) := {{\,\mathrm{\mathrm {argmax}}\,}}_{(\tau , \nu ) \in X} |\langle r, G A(\tau , \nu )\rangle |, \end{aligned}$$

the objective can first be evaluated on a fine regular grid of X. The obtained \((\tau _{{J_k}+1}, \nu _{{J_k}+1})\) may then be improved using a gradient descent method. In our numerical simulations, however, we noticed that this improvement step has no crucial impact on the recovered measure for our problem and can therefore be skipped.
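A grid evaluation of this expansion step can be sketched as follows. For simplicity, we assume \(G = \mathrm {Id}\) and atoms built from Dirichlet kernels, \([A(\tau ,\nu )]_{(n_1,n_2)} = \tfrac{1}{L_1 L_2} D_{N_1}(\tfrac{n_1 - \varOmega \tau }{L_1}) D_{N_2}(\tfrac{n_2 - {\mathscr {T}}\nu }{L_2})\), consistent with the partial derivatives stated below; these normalizations are illustrative assumptions:

```python
import numpy as np

def dirichlet(N, x):
    """N-th Dirichlet kernel D_N(x) = 1 + 2 sum_{n=1}^N cos(2 pi n x)."""
    x = np.atleast_1d(x)
    n = np.arange(1, N + 1)
    return 1.0 + 2.0 * np.cos(2.0 * np.pi * np.outer(x, n)).sum(axis=1)

def atom(tau, nu, N1, N2, Omega, T):
    """Atom A(tau, nu) as an (L1, L2) matrix of products of Dirichlet kernels."""
    L1, L2 = 2 * N1 + 1, 2 * N2 + 1
    n1 = np.arange(-N1, N1 + 1)
    n2 = np.arange(-N2, N2 + 1)
    d1 = dirichlet(N1, (n1 - Omega * tau) / L1)
    d2 = dirichlet(N2, (n2 - T * nu) / L2)
    return np.outer(d1, d2) / (L1 * L2)

def expansion_step(r, taus, nus, N1, N2, Omega, T):
    """Evaluate |<r, A(tau, nu)>| on a grid (with G = Id) and return the maximizer."""
    best, arg = -1.0, None
    for tau in taus:
        for nu in nus:
            val = abs(np.vdot(atom(tau, nu, N1, N2, Omega, T), r))
            if val > best:
                best, arg = val, (tau, nu)
    return arg
```

If the residual is itself a single atom, the grid search recovers its parameters, since the autocorrelation of the Dirichlet atoms peaks at the matching shift.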


The second step consists in the update of the parameters by

$$\begin{aligned} (\eta , \tau , \nu ) := {{\,\mathrm{\mathrm {argmin}}\,}}_{\eta \in \mathbb {C}^{J_k+1}, (\tau ,\nu ) \in X^{J_k+1}} \Vert GZ(\tau , \nu ) \eta - y \Vert _2^2 + \lambda \Vert \eta \Vert _1 \end{aligned}$$

with \(Z (\tau , \nu ) :=[A(\tau _1,\nu _1),\dots , A(\tau _{J_k+1}, \nu _{J_k+1})]\). In contrast to the general algorithm in [17], the coefficients \(\eta \) of the point masses are complex numbers, so that the above update consists in the minimization of a non-smooth objective. Therefore, we use the alternating minimization proposed in [2], which splits the minimization into the basis pursuit or LASSO problem

$$\begin{aligned} \eta := {{\,\mathrm{\mathrm {argmin}}\,}}_{\eta \in \mathbb {C}^{J_k+1}} \Vert GZ(\tau , \nu ) \eta - y \Vert _2^2 + \lambda \Vert \eta \Vert _1 \end{aligned}$$

and the smooth minimization problem

$$\begin{aligned} (\tau , \nu ) := {{\,\mathrm{\mathrm {argmin}}\,}}_{(\tau ,\nu ) \in X^{J_k+1}} \underbrace{\Vert GZ(\tau ,\nu ) \eta - y \Vert _2^2}_{=:F(\tau ,\nu )}. \end{aligned}$$

The \(\ell ^1\) regularized problem can be solved as discussed above and the second one by a gradient descent or quasi-Newton method like BFGS. A short computation shows that the gradients of the objective F are given by

$$\begin{aligned} \nabla _\tau F(\tau ,\nu )&= 2 \,\mathrm {Re}\bigl ( {\bar{\eta }} \odot \bigl [ (G Z^\tau (\tau ,\nu ))^* \, (G Z(\tau ,\nu ) \eta - y) \bigr ] \bigr ), \\ \nabla _\nu F(\tau ,\nu )&= 2 \,\mathrm {Re}\bigl ( {\bar{\eta }} \odot \bigl [ (G Z^\nu (\tau ,\nu ))^* \, (G Z(\tau ,\nu ) \eta - y) \bigr ] \bigr ), \end{aligned}$$

where \(\cdot ^*\) denotes the conjugation and transposition of a matrix, \({\bar{\eta }}\) the componentwise complex conjugate, and \(\odot \) the componentwise product. The partial derivatives of the atoms \(A(\tau _j,\nu _j)\) with respect to \(\tau _j\) and \(\nu _j\) are collected in the matrices

$$\begin{aligned} Z^\tau (\tau ,\nu )&:= [\tfrac{\mathrm {d}}{\mathrm {d} \tau } A(\tau _1, \nu _1), \dots , \tfrac{\mathrm {d}}{\mathrm {d} \tau } A(\tau _{J_k+1}, \nu _{J_k+1})], \\ Z^\nu (\tau ,\nu )&:= [\tfrac{\mathrm {d}}{\mathrm {d} \nu } A(\tau _1, \nu _1), \dots , \tfrac{\mathrm {d}}{\mathrm {d} \nu } A(\tau _{J_k+1}, \nu _{J_k+1})] \end{aligned}$$

with

$$\begin{aligned}{}[\tfrac{\mathrm {d}}{\mathrm {d} \tau } A(\tau , \nu )]_{(n_1,n_2)}&= - \tfrac{\varOmega }{L_1^2 L_2} \, D'_{N_1} \Bigl ( \frac{n_1-\varOmega \tau }{L_1} \Bigr ) \, D_{N_2} \Bigl ( \frac{n_2 - {\mathscr {T}}\nu }{L_2} \Bigr ), \\ [\tfrac{\mathrm {d}}{\mathrm {d} \nu } A(\tau , \nu )]_{(n_1,n_2)}&= - \tfrac{{\mathscr {T}}}{L_1 L_2^2} \, D_{N_1} \Bigl ( \frac{n_1-\varOmega \tau }{L_1} \Bigr ) \, D'_{N_2} \Bigl ( \frac{n_2 - {\mathscr {T}}\nu }{L_2} \Bigr ). \end{aligned}$$

The derivative of the N-th Dirichlet kernel \(D_N\) is given by

$$\begin{aligned} D'_N(x) = - 4 \pi \sum _{n=1}^N n \sin (2 \pi n x) = \biggl ( \frac{\sin ((2N+1) \pi x)}{\sin (\pi x)} \biggr )'. \end{aligned}$$
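Both representations of \(D_N\) and its derivative can be checked numerically; the following sketch evaluates the cosine series and the closed form of \(D'_N\):

```python
import numpy as np

def dirichlet(N, x):
    """D_N(x) = 1 + 2 sum_{n=1}^N cos(2 pi n x) = sin((2N+1) pi x) / sin(pi x)."""
    n = np.arange(1, N + 1)
    return 1.0 + 2.0 * np.cos(2.0 * np.pi * n * x).sum()

def dirichlet_derivative(N, x):
    """Closed form D'_N(x) = -4 pi sum_{n=1}^N n sin(2 pi n x)."""
    n = np.arange(1, N + 1)
    return -4.0 * np.pi * (n * np.sin(2.0 * np.pi * n * x)).sum()
```

A central finite difference of `dirichlet` agrees with `dirichlet_derivative` up to discretization error, confirming the formula above.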

Finally, we would like to mention that the numerical effort of the ADCG algorithm is much higher than that of the multi-level refinement in Algorithm 2, since several optimization problems have to be solved for each added point mass.

6 Numerical results

In the following experiments, we compare the orthogonal matching pursuit, the multi-level time-frequency refinement, and the ADCG. First, we consider the performance for a specific synthetic instance. Then we study the general performance with respect to the noise level and how many measurements are needed to estimate the unknown channel. Finally, the influence of the identifier model is discussed.

Channel estimation from synthetic measurements For this experiment, we assume that the unknown channel or operator H in (2) has exactly \(S=10\) features and that this number is known in advance. The shifts and modulations \((\tau _j, \nu _j)\) are independently generated with respect to the uniform distribution on \([-{\mathscr {T}}/ 2, {\mathscr {T}}/ 2] \times [-\varOmega /2, \varOmega /2] = [-1.5, 1.5] \times [-15.5, 15.5]\). The coefficients \(\eta _j\) are independently and uniformly drawn from the complex unit circle. The employed identifier w is a trigonometric polynomial of degree \(N_1 = 50\), i.e. \(L_1 = 101\), whose coefficients are also independently drawn from the complex unit circle. The true samples \(y_j = H w (\tfrac{{\mathscr {T}}j}{L_2})\) with \(j = -N_2, \dots , N_2\) and \(L_2 = 101\) are corrupted by additive complex Gaussian noise such that \(\Vert y - y^\delta \Vert _2 / \Vert y \Vert _2 = 0.1\), which corresponds to \(-10~\mathrm {dB}\) white noise; the noisy data are again denoted by \(y^\delta \).
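The generation of such noisy synthetic measurements can be sketched as follows. The normalization \(w(t) = \sum _n c_n \mathrm {e}^{2\pi \mathrm {i} n \varOmega t / L_1}\) of the identifier (an \(L_1/\varOmega \)-periodic trigonometric polynomial) and the fixed random seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_samples(S, N1, N2, T, Omega, rel_noise=0.1):
    """Noisy samples y_j = sum_s eta_s w(t_j - tau_s) exp(2 pi i nu_s t_j)
    at t_j = T j / L2, j = -N2, ..., N2, for a random channel and a random
    trigonometric identifier w of degree N1 with unimodular coefficients."""
    L1, L2 = 2 * N1 + 1, 2 * N2 + 1
    c = np.exp(2j * np.pi * rng.uniform(size=L1))         # unimodular coefficients
    n = np.arange(-N1, N1 + 1)
    w = lambda t: (c * np.exp(2j * np.pi * n * Omega * t / L1)).sum()

    tau = rng.uniform(-T / 2, T / 2, size=S)              # delays
    nu = rng.uniform(-Omega / 2, Omega / 2, size=S)       # Doppler shifts
    eta = np.exp(2j * np.pi * rng.uniform(size=S))        # unimodular weights

    t = T * np.arange(-N2, N2 + 1) / L2
    y = np.array([sum(eta[s] * w(tj - tau[s]) * np.exp(2j * np.pi * nu[s] * tj)
                      for s in range(S)) for tj in t])

    noise = rng.standard_normal(L2) + 1j * rng.standard_normal(L2)
    noise *= rel_noise * np.linalg.norm(y) / np.linalg.norm(noise)
    return y, y + noise
```

The noise is rescaled so that the relative perturbation \(\Vert y - y^\delta \Vert _2 / \Vert y \Vert _2\) matches the prescribed level exactly.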

To recover the unknown channel parameters, we apply the orthogonal matching pursuit (Algorithm 1) with the regular grid \({\mathscr {J}}\) of \([-{\mathscr {T}}/2, {\mathscr {T}}/2] \times [-\varOmega /2, \varOmega /2]\) consisting of 1 024 points in each direction. The same grid is used to compute the location of the new point masses in the ADCG (Algorithm 3). Both methods are stopped after computing exactly 10 features. The multi-level refinement in Algorithm 2 is initialized by applying the orthogonal matching pursuit to a coarser grid with 256 points in each direction. The local \(5\times 5\) grids are then refined 15 times by reducing the step size by a factor of 0.75. We always use the second refinement strategy. The multi-level refinement and the ADCG are applied to the Tikhonov regularization (9) with \(\lambda = 500\). The recovered shifts and modulations of all three methods are shown in Fig. 1. The true parameters are denoted with an additional \(\dagger \). The absolute errors of the estimation are recorded in Table 1, where the experiment has been repeated 50 times and the errors are averaged. For this instance, all three methods yield comparable results, where the shifts \(\tau _j\) and modulations \(\nu _j\) are estimated quite accurately. The multi-level refinement and the ADCG method achieve slightly higher accuracies than the orthogonal matching pursuit, but, on the downside, the ADCG method is much more time-consuming than the others. Considering the noise level, the results are nevertheless satisfying and show that in particular the shifts and modulations are recoverable from highly noisy measurements.

Fig. 1

Estimated shifts and modulations of a channel with \(S=10\) features, where S is exactly known in advance. The degree of the identifier and the number of samples are \(L_1 = L_2 = 101\). The additive Gaussian noise corresponds to \(-10~\mathrm {dB}\)

Table 1 Mean absolute reconstruction errors over 50 experiments for channels with \(S=10\) features, where S is exactly known in advance. The degree of the identifier and the number of samples are \(L_1 = L_2 = 101\). The additive Gaussian noise corresponds to \(-10~\mathrm {dB}\)

Influence of Noise Next, we study the influence of the noise on the recovery quality of the algorithms in more detail. To this end, the unknown channel is again randomly generated with 10 coefficients on the complex unit circle. In contrast to the first numerical example, the algorithms are henceforth stopped if the residuum becomes small or if the objective stagnates; in other words, the algorithms have no knowledge of the true sparsity S. The degree of the random identifier with unimodular coefficients and the number of samples are again \(L_1 = L_2 = 101\). The remaining parameters are \({\mathscr {T}} = 1\) and \(\varOmega = 101\). The parameter \(\lambda \) is chosen with respect to the noise level and goes to zero for vanishing noise. Differently from the experiment before, we want to measure how well the estimated channel approximates the true one. Since we are only interested in the behavior of the true channel on the sampled interval \([-{\mathscr {T}}/2, {\mathscr {T}}/2]\), we interpret the restriction of H as an operator from the space of \(L_1/\varOmega \)-periodic trigonometric polynomials \({\mathscr {P}}_{N_1} \subset L^2([- L_1/2\varOmega , L_1/2\varOmega ))\) of degree at most \(N_1\) to the square-integrable functions \(L^2([-{\mathscr {T}}/2, {\mathscr {T}}/2])\), i.e.

$$\begin{aligned} H : {\mathscr {P}}_{N_1} \rightarrow L^2([-\tfrac{{\mathscr {T}}}{2}, \tfrac{{\mathscr {T}}}{2}]). \end{aligned}$$

The difference between the true operator \(H^\dagger \) and the estimated operator H is henceforth measured by the operator norm

$$\begin{aligned} \Vert H^\dagger - H\Vert _{\mathrm {op}} := \sup _{w \in {\mathscr {P}}_{N_1}\setminus \{0\}} \frac{\Vert H^\dagger w - H w\Vert _{L^2([-{\mathscr {T}}/2, {\mathscr {T}}/2])}}{\Vert w\Vert _{L^2([- L_1/2\varOmega , L_1/2\varOmega ])}}, \end{aligned}$$

where \(\Vert \cdot \Vert _{L^2(I)}\) is the 2-norm of the restriction to the specified interval I. Due to Parseval's identity, the considered subspace is isometrically isomorphic to the coefficient space \(\mathbb {C}^{L_1}\). After discretizing \([-{\mathscr {T}}/2, {\mathscr {T}}/2]\) and employing the midpoint rule, the operator norm may be computed numerically using the singular value decomposition.
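This computation can be sketched as follows. We represent each channel by its triple \((\eta , \tau , \nu )\), evaluate its action on the exponential basis \(e_n(t) = \mathrm {e}^{2\pi \mathrm {i} n \varOmega t / L_1}\) at the midpoints, and account for the quadrature weight and the basis norm \(\Vert e_n \Vert _{L^2} = \sqrt{L_1/\varOmega }\); the discretization size is an illustrative assumption:

```python
import numpy as np

def channel_matrix(eta, tau, nu, N1, Omega, t):
    """Matrix of (H e_n)(t_m) for the basis e_n(t) = exp(2 pi i n Omega t / L1),
    with H w(t) = sum_s eta_s w(t - tau_s) exp(2 pi i nu_s t)."""
    L1 = 2 * N1 + 1
    n = np.arange(-N1, N1 + 1)
    M = np.zeros((len(t), L1), dtype=complex)
    for e, ta, nv in zip(eta, tau, nu):
        M += e * np.exp(2j * np.pi * (np.outer(t - ta, n) * Omega / L1
                                      + nv * t[:, None]))
    return M

def operator_distance(ch1, ch2, N1, Omega, T, M_grid=400):
    """|| H1 - H2 ||_op on P_{N1}, via the midpoint rule and the SVD."""
    L1 = 2 * N1 + 1
    dt = T / M_grid
    t = -T / 2 + dt * (np.arange(M_grid) + 0.5)   # midpoints of [-T/2, T/2]
    D = channel_matrix(*ch1, N1, Omega, t) - channel_matrix(*ch2, N1, Omega, t)
    # quadrature weight dt and basis norm sqrt(L1 / Omega) rescale the SVD
    return np.sqrt(dt * Omega / L1) * np.linalg.svd(D, compute_uv=False)[0]
```

As a sanity check, the single-path channel with \(\eta = 1\), \(\tau = \nu = 0\) acts as the identity, whose operator norm is 1 when \({\mathscr {T}}\) equals the period \(L_1/\varOmega \).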

The mean performance of the discussed algorithms is shown in Fig. 2, where the experiment has been repeated 50 times for every noise level. During the multi-level refinement, the step size of the local grids is decreased 25 times by a factor of 2/3. For the ADCG method, the \(\ell ^1\) and least-squares minimizations are alternated 25 times. The observations from the first experiment with \(-10\,\mathrm {dB}\) noise carry over. Notice that already small parameter errors lead to large relative errors in the operator norm. The reconstruction error of the multi-level method and the ADCG method corresponds nearly one-to-one to the noise level of the measurements. The reconstruction by the orthogonal matching pursuit does not improve as the noise decreases. Although the orthogonal matching pursuit yields sufficient results as a starting point for the refinement method, the problem cannot be solved sufficiently accurately by this greedy method alone.

Fig. 2

The recovery error of the orthogonal matching pursuit (Algorithm 1), the refinement strategy (Algorithm 2), and the Frank–Wolfe method (Algorithm 3) over varying levels of complex Gaussian noise to recover a channel with 10 features from 101 samples. The regularization parameter \(\lambda \) has been chosen with respect to the current noise level

Fig. 3

Empirical probability that the operator norm satisfies \(\Vert H^\dagger - H\Vert _{\mathrm {op}} / \Vert H^\dagger \Vert _{\mathrm {op}} \le -40 \, \mathrm {dB}\) depending on the number of measurements and features. The coefficients of the unknown operators have been chosen unimodular

Number of required measurements During our numerical experiments, we have noticed that around 10 times more samples than unknown features are required to estimate the parameters of the channel sufficiently well. In the following, we explore the question of how many measurements are needed in more detail. For this, we consider the solution of Algorithm 3 for different numbers of features and measurements. The remaining parameters of the setting are \(\varOmega = L_1 = L_2 = 101\) and \({\mathscr {T}}= 1\). The coefficients of the unknown channel are unimodular, and the measurements are exact. We declare a reconstruction a success if the relative error \(\Vert H^\dagger - H\Vert _{\mathrm {op}}/\Vert H^\dagger \Vert _{\mathrm {op}}\) is less than \(-40 \, \mathrm {dB}\), and repeat the experiment 50 times for each data point. The success rate and the mean relative error in the operator norm are shown in Fig. 3 and support our observation.

This experiment is the numerical analogue of the theoretical recovery guarantee in [23, Thm 1], where the unknown parameters \((\eta _s, \tau _s, \nu _s)\) of (10) in Remark 2 are determined by solving an atomic norm problem. More precisely, the minimizer of the atomic norm problem yields the wanted parameters with high probability under certain assumptions. For the theoretical statement, at least \(L \ge 1024\) measurements are required. Considering the phase transition in Fig. 3, we see that, from a numerical point of view, far fewer measurements are needed to recover the unknown channel. In particular for higher sparsity levels, the transition between failure and success becomes non-linear, which corresponds to the theoretical results.

Influence of the minimal separation Continuing the discussion of the theoretical guarantees, we recall that one of the crucial assumptions is a lower bound for the minimal separation

$$\begin{aligned} \min _{j \ne k} \bigl \{ \tfrac{|\tau _j - \tau _k|}{{\mathscr {T}}}, \tfrac{|\nu _j - \nu _k|}{\varOmega } \bigr \}. \end{aligned}$$

If two or more features in the parameter space are too close together, they cannot be resolved numerically and are often merged into one feature. This well-known effect may heavily lower the quality of the reconstruction and also occurs in our setting. To study this behaviour numerically, we again consider random channels with 10 unimodular features for \(L_1 = L_2 = 101\), \({\mathscr {T}}= 1\), \(\varOmega = 101\). The shifts and modulations are generated such that the parameter set possesses exactly a certain minimal separation. The results with respect to the operator norm on \({\mathscr {P}}_{N_1}\) are shown in Fig. 4, where the experiments have been repeated 50 times without noise. If the separation falls below 0.01, the error increases rapidly. Note that this transition point depends on the problem dimensions \(L_1\), \(L_2\) and on the number of unknown features.
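The displayed separation quantity can be computed directly from a parameter set; the following sketch implements the expression literally (in particular, it ignores possible wrap-around distances on the periodic domain):

```python
def minimal_separation(tau, nu, T, Omega):
    """Smallest normalized pairwise distance, taken over both coordinates,
    as displayed above: min over j != k of { |tau_j - tau_k| / T,
    |nu_j - nu_k| / Omega }."""
    vals = []
    for j in range(len(tau)):
        for k in range(j + 1, len(tau)):
            vals.append(min(abs(tau[j] - tau[k]) / T,
                            abs(nu[j] - nu[k]) / Omega))
    return min(vals)
```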

Fig. 4

The recovery error for unknown channels with certain relative minimal separation between the 10 features. The employed 101 measurements have been free of noise

Importance of the identifier model Finally, we study how the chosen identifier model affects the recovery quality. Throughout the paper, we have used trigonometric polynomials as identifier w for the unknown channel. On the basis of w, the given samples \(Hw({\mathscr {T}}d/L_2)\) are related to the unknown parameters by Theorem 3; the parameters are then determined by minimizing the Tikhonov functional (9) with respect to the total variation norm for measures. In [23], for the special case \(L := L_1 = L_2\), \(N := N_1 = N_2\), and odd \(L = {\mathscr {T}}\varOmega \), Heckel, Morgenshtern, and Soltanolkotabi suggested solving an atomic norm problem based on a model approximation where the identifier is chosen as a sum of shifted sinc functions

$$\begin{aligned} w(x) = \sum _{n = -L -N}^{L + N} c_n {{\,\mathrm{\mathrm {sinc}}\,}}(x \varOmega - n). \end{aligned}$$
(24)

The real coefficients are chosen partially periodic as \(c_n = c_{n+L} = c_{n-L}\) for \(n = -N, \dots , N\). We denote the L-dimensional span of the sinc functions (24) by \({\mathscr {S}}_L\). The given samples are then only approximated by (5) in Theorem 3; see Fig. 5.
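The identifier (24) with its periodically repeated coefficients can be sketched as follows, assuming the normalized convention \({{\,\mathrm{\mathrm {sinc}}\,}}(x) = \sin (\pi x)/(\pi x)\):

```python
import numpy as np

def sinc_identifier(c, N, L, Omega):
    """Identifier (24): w(x) = sum_{n=-L-N}^{L+N} c_n sinc(x Omega - n)
    with real coefficients repeated periodically, c_n = c_{n+L} = c_{n-L}.

    c : array of length L = 2N + 1 holding c_{-N}, ..., c_{N}."""
    base = dict(zip(range(-N, N + 1), c))
    def w(x):
        total = 0.0
        for n in range(-L - N, L + N + 1):
            m = (n + N) % L - N          # map n back into {-N, ..., N} modulo L
            total += base[m] * np.sinc(x * Omega - n)
        return total
    return w
```

At the sample points \(x = n/\varOmega \), only a single sinc term contributes, which makes the periodic coefficient repetition directly visible.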

Fig. 5

Influence of the identifier model on the reconstruction error depending on additive Gaussian noise. For each noise level, 50 channels with 10 features have been recovered from 101 samples. The regularization parameter has been chosen proportional to the noise level

The replacement of the trigonometric polynomial by a sum of sinc functions leads to a model error. Considering a channel with 10 features and 101 samples as before, and studying the recovery error of Algorithm 3 measured in the operator norm, we see that the model mismatch corresponds to a noise level of around \(-25~\mathrm {dB}\). Notice that the comparison with respect to trigonometric polynomials is somewhat subjective. For this reason, we also compute the relative reconstruction error based on the subspace of sinc functions (24). Numerically, the difference between both error terms is negligible. The clearly visible approximation error for sinc identifiers does not occur for trigonometric identifiers.