Oversampling and Undersampling in de Branges Spaces Arising from Regular Schrödinger Operators

The classical results on oversampling and undersampling (or aliasing) of functions in Paley–Wiener spaces are generalized to the case of functions in de Branges spaces arising from regular Schrödinger operators with a wide range of potentials.


Introduction
Communicated by Harry Dym.

This paper deals with the subject of oversampling and undersampling (the latter also known as aliasing in the engineering and signal-processing literature) in the context of de Branges Hilbert spaces of entire functions (dB spaces for short). These notions play a prominent role in the theory of Paley-Wiener spaces [15,23]. Since Paley-Wiener spaces are leading examples of dB spaces, questions related to oversampling and undersampling in dB spaces emerge naturally.
Paley-Wiener spaces stem from the Fourier transform of functions with given compact support centred at zero, viz.,

$$\mathcal{PW}_a := \left\{ f(z) = \int_{-a}^{a} e^{izt}\varphi(t)\,dt : \varphi \in L^2(-a,a) \right\}, \qquad a > 0.$$

By the Whittaker-Shannon-Kotel'nikov theorem, any function f(z) ∈ PW_a is decomposed as follows:

$$f(z) = \sum_{n\in\mathbb{Z}} f\!\left(\frac{n\pi}{a}\right) G_a\!\left(z,\frac{n\pi}{a}\right), \qquad G_a(z,t) := \frac{\sin\big(a(z-t)\big)}{a(z-t)}, \qquad (1.1)$$

where the convergence of the series is uniform in any compact subset of C. The function G_a(z,t) is referred to as the sampling kernel.
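A numerical illustration of (1.1) may be helpful here. The sketch below is not part of the paper; the concrete function, the truncation level, and the evaluation point are arbitrary choices. It takes a = π, so that the sampling points are the integers, and `np.sinc(z - n)` plays the role of the kernel G_π(z, n):

```python
# Truncated Whittaker-Shannon-Kotel'nikov series for a function
# band-limited to [-pi, pi].  np.sinc(t) = sin(pi t)/(pi t), so
# np.sinc(z - n) is the sampling kernel G_pi(z, n) of (1.1) with a = pi.
import numpy as np

def f(t):
    # f in PW_pi: sinc(t/2)^2 has band-width pi; its samples decay like 1/n^2
    return np.sinc(t / 2.0) ** 2

n = np.arange(-2000, 2001)          # truncation of the series
z = 0.3                             # arbitrary evaluation point
reconstruction = np.sum(f(n) * np.sinc(z - n))
print(reconstruction, f(z))         # the truncated series approximates f(z)
```

The samples of this particular f decay like 1/n², so the truncated series already matches the direct evaluation to high accuracy.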
In oversampling, the starting point is a function f(z) ∈ PW_a ⊂ PW_b (a < b). Then, in addition to (1.1), one has

$$f(z) = \sum_{n\in\mathbb{Z}} f\!\left(\frac{n\pi}{b}\right) G_b\!\left(z,\frac{n\pi}{b}\right).$$

Moreover, f(z) admits a different representation with a modified sampling kernel G_{ab}(z,t) depending on a and b (see [15, Thm. 7.2.5]):

$$f(z) = \sum_{n\in\mathbb{Z}} f\!\left(\frac{n\pi}{b}\right) G_{ab}\!\left(z,\frac{n\pi}{b}\right). \qquad (1.2)$$

While the convergence of the sampling formula (1.1) is unaffected by l² perturbations of the samples f(nπ/a), formula (1.2) is more robust because it converges even under l∞ perturbations of the samples. That is, if the sequence {ε_n}_{n∈Z} is bounded and one defines

$$\tilde f(z) := \sum_{n\in\mathbb{Z}} \left[ f\!\left(\frac{n\pi}{b}\right) + \varepsilon_n \right] G_{ab}\!\left(z,\frac{n\pi}{b}\right), \qquad (1.3)$$

then, for every compact set K ⊂ C,

$$\sup_{z\in K} |f(z) - \tilde f(z)| \le C \sup_{n\in\mathbb{Z}} |\varepsilon_n|. \qquad (1.4)$$

Oversampling and undersampling are, to some extent, consequences of the fact that the chain of Paley-Wiener spaces PW_s, s ∈ (0,∞), is totally ordered by inclusion. As this is a property shared by all dB spaces in the precise sense of [4, Thm. 35], it is expected that analogous notions should make sense in this latter class of spaces. We note that sampling formulas generalizing (1.1) are known for arbitrary reproducing kernel Hilbert spaces (see e.g. Kramer-type formulas in [7,8,18,20]), dB spaces among them. Analysis of the error due to noisy samples and aliasing, among other sources, in Paley-Wiener spaces goes back at least to [14]. More recent literature on the subject includes, for instance, [1-3,12]. However, to the best of our knowledge, estimates for oversampling and undersampling are not known for dB spaces apart from the Paley-Wiener class.
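The robustness of oversampling can also be seen numerically. The sketch below is an illustration only: the choices a = π/2, b = π, the noise level, and the truncation are arbitrary, and the kernel used is the inverse Fourier transform of a trapezoidal window, which is one standard choice of modified kernel in the Paley-Wiener setting:

```python
# Oversampling sketch: f is band-limited to [-a, a] with a = pi/2 but is
# sampled on the integer grid appropriate for b = pi.  The modified kernel
# g_ab (inverse Fourier transform of a trapezoidal window) decays like
# 1/t^2, so bounded sample errors produce a bounded reconstruction error.
import numpy as np

a, b = np.pi / 2.0, np.pi

def g_ab(t):
    t = np.asarray(t, dtype=float)
    small = np.abs(t) < 1e-8
    ts = np.where(small, 1.0, t)               # avoid 0/0 at t = 0
    val = (np.cos(a * ts) - np.cos(b * ts)) / (np.pi * (b - a) * ts ** 2)
    return np.where(small, (a + b) / (2.0 * np.pi), val)   # limit at t = 0

def f(t):
    return np.sinc(t / 2.0)                    # band-limited to [-pi/2, pi/2]

n = np.arange(-2000, 2001)
z = 0.7
clean = np.pi / b * np.sum(f(n) * g_ab(z - n))           # exact reconstruction
eps = 0.01 * (-1.0) ** n                                  # bounded perturbation
noisy = np.pi / b * np.sum((f(n) + eps) * g_ab(z - n))
print(abs(clean - f(z)), abs(noisy - f(z)))    # small; bounded in terms of eps
```

Because the kernel decays quadratically, the series over the perturbations converges absolutely, which is precisely the mechanism behind the bound (1.4).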
A function f(z) belonging to a dB space B obviously admits a representation in terms of an orthogonal basis. In particular,

$$f(z) = \sum_{t\in\mathrm{spec}(S(\gamma))} f(t)\,\frac{k(z,t)}{k(t,t)}, \qquad (1.5)$$

where k(z,w) is the reproducing kernel of B and S(γ) is a canonical selfadjoint extension of the operator of multiplication by the independent variable in B. The expansion (1.5) is a sampling formula with k(z,t)/k(t,t) being its sampling kernel. Note that (1.1) is a particular realization of (1.5) for the dB space PW_a. In order to obtain oversampling and undersampling estimates in analogy to the Paley-Wiener case, we look into dB spaces of the form

$$B_s := \left\{ f(z) = \int_0^s \varphi(x)\,\xi(x,z)\,dx : \varphi \in L^2(0,s) \right\} \qquad (1.6)$$

for some s ∈ (0,∞), where ξ(x,z) solves the Schrödinger equation with Neumann boundary condition at x = 0 (see Sect. 2).
Here V ∈ L¹(0,s) is a real function. By construction, B_{s'} ⊂ B_s whenever s' < s (for more on this, see [17]).
The space (1.6) is a reproducing kernel Hilbert space,

$$k_s(z,w) = \int_0^s \xi(x,z)\,\overline{\xi(x,w)}\,dx,$$

where k_s(z,w) is the reproducing kernel of the space B_s. If S_s(γ) is a selfadjoint extension of the multiplication operator in B_s, then any f(z) ∈ B_s has the representation

$$f(z) = \sum_{t\in\mathrm{spec}(S_s(\gamma))} f(t)\,\frac{k_s(z,t)}{k_s(t,t)}.$$

Our main results are Theorems 3.6 and 4.7, which can be summarized as follows: given 0 < a < b and any bounded real sequence {ε_t} indexed by spec(S_b(γ)), define

$$\tilde f(z) := \sum_{t\in\mathrm{spec}(S_b(\gamma))} \left[ f(t) + \varepsilon_t \right] K_{ab}(z,t),$$

where K_ab(z,t) is given in (3.6). Then, for every compact set K of C, there is a constant C(a,K,V) > 0 such that

$$\sup_{z\in K} |f(z) - \tilde f(z)| \le C(a,K,V)\,\sup_t |\varepsilon_t| \qquad \text{for all } f(z)\in B_a.$$

We remark that the bound is uniform for f(z) ∈ B_a. Note that K_ab(z,t) is a modified sampling kernel analogous to the one in (1.3).
These results are somewhat limited in several respects. First, we show oversampling relative to the pair B a ⊂ B π , and undersampling relative to the pair B π ⊂ B b (for dB spaces defined according to (1.6)). These particular choices are related to a convenient simplification in the proofs, but our results can be extended to an arbitrary pair B a ⊂ B b by a scaling argument. Second, the sampling formulae use the spectra of selfadjoint operators with Neumann boundary condition at the left endpoint. This choice simplifies the asymptotic formulae for eigenvalues of the associated Schrödinger operator; it can also be removed but at the expense of a somewhat clumsier analysis. In our opinion this extra workload would not add anything substantial to the results. Finally, and more importantly from our point of view, our assumption on the potential functions is a bit too restrictive. In view of [17], we believe that our results should be valid just requiring V ∈ L 1 (0, s), but relaxing our present assumption on V would require some major changes in the details of our proofs. Further generalizations of the results presented here (in particular, involving a wider class of dB spaces) are the subject of a future work.
About the organization of this work: Sect. 2 recalls the necessary elements on de Branges spaces and regular Schrödinger operators. Section 3 deals with oversampling. Undersampling is treated in Sect. 4. The "Appendix" contains some technical results.

dB Spaces and Schrödinger Operators
There are various ways of defining a de Branges space (see [4, Sec. 19], [17, Sec. 2], [21]). We recall the following definition: a Hilbert space of entire functions B is a de Branges space (dB space) when it has a reproducing kernel k(z,w) and is isometrically invariant under the mappings f(z) ↦ f#(z) := \overline{f(\bar z)} and

$$f(z) \mapsto \left(\frac{z-\bar w}{z-w}\right)^{\mathrm{Ord}_w(f)} f(z), \qquad w \in \mathbb{C}\setminus\mathbb{R},$$

where Ord_w(f) is the order of w as a zero of f. The class of dB spaces appearing in this work has the following additional properties:

(a1) for every w ∈ C there exists f(z) ∈ B such that f(w) ≠ 0;

(a2) B is regular, that is, for every w ∈ C and f(z) ∈ B, the difference quotient (f(z) − f(w))/(z − w) again belongs to B.

A distinctive structural property of dB spaces is that the set of dB subspaces of a given dB space is totally ordered by inclusion [4, Thm. 35]. For regular dB spaces (in the sense of (a2)) this means that, if B₁ and B₂ are subspaces of a dB space that are themselves dB spaces, then either B₁ ⊂ B₂ or B₂ ⊂ B₁.

The operator S of multiplication by the independent variable in a dB space B is defined by

$$\mathrm{dom}(S) := \{ f(z) \in B : z f(z) \in B \}, \qquad (Sf)(z) := z f(z). \qquad (2.1)$$

This operator is closed, symmetric and has deficiency indices (1,1). In view of (a1), the spectral core of S is empty (cf. [10, Sec. 4]), i.e., for any z ∈ C, the operator (S − zI)⁻¹ is bounded although, as a consequence of the indices being (1,1), its domain has codimension one. We consider dB spaces such that S is densely defined and denote by S(γ), γ ∈ [0,π), the selfadjoint restrictions of S*. Since the spectrum of each S(γ) is discrete and the corresponding reproducing kernels form an orthogonal basis,

$$B = \bigoplus_{t\in\mathrm{spec}(S(\gamma))} \mathrm{span}\{ k(\cdot,t) \}, \qquad (2.2)$$

where spec(S(γ)) denotes the spectrum of S(γ). Hence, the sampling formula

$$f(z) = \sum_{t\in\mathrm{spec}(S(\gamma))} f(t)\,\frac{k(z,t)}{k(t,t)} \qquad (2.3)$$

holds true. The convergence of this series is in the dB space, which in turn implies uniform convergence in compact subsets of C.
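As an illustration of how a classical space fits this scheme (standard facts about PW_a, recalled here for orientation rather than taken from the present construction), one may check how (2.3) reduces to (1.1):

```latex
% The Paley--Wiener space as a dB space (standard facts, for illustration).
% Reproducing kernel of PW_a:
k_a(z,w) = \frac{\sin\big(a(z-\bar w)\big)}{\pi(z-\bar w)} .
% For the selfadjoint extension S(\gamma) whose spectrum is
% \{ n\pi/a : n \in \mathbb{Z} \}, one has k_a(t,t) = a/\pi, so that the
% general sampling formula (2.3) becomes
f(z) = \sum_{n\in\mathbb{Z}} f\!\left(\tfrac{n\pi}{a}\right)
       \frac{\sin\big(a(z - n\pi/a)\big)}{a(z - n\pi/a)} ,
% which is precisely the Whittaker--Shannon--Kotel'nikov formula (1.1).
```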
The dB spaces under consideration in this work are related to symmetric operators arising from regular Schrödinger differential expressions. The construction is similar to the one developed in [17], although there are other ways of generating dB spaces from differential equations of the Sturm-Liouville type [5].
Consider a differential expression of the form

$$\tau := -\frac{d^2}{dx^2} + V(x), \qquad x \in \mathbb{R}_+,$$

where we assume

(v1) V is real-valued and belongs to L¹(0,s) for arbitrary s > 0.
For each s > 0, τ determines a closed symmetric operator H_s in L²(0,s),

$$(H_s\varphi)(x) := -\varphi''(x) + V(x)\varphi(x),$$
$$\mathrm{dom}(H_s) := \left\{ \varphi \in L^2(0,s) : \varphi,\varphi' \text{ absolutely continuous},\ \tau\varphi \in L^2(0,s),\ \varphi'(0) = 0,\ \varphi(s) = \varphi'(s) = 0 \right\},$$

in which the Neumann boundary condition at x = 0 is built into the domain. This operator is known to have deficiency indices (1,1) and empty spectral core, that is, (H_s − zI)⁻¹ is bounded for every z ∈ C. The selfadjoint extensions of H_s are given by

$$\mathrm{dom}(H_s(\gamma)) := \left\{ \varphi : \varphi,\varphi' \text{ absolutely continuous},\ \tau\varphi \in L^2(0,s),\ \varphi'(0) = 0,\ \varphi(s)\cos\gamma + \varphi'(s)\sin\gamma = 0 \right\}, \qquad \gamma\in[0,\pi).$$

Let ξ : R₊ × C → C be the solution of the eigenvalue problem

$$-\xi'' + V\xi = z\xi, \qquad \xi(0,z) = 1, \quad \xi'(0,z) = 0. \qquad (2.4)$$

(The derivative is taken with respect to the first argument.) The function ξ(x,z) is real entire for any fixed x ∈ R₊ [13, Thm.
The entire functions of the form

$$f(z) = \int_0^s \varphi(x)\,\xi(x,z)\,dx, \qquad (2.5)$$

with φ ∈ L²(0,s), form a dB space B_s with the norm given by

$$\|f\|_{B_s} = \|\varphi\|_{L^2(0,s)}. \qquad (2.6)$$

A straightforward computation shows that the reproducing kernel of B_s is

$$k_s(z,w) = \int_0^s \xi(x,z)\,\overline{\xi(x,w)}\,dx. \qquad (2.7)$$

Remark 1 In view of (2.7), k_s(z,w) and ξ(·,w) are related by the isometry (2.5). Hence, using (2.2) and expression (2.6) for the norm in B_s, one obtains

$$\xi(x,w) = \sum_{t\in\mathrm{spec}(H_s(\gamma))} \frac{k_s(w,t)}{k_s(t,t)}\,\xi(x,t), \qquad (2.8)$$

where the series converges in the L²-norm.
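The solution ξ(x,z) of (2.4) can be computed numerically. The sketch below is a minimal illustration (the solver settings, the check point, and the sample potential are arbitrary choices); for V ≡ 0 the result must coincide with the explicit solution cos(√z x):

```python
# Numerical sketch of the solution xi(x, z) of (2.4):
#   xi'' = (V - z) xi,  xi(0, z) = 1,  xi'(0, z) = 0.
import numpy as np
from scipy.integrate import solve_ivp

def xi(x, z, V):
    """Evaluate xi(x, z) by integrating the ODE from 0 to x."""
    def rhs(t, y):
        return [y[1], (V(t) - z) * y[0]]
    sol = solve_ivp(rhs, (0.0, x), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0][-1]

# Sanity check against the explicit V = 0 solution: cos(sqrt(z) x).
z, x = 4.0, 1.3
val = xi(x, z, V=lambda t: 0.0)
print(val, np.cos(np.sqrt(z) * x))   # the two values agree closely
```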
If r < s, then B_r is a proper dB subspace of B_s. Indeed, {B_r : r ∈ (0,s)} is a chain of dB subspaces of B_s, in accordance with [4, Thm. 35]. The isometry from L²(0,s) onto B_s induced by (2.5) transforms H_s into the operator of multiplication by the independent variable in B_s (see (2.1)); the latter will subsequently be denoted by S_s. Also, the selfadjoint extensions H_s(γ) are transformed into the selfadjoint extensions S_s(γ) of S_s. When referring to unitary invariants (such as the spectrum), we use either H_s(γ) or S_s(γ) interchangeably throughout this text.

Remark 2
The space B_s constructed from L²(0,s) via (2.5) depends on the potential V, which is assumed to satisfy (v1). However, as shown in [17, Thm. 4.1], the set of entire functions in B_s is the same for all V ∈ L¹(0,s); what changes with V is the inner product in B_s. Noteworthily, since the operator S_s of multiplication by the independent variable is defined on its maximal domain (see (2.1)), it always has the same domain and range and acts in the same way; yet, by modifying the metric of the space, each V ∈ L¹(0,s) gives rise to a different family of selfadjoint extensions of S_s. As a consequence, every function in B_s can be sampled by (2.3) using any sequence {λ_n} as sampling points, as long as there exists V ∈ L¹(0,s) such that {λ_n} is the spectrum of some selfadjoint extension of the corresponding operator H_s. This fact can be considered as a generalization of the notion of irregular sampling, quite well studied in Paley-Wiener spaces by means of classical analysis; Kadec's 1/4-theorem is a chief example of results of this kind [9].

Oversampling
The oversampling of a function in B_a is related to the fact that it can be sampled as a function in B_b, and that the sampling kernel can be modified in such a way that the sampling series converges under l∞ perturbations of the samples (see Sect. 1).
Let 0 < a < b < ∞ and V be as in (v1). Any φ ∈ L²(0,a) can be identified with an element of L²(0,b) by means of

$$\varphi \mapsto \chi_{[0,a]}\varphi, \qquad (3.1)$$

where χ_E denotes the characteristic function of a set E. Define

$$R(x) := \chi_{[0,a]}(x) + \frac{b-x}{b-a}\,\chi_{(a,b]}(x), \qquad x \in [0,b]. \qquad (3.2)$$

Taking into account (2.8) with s = b, (3.1) and (3.2) imply

$$\chi_{[0,a]}\varphi = \sum_{t\in\mathrm{spec}(H_b(\gamma))} \frac{f(t)}{k_b(t,t)}\,\xi(\cdot,t), \qquad (3.3)$$

where the convergence is in L²(0,b).
Since R ≡ 1 on [0,a], inserting the expansion (3.3) into (2.5) with s = b shows that every f(z) ∈ B_a admits a sampling series in terms of a modified kernel, which converges uniformly in compact subsets of C provided the following holds.

Hypothesis 3.1 For every sequence {c_t}_{t∈spec(H_b(γ))} ∈ l∞, the series

$$\sum_{t\in\mathrm{spec}(H_b(\gamma))} c_t\,K_{ab}(z,t) \qquad (3.4)$$

converges uniformly in compact subsets of C; equivalently, the series

$$\sum_{t\in\mathrm{spec}(H_b(\gamma))} |K_{ab}(z,t)| \qquad (3.5)$$

converges uniformly in compact subsets of C. Here

$$K_{ab}(z,t) := \frac{1}{k_b(t,t)} \int_0^b R(x)\,\xi(x,t)\,\xi(x,z)\,dx \qquad (3.6)$$

is the modified sampling kernel.
Assume that Hypothesis 3.1 is met, and let {ε_t} ∈ l∞ be any given real sequence indexed by spec(H_b(γ)). In view of (3.4), the function

$$\tilde f(z) := \sum_{t\in\mathrm{spec}(H_b(\gamma))} \left[ f(t) + \varepsilon_t \right] K_{ab}(z,t) \qquad (3.7)$$

is well defined and the defining series converges uniformly in compact subsets of C. Moreover,

$$|f(z) - \tilde f(z)| \le \sup_t |\varepsilon_t| \sum_{t\in\mathrm{spec}(H_b(\gamma))} |K_{ab}(z,t)|.$$

It remains to verify that Hypothesis 3.1 holds. This is performed in two stages: the first one deals with the case V ≡ 0; the second one employs perturbative methods to consider the general case.
If V ≡ 0, the function ξ given in Sect. 2 is

$$\cos(\sqrt z\,x). \qquad (3.8)$$

Whenever we refer to the function ξ corresponding to V ≡ 0, we write the right-hand side of (3.8); we reserve the symbol ξ for the case V ≢ 0. Also, throughout this paper we use the main branch of the square root function. As mentioned in Sect. 1, for the sake of simplicity we assume b = π and fix γ = π/2. A straightforward calculation yields

$$\mathrm{spec}(H_\pi(\pi/2)) = \{ n^2 : n \in \mathbb{N}\cup\{0\} \}. \qquad (3.9)$$

Moreover, by substituting (3.8) into (2.7), we verify that the reproducing kernel k̂_π(z,w) corresponding to the case V ≡ 0 satisfies

$$\hat k_\pi(z,n^2) = \int_0^\pi \cos(\sqrt z\,x)\cos(nx)\,dx, \qquad \hat k_\pi(n^2,n^2) = \frac{\pi}{2} \quad (n\in\mathbb{N}). \qquad (3.10)$$

In the remainder of this section, we denote ⟨·,·⟩_{L²(0,π)} simply as ⟨·,·⟩.

Proposition 3.2 If V ≡ 0, b = π and γ = π/2, then Hypothesis 3.1 holds true.

Proof Consider a compact set K in C such that spec(H_π(π/2)) intersects K only at n₀² with n₀ ∈ N. It will be clear at the end of the proof that there is no loss of generality in this assumption. First note that ⟨cos(√z ·), R(·)cos(n₀ ·)⟩ is uniformly bounded in K (one can use the Cauchy-Schwarz inequality and note that the factor depending on z is continuous in K). On the other hand, Lemma A.5 provides a summable bound for the remaining terms of the series. Thus, taking into account (3.10), the series (3.5) converges uniformly in K.
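As a quick numerical cross-check of (3.9) (illustrative only; the finite-difference discretization below is an assumption of this sketch, not part of the paper), the Neumann-Neumann Laplacian on (0,π) can be discretized and its lowest eigenvalues compared with n²:

```python
# Finite-difference sketch: eigenvalues of -d^2/dx^2 on (0, pi) with
# Neumann conditions at both endpoints approximate {0, 1, 4, 9, ...},
# in agreement with spec(H_pi(pi/2)) = {n^2} for V = 0, cf. (3.9).
import numpy as np
from scipy.linalg import eigh_tridiagonal

N = 2000                        # number of grid points
h = np.pi / N                   # grid spacing
d = np.full(N, 2.0) / h**2      # main diagonal of -d^2/dx^2
d[0] = d[-1] = 1.0 / h**2       # one-sided stencil enforcing phi' = 0
e = np.full(N - 1, -1.0) / h**2
eigvals = eigh_tridiagonal(d, e, eigvals_only=True)
print(np.round(eigvals[:4], 4))  # close to 0, 1, 4, 9
```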
Now, let us address the case of non-zero V satisfying (v2). As before, we set b = π and γ = π/2. Also, we assume spec(H_π(π/2)) = {λ_n}_{n=0}^∞, ordered such that λ_{n−1} < λ_n for all n ∈ N. The subsequent analysis makes use of the following auxiliary functions.
Proof In terms of the functions introduced in Definition 3.3, one writes the decomposition (3.12). It will be shown that each of the five terms on the right-hand side of (3.12) is appropriately bounded. For the first term, one uses the inequality (A.11) of Lemma A.4 and the first inequality of Lemma A.7. The estimate of the second term is obtained by combining (A.12) of Lemma A.4 and the second inequality of Lemma A.7. The third term on the right-hand side of (3.12) is estimated in Lemma A.6. As regards the fourth and fifth terms in (3.12), one proceeds as follows. From Lemma A.3(ii), it follows that ξ(x,λ_n) differs from cos(nx) by O(n⁻¹), uniformly with respect to x ∈ [0,π], for n sufficiently large. Also, |R(x)| ≤ 1 according to (3.2). Therefore, one obtains the bound (3.13) for the fourth term. The bound (3.14) for the remaining term follows by a similar reasoning, taking into account (A.4). By combining the estimates of the first three terms with (3.13) and (3.14), the bound of the statement is established.

Proposition 3.5 Let V be as in (v2). If b = π and γ = π/2, then Hypothesis 3.1 holds true.
Proof From Lemma A.3(iii) we know that k_π(λ_n,λ_n) − k̂_π(n²,n²) = O(n⁻²) as n → ∞. This implies that k_π(λ_n,λ_n) ≥ k̂_π(n²,n²)/2 = π/4 for n sufficiently large, where we have used (3.10). Hence, 1/k_π(λ_n,λ_n) ≤ 4/π for n sufficiently large. Again resorting to Lemma A.3(iii), one obtains, for all z ∈ C, an estimate of the terms of (3.5) in which c₁ : C → R is a positive continuous function. As a consequence of the previous inequality, there exists another positive continuous function c₂ : C → R dominating the terms of (3.5) by the corresponding terms of the series for V ≡ 0. Hence, by Proposition 3.2, the series (3.5) converges uniformly in compact subsets of C.
Arguing as in the paragraph below Hypothesis 3.1, one arrives at the following assertion, in which the oversampling procedure is established (see Sect. 1).

Theorem 3.6 Suppose V obeys (v2) with b = π. Consider B_a with a ∈ (0,π). Then, for every compact set K ⊂ C, there exists a constant C(a,K,V) > 0 such that

$$\sup_{z\in K} |f(z) - \tilde f(z)| \le C(a,K,V)\,\sup_t |\varepsilon_t|$$

for all f(z) ∈ B_a, where ε = {ε_t} is any bounded real sequence and \tilde f(z) is given by (3.7) with b = π and γ = π/2.

Undersampling
In this section, we treat undersampling of functions in B_b\B_a (a < b), with the sampling points given by the spectrum of S_a(γ), as explained in Sect. 1.
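Before the general treatment, the phenomenon can be seen in the classical Paley-Wiener setting. The following sketch is illustrative only (the function, the grids and the truncation are arbitrary choices): a function band-limited to [−π,π] is sampled on the coarser grid appropriate for the band [−π/2,π/2], and the aliased reconstruction differs from the function by a bounded error:

```python
# Classical Paley-Wiener analogue of undersampling (aliasing):
# f lies in PW_pi but is sampled on the grid 2n, appropriate for
# PW_{pi/2}.  The aliased reconstruction g differs from f, yet the
# error remains bounded on compact sets.
import numpy as np

def f(t):
    return np.sinc(t)                 # band-limited to [-pi, pi]

def g(t, N=500):
    # reconstruction from the undersampled data f(2n), using the
    # sampling kernel of PW_{pi/2} (grid spacing 2)
    n = np.arange(-N, N + 1)
    return np.sum(f(2 * n) * np.sinc((t - 2 * n) / 2))

err = abs(g(1.0) - f(1.0))            # aliasing error at t = 1
print(err)                            # nonzero but finite
```

At the sampling points themselves the reconstruction is exact; between them the aliasing error is visible but bounded, which is the behaviour the estimates of this section quantify in the dB setting.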

Hypothesis 4.1
For a < b and each z ∈ C, the series

$$\sum_{t\in\mathrm{spec}(H_a(\gamma))} \frac{k_a(z,t)}{k_a(t,t)}\,\xi(x,t)$$

converges absolutely and uniformly with respect to x ∈ [0,b].

Lemma 4.2 Assume that Hypothesis 4.1 is met. Define

$$\xi_a^{\mathrm{ext}}(x,z) := \sum_{t\in\mathrm{spec}(H_a(\gamma))} \frac{k_a(z,t)}{k_a(t,t)}\,\xi(x,t), \qquad x \in [0,b]. \qquad (4.1)$$
Then, for each x ∈ [0,b], the function z ↦ |ξ_a^{ext}(x,z) − ξ(x,z)| is continuous in C.

Moreover,
(iv) if ψ ∈ L²(0,b) and g(z) ∈ B_b are related by the isometry (2.5), then

$$\tilde g(z) := \sum_{t\in\mathrm{spec}(H_a(\gamma))} g(t)\,\frac{k_a(z,t)}{k_a(t,t)} \qquad (4.3)$$

satisfies

$$\tilde g(z) = \int_0^b \psi(x)\,\xi_a^{\mathrm{ext}}(x,z)\,dx. \qquad (4.5)$$

Proof Item (i) follows from Hypothesis 4.1; this, along with Remark 3, yields (ii). Item (iii) follows from Lemma A.1. To prove (iv), apply the dominated convergence theorem, which holds because of Hypothesis 4.1.

Assume that Hypothesis 4.1 holds true. Suppose that ψ ∈ L²(0,b) and g(z) ∈ B_b are related by the isometry (2.5), that is,

$$g(z) = \int_0^b \psi(x)\,\xi(x,z)\,dx.$$

Then, due to Lemma 4.2(ii) and (4.5),

$$|g(z) - \tilde g(z)| \le \|\psi\|_{L^2(0,b)}\,h_a(z),$$

where the function h_a has been defined in Lemma 4.2(iii). Therefore, for each ψ ∈ L²(0,b), the difference |g(z) − \tilde g(z)| is uniformly bounded in compact subsets of C.
Below we prove that Hypothesis 4.1 holds true when V satisfies (v2) with b > π. As in the previous section, this is performed in two stages: the first deals with the particular case V ≡ 0, and the second treats the general case.
In keeping with the simplification made in the previous section, we consider only the case a = π and γ = π/2.
Using trigonometric identities and Eqs. (2.7) and (3.8), one verifies that

$$\hat k_\pi(z,n^2) = \frac{(-1)^n \sqrt z\,\sin(\sqrt z\,\pi)}{z - n^2} \qquad (4.6)$$

whenever n ∈ N ∪ {0} and z ∈ C\{n²}. Recall that k̂_π denotes the reproducing kernel of B_π associated with V ≡ 0.

Proof Let K be a compact subset of C. As in the proof of Proposition 3.2, assume without loss of generality that n₀² is the only point of spec(H_π(π/2)) in K (n₀ ∈ N). Due to (3.8)-(3.10), it suffices to show the uniform convergence of the series Σ_{n≠n₀} |k̂_π(n²,z)| in K. By (4.6), one obtains the estimate (4.7) for all n ≥ N; we note that c₃ may depend on b and V. The estimate (4.7) in turn implies a bound that holds uniformly with respect to x ∈ [0,b], where c₄ : C → R is another continuous positive function that may also depend on b and V. The claimed assertion now follows from Proposition 4.3.
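The closed form behind (4.6) rests on the elementary identity ∫₀^π cos(√z x)cos(nx)dx = (−1)ⁿ√z sin(√z π)/(z − n²), which can be checked numerically. In the sketch below, the values of z and n are arbitrary test choices:

```python
# Numerical check of the identity
#   int_0^pi cos(sqrt(z) x) cos(n x) dx
#     = (-1)^n sqrt(z) sin(sqrt(z) pi) / (z - n^2),
# which underlies (4.6).  z and n below are arbitrary test values.
import numpy as np
from scipy.integrate import quad

z, n = 2.5, 3
a = np.sqrt(z)
integral, _ = quad(lambda x: np.cos(a * x) * np.cos(n * x), 0.0, np.pi)
closed_form = (-1) ** n * a * np.sin(a * np.pi) / (z - n ** 2)
print(integral, closed_form)   # the two values agree
```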
where g̃(z) is given by (4.5) with a = π, i.e., g̃(z) is given by the series (4.3) with a = π and γ = π/2.

Appendix

If |z − w| < δ and |y − v| < δ, then |θ(z,y) − θ(w,v)| < ε for any (z,y), (w,v) ∈ K × Y. Take w ∈ K such that |z₀ − w| < δ. If v ∈ Y satisfies |ϑ(z₀) − v| < δ, then the corresponding estimate follows in view of (A.2). Due to (A.1) and the fact that θ is nonnegative, the assertion follows.

The following lemma is the analogue of [11, Lemma 2.2] for Neumann-type boundary conditions.

Lemma A.2 Given a > 0, suppose that V ∈ L¹(0,a). Then, for each z ∈ C, the unique solution of the initial value problem (2.4) satisfies

$$\xi(x,z) = \cos(\sqrt z\,x) + \int_0^x \frac{\sin\big(\sqrt z\,(x-t)\big)}{\sqrt z}\,V(t)\,\xi(t,z)\,dt, \qquad (A.3)$$

where sin(√z(x−t))/√z is the corresponding Green's function. This solution satisfies the estimate

$$|\xi(x,z)| \le C\,e^{|\mathrm{Im}\sqrt z|\,x} \qquad (A.4)$$

for some constant C = C(a,V) > 0. Furthermore, the derivative obeys

$$\xi'(x,z) = -\sqrt z\,\sin(\sqrt z\,x) + \int_0^x \cos\big(\sqrt z\,(x-t)\big)\,V(t)\,\xi(t,z)\,dt \qquad (A.5)$$

and satisfies the estimate

$$|\xi'(x,z)| \le C\,(1+|\sqrt z|)\,e^{|\mathrm{Im}\sqrt z|\,x}. \qquad (A.6)$$

Proof Since |cos(√z x)| ≤ exp(|Im √z| x) and

$$\left|\frac{\sin(\sqrt z\,x)}{\sqrt z}\right| \le \frac{C_0\,e^{|\mathrm{Im}\sqrt z|\,x}}{1+|\sqrt z|\,x}$$

for some constant C₀ > 0 (cf. [11, Lemma A.1]), each term of the iteration of (A.3) is bounded by a constant multiple of e^{|Im √z| x}.
An induction argument then shows (A.4). The assertions (A.5) and (A.6) are proved by similar arguments, so we omit the details.
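The iteration underlying this proof can be mimicked numerically. In the sketch below (an illustration under stated assumptions: the Volterra equation is the standard one for Neumann initial conditions, and the potential V(x) = x is hypothetical), Picard iterates of the integral equation are computed on a grid:

```python
# Picard iteration for the Volterra integral equation of Lemma A.2:
#   xi(x) = cos(sqrt(z) x)
#           + int_0^x [sin(sqrt(z)(x - t))/sqrt(z)] V(t) xi(t) dt.
# The potential V(x) = x is a hypothetical example; the iteration
# converges rapidly, as in the induction argument of the proof.
import numpy as np

z = 4.0
rt = np.sqrt(z)
x = np.linspace(0.0, 1.0, 1001)
V = x.copy()                                # hypothetical potential V(x) = x

def trap(fvals, t):
    """Trapezoidal rule on a (possibly single-point) grid."""
    if len(t) < 2:
        return 0.0
    return float(np.sum((fvals[1:] + fvals[:-1]) * np.diff(t)) / 2.0)

xi = np.cos(rt * x)                         # zeroth Picard iterate
for _ in range(20):
    new = np.empty_like(xi)
    for i, xv in enumerate(x):
        t = x[: i + 1]
        kern = np.sin(rt * (xv - t)) / rt   # Green's function factor
        new[i] = np.cos(rt * xv) + trap(kern * V[: i + 1] * xi[: i + 1], t)
    xi = new

print(xi[0])   # equals 1.0, matching the initial condition xi(0, z) = 1
```

One can verify a posteriori that the computed iterate satisfies the differential equation ξ'' = (V − z)ξ up to discretization error.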
The next results refer to the functions ρ, T and F introduced in Definition 3.3, as well as to the reproducing kernel k_b(z,w) from (2.7) and its particular case k̂_b(z,w) when V ≡ 0.
A straightforward computation establishes the first of these estimates. Together with (3.11) and (ii), the resulting inequalities imply a bound that holds uniformly with respect to x ∈ [0,π]. Using integration by parts, along with the fact that ρ(0) = ρ(π) = 0, one obtains the remaining estimate. Here, C₁ > 0 depends on V, while C₂ > 0 and C₃ > 0 may, in addition, depend on a.
Proof Integrating by parts, one obtains

$$|F(x,z)| \le C_V\,e^{|\mathrm{Im}\sqrt z|\,\pi}.$$

On the other hand, since F'(x,z) = V(x)ξ(x,z) − zF(x,z), it follows from (A.4) that the derivative satisfies an estimate of the same type. This implies (A.10). The proof of (A.11) repeats the argument above: integrate by parts and bound the resulting supremum in the same manner. The proof of (A.12) follows a similar reasoning.