1 Introduction

1.1 General Setting and First Results

Given an (essentially) bounded function \(\omega \), called a symbol, on the unit circle \( {\mathbb {T}}=\{v \in {\mathbb {C}}\ \vert \ \left| v\right| =1\}\), the associated Hankel matrix, \(\varGamma (\widehat{\omega })\), is the (bounded) operator on \(\ell ^{2}(\mathbb {Z_{+}})\), \({\mathbb {Z}}_{+}=\{0,1,2,\ldots \}\), whose “matrix” entries are

$$\begin{aligned} \varGamma (\widehat{\omega })_{j,k}=\widehat{\omega }(j+k),\quad j, k \ge 0, \end{aligned}$$

where \(\widehat{\omega }\) denotes the sequence of Fourier coefficients

$$\begin{aligned} \widehat{\omega }(j)=\int _{0}^{2\pi }\omega (e^{ i \vartheta })e^{- i j\vartheta }\frac{d\vartheta }{2\pi },\ \ \ j \in {\mathbb {Z}}. \end{aligned}$$

The matrix \(\varGamma (\widehat{\omega })\) is always symmetric; it is self-adjoint if and only if the sequence \(\widehat{\omega }\) is real-valued. For instance, this is the case when \(\omega \) satisfies the following symmetry condition:

$$\begin{aligned} \omega (v)=\overline{\omega (\overline{v})},\ \ \ v \in {\mathbb {T}}. \end{aligned}$$
(1.1)

In this paper, we consider symbols in the class of piecewise continuous functions on \({\mathbb {T}}\), denoted by \(PC({\mathbb {T}})\), i.e. those symbols \(\omega \) for which the limits

$$\begin{aligned} \omega (z+)=\lim _{\varepsilon \rightarrow 0+}\omega (z e^{i \varepsilon }),\ \ \ \omega (z-)=\lim _{\varepsilon \rightarrow 0+}\omega (z e^{-i \varepsilon }), \end{aligned}$$
(1.2)

exist and are finite for all \(z \in {\mathbb {T}}\). The points \(z \in {\mathbb {T}}\) at which the quantity

$$\begin{aligned} \varkappa _{z}(\omega )=\frac{\omega (z+)-\omega (z-)}{2} \end{aligned}$$

is non-zero are called the jump discontinuities of \(\omega \), and \(\varkappa _{z}(\omega )\) is the half-height of the jump of the symbol at z. Due to the presence of jump discontinuities, Hankel matrices with these symbols are non-compact. The compactness of \({\mathbb {T}}\) and the existence of the limits in (1.2) can be used to show that the sets

$$\begin{aligned} \Omega _{s}=\{z \in {\mathbb {T}}\vert \left| \varkappa _{z}(\omega )\right|>s\}, \ \ \ s>0, \end{aligned}$$

are finite, and so the set of jump discontinuities of \(\omega \), denoted by \(\Omega \), is at most countable. Furthermore, if the symbol satisfies (1.1), then \(\Omega \) is symmetric with respect to the real axis and for any \(z \in \Omega \)

$$\begin{aligned} \varkappa _{\overline{z}}(\omega )=-\overline{\varkappa _{z}(\omega )}, \end{aligned}$$

whereby we obtain that \(\left| \varkappa _{z}(\omega )\right| =\left| \varkappa _{\overline{z}}(\omega )\right| \), and at \(z=\pm 1\), \(\varkappa _{z}(\omega )\) is purely imaginary.

Hankel matrices with piecewise continuous symbols still attract attention in both the operator theory and spectral theory communities, see for instance [18, 19] and references therein. S. Power [16] showed that the essential spectrum of such matrices consists of bands depending only on the heights of the jumps of the symbol and gave the following identity:

$$\begin{aligned} {{\,\mathrm{spec}\,}}_{ess}\left( \varGamma (\widehat{\omega })\right)= & {} \left[ 0, -i\varkappa _{1}(\omega )\right] \cup \left[ 0, -i\varkappa _{-1}(\omega )\right] \cup \nonumber \\&\bigcup _{z \in \Omega \backslash \{\pm 1\}}\left[ -i(\varkappa _{z}(\omega )\varkappa _{\overline{z}}(\omega ))^{1/2},i(\varkappa _{z}(\omega )\varkappa _{\overline{z}}(\omega ))^{1/2}\right] ,\quad \quad \end{aligned}$$
(1.3)

where the notation \([a,b]\), \(a,b \in {\mathbb {C}}\), denotes the line segment joining a and b. Assuming that the symbol has finitely many jumps and is, say, Lipschitz continuous on the left and on the right of each jump, a more detailed picture of the absolutely continuous (a.c.) spectrum of \(\left| \varGamma (\widehat{\omega })\right| =\sqrt{\varGamma (\widehat{\omega })^{*}\varGamma (\widehat{\omega })}\) is obtained in [18], where the following formula is proved:

$$\begin{aligned} {{\,\mathrm{spec}\,}}_{ac}\left( \left| \varGamma (\widehat{\omega })\right| \right) =\bigcup _{z \in \Omega }\left[ 0, \left| \varkappa _{z}(\omega )\right| \right] . \end{aligned}$$

Furthermore, it is shown that each band contributes 1 to the multiplicity of the a.c. spectrum.

Example

First examples of symbols fitting in this scheme are the following

$$\begin{aligned} \gamma (e^{i\vartheta })=i\pi ^{-1} e^{-i\vartheta }(\pi -\vartheta ),\ \ \psi (e^{i\vartheta })=2 \mathbb {1}_{E}(e^{i\vartheta }),\quad \vartheta \in [0, 2\pi ). \end{aligned}$$
(1.4)

where \(\mathbb {1}_{E}\) is the characteristic function of the arc \(E=\{e^{i\vartheta } \in {\mathbb {T}}:\,\cos \vartheta >0\}\). It is clear that both \( \gamma , \psi \in PC({\mathbb {T}})\), their jumps occur at \(z=1\) and \(z=\pm i\) respectively, and \(\varkappa _{1}(\gamma )=i\), \(\varkappa _{\pm i}(\psi )=\mp 1\). Simple integration by parts shows that

$$\begin{aligned} \widehat{\gamma }(j)=\frac{1}{\pi (j+1)},\ \ \widehat{\psi }(j)=\frac{2\sin (\pi j/2)}{\pi j}, \quad j\ge 0, \end{aligned}$$

with the understanding that \(\widehat{\psi }(0)=1\). Power’s result in (1.3) in these cases gives

$$\begin{aligned} {{\,\mathrm{spec}\,}}_{ess}\left( \varGamma (\widehat{\gamma })\right) =\left[ 0,\, 1\right] , \quad {{\,\mathrm{spec}\,}}_{ess}\left( \varGamma (\widehat{\psi }\, )\right) =\left[ -1,\, 1\right] . \end{aligned}$$
(1.5)

The matrix \(\varGamma (\widehat{\gamma })\), known as the Hilbert matrix, has simple a.c. spectrum coinciding with the interval [0, 1] and a full spectral decomposition was exhibited in [22]. In [11], the authors perform a more detailed spectral analysis of \(\varGamma (\widehat{\psi })\) and show that its spectrum is purely a.c. of multiplicity one and coincides with the interval \([-1,\, 1]\).
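These classical statements are easy to probe numerically. The following sketch (our own illustration, not part of the cited works) checks the formula for \(\widehat{\gamma }\) by quadrature and verifies that finite sections of the Hilbert matrix have spectrum inside the limiting interval [0, 1]:

```python
import numpy as np

# Quadrature check of gamma-hat(j) = 1/(pi*(j+1)) for the symbol
# gamma(e^{i theta}) = (i/pi) e^{-i theta} (pi - theta).
K = 4096
theta = 2 * np.pi * (np.arange(K) + 0.5) / K       # midpoint rule on [0, 2pi)
gamma = 1j / np.pi * np.exp(-1j * theta) * (np.pi - theta)
for j in (0, 1, 5):
    coeff = np.mean(gamma * np.exp(-1j * j * theta))
    assert abs(coeff - 1 / (np.pi * (j + 1))) < 1e-3

# Finite sections of the Hilbert matrix stay inside [0, 1] = spec(Gamma(gamma-hat)),
# since they are compressions of a positive operator of norm 1.
N = 200
n = np.arange(N)
H = 1.0 / (np.pi * (n[:, None] + n[None, :] + 1))
eig = np.linalg.eigvalsh(H)
print(eig.min(), eig.max())   # within [0, 1]
```

The quadrature error is only \(O(1/K)\) here because \(\gamma \) has a jump at \(z=1\); the tolerance above accounts for this.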

For \(N\ge 1\), let \(\varGamma ^{(N)}(\widehat{\omega })\) be the \(N\times N\) Hankel matrix

$$\begin{aligned} \varGamma ^{(N)}(\widehat{\omega })=\{\widehat{\omega }(j+k)\}_{j,k= 0}^{N-1}. \end{aligned}$$

We wish to give a description of the relationship between the spectrum of the infinite matrix \(\varGamma (\widehat{\omega })\) and that of its truncation \(\varGamma ^{(N)}(\widehat{\omega })\). More specifically:

  (i)

    for a non-self-adjoint Hankel matrix, we study the distribution of the singular values of \(\varGamma ^{(N)}(\widehat{\omega })\) inside the spectrum of \(\left| \varGamma (\widehat{\omega })\right| \);

  (ii)

    in the self-adjoint setting, we study the distribution of the eigenvalues of \(\varGamma ^{(N)}(\widehat{\omega })\) inside the spectrum of \(\varGamma (\widehat{\omega })\).

To do so, for a non-self-adjoint Hankel matrix \(\varGamma (\widehat{\omega })\) we study the asymptotic behaviour of the singular-value counting function

$$\begin{aligned} \mathsf {n}(t;\varGamma ^{(N)}(\widehat{\omega }))=\#\{n:\ s_{n}(\varGamma ^{(N)}(\widehat{\omega }))>t\},\ \ \ t>0, \end{aligned}$$

as \(N \rightarrow \infty \). Here \(\{s_{n}(\varGamma ^{(N)}(\widehat{\omega }))\}_{n\ge 1}\) is the sequence of singular values of \(\varGamma ^{(N)}(\widehat{\omega })\). In particular, we study the logarithmic spectral density of \(\left| \varGamma (\widehat{\omega })\right| \), defined as

$$\begin{aligned} \mathsf {LD}_{\square }(t;\varGamma (\widehat{\omega })):=\lim _{N \rightarrow \infty }\frac{\mathsf {n}(t; \varGamma ^{(N)}(\widehat{\omega }))}{\log (N)}. \end{aligned}$$
(1.6)

For a self-adjoint \(\varGamma (\widehat{\omega })\), its spectrum, \({{\,\mathrm{spec}\,}}(\varGamma (\widehat{\omega }))\), is a subset of the real line and so we look at how the positive and negative eigenvalues of \(\varGamma ^{(N)}(\widehat{\omega })\) distribute inside \({{\,\mathrm{spec}\,}}(\varGamma (\widehat{\omega }))\). To this end, we analyze the behaviour of the eigenvalue counting functions

$$\begin{aligned} \mathsf {n}_{\pm }(t;\varGamma ^{(N)}(\widehat{\omega }))=\#\{n:\ \lambda ^{\pm }_{n}(\varGamma ^{(N)}(\widehat{\omega }))>t\},\ \ \ t>0, \end{aligned}$$

as \(N \rightarrow \infty \). Here \(\{\lambda ^{\pm }_{n}(\varGamma ^{(N)}(\widehat{\omega }))\}_{ n\ge 1}\) are the sequences of positive eigenvalues of \(\pm \varGamma ^{(N)}(\widehat{\omega })\) respectively. In this setting, we study the functions

$$\begin{aligned} \mathsf {LD}_{\square }^{\pm }(t;\varGamma (\widehat{\omega })):=\lim _{N \rightarrow \infty }\frac{\mathsf {n}_{\pm }(t; \varGamma ^{(N)}(\widehat{\omega }))}{\log (N)}. \end{aligned}$$
(1.7)

Similarly to the non-self-adjoint setting, we call the function \(\mathsf {LD}^{+}_{\square }\) (resp. \(\mathsf {LD}^{-}_{\square }\)) in (1.7) the positive (resp. negative) logarithmic spectral density of \(\varGamma (\widehat{\omega })\).

The \(\square \) appearing as an index in the definitions of the logarithmic spectral densities in (1.6) and (1.7) has been chosen to stress the fact that, a priori, these quantities depend on our choice to truncate the infinite matrix \(\varGamma (\widehat{\omega })\) to its upper \(N\times N\) square. Furthermore, the terminology we use for the functions \(\mathsf {LD}_{\square }\) and \(\mathsf {LD}^{\pm }_{\square }\) comes from the fact that we are only studying a logarithmically small portion of the singular values (or eigenvalues) of the matrix \(\varGamma ^{(N)}(\widehat{\omega })\). Their definitions are motivated by the results of Widom (see [23, Theorem 4.3]) for the Hilbert matrix \(\varGamma (\widehat{\gamma })\), who showed that

$$\begin{aligned} \mathsf {LD}_{\square }(t;\varGamma (\widehat{\gamma }))= & {} \mathsf {c}(t), \end{aligned}$$
(1.8)
$$\begin{aligned} \mathsf {LD}_{\square }^{-}(t;\varGamma (\widehat{\gamma }))= & {} 0,\quad \mathsf {LD}^{+}_{\square }(t;\varGamma (\widehat{\gamma }))=\mathsf {c}(t). \end{aligned}$$
(1.9)

Here \(\mathsf {c}(t):=0\) whenever \(t \notin (0,1]\) and

$$\begin{aligned} \mathsf {c}(t):=\frac{1}{\pi ^{2}}{{\,\mathrm{arcsech}\,}}(t)=\frac{1}{\pi ^{2}}\log \left( \frac{1+\sqrt{1-t^{2}}}{t} \right) , \quad t \in (0,1]. \end{aligned}$$
(1.10)

We note that a factor of \(2\pi \) is missing in the statement of [23, Theorem 4.3]. The aim of this paper is to extend (1.8) to a general symbol \(\omega \in PC({\mathbb {T}})\). In particular, for a non-self-adjoint Hankel matrix, we aim to show that

$$\begin{aligned} \mathsf {LD}_{\square }(t;\varGamma (\widehat{\omega }))=\sum _{z\in \Omega }\mathsf {c}\left( t\left| \varkappa _{z}(\omega )\right| ^{-1}\right) , \end{aligned}$$
(1.11)

where \(\mathsf {c}\) is the function defined in (1.10). Recall that the symbol \(\psi \) defined in (1.4) has jumps at \(\pm i\) whose half-heights are \(\varkappa _{\pm i}(\psi )=\mp 1\), so for the Hankel matrix \(\varGamma (\widehat{\psi })\) the formula (1.11) yields

$$\begin{aligned} \mathsf {LD}_{\square }(t;\varGamma (\widehat{\psi }))=2 \mathsf {c}\left( t\right) . \end{aligned}$$
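Widom's formula (1.8) can be illustrated numerically for the Hilbert matrix. The following sketch (our own illustration; the section sizes are ad hoc, and the convergence in (1.6) is only logarithmic, so this shows the order of magnitude rather than the limit) evaluates \(\mathsf {c}\) and the counting function \(\mathsf {n}(t;\varGamma ^{(N)}(\widehat{\gamma }))\):

```python
import numpy as np

def c(t):
    """The function c of (1.10); arcsech(t) = arccosh(1/t), zero outside (0, 1]."""
    return np.arccosh(1.0 / t) / np.pi**2 if 0.0 < t <= 1.0 else 0.0

def counting(t, N):
    """n(t; Gamma^(N)(gamma-hat)): eigenvalue counting for the truncated
    (symmetric, positive) Hilbert matrix."""
    j = np.arange(N)
    H = 1.0 / (np.pi * (j[:, None] + j[None, :] + 1))
    return int(np.sum(np.linalg.eigvalsh(H) > t))

t = 0.3
for N in (200, 800):
    # the ratio approaches c(t) only at a logarithmic rate
    print(N, counting(t, N) / np.log(N), c(t))
```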

For self-adjoint Hankel matrices, we extend the result in (1.9) to symbols \(\omega \in PC({\mathbb {T}})\) satisfying (1.1) and obtain

$$\begin{aligned} \mathsf {LD}^{\pm }_{\square }(t;\varGamma (\widehat{\omega }))= & {} \mathsf {c}\left( t\left| \varkappa _{1}(\omega )\right| ^{-1}\right) \mathbb {1}_{\pm }(-i\varkappa _{1}(\omega ))+\mathsf {c}\left( t\left| \varkappa _{-1}(\omega )\right| ^{-1}\right) \mathbb {1}_{\pm }(-i\varkappa _{-1}(\omega ))\nonumber \\&+\sum _{z\in \Omega ^{+}}{\mathsf {c}\left( t\left| \varkappa _{z}(\omega )\right| ^{-1}\right) }, \end{aligned}$$
(1.12)

where \(\Omega ^{+}=\{z\in \Omega \ \vert {{\,\mathrm{Im}\,}}z>0\}\), and \(\mathbb {1}_{\pm }\) is the indicator function of the half-line \((0, \pm \infty )\). Again, the function \(\mathsf {c}\) has been defined in (1.10). In particular, for the symbol \(\psi \) in (1.4), we obtain that

$$\begin{aligned} \mathsf {LD}^{\pm }_{\square }(t;\varGamma (\widehat{\psi }))=\mathsf {c}\left( t\right) . \end{aligned}$$

A natural question that we also address here is that of the universality of the limits in (1.6) and (1.7). In other words, we investigate whether they depend on the choice of “regularisation” of the matrix \(\varGamma (\widehat{\omega })\). For instance, the main results of this paper, Theorems 1.1 and 1.2 below, tell us that the singular values of the matrix \(\varGamma ^{(N)}(\widehat{\omega })\) and of the regularised matrix

$$\begin{aligned} \varGamma _{N}(\widehat{\omega })=\left\{ e^{-\frac{j+k}{N}}\widehat{\omega }(j+k)\right\} _{j,k\ge 0},\quad N\ge 1, \end{aligned}$$
(1.13)

have the same distribution for large values of N.
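This universality can be probed numerically. The sketch below (our own illustration; the cut-off M for the regularised matrix is an ad hoc numerical choice, justified by the exponential decay of its entries) compares the singular-value counting functions of the truncation \(\varGamma ^{(N)}(\widehat{\gamma })\) and of a finite section of the regularised matrix \(\varGamma _{N}(\widehat{\gamma })\) from (1.13) for the Hilbert symbol:

```python
import numpy as np

def hilbert_section(N):
    """Upper N x N corner of the Hilbert matrix Gamma(gamma-hat)."""
    j = np.arange(N)
    return 1.0 / (np.pi * (j[:, None] + j[None, :] + 1))

def regularised_section(N, M):
    """M x M section of the regularised matrix (1.13); for M >> N the
    discarded tail is numerically negligible thanks to the factor e^{-(j+k)/N}."""
    j = np.arange(M)
    s = j[:, None] + j[None, :]
    return np.exp(-s / N) / (np.pi * (s + 1))

t, N = 0.4, 150
n_sq = int(np.sum(np.linalg.svd(hilbert_section(N), compute_uv=False) > t))
n_reg = int(np.sum(np.linalg.svd(regularised_section(N, 8 * N), compute_uv=False) > t))
print(n_sq, n_reg)   # the two counts differ only by o(log N)
```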

1.2 Schur–Hadamard Multipliers

The truncation of a matrix to its finite \(N\times N\) upper block and the “matrix regularisation” in (1.13) are examples of Schur–Hadamard multipliers, defined below.

For a bounded sequence \((\tau (j,k))_{j,k\ge 0}\), called a multiplier, and a bounded operator A on \(\ell ^{2}(\mathbb {Z_{+}})\), the Schur–Hadamard multiplication of \(\tau \) and A is the operator on \(\ell ^{2}(\mathbb {Z_{+}})\), \(\tau \star A\), formally defined through the quadratic form

$$\begin{aligned} ((\tau \star A)e_{j}, e_{k})=\tau (j,k)(Ae_{j},e_{k}),\ \ \ j, k\ge 0, \end{aligned}$$
(1.14)

where \(e_{j}\) is the j-th vector of the standard basis of \(\ell ^{2}(\mathbb {Z_{+}})\). Various authors in the literature, [2, 4, 14], have addressed the issue of establishing how properties of \(\tau \) translate into the boundedness of this operation on the space of bounded operators and the Schatten classes \(\mathfrak {S}_{p}\) (for a definition see Sect. 2 below). To do so, they have studied the operator norms

$$\begin{aligned} \Vert \tau \Vert _{\mathfrak {M}}= & {} \sup _{\Vert A\Vert =1}\Vert \tau \star A\Vert , \end{aligned}$$
(1.15)
$$\begin{aligned} \Vert \tau \Vert _{\mathfrak {M}_{p}}= & {} \sup _{\Vert A\Vert _{\mathfrak {S}_{p}}=1}\Vert \tau \star A\Vert _{\mathfrak {S}_{p}},\quad 1\le p\le \infty . \end{aligned}$$
(1.16)

Using the duality of \(\mathfrak {S}_{p}\)-classes, it is possible to show that the following identities hold

$$\begin{aligned} \Vert \tau \Vert _{\mathfrak {M}_{1}}= & {} \Vert \tau \Vert _{\mathfrak {M}_{\infty }}=\Vert \tau \Vert _{\mathfrak {M}}, \end{aligned}$$
(1.17)
$$\begin{aligned} \Vert \tau \Vert _{\mathfrak {M}_{p}}= & {} \Vert \tau \Vert _{\mathfrak {M}_{q}},\quad 1< p,\, q<\infty ,\, p^{-1}+q^{-1}=1, \end{aligned}$$
(1.18)

and so it is sufficient to study the boundedness of Schur–Hadamard multiplication on \(\mathfrak {S}_{p}\) for \(p\ge 2\). The case of \(p=2\) is somewhat trivial. In fact, the structure of \(\mathfrak {S}_{2}\) gives that any bounded sequence \(\tau \) is a bounded Schur–Hadamard multiplier and furthermore

$$\begin{aligned} \Vert \tau \Vert _{\mathfrak {M}_{2}}=\sup _{j,k\ge 0}\left| \tau (j,k)\right| . \end{aligned}$$
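The finite-dimensional analogue of this identity is elementary: entrywise, \(|\tau (j,k)A_{j,k}|\le \sup |\tau |\cdot |A_{j,k}|\), and the \(\mathfrak {S}_{2}\)-norm is the \(\ell ^{2}\)-norm of the entries. A minimal sketch of the bound (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def schur(tau, A):
    """Entrywise (Schur-Hadamard) product: the finite analogue of (1.14)."""
    return tau * A

A = rng.standard_normal((50, 50))
tau = rng.uniform(-1, 1, size=(50, 50))
lhs = np.linalg.norm(schur(tau, A))            # Frobenius norm = S_2 norm
rhs = np.abs(tau).max() * np.linalg.norm(A)    # sup |tau(j,k)| * ||A||_{S_2}
print(lhs <= rhs)   # True: the bound ||tau||_{M_2} = sup |tau(j,k)| in action
```

Equality in the supremum is attained by matrices supported on an entry where \(|\tau |\) is maximal.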

For a general \(1<p<\infty ,\, p\ne 2\), not much is known with regard to the finiteness of \(\Vert \tau \Vert _{\mathfrak {M}_p}\). However, a necessary and sufficient condition for the boundedness of a Schur–Hadamard multiplier on the space of bounded operators (and, as a consequence of (1.17), on \(\mathfrak {S}_{1}\) and \(\mathfrak {S}_{\infty }\)) is known and can be found in [5].

For the purposes of this paper, we will consider the Schur–Hadamard multiplier \(\tau \) in (1.14) as the restriction to \({\mathbb {Z}}_{+}^{2}\) of a bounded function defined on \([0,\, \infty )^{2}\). For \(N\ge 1\) set \(\tau _{N}(j,k)=\tau (jN^{-1}, kN^{-1})\). If \(\tau \) is such that the sequence \((\tau _{N})_{N\ge 1}\) satisfies

$$\begin{aligned} \sup _{N\ge 1}\Vert \tau _{N}\Vert _{\mathfrak {M}}<\infty , \end{aligned}$$
(1.19)

we say that \(\tau \) induces a uniformly bounded multiplier. An easy example of such a multiplier is the one inducing the truncation of an infinite matrix to its upper \(N\times N\) block. To see this, take the function

$$\begin{aligned} \tau _{\square }(x,y)=\mathbb {1}_{\square }(x,y), \end{aligned}$$
(1.20)

where \(\mathbb {1}_{\square }\) is the characteristic function of the half-open unit square \([0,1)^{2}\). For any bounded operator A, \(\tau _{N}\star A\) is the truncation of A to its upper \(N\times N\) block, and so for any \(N \ge 1\)

$$\begin{aligned} \Vert \tau _{N}\Vert _{\mathfrak {M}}= 1. \end{aligned}$$
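In finite dimensions, the two displayed facts amount to the following sketch (our own illustration), where the multiplier grid is sampled as \(\tau _{N}(j,k)=\tau _{\square }(j/N, k/N)\):

```python
import numpy as np

rng = np.random.default_rng(1)

def tau_square_N(N, M):
    """Sampled multiplier (tau_square)_N on an M x M grid: 1 iff j, k < N."""
    j = np.arange(M)
    return ((j[:, None] < N) & (j[None, :] < N)).astype(float)

M, N = 300, 40
A = rng.standard_normal((M, M))
T = tau_square_N(N, M) * A          # upper N x N corner of A, zeros elsewhere
print(np.allclose(T[:N, :N], A[:N, :N]), not T[N:, :].any())
# A corner compression never increases the operator norm, so ||tau_N||_M = 1:
print(np.linalg.norm(T, 2) <= np.linalg.norm(A, 2))
```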

We discuss some more examples of Schur–Hadamard multipliers below.

1.3 Statement of the Main Results

As we anticipated, our main results are concerned not only with the existence of the limits in (1.6) and (1.7), but also with their universality. In other words, for a Hankel matrix \(\varGamma (\widehat{\omega })\) and a given multiplier \(\tau \), we show that under some mild assumptions on \(\tau \), see (A)–(C) below, the function

$$\begin{aligned} \mathsf {LD}_{\tau }(t;\varGamma (\widehat{\omega })):=\lim _{N\rightarrow \infty }\frac{\mathsf {n}(t;\tau _{N}\star \varGamma (\widehat{\omega }))}{\log N},\quad t>0, \end{aligned}$$
(1.21)

is independent of the choice of \(\tau \). Similarly, for a self-adjoint Hankel matrix and a multiplier \(\tau \) such that \(\tau (x,y)=\overline{\tau (y,x)}\), we show that the same is true for the functions

$$\begin{aligned} \mathsf {LD}_{\tau }^{\pm }(t;\varGamma (\widehat{\omega })):=\lim _{N\rightarrow \infty }\frac{\mathsf {n}_{\pm }(t;\tau _{N}\star \varGamma (\widehat{\omega }))}{\log N},\quad t>0. \end{aligned}$$
(1.22)

Note that when \(\tau =\tau _{\square }\) as in (1.20), the functions \(\mathsf {LD}_{\tau }(t;\varGamma (\widehat{\omega })),\, \mathsf {LD}_{\tau }^{\pm }(t;\varGamma (\widehat{\omega }))\) are precisely those defined in (1.6) and (1.7).

Let us state the following assumptions on \(\tau \):

  (A)

    \(\tau \) induces a uniformly bounded Schur–Hadamard multiplier, i.e. (1.19) holds;

  (B)

    \(\tau (0,0)=1\) and for some \(\varepsilon >0\) and some \(\beta >1/2\), there exists \(C_{\beta }>0\), so that

    $$\begin{aligned} \left| \tau (x, y)-1\right| \le C_{\beta }\left| \log (x+y)\right| ^{-\beta }, \quad \forall \, 0\le x,\,y\le \varepsilon ; \end{aligned}$$
  (C)

    for some \(\alpha >1/2\) one can find \(C_{\alpha }\) so that

    $$\begin{aligned} \left| \tau (x,y)\right| \le C_{\alpha }\left( \log (x+y+2)\right) ^{-\alpha },\ \ \ \forall \, x,\, y\ge 0. \end{aligned}$$

Then (1.11) is a particular case of the following:

Theorem 1.1

Let \(\tau \) be a multiplier satisfying (A)–(C). Let \(\omega \in PC({\mathbb {T}})\) and \(\Omega \) be the set of its discontinuities. Then

$$\begin{aligned} \mathsf {LD}_{\tau }(t;\varGamma (\widehat{\omega }))=\sum _{z\in \Omega }\mathsf {c}\left( t\left| \varkappa _{z}(\omega )\right| ^{-1}\right) \end{aligned}$$
(1.23)

where \(\mathsf {c}(t)\) is the function defined in (1.10).

Analogously, for the self-adjoint case, (1.12) is a particular case of the theorem below:

Theorem 1.2

Let \(\tau \) satisfy conditions (A)–(C) and be such that \(\tau (x, y)=\overline{\tau (y,x)}.\) Suppose \(\omega \in PC({\mathbb {T}})\) satisfies (1.1) and let \(\Omega ^{+}=\{z \in \Omega \, \vert \, {{\,\mathrm{Im}\,}}z>0\}\). Then

$$\begin{aligned} \mathsf {LD}_{\tau }^{\pm }(t;\varGamma (\widehat{\omega }))= & {} \sum _{z\in \Omega ^{+}}{\mathsf {c}\left( t\left| \varkappa _{z}(\omega )\right| ^{-1}\right) }\nonumber \\&+\mathsf {c}\left( t\left| \varkappa _{1}(\omega )\right| ^{-1}\right) \mathbb {1}_{\pm }(-i\varkappa _{1}(\omega ))\nonumber \\&+\mathsf {c}\left( t\left| \varkappa _{-1}(\omega )\right| ^{-1}\right) \mathbb {1}_{\pm }(-i\varkappa _{-1}(\omega )), \end{aligned}$$
(1.24)

where \(\mathsf {c}(t)\) is the function defined in (1.10) and \(\mathbb {1}_{\pm }\) is the characteristic function of the half-line \((0, \pm \infty )\).

1.4 Remarks

  (A)

    It is clear that Theorems 1.1 and 1.2 generalise the result of Widom in [23], stated earlier in (1.8) and (1.9), to any multiplier \(\tau \) satisfying (A)–(C). In both instances, we only describe the behaviour of a logarithmically small portion of the spectrum of \(\tau _{N}\star \varGamma (\widehat{\omega })\), as most of its points lie in a vicinity of 0.

  (B)

    Both Theorems 1.1 and 1.2 deal with a rather general class of symbols and for this reason we cannot say more about the error term in the asymptotic expansion of the functions \(\mathsf {n}, \mathsf {n}_{\pm }\). In fact, we can only write

    $$\begin{aligned} \mathsf {n}(t; \tau _{N}\star \varGamma (\widehat{\omega }))=\log (N)\sum _{z \in \Omega }\mathsf {c}(t\left| \varkappa _{z}(\omega )\right| ^{-1})+o(\log (N)),\quad N\rightarrow \infty . \end{aligned}$$

    If, however, we were to restrict our attention to symbols with finitely many jumps and some degree of smoothness away from them (say, Lipschitz continuity), we would obtain a more precise estimate, see [8]; the trade-off would be that our results become less general.

  (C)

    Studying the spectral density of operators is common to many areas of spectral analysis. In particular, our results can be put in parallel to well-known results in the spectral theory of Schrödinger operators, where the existence and universality of the density of states is a well-studied problem for a wide class of potentials, see [7] and [10, Section 5] for an introduction and references therein for more on this subject.

  (D)

    Both Theorems 1.1 and 1.2 assume that the multiplier \(\tau \) induces a uniformly bounded multiplier on the space of bounded operators. However, this condition can be substantially weakened in two different ways.

Firstly, we can weaken assumption (A) on the multiplier \(\tau \) by assuming that for some finite \(p>1\), \(\tau \) induces a uniformly bounded Schur–Hadamard multiplier on \(\mathfrak {S}_{p}\), or in other words that

$$\begin{aligned} \sup _{N\ge 1}\Vert \tau _{N}\Vert _{\mathfrak {M}_{p}}=\sup _{N\ge 1}\left( \sup _{\Vert A\Vert _{\mathfrak {S}_{p}}=1}\Vert \tau _{N}\star A\Vert _{\mathfrak {S}_{p}}\right) <\infty . \end{aligned}$$
(1.25)

However, as a trade-off, we need to impose more stringent conditions on the symbol, as the following statement shows:

Proposition 1.3

Suppose \(\tau \) satisfies (1.25) as well as Assumptions (B) and (C). If the symbol \(\omega \) can be written as

$$\begin{aligned} \omega (v)=-i\sum _{z \in \Omega }\varkappa _{z}(\omega )\gamma (\overline{z}v)+\eta (v),\quad v\in {\mathbb {T}}, \end{aligned}$$
(1.26)

where \(\Omega \) is a finite subset of \({\mathbb {T}}\), \(\gamma \) is the symbol in (1.4) and \(\eta \) is a symbol for which \(\varGamma (\widehat{\eta }) \in \mathfrak {S}_{p}\), then (1.23) holds. Furthermore, if \(\tau (x,y)=\overline{\tau (y,x)}\) and \(\omega \) also satisfies (1.1), then (1.24) holds.

Secondly, we can assume that \(\tau \) only induces a uniformly bounded Schur–Hadamard multiplier on the space of bounded Hankel matrices, i.e. that

$$\begin{aligned} \sup _{N\ge 1}\left( \sup _{\Vert \varGamma (\widehat{\omega })\Vert =1}\Vert \tau _{N}\star \varGamma (\widehat{\omega })\Vert \right) <\infty . \end{aligned}$$
(1.27)

In this case, Theorems 1.1 and 1.2 still hold in full generality and we have the following

Proposition 1.4

Let \(\omega \in PC({\mathbb {T}})\) and let \(\tau \) satisfy (1.27) as well as Assumptions (B) and (C). Then (1.23) holds. Furthermore, if \(\omega \) satisfies the symmetry condition (1.1) and \(\tau (x,y)=\overline{\tau (y,x)}\), then (1.24) holds.

We chose to make use of Assumption (A) instead of (1.25) and (1.27), because there are no known necessary and sufficient conditions for a multiplier to satisfy either of them. We give specific examples of multipliers that satisfy these conditions below.

1.5 Some Examples of Schur–Hadamard Multipliers

Example 1.5

(Factorisable multipliers) If the function \(\tau \) can be factorised as

$$\begin{aligned} \tau (x,y)=f(x)g(y),\quad x,y\ge 0, \end{aligned}$$

for some bounded functions f and g, then it is easy to see that it induces a uniformly bounded Schur–Hadamard multiplier in the sense of (1.19), and furthermore

$$\begin{aligned} \sup _{N\ge 1}\Vert \tau _{N}\Vert _{\mathfrak {M}}\le \Vert f\Vert _{\infty }\Vert g\Vert _{\infty }. \end{aligned}$$

As was pointed out earlier in (1.20), the truncation to the upper \(N\times N\) square is induced by such a multiplier. Another example is given by the function \(\tau _{1}(x, y)=e^{-(x+y)}=e^{-x}e^{-y}\). This induces the regularisation in (1.13) and it is immediate to see that

$$\begin{aligned} \sup _{N\ge 1}\Vert (\tau _{1})_{N}\Vert _{\mathfrak {M}}=1. \end{aligned}$$

Furthermore, \(\tau _{1}\) satisfies the assumptions (B) and (C) and so Theorems 1.1 and 1.2 hold.

Example 1.6

(Non-examples) In stark contrast to the square truncation in (1.20), the so-called “main triangle projection” induced by the function

$$\begin{aligned} \tau _{2}(x,y)=\mathbb {1}_{[0,1)}\left( x+y\right) \end{aligned}$$
(1.28)

is not uniformly bounded on the bounded operators, see [1, 6, 12], where it was shown that

$$\begin{aligned} \sup _{\Vert A\Vert =1}\Vert (\tau _{2})_{N}\star A\Vert = \pi ^{-1}\log (N)+o(\log (N)),\ \ \ N \rightarrow \infty . \end{aligned}$$

However, \(\tau _{2}\) is uniformly bounded on any Schatten class \(\mathfrak {S}_{p},\, 1<p<\infty \), see [5], and so Proposition 1.3 applies.

Proposition 1.4 shows that Theorems 1.1 and 1.2 still hold in the case that the Schur–Hadamard multiplier is only uniformly bounded on the set of bounded Hankel matrices. An example of such a multiplier is given by the indicator function, \(\tau _{\beta , \gamma }\), of the region

$$\begin{aligned} \Xi _{\beta , \gamma }=\{(x, y)\in [0,1]^{2}\ \vert \ x\le -\beta y+ \gamma \}, \quad \beta ,\, \gamma \in {\mathbb {R}}. \end{aligned}$$

Even though \(\tau _{\beta , \gamma }\) does not induce, in general, a uniformly bounded Schur–Hadamard multiplier, it has been shown in [6, Theorem 1(a)] that this is the case on the set of bounded Hankel matrices for \(\beta \ne 1, 0\) and any \(\gamma \) (at \(\beta =1\) and \(\gamma =1\), \(\tau _{1, 1}\) reduces to the multiplier \(\tau _{2}\) considered above). With this at hand, an appropriate choice of the parameters \(\beta \) and \(\gamma \) gives (1.23) and (1.24).

Example 1.7

(General Criterion) For more complicated functions, the following criterion can be of help. Let \(\Sigma \subset {\mathbb {R}}\) and m be a measure on \(\Sigma \). Suppose that for the function \(\tau \) we can write

$$\begin{aligned} \tau _{N}(j,k)=\tau \left( \frac{j}{N}, \frac{k}{N}\right) =\int _{\Sigma }e^{-it(j+k)}f_{N}(t)dm(t),\quad \forall j,\, k\ge 0 \end{aligned}$$

for some functions \(f_{N} \in L^{1}(\Sigma , m)\) so that \(\sup _{N\ge 1}\Vert f_{N}\Vert _{L^{1}(\Sigma , m)}<\infty \). It is not hard to check that \(\tau \) induces a uniformly bounded Schur–Hadamard multiplier and that

$$\begin{aligned} \sup _{N\ge 1}\Vert \tau _{N}\Vert _{\mathfrak {M}}\le \sup _{N\ge 1}\Vert f_{N}\Vert _{L^{1}(\Sigma , m)}. \end{aligned}$$

Using this it is possible to show that the function

$$\begin{aligned} \tau _{3}(x,y)=\left( 1-(x+y)\right) \mathbb {1}_{[0,1)}\left( x+y\right) , \end{aligned}$$
(1.29)

induces a uniformly bounded Schur–Hadamard multiplier, since one has the following representation

$$\begin{aligned} \tau _{3}(j N^{-1},k N^{-1})=\int _{0}^{2\pi }e^{-i t \left( j+k\right) }F_{N}(e^{it})\frac{dt}{2\pi },\quad j,\, k\ge 0, \end{aligned}$$
(1.30)

where \(F_{N}\) in (1.30) denotes the N-th Fejér kernel.
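The representation (1.30) can be checked directly: the s-th Fourier coefficient of the N-th Fejér kernel is \((1-\left| s\right| /N)_{+}\). A sketch of this check (our own illustration; the equispaced quadrature is exact here because the Fejér kernel is a trigonometric polynomial):

```python
import numpy as np

N = 16
K = 1024  # quadrature points on the circle
t = 2 * np.pi * np.arange(K) / K
m = np.arange(-N + 1, N)
# N-th Fejer kernel: F_N(e^{it}) = sum_{|m|<N} (1 - |m|/N) e^{imt}
fejer = ((1 - np.abs(m) / N)[:, None] * np.exp(1j * np.outer(m, t))).sum(0).real

def tau3_coeff(s):
    """Right-hand side of (1.30): the s-th Fourier coefficient of the Fejer kernel."""
    return np.mean(np.exp(-1j * s * t) * fejer).real

for j, k in [(0, 0), (3, 5), (10, 4), (15, 15)]:
    lhs = max(1 - (j + k) / N, 0.0)      # tau_3(j/N, k/N)
    assert abs(lhs - tau3_coeff(j + k)) < 1e-10
print("Fejer representation of tau_3 verified")
```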

The multipliers induced by the functions

$$\begin{aligned} \tau _{1}(x,y)= & {} e^{-(x+y)},\\ \tau _{2}(x,y)= & {} \mathbb {1}_{[0,1)}(x+y),\\ \tau _{3}(x,y)= & {} \left( 1-(x+y)\right) \mathbb {1}_{[0,1)}\left( x+y\right) \end{aligned}$$

are related to the Abel–Poisson, Dirichlet and Cesàro summation methods respectively, and share some of the properties of the operators of convolution with the respective kernels, see Sect. 3 for more on the Poisson kernel.

1.6 Outline of the Proofs

To prove Theorems 1.1 and 1.2 we use a similar approach to the one in [16] and [20] and combine abstract results concerned with the general properties of the functions \(\mathsf {n}, \mathsf {n}_{\pm }\) (see Sect. 2) and more hands-on function theoretic ones that are specific to the theory of Hankel matrices, see Sect. 3.

To prove Theorem 1.1, we first assume that the set of jump discontinuities, \(\Omega \), of the symbol \(\omega \) is finite and we write

$$\begin{aligned} \omega (v)=-i\sum _{z \in \Omega }\varkappa _{z}(\omega )\gamma (\overline{z}v)+\eta (v),\quad v\in {\mathbb {T}}, \end{aligned}$$
(1.31)

where \(\gamma \) is the symbol in (1.4) and \(\eta \) is a continuous function on \({\mathbb {T}}\). The analysis of \(\mathsf {LD}_{\tau }(t;\varGamma (\widehat{\omega }))\) then proceeds with the study of each summand appearing in (1.31) and the interactions this has with all the others. In particular, Assumption (A) allows us to disregard the contribution coming from the matrix \(\varGamma (\widehat{\eta })\), i.e. it gives that

$$\begin{aligned} \mathsf {LD}_{\tau }(t;\varGamma (\widehat{\omega }))=\mathsf {LD}_{\tau }\left( t;\sum _{z\in \Omega }\varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})\right) , \end{aligned}$$

where \(\gamma _{z}(v)=-i\gamma (\overline{z}v),\, v \in {\mathbb {T}}\). The invariance of the functions \(\mathsf {LD}_{\tau }\) with respect to the choice of multiplier, proved in Theorem 2.5, gives that

$$\begin{aligned} \mathsf {LD}_{\tau }\left( t;\sum _{z\in \Omega }\varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})\right) =\mathsf {LD}_{\tau _{1}}\left( t;\sum _{z\in \Omega }\varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})\right) \end{aligned}$$

where the multiplier \(\tau _{1}(x,y)=e^{-(x+y)}\) is given in Example 1.5 above, and it is shown to induce the regularisation in (1.13), i.e. \((\tau _{1})_{N}\star \varGamma (\widehat{\omega })=\varGamma _{N}(\widehat{\omega })\). For the multiplier \(\tau _{1}\), we explicitly show that the operators \(\varGamma (\widehat{\gamma }_{z})\) are mutually “almost orthogonal” in the sense that if \(z\ne w \in \Omega \), then both

$$\begin{aligned} \varGamma _{N}(\widehat{\gamma }_{z})^{*}\varGamma _{N}(\widehat{\gamma }_{w}),\quad \varGamma _{N}(\widehat{\gamma }_{z})\varGamma _{N}(\widehat{\gamma }_{w})^{*} \end{aligned}$$

are trace-class. From here, Theorem 2.7 gives that each jump contributes independently; in other words, we can write

$$\begin{aligned} \mathsf {LD}_{\tau _{1}}\left( t;\sum _{z\in \Omega }\varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})\right) =\sum _{z \in \Omega }\mathsf {LD}_{\tau _{1}}(t;\varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})). \end{aligned}$$

We note here that the above is another instance of the general fact that jumps occurring at different points of the unit circle contribute independently to the spectral properties of the operator \(\varGamma (\widehat{\omega })\). For this reason, we follow the terminology used by the authors of [20] and we refer to this fact as the “Localisation Principle”.

Finally, using once again the Invariance Principle, Theorem 2.5, and the result of Widom in (1.8), we obtain the identity (1.23) for a symbol \(\omega \) with finitely many jumps.

The proof of Theorem 1.2 roughly follows the same outline. However, instead of writing the symbol \(\omega \) as in (1.31), we make use of the symmetry of the set of jump discontinuities, \(\Omega \), to decompose it as follows:

$$\begin{aligned} \omega (v)= & {} \varkappa _{1}(\omega )\gamma _{1}(v)+\varkappa _{-1}(\omega )\gamma _{-1}(v)\\&+\sum _{z \in \Omega ^{+}}{\left( \varkappa _{z}(\omega )\gamma _{z}(v)+\varkappa _{\overline{z}}(\omega )\gamma _{\overline{z}}(v)\right) }+\eta (v),\quad v \in {\mathbb {T}}, \end{aligned}$$

where \(\Omega ^{+}=\{z \in \Omega \ \vert \ {{\,\mathrm{Im}\,}}{z}>0\}\) and, as before, \(\gamma _{z}(v)=-i\gamma (\overline{z}v)\) and \(\eta \) is a continuous symbol on \({\mathbb {T}}\). The same strategy used in the proof of Theorem 1.1 leads to the following identity

$$\begin{aligned} \mathsf {LD}_{\tau }^{\pm }(t;\varGamma (\widehat{\omega }))= & {} \mathsf {LD}_{\tau _{1}}^{\pm }(t;\varkappa _{1}(\omega )\varGamma (\widehat{\gamma }_{1}))+\mathsf {LD}_{\tau _{1}}^{\pm }(t;\varkappa _{-1}(\omega )\varGamma (\widehat{\gamma }_{-1}))\\&+\sum _{z \in \Omega ^{+}}\mathsf {LD}_{\tau _{1}}^{\pm }(t;\varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})+\varkappa _{\overline{z}}(\omega )\varGamma (\widehat{\gamma }_{\overline{z}})). \end{aligned}$$

The fact that the jumps of \(\omega \) are arranged symmetrically around \({\mathbb {T}}\) can be used to show that the positive and negative eigenvalues of the compact operator

$$\begin{aligned} \varkappa _{z}(\omega )\varGamma _{N}(\widehat{\gamma }_{z})+\varkappa _{\overline{z}}(\omega )\varGamma _{N}(\widehat{\gamma }_{\overline{z}}) \end{aligned}$$

are arranged almost symmetrically around 0, in a sense that we will specify in Lemma 3.1-(ii). Using Theorem 2.8, we conclude that

$$\begin{aligned} \mathsf {LD}_{\tau _{1}}^{\pm }(t;\varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})+\varkappa _{\overline{z}}(\omega )\varGamma (\widehat{\gamma }_{\overline{z}}))=\mathsf {LD}_{\tau _{1}}(t;\varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})). \end{aligned}$$
(1.32)

Using once again the result of Widom in (1.10), we arrive at (1.24). It is worth noting here that (1.32) shows that if \(\omega \) has jumps occurring at a pair of complex conjugate points, then the upper and lower logarithmic spectral densities, \(\mathsf {LD}_{\tau }^{\pm }(t;\varGamma (\widehat{\omega }))\), contribute equally to the logarithmic spectral density of \(\left| \varGamma (\widehat{\omega })\right| \). Following the terminology used by the authors of [21], we refer to this fact as the “Symmetry Principle”.

Both Theorems 1.1 and 1.2 are then extended to the case of a symbol with infinitely many jump discontinuities by means of an approximation argument first presented by Power in [16] and subsequently in [15, Ch. 10, Thm. 1.10]; see Sect. 4 below.

2 Abstract Properties of the Spectral Density

2.1 First Definitions and Results

Let \(\mathfrak {S}_{\infty }\) denote the ideal of compact operators. For any \(p>0\), \(\mathfrak {S}_{p}\) denotes the ideal of compact operators whose singular values are p-summable, and we set \(\mathfrak {S}_{0}=\cap _{p>0}\mathfrak {S}_{p}\). For \(1\le p\le \infty \), the \(\mathfrak {S}_{p}\)-norm is defined as

$$\begin{aligned} \Vert A\Vert _{\mathfrak {S}_{p}}^{p}= & {} \sum _{n=1}^{\infty }s_{n}(A)^{p},\quad 1\le p<\infty .\\ \Vert A\Vert _{\mathfrak {S}_{\infty }}= & {} \sup _{n}s_{n}(A),\quad p=\infty . \end{aligned}$$

Here \(\{s_{n}(A)\}_{n=1}^{\infty }\) is the sequence of singular values of A, ordered decreasingly and counted with multiplicities. All operators in this section are bounded operators acting on the space of square-summable sequences \(\ell ^{2}(\mathbb {Z_{+}})\).

The functions \(\mathsf {n}, \mathsf {n}_{\pm }\) were defined in the Introduction. It is clear that \(\mathsf {n}(t; A)=\mathsf {n}(t; A^{*}),\) as the non-zero singular values of A and \(A^{*}\) coincide and, furthermore one has

$$\begin{aligned} \mathsf {n}(t; A)=\mathsf {n}(t^{2}; A^{*}A), \quad t>0. \end{aligned}$$
(2.1)

For any self-adjoint operator A, the functions \(\mathsf {n}\) and \(\mathsf {n}_{\pm }\) are linked via the following:

$$\begin{aligned} \mathsf {n}(t;A)=\mathsf {n}_{+}(t;A)+\mathsf {n}_{-}(t;A), \quad t>0. \end{aligned}$$

The singular-value counting function of \(K \in \mathfrak {S}_{p}\) satisfies the following simple estimate:

Lemma 2.1

Let \(K \in \mathfrak {S}_{p}\), \(1\le p< \infty \). Then, for any \(t>0\), one has

$$\begin{aligned} \mathsf {n}(t; K)\le \frac{\Vert K\Vert _{\mathfrak {S}_p}^{p}}{t^{p}}. \end{aligned}$$

If K is self-adjoint, the same holds for the functions \(\mathsf {n}_{\pm }(t;K)\).
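As a quick numerical sanity check (a sketch, not part of the formal argument; the matrix and the values of p and t below are arbitrary illustrative choices), the Chebyshev-type bound of Lemma 2.1 can be tested directly on the singular values of a random matrix:

```python
# Numerical illustration of Lemma 2.1: n(t; K) <= ||K||_{S_p}^p / t^p.
# The matrix K and the parameters p, t are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((40, 40)) / 40   # a small "compact-like" matrix

s = np.linalg.svd(K, compute_uv=False)   # singular values, decreasing

def n(t, sv):
    """Counting function n(t; K): number of singular values exceeding t."""
    return int(np.sum(sv > t))

p = 2.0
schatten_p = np.sum(s**p) ** (1.0 / p)   # ||K||_{S_2} (Frobenius norm here)

for t in (0.05, 0.1, 0.2):
    assert n(t, s) <= schatten_p**p / t**p
```

The inequality holds for any compact K, since \(\sum _{n}s_{n}^{p}\ge \sum _{s_{n}>t}s_{n}^{p}>\mathsf {n}(t;K)\,t^{p}\).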

We will also use the following inequalities, known as Weyl’s inequalities, see [3, Thm. 9, Ch. 9]:

Lemma 2.2

(Weyl Inequality) Let \(A, B\) be compact operators and let \(0<s<t\). Then

$$\begin{aligned} \mathsf {n}(t;A+B)\le & {} \mathsf {n}(t-s;A)+\mathsf {n}(s;B), \end{aligned}$$
(2.2)
$$\begin{aligned} \mathsf {n}_{\pm }(t;A+B)\le & {} \mathsf {n}_{\pm }(t-s;A)+\mathsf {n}_{\pm }(s;B), \end{aligned}$$
(2.3)

with the last inequality holding for self-adjoint operators.
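Both forms of Weyl’s inequality can be checked numerically; the sketch below is only an illustration (the dimensions and the values of t, s are arbitrary choices, not taken from the paper):

```python
# Numerical illustration of (2.2) and (2.3): the singular-value and
# eigenvalue counting functions are sub-additive in the Weyl sense.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 30))
B = rng.standard_normal((30, 30))

def n(t, M):
    """n(t; M): number of singular values of M exceeding t."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > t))

t, s0 = 2.0, 0.8                         # 0 < s0 < t
assert n(t, A + B) <= n(t - s0, A) + n(s0, B)        # inequality (2.2)

# self-adjoint version (2.3), with n_+ counting eigenvalues above t
As, Bs = (A + A.T) / 2, (B + B.T) / 2

def n_plus(t, M):
    return int(np.sum(np.linalg.eigvalsh(M) > t))

assert n_plus(t, As + Bs) <= n_plus(t - s0, As) + n_plus(s0, Bs)
```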

For a bounded function \(\tau \) on \([0, \infty )^{2}\), we have already defined in the Introduction the meaning of \(\tau _{N}\star A\). We have the following simple

Lemma 2.3

Let \(\tau \) be continuous at (0, 0) with \(\tau (0,0)=1\) and suppose it satisfies Assumption (A). Then for any bounded operator A, \(\tau _{N}\star A\rightarrow A\) as \(N \rightarrow \infty \) in the strong operator topology. Furthermore, if A is compact, the same is true in the operator norm.

Proof of Lemma

Recall that Assumption (A) implies that for any operator A one has

$$\begin{aligned} \Vert \tau _{N}\star A\Vert \le \sup _{N\ge 1}\Vert \tau _{N}\Vert _{\mathfrak {M}}\Vert A\Vert . \end{aligned}$$
(2.4)

Let \(e_{j},\, j\ge 0\) be the standard basis vectors of \(\ell ^{2}(\mathbb {Z_{+}})\). Using continuity of \(\tau \) at (0, 0) and the fact that \(\tau (0,0)=1\), a simple calculation shows \((\tau _{N}\star A)e_{j}\rightarrow Ae_{j}\) in \(\ell ^{2}(\mathbb {Z_{+}})\) and so we obtain that \((\tau _{N}\star A)x\rightarrow A x\) as \(N \rightarrow \infty \) for any finite sequence \(x \in \ell ^{2}(\mathbb {Z_{+}})\).

For any \(x \in \ell ^{2}(\mathbb {Z_{+}})\), the result follows from a standard \(\varepsilon /3\) argument. In particular, for \(\varepsilon >0\), we find a finite sequence \(x_{\varepsilon }\) so that \(\Vert x-x_{\varepsilon }\Vert _{2}<\varepsilon /3\). Using the triangle inequality and (2.4), together with the fact that \((\tau _{N}\star A)x_{\varepsilon }\rightarrow A x_{\varepsilon }\) we obtain the assertion.

If A is compact, we have that for any given \(\varepsilon >0\) we can find a finite matrix B so that \(\Vert A-B\Vert <\varepsilon \). For any finite matrix B, the convergence \(\tau _{N}\star B\rightarrow B\) in the strong operator topology implies convergence in the operator norm, so for N large we have \(\Vert (\tau _{N}\star B)- B\Vert <\varepsilon \). The triangle inequality now yields

$$\begin{aligned} \Vert (\tau _{N}\star A)-A\Vert\le & {} \Vert \tau _{N}\star (A-B)\Vert +\Vert \tau _{N}\star B-B\Vert +\Vert B-A\Vert \\\le & {} (1+\sup _{N\ge 1}\Vert \tau _{N}\Vert _{\mathfrak {M}})\Vert A-B\Vert +\varepsilon \\\le & {} (2+\sup _{N\ge 1}\Vert \tau _{N}\Vert _{\mathfrak {M}})\varepsilon \end{aligned}$$

\(\square \)
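The entrywise truncation \(\tau _{N}\star A\) and the norm convergence of Lemma 2.3 for compact A can be illustrated numerically. In the sketch below, the multiplier \(\tau (x,y)=e^{-(x+y)}\) (which satisfies \(\tau (0,0)=1\)) and the matrix A are arbitrary illustrative choices; the Frobenius norm is used since it dominates the operator norm:

```python
# Illustration of the Schur-Hadamard truncation (tau_N * A)_{j,k}
# = tau(j/N, k/N) A_{j,k}, and of Lemma 2.3: tau_N * A -> A as N -> infinity.
import numpy as np

def tau(x, y):
    """An illustrative multiplier with tau(0,0) = 1."""
    return np.exp(-(x + y))

def schur_truncate(A, N):
    """Entrywise (Schur-Hadamard) product of A with tau(j/N, k/N)."""
    j = np.arange(A.shape[0])[:, None]
    k = np.arange(A.shape[1])[None, :]
    return tau(j / N, k / N) * A

rng = np.random.default_rng(2)
A = rng.standard_normal((25, 25))   # a finite matrix: trivially compact

# Frobenius norm dominates the operator norm; here the error is
# entrywise monotone in N, so the sequence below decreases strictly.
errs = [np.linalg.norm(schur_truncate(A, N) - A, 'fro')
        for N in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]
```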

As a consequence, we have the following

Lemma 2.4

Let \(K \in \mathfrak {S}_{\infty }\) and \(\tau \) be as in Lemma 2.3. Then for any \(t>0\) one has

$$\begin{aligned} \mathsf {n}(t;\tau _{N}\star K)=O_{t}(1),\quad N \rightarrow \infty . \end{aligned}$$

If \(\tau (x,y)=\overline{\tau (y,x)}\) and K is self-adjoint, the same holds for the functions \(\mathsf {n}_{\pm }\).

Proof of Lemma

From Lemma 2.3, we have that \(\tau _{N}\star K\rightarrow K\) in the operator norm and, in particular, for \(\varepsilon >0\) we can find N suitably large so that \(\Vert \tau _{N}\star K-K\Vert <\varepsilon \), whereby it follows that \(\mathsf {n}(\varepsilon ;\tau _{N}\star K-K)=0\). Using (2.2), we obtain for \(0<\varepsilon <t\):

$$\begin{aligned} \mathsf {n}(t;\tau _{N}\star K)&\le \mathsf {n}(t-\varepsilon ;K)=:C_{t}. \end{aligned}$$

The proof in the self-adjoint case follows exactly the same reasoning. \(\square \)

Define \(\mathcal {B}_{0}\) as the set of operators on \(\ell ^{2}(\mathbb {Z_{+}})\):

$$\begin{aligned} A \in \mathcal {B}_{0} \quad \Longleftrightarrow \quad \ A_{j,k}= O\left( \frac{1}{j+k}\right) ,\ \ \ j,k \rightarrow \infty . \end{aligned}$$
(2.5)

Clearly \(A \in \mathcal {B}_{0}\) if and only if there exists a sequence \( a \in \ell ^{\infty }({\mathbb {Z}}_{+}^{2})\) so that

$$\begin{aligned} A_{j,k}=\frac{a_{j,k}}{\pi (j+k+1)},\qquad \forall j, k\ge 0. \end{aligned}$$

From Hilbert’s inequality one obtains the estimate \(\Vert A\Vert \le \Vert a\Vert _{\ell ^{\infty }},\) and so A is also bounded. If the multiplier \(\tau \) satisfies Assumption (C), i.e. if for some \(\alpha >1/2,\) one has

$$\begin{aligned} \left| \tau (x,y)\right| \le \frac{C_{\alpha }}{\log (x+y+2)^{\alpha }},\ \ \ \forall x,\,y, \end{aligned}$$

it is not difficult to see that when \(A \in \mathcal {B}_{0}\) one has that \(\tau _{N}\star A \in \mathfrak {S}_{2}\), since we have the following estimate

$$\begin{aligned} \Vert \tau _{N}\star A\Vert _{\mathfrak {S}_{2}}^{2}= & {} \sum _{j,k\ge 0}\left| \tau \left( \frac{j}{N},\,\frac{ k}{N}\right) A_{j,k}\right| ^{2}\\\le & {} C_{\alpha } \sum _{j,k\ge 0}\frac{1}{\log \left( \frac{j+k}{N}+2\right) ^{2\alpha }(j+k+1)^{2}}<\infty . \end{aligned}$$

In particular, \(\tau _{N}\star A\) is a compact operator for any given N and so it makes sense to study how the functions \(\mathsf {n}(t;\tau _{N}\star A)\) and \(\mathsf {n}_{\pm }(t;\tau _{N} \star A)\) (whenever A is self-adjoint and \(\tau (x,y)=\overline{\tau (y,x)}\)) behave for large N. To this end, it is useful to define the following two functionals

$$\begin{aligned}&\overline{\mathsf {LD}}_{\tau }(t; A):=\limsup _{N\rightarrow \infty }\frac{\mathsf {n}(t; \tau _{N}\star A)}{\log (N)},\ \ \ t>0, \end{aligned}$$
(2.6)
$$\begin{aligned}&\underline{\mathsf {LD}}_{\tau }(t; A):=\liminf _{N\rightarrow \infty }\frac{\mathsf {n}(t; \tau _{N}\star A)}{\log (N)},\ \ \ t>0. \end{aligned}$$
(2.7)

If \(\overline{\mathsf {LD}}_{\tau }(t;A)=\underline{\mathsf {LD}}_{\tau }(t;A)\), we denote by \(\mathsf {LD}_{\tau }(t;A)\) their common value. For a self-adjoint operator \(A \in \mathcal {B}_{0}\), we define the functionals \(\overline{\mathsf {LD}}_{\tau }^{\pm }(t;A),\, \underline{\mathsf {LD}}_{\tau }^{\pm }(t;A)\) with the functions \(\mathsf {n}_{\pm }\) replacing \(\mathsf {n}\) in (2.6) and (2.7) respectively and denote by \(\mathsf {LD}_{\tau }^{\pm }(t; A)\) their common value, if it exists.
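To make the counting functions behind (2.6) and (2.7) concrete, one can compute \(\mathsf {n}(t;\tau _{N}\star A)\) numerically for a matrix in \(\mathcal {B}_{0}\). In the sketch below (an illustration only: the multiplier \(\tau (x,y)=e^{-(x+y)}\) corresponds to the Abel-type truncation with \(r=e^{-1/N}\), and the finite section size M is an arbitrary choice), the count is nondecreasing in N, consistent with growth on the scale \(\log (N)\):

```python
# Counting function n(t; tau_N * A) for the Hilbert-type matrix
# A_{j,k} = 1/(pi (j+k+1)), which belongs to the class B_0.
import numpy as np

M = 400                                  # finite section size (illustrative)
j = np.arange(M)
H = 1.0 / (np.pi * (j[:, None] + j[None, :] + 1))

def count(t, N):
    """n(t; tau_N * H) with tau(x, y) = exp(-(x+y)), i.e. r^{j+k}, r=e^{-1/N}."""
    D = np.exp(-j / N)                   # tau_N * H = D H D, D diagonal
    ev = np.linalg.eigvalsh(D[:, None] * H * D[None, :])
    return int(np.sum(ev > t))

# since H is positive semi-definite and the diagonal factors increase
# entrywise with N, the counting function is monotone in N
c5, c40 = count(0.3, 5), count(0.3, 40)
assert 0 <= c5 <= c40
```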

2.2 Invariance of Spectral Densities

For a fixed operator \(A \in \mathcal {B}_{0}\), we wish to study the relation between the asymptotic behaviour of \(\mathsf {n}(t;\tau _{N}\star A)\) for large N and the Schur–Hadamard multiplier \(\tau \). In particular, the result below tells us that the function \(\mathsf {n}(t;\tau _{N}\star A)\) (as well as \(\mathsf {n}_{\pm }(t;\tau _{N}\star A)\)) asymptotically behaves independently of the multiplier \(\tau \). We refer to this phenomenon as the Invariance Principle and we state it as follows

Theorem 2.5

(Invariance Principle) Suppose \(\sigma _{1}, \sigma _{2}\) are multipliers satisfying Assumptions (B) and (C). Then for \(A \in \mathcal {B}_{0}\) and for \(t>0\) one has that

$$\begin{aligned}&\overline{\mathsf {LD}}_{\sigma _{1}}(t+0;A)\le \overline{\mathsf {LD}}_{\sigma _{2}}(t;A)\le \overline{\mathsf {LD}}_{\sigma _{1}}(t-0;A),\\&\underline{\mathsf {LD}}_{\sigma _{1}}(t+0;A)\le \underline{\mathsf {LD}}_{\sigma _{2}}(t;A)\le \underline{\mathsf {LD}}_{\sigma _{1}}(t-0;A). \end{aligned}$$

Similarly, if \(A\in \mathcal {B}_{0}\) is self-adjoint and \(\sigma _{i}(x,y)=\overline{\sigma _{i}(y,x)}\), then one has that

$$\begin{aligned}&\overline{\mathsf {LD}}_{\sigma _{1}}^{\pm }(t+0;A)\le \overline{\mathsf {LD}}_{\sigma _{2}}^{\pm }(t;A)\le \overline{\mathsf {LD}}_{\sigma _{1}}^{\pm }(t-0;A),\\&\underline{\mathsf {LD}}_{\sigma _{1}}^{\pm }(t+0;A)\le \underline{\mathsf {LD}}_{\sigma _{2}}^{\pm }(t;A)\le \underline{\mathsf {LD}}_{\sigma _{1}}^{\pm }(t-0;A). \end{aligned}$$

Before proving the result, let us prove the following auxiliary lemma.

Lemma 2.6

Let \(\sigma \) satisfy Assumption (C), with \(\sigma (0,0)=0\), and suppose that for some \(\varepsilon >0\) and some \(\beta >1/2\) there exists \(C_{\beta }>0\) so that

$$\begin{aligned} \left| \sigma (x, y)\right| \le C_{\beta }\left| \log (x+y)\right| ^{-\beta }, \quad \forall \, 0\le x,\,y\le \varepsilon . \end{aligned}$$
(2.8)

For any \(A \in \mathcal {B}_{0}\), one has \(\sigma _{N}\star A\in \mathfrak {S}_{2}\) and furthermore there exists \(C>0\), independent of N, such that

$$\begin{aligned} \Vert \sigma _{N}\star A\Vert _{\mathfrak {S}_{2}}\le C. \end{aligned}$$

Proof of Lemma

We need to estimate the following quantity

$$\begin{aligned} \Vert \sigma _{N}\star A\Vert _{\mathfrak {S}_{2}}^{2}=\sum _{j,k\ge 0}\left| \sigma \left( \frac{j}{N},\, \frac{k}{N}\right) \right| ^{2}\left| A_{j,k}\right| ^{2}. \end{aligned}$$

A modification of the integral test, together with the assumption that \(A \in \mathcal {B}_{0}\), shows that one can find \(C>0\) so that

$$\begin{aligned} \Vert \sigma _{N}\star A\Vert _{\mathfrak {S}_{2}}^{2}\le & {} C \iint _{{\mathbb {R}}_{+}^{2}}\frac{\left| \sigma \left( \frac{x}{N},\, \frac{y}{N}\right) \right| ^{2}}{(x+y+1)^{2}}dxdy\\= & {} C \iint _{{\mathbb {R}}_{+}^{2}}\frac{\left| \sigma \left( s, t\right) \right| ^{2}}{(s+t+1/N)^{2}}dsdt\quad (:=I_{N}), \end{aligned}$$

where the last equality follows from the change of variables \(x=Ns,\, y=Nt\). Let \(\Omega _{\varepsilon }=\{(s, t) \in {\mathbb {R}}_{+}^{2}\, \vert \, s^{2}+t^{2}<\varepsilon \}\) and \(\Omega _{\varepsilon }^{c}={\mathbb {R}}_{+}^{2}{\setminus } \Omega _{\varepsilon }\), then:

$$\begin{aligned} I_{N}= & {} \iint _{\Omega _{\varepsilon }}\frac{\left| \sigma \left( s, t\right) \right| ^{2}}{(s+t+1/N)^{2}}dsdt \qquad (:=J_{1})\\&+\iint _{\Omega _{\varepsilon }^{c}}\frac{\left| \sigma \left( s, t\right) \right| ^{2}}{(s+t+1/N)^{2}}dsdt\qquad (:=J_{2}) \end{aligned}$$

We will show that each summand is uniformly bounded. Since \(\sigma \) satisfies (2.8), it follows

$$\begin{aligned} J_{1}\le & {} \frac{C_{\beta }}{\log (2)^{2}}\iint _{\Omega _{\varepsilon }}\frac{1}{\log \left( s^{2}+t^{2}\right) ^{2\beta }(s^{2}+t^{2})}dsdt\\\le & {} C \int _{0}^{\varepsilon }\frac{1}{r\log (r)^{2\beta }}dr<\infty . \end{aligned}$$

The second inequality is a consequence of writing the integral in polar coordinates and, since \(\beta >1/2\), the last integral is finite. Using Assumption (C), it follows that

$$\begin{aligned} J_{2}\le & {} C \iint _{\Omega _{\varepsilon }^{c}}\frac{dsdt}{(s+t)^{2}\log (s+t+2)^{2\alpha }}\\\le & {} C \int _{\varepsilon }^{\infty }\frac{dx}{x\log (x+2)^{2\alpha }}<\infty . \end{aligned}$$

We have thus obtained that \(I_{N}\) is uniformly bounded in N, whereby the assertion follows. \(\square \)

Proof of Theorem 2.5

Write \(A_{i}^{(N)}= (\sigma _{i})_{N}\star A\). Weyl’s inequality (2.2) then gives

$$\begin{aligned} \mathsf {n}(t;A_{1}^{(N)})= & {} \mathsf {n}(t+s-s;A_{1}^{(N)}+A_{2}^{(N)}-A_{2}^{(N)})\\\le & {} \mathsf {n}(t-s;A_{2}^{(N)})+\mathsf {n}(s;A_{1}^{(N)}-A_{2}^{(N)}), \end{aligned}$$

for any \(0<s<t\). Swapping the roles of \(A_{1}^{(N)}\) and \(A_{2}^{(N)}\) in the above, we obtain

$$\begin{aligned}&\mathsf {n}(t+s;A_{2}^{(N)})-\mathsf {n}(s;A_{1}^{(N)}-A_{2}^{(N)})\le \mathsf {n}(t;A_{1}^{(N)}),\\&\mathsf {n}(t;A_{1}^{(N)})\le \mathsf {n}(t-s;A_{2}^{(N)})+\mathsf {n}(s;A_{1}^{(N)}-A_{2}^{(N)}). \end{aligned}$$

Lemmas 2.1 and 2.6 together imply that

$$\begin{aligned} \mathsf {n}(s;A_{1}^{(N)}-A_{2}^{(N)})=O_{s}(1) \end{aligned}$$

as \(N\rightarrow \infty \), and so we obtain that

$$\begin{aligned}&\overline{\mathsf {LD}}_{\sigma _{1}}(t+s;A)\le \overline{\mathsf {LD}}_{\sigma _{2}}(t;A)\le \overline{\mathsf {LD}}_{\sigma _{1}}(t-s;A),\\&\underline{\mathsf {LD}}_{\sigma _{1}}(t+s;A)\le \underline{\mathsf {LD}}_{\sigma _{2}}(t;A)\le \underline{\mathsf {LD}}_{\sigma _{1}}(t-s;A). \end{aligned}$$

Sending \(s\rightarrow 0\) gives the desired inequalities. In the self-adjoint setting, the same reasoning carries through once we replace the function \(\mathsf {n}\) with the functions \(\mathsf {n}_{\pm }\). \(\square \)

2.3 Almost Symmetric and Almost Orthogonal Operators

As mentioned in the Introduction, we will use the following two results, which are similar, at least in spirit, to Theorem 2.2 in [20] and Theorem 2.7 in [19]; their proofs follow the same scheme.

From now on, we make no assumptions on the uniform boundedness and smoothness of our multiplier \(\tau \) and write \(A^{(N)}=\tau _{N}\star A\). The first of the two results below concerns how two operators interact at the level of their spectral densities. Namely, if \(A, B\) are bounded operators whose truncations \(A^{(N)}, B^{(N)}\) are almost orthogonal, in the sense that

$$\begin{aligned} A^{(N)*}B^{(N)}\in \mathfrak {S}_{p},\qquad A^{(N)}B^{(N)*}\in \mathfrak {S}_{p}, \end{aligned}$$

for some \(p\ge 1\) uniformly in N, then each of the logarithmic spectral densities of \(\left| A\right| \) and \(\left| B\right| \) contributes independently to the logarithmic spectral density of \(\left| A+B\right| \). Let us state the result as follows

Theorem 2.7

Let \(A_{i}\), with \(1\le i\le L\), be a family of operators such that for some \(p\in [1, \infty )\) one has

$$\begin{aligned} \sup _{N\ge 1}\Vert A^{(N)*}_{j}A_{k}^{(N)}\Vert _{\mathfrak {S}_{p}}<\infty ,\ \ \ \sup _{N\ge 1}\Vert A_{j}^{(N)}A^{(N)*}_{k}\Vert _{\mathfrak {S}_{p}}<\infty ,\ \ \ \forall \, j\ne k. \end{aligned}$$

Then, for \(A=\sum _{j=1}^{L}A_{j}\) and for any \(t>0\):

$$\begin{aligned} \sum _{j=1}^{L}\overline{\mathsf {LD}}_{\tau }(t+0; A_{j})\le \overline{\mathsf {LD}}_{\tau }\left( t; A\right)\le & {} \sum _{j=1}^{L}\overline{\mathsf {LD}}_{\tau }(t-0; A_{j}), \end{aligned}$$
(2.9)
$$\begin{aligned} \sum _{j=1}^{L}\underline{\mathsf {LD}}_{\tau }(t+0; A_{j})\le \underline{\mathsf {LD}}_{\tau }\left( t; A\right)\le & {} \sum _{j=1}^{L}\underline{\mathsf {LD}}_{\tau }(t-0; A_{j}). \end{aligned}$$
(2.10)

If all \(A_{j}\) are self-adjoint and \(\tau (x,y)=\overline{\tau (y,x)}\), then we have

$$\begin{aligned} \sum _{j=1}^{L}\overline{\mathsf {LD}}_{\tau }^{\pm }(t+0; A_{j})\le & {} \overline{\mathsf {LD}}_{\tau }^{\pm }\left( t; A\right) \le \sum _{j=1}^{L}\overline{\mathsf {LD}}_{\tau }^{\pm }(t-0; A_{j}), \end{aligned}$$
(2.11)
$$\begin{aligned} \sum _{j=1}^{L}\underline{\mathsf {LD}}_{\tau }^{\pm }(t+0; A_{j})\le & {} \underline{\mathsf {LD}}_{\tau }^{\pm }\left( t; A\right) \le \sum _{j=1}^{L}\underline{\mathsf {LD}}_{\tau }^{\pm }(t-0; A_{j}). \end{aligned}$$
(2.12)

Proof of Theorem 2.7

We will prove only (2.9), since (2.10), (2.11) and (2.12) follow the same line of reasoning. Put \(\mathcal {H}=\oplus _{i=1}^{L}\ell ^{2}(\mathbb {Z_{+}})\) and define the block diagonal operator \(\mathcal {A}_{N}={{\,\mathrm{diag}\,}}\{A_{1}^{(N)},\ldots ,A_{L}^{(N)}\}\) such that

$$\begin{aligned} \mathcal {A}_{N}(f_{1},\ldots , f_{L})=(A_{1}^{(N)}f_{1},\ldots , A_{L}^{(N)}f_{L}). \end{aligned}$$

Similarly, let \(\mathcal {A}={{\,\mathrm{diag}\,}}\{A_{1}, \ldots , A_{L}\}\). Define the operator \(\mathcal {J}:\mathcal {H}\rightarrow \ell ^{2}(\mathbb {Z_{+}})\) as

$$\begin{aligned} \mathcal {J}(f_{1},\ldots ,f_{L})=\sum _{j=1}^{L}f_{j}. \end{aligned}$$

The operator \((\mathcal {J}\mathcal {A}_{N})^{*}(\mathcal {J}\mathcal {A}_{N})\) can be written as an \(L\times L\) block-matrix of the form:

$$\begin{aligned} \begin{pmatrix} A_{1}^{(N)*}A_{1}^{(N)}&{}A_{1}^{(N)*}A_{2}^{(N)}&{}\ldots &{}A_{1}^{(N)*}A_{L}^{(N)}\\ A_{2}^{(N)*}A_{1}^{(N)}&{}A_{2}^{(N)*}A_{2}^{(N)}&{}\ldots &{}A_{2}^{(N)*}A_{L}^{(N)}\\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ A_{L}^{(N)*}A_{1}^{(N)}&{}A_{L}^{(N)*}A_{2}^{(N)}&{}\ldots &{}A_{L}^{(N)*}A_{L}^{(N)} \end{pmatrix}. \end{aligned}$$

Since the operator \(\mathcal {A}_{N}^{*}\mathcal {A}_{N}\) is the block diagonal \(L\times L\) matrix

$$\begin{aligned} \begin{pmatrix} A_{1}^{(N)*}A_{1}^{(N)}&{}0&{}\ldots &{}0\\ 0&{}A_{2}^{(N)*}A_{2}^{(N)}&{}\ldots &{}0\\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ 0&{}0&{}\ldots &{}A_{L}^{(N)*}A_{L}^{(N)} \end{pmatrix}, \end{aligned}$$

it is easy to see that the difference \((\mathcal {J}\mathcal {A}_{N})^{*}(\mathcal {J}\mathcal {A}_{N})-\mathcal {A}_{N}^{*}\mathcal {A}_{N}\) is the \(L\times L\) matrix

$$\begin{aligned} \mathcal {K}_{N}=\begin{pmatrix} 0&{}A_{1}^{(N)*}A_{2}^{(N)}&{}\ldots &{}A_{1}^{(N)*}A_{L}^{(N)}\\ A_{2}^{(N)*}A_{1}^{(N)}&{}0&{}\ldots &{}A_{2}^{(N)*}A_{L}^{(N)}\\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ A_{L}^{(N)*}A_{1}^{(N)}&{}A_{L}^{(N)*}A_{2}^{(N)}&{}\ldots &{}0 \end{pmatrix}. \end{aligned}$$

Furthermore, since the operators \(A_{j}\) are such that \(\sup _{N}\Vert A_{j}^{(N)*}A_{k}^{(N)}\Vert _{\mathfrak {S}_{p}}\) is finite for all \(j\ne k\), it follows that

$$\begin{aligned} \sup _{N\ge 1}\Vert \mathcal {K}_{N}\Vert _{\mathfrak {S}_{p}}<\infty . \end{aligned}$$

Thus, Weyl’s inequality (2.2) gives

$$\begin{aligned} \mathsf {n}(t; \mathcal {J}\mathcal {A}_{N})= & {} \mathsf {n}(t^{2}; (\mathcal {J}\mathcal {A}_{N})^{*}(\mathcal {J}\mathcal {A}_{N}))\le \mathsf {n}(t^{2}-s; \mathcal {A}_{N}^{*} \mathcal {A}_{N})+\mathsf {n}(s; \mathcal {K}_{N})\\= & {} \sum _{j=1}^{L}\mathsf {n}(t^{2}-s; A_{j}^{(N)*} A_{j}^{(N)})+\mathsf {n}(s; \mathcal {K}_{N}), \end{aligned}$$

where in the second line we used the fact that \(\mathcal {A}_{N}^{*} \mathcal {A}_{N}\) is diagonal and so

$$\begin{aligned} \mathsf {n}(t^{2}-s; \mathcal {A}_{N}^{*} \mathcal {A}_{N})=\sum _{j=1}^{L}\mathsf {n}(t^{2}-s; A_{j}^{(N)*} A_{j}^{(N)}). \end{aligned}$$

Just as in the proof of Theorem 2.5, we swap the roles of \(\mathcal {J}\mathcal {A}_{N}\) and \(\mathcal {A}_{N}\) and, using (2.1), we obtain

$$\begin{aligned}&\sum _{j=1}^{L}\mathsf {n}(\sqrt{t^{2}+s}; A_{j}^{(N)})-\mathsf {n}(s; \mathcal {K}_{N})\le \mathsf {n}(t; \mathcal {J}\mathcal {A}_{N}),\\&\mathsf {n}(t; \mathcal {J}\mathcal {A}_{N})\le \sum _{j=1}^{L}\mathsf {n}(\sqrt{t^{2}-s}; A_{j}^{(N)})+\mathsf {n}(s; \mathcal {K}_{N}). \end{aligned}$$

Dividing by \(\log (N)\), exploiting the sub-additivity of the \(\limsup \) in conjunction with Lemma 2.1 and sending \(s \rightarrow 0\), one gets

$$\begin{aligned} \sum _{j=1}^{L}\overline{\mathsf {LD}}_{\tau }(t+0; A_{j})\le \overline{\mathsf {LD}}_{\tau }(t; \mathcal {J}\mathcal {A})\le \sum _{j=1}^{L}\overline{\mathsf {LD}}_{\tau }(t-0; A_{j}). \end{aligned}$$
(2.13)

Recall now that we set \(A=\sum _{j=1}^{L}{A_{j}}\). We have

$$\begin{aligned} A^{(N)}A^{(N)*}= & {} \sum _{j, k=1}^{L}A_{j}^{(N)}A_{k}^{(N)*},\\ (\mathcal {J}\mathcal {A}_{N})(\mathcal {J}\mathcal {A}_{N})^{*}= & {} \sum _{j=1}^{L}A_{j}^{(N)}A_{j}^{(N)*}. \end{aligned}$$

Write \(D_{N}=A^{(N)}A^{(N)*}-(\mathcal {J}\mathcal {A}_{N})(\mathcal {J}\mathcal {A}_{N})^{*}\). From our assumptions it follows that \(\sup _{N\ge 1}\Vert D_{N}\Vert _{\mathfrak {S}_{p}}<\infty \); using (2.1) in conjunction with Weyl’s inequality (2.2), we obtain

$$\begin{aligned} \mathsf {n}(t; A^{(N)*})= & {} \mathsf {n}(t^{2}; A^{(N)}A^{(N)*})\le \mathsf {n}(t^{2}-s; (\mathcal {J}\mathcal {A}_{N})(\mathcal {J}\mathcal {A}_{N})^{*})+\mathsf {n}(s;D_{N})\\= & {} \mathsf {n}(\sqrt{t^{2}-s};(\mathcal {J}\mathcal {A}_{N})^{*})+\mathsf {n}(s;D_{N}). \end{aligned}$$

Exchanging the roles of \(A^{(N)*}\) and of \((\mathcal {J}\mathcal {A}_{N})^{*}\) and using once more the Weyl inequality (2.2), we also have that

$$\begin{aligned} \mathsf {n}(t; (\mathcal {J}\mathcal {A}_{N})^{*})\ge \mathsf {n}(\sqrt{t^{2}+s};A^{(N)*})-\mathsf {n}(s;D_{N}). \end{aligned}$$

Thus the following inequalities are easily obtained

$$\begin{aligned} \overline{\mathsf {LD}}_{\tau }(t; A)= & {} \overline{\mathsf {LD}}_{\tau }(t; A^{*})\ge \overline{\mathsf {LD}}_{\tau }(t+0; (\mathcal {J}\mathcal {A})^{*})=\overline{\mathsf {LD}}_{\tau }(t+0; (\mathcal {J}\mathcal {A})),\\ \overline{\mathsf {LD}}_{\tau }(t; A)= & {} \overline{\mathsf {LD}}_{\tau }(t; A^{*})\le \overline{\mathsf {LD}}_{\tau }(t-0; (\mathcal {J}\mathcal {A})^{*})=\overline{\mathsf {LD}}_{\tau }(t-0; (\mathcal {J}\mathcal {A})). \end{aligned}$$

The above, in conjunction with (2.13), gives the result. \(\square \)
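The key algebraic step of the proof — that \((\mathcal {J}\mathcal {A}_{N})^{*}(\mathcal {J}\mathcal {A}_{N})\) is the \(L\times L\) block matrix with (j, k) block \(A_{j}^{(N)*}A_{k}^{(N)}\) — can be verified directly on finite matrices. The sketch below uses arbitrary random blocks, purely as an illustration:

```python
# Verify the block structure of (J A)^* (J A) used in the proof of Thm 2.7:
# with J(f_1,...,f_L) = sum_j f_j and A = diag(A_1,...,A_L), the (j,k) block
# of (J A)^*(J A) is A_j^* A_k.
import numpy as np

rng = np.random.default_rng(3)
L, n_dim = 3, 8
A_list = [rng.standard_normal((n_dim, n_dim)) for _ in range(L)]

J = np.hstack([np.eye(n_dim)] * L)          # J as an n x (L n) matrix
blockA = np.zeros((L * n_dim, L * n_dim))   # block diagonal of the A_j
for i, Ai in enumerate(A_list):
    blockA[i*n_dim:(i+1)*n_dim, i*n_dim:(i+1)*n_dim] = Ai

JA = J @ blockA                             # equals [A_1  A_2  A_3]
lhs = JA.T @ JA                             # real case: adjoint = transpose
rhs = np.block([[Aj.T @ Ak for Ak in A_list] for Aj in A_list])
assert np.allclose(lhs, rhs)
```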

The second result applies to a self-adjoint operator A and establishes a relation between \(\overline{\mathsf {LD}}_{\tau }^{+}(t;A)\) (resp. \(\underline{\mathsf {LD}}_{\tau }^{+}(t;A)\)) and \(\overline{\mathsf {LD}}_{\tau }^{-}(t;A)\) (resp. \(\underline{\mathsf {LD}}_{\tau }^{-}(t;A)\)). More precisely, if a self-adjoint operator A is such that its truncation \(A^{(N)}\) is almost symmetric under reflection around 0, in the sense that for some unitary operator U one has

$$\begin{aligned} UA^{(N)}+A^{(N)}U\ \in \ \mathfrak {S}_{p}\end{aligned}$$

for some \(p\ge 1\) uniformly in N, then its upper and lower logarithmic spectral densities contribute equally to the logarithmic spectral density of \(\left| A\right| \). In other words, the positive and negative eigenvalues of \(\tau _{N}\star A\) accumulate to the spectrum of A in the same way. We can formulate this as follows

Theorem 2.8

Let A be a self-adjoint operator and let \(\tau \) be such that \(\tau (x,y)=\overline{\tau (y,x)}\). Suppose there exists a unitary operator U for which

$$\begin{aligned} \sup _{N\ge 1}\Vert UA^{(N)}+A^{(N)}U\Vert _{\mathfrak {S}_{p}}<\infty \end{aligned}$$

for some \(p\ge 1\). Then for \(t>0\)

$$\begin{aligned}&\overline{\mathsf {LD}}_{\tau }^{-}(t+0;A)\le \overline{\mathsf {LD}}_{\tau }^{+}(t;A)\le \overline{\mathsf {LD}}_{\tau }^{-}(t-0;A),\\&\underline{\mathsf {LD}}_{\tau }^{-}(t+0;A)\le \underline{\mathsf {LD}}_{\tau }^{+}(t;A)\le \underline{\mathsf {LD}}_{\tau }^{-}(t-0;A) \end{aligned}$$

In particular, we get that

$$\begin{aligned}&\overline{\mathsf {LD}}_{\tau }(t+0;A)\le 2\overline{\mathsf {LD}}_{\tau }^{\pm }(t;A)\le \overline{\mathsf {LD}}_{\tau }(t-0;A),\\&\underline{\mathsf {LD}}_{\tau }(t+0;A)\le 2\underline{\mathsf {LD}}_{\tau }^{\pm }(t;A)\le \underline{\mathsf {LD}}_{\tau }(t-0;A). \end{aligned}$$

Proof of Theorem 2.8

Write

$$\begin{aligned} K_{N}=A^{(N)}+U^{*}A^{(N)}U. \end{aligned}$$

By assumption \(\sup _{N\ge 1}\Vert K_{N}\Vert _{\mathfrak {S}_{p}}\) is finite. Furthermore, by Weyl inequality (2.3)

$$\begin{aligned} \mathsf {n}_{\pm }(t; A^{(N)})= & {} \mathsf {n}_{\pm }(t; -U^{*}A^{(N)}U+K_{N})\\\le & {} \mathsf {n}_{\pm }(t-s; -U^{*}A^{(N)}U)+\mathsf {n}_{\pm }(s;K_{N})\\= & {} \mathsf {n}_{\mp }(t-s; A^{(N)})+\mathsf {n}_{\pm }(s;K_{N}), \end{aligned}$$

where \(0<s<t.\) In particular, this gives that

$$\begin{aligned} \mathsf {n}_{-}(t+s; A^{(N)})-\mathsf {n}_{-}(s;K_{N})\le \mathsf {n}_{+}(t; A^{(N)})\le \mathsf {n}_{-}(t-s; A^{(N)})+\mathsf {n}_{+}(s;K_{N}). \end{aligned}$$

The result follows once we divide through by \(\log (N)\), send \(N \rightarrow \infty \) and use Lemma 2.1. \(\square \)
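In the extreme case where A anticommutes exactly with U (so that \(K_{N}=0\)), the spectrum of A is exactly symmetric around 0 and \(\mathsf {n}_{+}(t;A)=\mathsf {n}_{-}(t;A)\) for every \(t>0\). A finite-dimensional illustration, with an arbitrary random block B:

```python
# If U A + A U = 0 with U unitary and A self-adjoint, the eigenvalues of A
# come in +/- pairs, so the counting functions n_+ and n_- coincide.
import numpy as np

rng = np.random.default_rng(4)
m = 6
B = rng.standard_normal((m, m))

# A anticommutes exactly with U = diag(I, -I)
A = np.block([[np.zeros((m, m)), B], [B.T, np.zeros((m, m))]])
U = np.diag(np.concatenate([np.ones(m), -np.ones(m)]))
assert np.allclose(U @ A + A @ U, 0)

# hence n_+(t; A) = n_-(t; A) for every t > 0
ev = np.linalg.eigvalsh(A)
assert np.sum(ev > 0.5) == np.sum(ev < -0.5)
```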

3 Hankel Operators and the Abel Summation Method

3.1 Hankel Operators

In the Introduction, we defined Hankel matrices acting on \(\ell ^{2}(\mathbb {Z_{+}})\); equivalently, they can also be realised as integral operators acting on \(L^{2}({\mathbb {T}})\).

Let \({\mathbb {T}}\) be the unit circle in the complex plane, and \(\varvec{m}\) the Lebesgue measure normalised to 1, i.e. \(d\varvec{m}(z)=(2\pi iz)^{-1}dz\). Define the Riesz projection as

$$\begin{aligned}&P_{+}:L^{2}({\mathbb {T}})\longrightarrow L^{2}({\mathbb {T}})\nonumber \\&(P_{+}f)(v)=\lim _{\varepsilon \rightarrow 0}\int _{{\mathbb {T}}}{\frac{f(z)z}{z-(1-\varepsilon )v}d\varvec{m}(z)},\quad v \in {\mathbb {T}}. \end{aligned}$$
(3.1)

For a symbol \(\omega \), the Hankel operator \(H(\omega )\) is:

$$\begin{aligned}&H(\omega ):L^{2}({\mathbb {T}})\rightarrow L^{2}({\mathbb {T}})\nonumber \\&H(\omega )f= P_{+}\omega JP_{+}f \end{aligned}$$
(3.2)

where J is the involution \(Jf(v)=f(\overline{v})\) and, by a slight abuse of notation, \(\omega \) denotes both the symbol and the induced operator of multiplication on \(L^{2}({\mathbb {T}})\). We can immediately see that if \(\omega \) satisfies (1.1), \(H(\omega )\) is self-adjoint. Furthermore, it is easy to see that

$$\begin{aligned} \Vert H(\omega )\Vert \le \Vert \omega \Vert _{\infty }. \end{aligned}$$
(3.3)

For any non-negative integers \(j, k\), one has

$$\begin{aligned} \left( H(\omega )z^{j}, z^{k}\right) _{L^{2}({\mathbb {T}})}= & {} \left( P_{+}\omega JP_{+}z^{j}, z^{k}\right) _{L^{2}({\mathbb {T}})}\\= & {} \left( \omega \cdot z^{-j}, z^{k}\right) _{L^{2}({\mathbb {T}})}=\widehat{\omega }(j+k), \end{aligned}$$

and so the matrix representation of \(H(\omega )\) in the basis \(\{z^{n}\}_{n\in {\mathbb {Z}}}\) is the block-matrix

$$\begin{aligned} \begin{pmatrix} 0&{}\quad 0\\ 0&{}\quad \varGamma (\widehat{\omega }) \end{pmatrix}, \end{aligned}$$

with respect to the orthogonal decomposition \(L^{2}({\mathbb {T}})=H^{2}\oplus (H^{2})^{\perp }\), where \(H^{2}\) is the closed linear span in \(L^{2}({\mathbb {T}})\) of the monomials \(\{z^{n}\}_{n\ge 0}\). In other words, \(H(\omega )\) and \(\varGamma (\widehat{\omega })\) are unitarily equivalent (modulo kernels) under the Fourier transform.
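The construction of \(\varGamma (\widehat{\omega })\) from a symbol can be illustrated numerically. In the sketch below, the symbol \(\omega (e^{i\vartheta })=\left| \vartheta \right| \) is an arbitrary real, even choice satisfying (1.1), and the integral defining \(\widehat{\omega }\) is approximated by a discrete Fourier transform:

```python
# Build Gamma(hat-omega)_{j,k} = hat-omega(j+k) for the illustrative symbol
# omega(e^{i theta}) = |theta|, which is real and even, hence satisfies (1.1);
# the resulting Hankel matrix is real, so symmetric = self-adjoint.
import numpy as np

K = 4096
theta = 2 * np.pi * np.arange(K) / K
omega = np.abs(np.angle(np.exp(1j * theta)))   # |theta| on (-pi, pi]

# hat-omega(j) = (1/2pi) int omega(e^{i t}) e^{-i j t} dt, via the DFT
hat = np.fft.fft(omega) / K

n_dim = 20
j = np.arange(n_dim)
Gamma = hat[(j[:, None] + j[None, :]) % K]     # Gamma_{j,k} = hat-omega(j+k)

assert np.allclose(Gamma, Gamma.T)             # Hankel: always symmetric
assert np.max(np.abs(Gamma.imag)) < 1e-8       # (1.1) gives real coefficients
```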

For \(0<r<1\), let \(P_{r}\) be the Poisson kernel, defined as

$$\begin{aligned} P_{r}(v)=\sum _{n=-\infty }^{\infty }{r^{\left| n\right| }v^{n}}=\frac{1-r^{2}}{\left| 1-rv\right| ^{2}},\ \ \ v\in {\mathbb {T}}. \end{aligned}$$

For \(\omega _{r}=P_{r}*\omega \), we have the identity

$$\begin{aligned} H(\omega _{r})=C_{r}H(\omega )C_{r}, \end{aligned}$$
(3.4)

where \(C_{r}\) is the operator of convolution by \(P_{r}\) on \(L^{2}({\mathbb {T}})\). Furthermore, \(H(\omega _{r})\) is unitarily equivalent (modulo kernels) to the Hankel matrix

$$\begin{aligned} \varGamma ^{(r)}(\widehat{\omega })=\left\{ r^{j+k}\widehat{\omega }(j+k)\right\} _{j,k\ge 0}. \end{aligned}$$
(3.5)

Note that for \(r=e^{-1/N}\), the above reduces to the truncation considered in (1.13). For \(0<r<1\), the map \(H(\omega )\mapsto H(\omega _{r})\) has the following properties

  1. (i)

    for any bounded Hankel operator \(H(\omega )\), \(H(\omega _{r})\in \mathfrak {S}_{0}\). Furthermore, (3.4) and Hölder’s inequality for Schatten classes (see [3, Thm. 2, Ch. 11.4]) give for \(1\le p\le \infty \)

    $$\begin{aligned} \Vert H(\omega _{r})\Vert _{\mathfrak {S}_{p}}\le \frac{1}{(1-r^{2p})^{1/p}}\Vert H(\omega )\Vert ; \end{aligned}$$
  2. (ii)

    if \(H(\omega ) \in \mathfrak {S}_{p}\) for some \(1\le p\le \infty \), then (3.4) implies \(\Vert H(\omega _{r})\Vert _{\mathfrak {S}_{p}}\le \Vert H(\omega )\Vert _{\mathfrak {S}_{p}}.\)
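Property (i) can be checked numerically for \(p=2\): writing \(\varGamma ^{(r)}(\widehat{\omega })=D_{r}\varGamma (\widehat{\omega })D_{r}\) with \(D_{r}={{\,\mathrm{diag}\,}}\{r^{j}\}_{j\ge 0}\), Hölder’s inequality gives \(\Vert D_{r}\varGamma D_{r}\Vert _{\mathfrak {S}_{2}}\le (1-r^{4})^{-1/2}\Vert \varGamma \Vert \). The sketch below tests this on a finite section of the Hilbert-type matrix (the section size and the value of N are arbitrary illustrative choices):

```python
# Check the Schatten bound of property (i) for p = 2 on a finite section:
# ||Gamma^{(r)}||_{S_2} <= (1 - r^4)^{-1/2} ||Gamma||, with r = e^{-1/N}.
import numpy as np

M = 500
j = np.arange(M)
Gamma = 1.0 / (np.pi * (j[:, None] + j[None, :] + 1))  # bounded Hankel matrix

N = 10
r = np.exp(-1.0 / N)
D = r ** j
Gamma_r = D[:, None] * Gamma * D[None, :]   # (Gamma^{(r)})_{jk} = r^{j+k} Gamma_{jk}

hs_norm = np.linalg.norm(Gamma_r, 'fro')    # Schatten-2 (Hilbert-Schmidt) norm
op_norm = np.linalg.norm(Gamma, 2)          # operator norm of the section
assert hs_norm <= op_norm / np.sqrt(1 - r**4)
```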

3.2 Almost Orthogonal and Almost Symmetric Hankel Operators

Recall that for a function \(\eta :{\mathbb {T}}\rightarrow {\mathbb {C}}\), its singular support, denoted \({{\,\mathrm{sing\,supp}\,}}{\eta }\), is defined as the smallest closed subset, \({{\,\mathrm{M}\,}}\), of \({\mathbb {T}}\) such that \(\eta \in C^{\infty }({\mathbb {T}}\backslash {{\,\mathrm{M}\,}})\).

Lemma 3.1

The following statements hold

  1. (i)

    Let \(\omega _{1}, \omega _{2}\, \in L^{\infty }({\mathbb {T}})\) have disjoint singular supports. Set \((\omega _{i})_{r}=P_{r}*\omega _{i}, i=1,2\). Then

    $$\begin{aligned} \sup _{r<1}\Vert H((\omega _{1})_{r})^{*}H((\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}<\infty ,\quad \sup _{r<1}\Vert H((\omega _{1})_{r})H((\omega _{2})_{r})^{*}\Vert _{\mathfrak {S}_{1}}<\infty ; \end{aligned}$$
  2. (ii)

Suppose \(\omega \in L^{\infty }({\mathbb {T}})\) is such that \(\pm 1 \notin {{\,\mathrm{sing\,supp}\,}}(\omega ).\) Let \(\mathfrak {s}(v)={{\,\mathrm{sign}\,}}({{\,\mathrm{Im}\,}}(v)),\) \(v\in {\mathbb {T}}\). Then

    $$\begin{aligned} \sup _{r<1}\Vert \mathfrak {s}H(\omega _r)+H(\omega _r)\mathfrak {s}\Vert _{\mathfrak {S}_{1}}<\infty . \end{aligned}$$

Remark 3.2

Similar results are already known in the literature, but only from a qualitative standpoint. In fact, under the same assumptions as in (i), it is known that both \(H(\omega _{1})^{*}H(\omega _{2})\) and \(H(\omega _{2})^{*}H(\omega _{1})\) belong to \(\mathfrak {S}_{0}\). Similarly for (ii), it is also known that \(\mathfrak {s}H(\omega )+H(\omega )\mathfrak {s}\in \mathfrak {S}_{0}\). For a proof of both facts, see [20, Lemma 2.5] and [21, Lemma 4.2] respectively, although both facts are already mentioned in [17].

To prove the statements in Lemma 3.1, we use the following:

Lemma 3.3

  1. (i)

    if K is an operator on \(L^{2}({\mathbb {T}})\) with integral kernel \(k \in C^{\infty }({\mathbb {T}}^{2})\), then \(K \in \mathfrak {S}_{1}\);

  2. (ii)

    \(H(\omega ) \in \mathfrak {S}_{1}\) if \(\omega \in C^{2}({\mathbb {T}})\) and furthermore there exists \(C>0\) such that:

    $$\begin{aligned} \Vert H(\omega )\Vert _{\mathfrak {S}_{1}}\le \Vert \omega \Vert _{\infty }+C\Vert \omega ''\Vert _{2}. \end{aligned}$$
  3. (iii)

    if \(\omega \in C^{2}({\mathbb {T}})\), the commutator \([P_{+}, \omega ]\) is trace-class.

Proof of Lemma 3.3

(i) is folklore. It can be proved by approximating the kernel k by trigonometric polynomials.

Let us prove (ii). First, recall two facts:

  1. (a)

    any \(\omega \in C^{2}({\mathbb {T}})\) is the uniform limit of the sequence

    $$\begin{aligned} \omega _{N}(v)=\sum _{\left| j\right| \le N-1}{\widehat{\omega }(j)v^{j}},\ \ \ v \in {\mathbb {T}}. \end{aligned}$$

    Thus for any N and \(v \in {\mathbb {T}}\), the Cauchy-Schwarz inequality together with Plancherel's identity give:

    $$\begin{aligned} \left| \omega (v)-\omega _{N}(v)\right| \le \Vert \omega ''\Vert _{2}\left( 2\sum _{j\ge N}j^{-4}\right) ^{1/2}\le C \Vert \omega ''\Vert _{2} N^{-3/2}. \end{aligned}$$
  2. (b)

    for \(A \in \mathfrak {S}_{\infty }\) one has \(s_{N}(A)=\inf \{\Vert A-B\Vert \ |\ rank(B)\le N-1\},\ N\ge 1,\) and, in particular, \(s_{1}(A)=\Vert A\Vert .\)
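For the reader's convenience, the estimate in (a) can be unfolded as follows: integrating by parts twice gives \(\widehat{\omega ''}(j)=-j^{2}\widehat{\omega }(j)\), so that

$$\begin{aligned} \left| \omega (v)-\omega _{N}(v)\right| \le \sum _{\left| j\right| \ge N}\left| \widehat{\omega }(j)\right| =\sum _{\left| j\right| \ge N}\frac{\left| \widehat{\omega ''}(j)\right| }{j^{2}}\le \left( \sum _{\left| j\right| \ge N}\left| \widehat{\omega ''}(j)\right| ^{2}\right) ^{1/2}\left( 2\sum _{j\ge N}j^{-4}\right) ^{1/2}, \end{aligned}$$

and the first factor equals \(\Vert \omega ''\Vert _{2}\) by Plancherel's identity.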

Putting these two facts together and noting that \(rank\left( H(\omega _{N})\right) \le N\), we have that for \(N\ge 2\):

$$\begin{aligned} s_{N}(H(\omega ))= & {} \inf \{\Vert H(\omega )-B\Vert \ |\ rank(B)\le N-1\}\\\le & {} \Vert H(\omega )-H(\omega _{N-1})\Vert \\\le & {} \Vert \omega -\omega _{N-1}\Vert _{\infty }\\\le & {} C\frac{\Vert \omega ''\Vert _{2}}{(N-1)^{3/2}}. \end{aligned}$$

Thus we see that \(H(\omega ) \in \mathfrak {S}_{1}\) and, furthermore,

$$\begin{aligned} \Vert H(\omega )\Vert _{\mathfrak {S}_{1}}=s_{1}(H(\omega ))+\sum _{n\ge 2}{s_{n}(H(\omega ))}\le \Vert \omega \Vert _{\infty }+C \Vert \omega ''\Vert _{2}. \end{aligned}$$

(iii). Write \(P_{-}=I-P_{+}\), where I is the identity operator. Since \(P_{+}\) is a projection and \(P_{+}P_{-}=P_{-}P_{+}=0\), one has

$$\begin{aligned}{}[P_{+}, \omega ]=[P_{+},(P_{+}+P_{-}) \omega ](P_{+}+P_{-})=P_{+}\omega P_{-}-P_{-}\omega P_{+}. \end{aligned}$$

Using the identity \(P_{-}=JP_{+}J-P_{+}JP_{+}\), it follows that

$$\begin{aligned}{}[P_{+}, \omega ]=H(\omega )J-JH(\overline{\omega })^{*}-P_{+}\omega P_{+}JP_{+}+P_{+}JP_{+}\omega P_{+}. \end{aligned}$$

Since \(P_{+}JP_{+}\) is a rank-one operator (the projection onto the constants), \([P_{+}, \omega ]\) is trace-class if and only if \(H(\omega )J-JH(\overline{\omega })^{*}\) is, and the latter is trace-class by (ii). \(\square \)
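The mechanism behind (ii) is easy to test numerically. The sketch below (in which the illustrative symbol \(\widehat{\omega }(j)=(j+1)^{-3}\), so that \(\omega '' \in L^{2}\), and all sizes are our own choices, not taken from the paper) checks the two ingredients of the proof: truncating the symbol at N Fourier coefficients yields a Hankel matrix of rank at most N, and by fact (b) the (N+1)-th singular value is dominated by the norm of the difference.

```python
import numpy as np

# Illustrative symbol: omega_hat(j) = (j+1)^(-3), so omega'' is in L^2(T).
M, N = 200, 8
a = np.array([(j + 1.0) ** -3 for j in range(2 * M)])

def hankel(coeffs, size):
    # finite section of the Hankel matrix Gamma_{jk} = coeffs[j + k]
    return np.array([[coeffs[j + k] for k in range(size)] for j in range(size)])

G = hankel(a, M)

# truncating the symbol at N Fourier coefficients gives rank <= N
a_trunc = a.copy()
a_trunc[N:] = 0.0
B = hankel(a_trunc, M)
assert np.linalg.matrix_rank(B) <= N

# min-max characterisation (fact (b)): s_{N+1}(G) <= ||G - B|| in operator norm
s = np.linalg.svd(G, compute_uv=False)
assert s[N] <= np.linalg.norm(G - B, 2) + 1e-12
```

Both assertions are instances of general linear-algebra facts, so they hold for any choice of the coefficient sequence.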

With these facts at hand, we are now ready to prove Lemma 3.1.

Proof of Lemma 3.1

(i): we will only show the first inequality, as the second can be proved in the same way. From the assumptions on \(\omega _{1}, \omega _{2}\), we can find \(\zeta _{1}, \zeta _{2}\ \in \ C^{\infty }({\mathbb {T}})\) such that \({{\,\mathrm{supp}\,}}\zeta _{1}\cap {{\,\mathrm{supp}\,}}\zeta _{2}=\varnothing \) and such that \((1-\zeta _{i})\omega _{i}\) vanishes identically in a neighbourhood of \({{\,\mathrm{sing\,supp}\,}}\omega _{i}\). We will repeatedly use the following two facts:

  1. (a)

    for any \(\varphi \in L^\infty ({\mathbb {T}})\), Young’s inequality holds, i.e. one has the estimate:

    $$\begin{aligned} \Vert P_{r}*\varphi \Vert _{\infty }\le \Vert \varphi \Vert _{\infty }; \end{aligned}$$
    (3.6)
  2. (b)

    one has that \(P_{r}*\omega \in C^{\infty }({\mathbb {T}})\) and furthermore \((P_{r}*\omega ) \rightarrow \omega \) as \(r\rightarrow 1-\) locally uniformly on \({\mathbb {T}}{\setminus } {{\,\mathrm{sing\,supp}\,}}{\omega }\). The same is true for its derivatives \((P_{r}*\omega )^{(n)}\).
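Both facts are easy to illustrate numerically. In the sketch below the convolution \(P_{r}*\omega \) is realised as the discrete Fourier multiplier \(r^{\left| j\right| }\); the step symbol, the grid size and the value of r are illustrative choices, not taken from the paper.

```python
import numpy as np

n = 4096
theta = 2 * np.pi * np.arange(n) / n
omega = np.where(theta < np.pi, 1.0, -1.0)  # jumps at theta = 0 and theta = pi

def poisson_smooth(f, r):
    # P_r * f realised as the Fourier multiplier r^{|j|} on the DFT of f
    j = np.abs(np.fft.fftfreq(len(f), d=1.0 / len(f)))
    return np.real(np.fft.ifft(np.fft.fft(f) * r ** j))

fr = poisson_smooth(omega, 0.99)

# (3.6): smoothing does not increase the sup-norm (up to roundoff)
assert np.abs(fr).max() <= np.abs(omega).max() + 1e-6

# convergence is locally uniform away from the jumps: on an arc well inside
# (0, pi) the error is already small for r = 0.99
far = np.abs(theta - np.pi / 2) < 1.0
assert np.abs(fr - omega)[far].max() < 0.1
```

Near the jumps, by contrast, the error stays of the order of the half-height of the jump for every \(r<1\), which is precisely why the singular support has to be handled separately in the proof.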

We set \(\widetilde{\zeta }_{i}=1-\zeta _{i},\, i=1,2\) and use the triangle inequality to obtain

$$\begin{aligned} \Vert H((\omega _{1})_{r})^{*}H((\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}\le & {} \Vert H(\widetilde{\zeta }_{1}(\omega _{1})_{r})^{*}H(\widetilde{\zeta }_{2}(\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}\\&+\Vert H(\widetilde{\zeta }_{1}(\omega _{1})_{r})^{*}H(\zeta _{2}(\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}\\&+\Vert H(\zeta _{1}(\omega _{1})_{r})^{*}H(\widetilde{\zeta }_{2}(\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}\\&+\Vert H(\zeta _{1}(\omega _{1})_{r})^{*}H(\zeta _{2}(\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}, \end{aligned}$$

from which we see that it is sufficient to find uniform bounds for each summand above.

Recall that \(H(\omega _{i})=P_{+}\omega _{i}JP_{+}\) and \(P_{+}\) is a projection, thus:

$$\begin{aligned} H(\zeta _{1}(\omega _{1})_{r})^{*}H(\zeta _{2}(\omega _{2})_{r})=P_{+}J\overline{\zeta _{1}(\omega _{1})_{r}}P_{+}\zeta _{2}(\omega _{2})_{r}JP_{+}. \end{aligned}$$

By our choice, the functions \(\zeta _{1}\) and \(\zeta _{2}\) have disjoint supports, and so the integral operator \(\overline{\zeta }_{1}P_{+}\zeta _{2}\) has a \(C^{\infty }({\mathbb {T}}^{2})\) integral kernel given by

$$\begin{aligned} k(z,v)=\frac{\overline{\zeta }_{1}(z)\zeta _{2}(v)}{v-z}v,\quad v,\, z \in {\mathbb {T}}. \end{aligned}$$

Thus Lemma 3.3-(i) shows that \(\overline{\zeta }_{1}P_{+}\zeta _{2} \in \mathfrak {S}_{1}\). Furthermore, using Hölder's inequality for the Schatten classes and (3.6), we deduce that

$$\begin{aligned} \sup _{r<1}\Vert H(\zeta _{1}(\omega _{1})_{r})^{*}H(\zeta _{2}(\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}\le \Vert \omega _{1}\Vert _{\infty }\Vert \omega _{2}\Vert _{\infty }\Vert \overline{\zeta }_{1}P_{+}\zeta _{2}\Vert _{\mathfrak {S}_{1}}<\infty . \end{aligned}$$

By Lemma 3.3-(ii), we also have that \(H(\widetilde{\zeta }_{1}(\omega _{1})_{r}) \in \mathfrak {S}_{1}\) and furthermore:

$$\begin{aligned}&\Vert H(\widetilde{\zeta }_{1}(\omega _{1})_{r})^{*}H(\zeta _{2}(\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}\le \Vert H(\widetilde{\zeta }_{1}(\omega _{1})_{r})\Vert _{\mathfrak {S}_{1}}\Vert H(\zeta _{2}(\omega _{2})_{r})\Vert \nonumber \\&\quad \le \Vert \zeta _{2}\Vert _{\infty }\Vert \omega _{2}\Vert _{\infty }(C\Vert (\widetilde{\zeta }_{1}(\omega _{1})_{r})''\Vert _{2}+\Vert \omega _{1}\Vert _{\infty }), \end{aligned}$$
(3.7)

for some \(C>0\) independent of r. In (3.7) we used once more the Hölder inequality for Schatten classes together with the estimates (3.3) and (3.6).

From (b) and the fact that \(\widetilde{\zeta }_{i}\omega _{i}\) vanishes identically in a neighbourhood of \({{\,\mathrm{sing\,supp}\,}}{\omega _{i}}\), we conclude that \((\widetilde{\zeta }_{i}(\omega _{i})_{r})''\rightarrow (\widetilde{\zeta }_{i}\omega _{i})''\) uniformly on the whole of \({\mathbb {T}}\), and so

$$\begin{aligned} \sup _{r<1}\Vert (\widetilde{\zeta }_{i}(\omega _{i})_{r})''\Vert _{2}<\infty . \end{aligned}$$
(3.8)

Using (3.8) in (3.7) finally gives

$$\begin{aligned} \sup _{r<1}{\Vert H(\widetilde{\zeta }_{1}(\omega _{1})_{r})^{*}H(\zeta _{2}(\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}}<\infty . \end{aligned}$$

Similarly, one can show that

$$\begin{aligned}&\sup _{r<1}{\Vert H(\widetilde{\zeta _{1}}(\omega _{1})_{r})^{*}H(\widetilde{\zeta }_{2}(\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}}<\infty ,\\&\sup _{r<1}{\Vert H(\zeta _{1}(\omega _{1})_{r})^{*}H(\widetilde{\zeta }_{2}(\omega _{2})_{r})\Vert _{\mathfrak {S}_{1}}}<\infty . \end{aligned}$$

(ii) Since \(\pm 1 \notin {{\,\mathrm{sing\,supp}\,}}{\omega }\), we can write \(\omega =\varphi +\eta \) for some \(\eta \in C^{\infty }({\mathbb {T}})\) and some \(\varphi \) vanishing identically in a neighbourhood U of \(\pm 1\). With this decomposition of \(\omega \), we see that

$$\begin{aligned} H(\omega _{r})=H(\varphi _{r})+H(\eta _{r}). \end{aligned}$$

Since \(\eta \) is smooth, \(H(\eta ) \in \mathfrak {S}_{1}\), and so the triangle inequality and Hölder's inequality for Schatten classes imply that

$$\begin{aligned} \sup _{r<1}\Vert \mathfrak {s}H(\omega _{ r})+H(\omega _{r})\mathfrak {s}\Vert _{\mathfrak {S}_{1}}\le 2 \Vert H(\eta )\Vert _{\mathfrak {S}_{1}}+\sup _{r<1}\Vert \mathfrak {s}H(\varphi _{ r})+H(\varphi _{r})\mathfrak {s}\Vert _{\mathfrak {S}_{1}}. \end{aligned}$$

So it is sufficient to consider symbols \(\omega \) vanishing on a neighbourhood U of \(\pm 1\).

Fix a smooth function \(\zeta \) with \(0\le \zeta \le 1\) such that \(\zeta \) vanishes identically on some open set \(V\subset U\) containing \(\pm 1\), \(\zeta \equiv 1\) on \({\mathbb {T}}\backslash U\), and \(\zeta (v)=\zeta (\overline{v})\) for \(v\in {\mathbb {T}}\). We can write:

$$\begin{aligned} \mathfrak {s}H(\omega _r)+H(\omega _r)\mathfrak {s}= & {} \mathfrak {s}H((1-\zeta )\omega _r)+H((1-\zeta )\omega _r)\mathfrak {s}\nonumber \\&+\mathfrak {s}H(\zeta \omega _r)+H(\zeta \omega _r)\mathfrak {s}\end{aligned}$$
(3.9)

Let us study these operators more closely. Using the triangle inequality, we obtain that

$$\begin{aligned} \sup _{r<1}\Vert \mathfrak {s}H((1-\zeta )\omega _r)+H((1-\zeta )\omega _r)\mathfrak {s}\Vert _{\mathfrak {S}_{1}}\le 2\sup _{r<1}\Vert H((1-\zeta )\omega _r)\Vert _{\mathfrak {S}_{1}}. \end{aligned}$$
(3.10)

Using (b) and the fact that \((1-\zeta )\omega \equiv 0\) on \({\mathbb {T}}\), we conclude that \(((1-\zeta )\omega _{r})''\rightarrow 0\) uniformly on \({\mathbb {T}}\), and so Lemma 3.3-(ii) gives

$$\begin{aligned} \sup _{r<1}\Vert H((1-\zeta )\omega _{r})\Vert _{\mathfrak {S}_{1}}\le \sup _{r<1}(\Vert \omega \Vert _{\infty }+C\Vert ((1-\zeta )\omega _{r})''\Vert _{2})<\infty . \end{aligned}$$
(3.11)

For the operators appearing in the second line of (3.9), write

$$\begin{aligned} \mathfrak {s}H(\zeta \omega _r)+H(\zeta \omega _r)\mathfrak {s}=\left( \left[ \mathfrak {s}, P_{+}\right] \zeta \right) \omega _{r}JP_{+}+P_{+}\omega _{r}J\left( \zeta \left[ P_{+}, \mathfrak {s}\right] \right) . \end{aligned}$$

Let us now prove that the commutators \(\left[ \mathfrak {s}, P_{+}\right] \zeta ,\, \zeta \left[ \mathfrak {s}, P_{+}\right] \in \mathfrak {S}_{1}\). By our choice of \(\mathfrak {s}\) and \(\zeta \), we have \(J\mathfrak {s}=-\mathfrak {s}J\) and \(J\zeta =\zeta J\), whence it follows that

$$\begin{aligned} \left[ \mathfrak {s}, P_{+}\right] \zeta= & {} \mathfrak {s}P_{+}\zeta -\mathfrak {s}\zeta P_{+}+\mathfrak {s}\zeta P_{+}-P_{+}\mathfrak {s}\zeta =\mathfrak {s}\left[ P_{+}, \zeta \right] +\left[ \mathfrak {s}\zeta , P_{+}\right] ,\\ \zeta \left[ \mathfrak {s}, P_{+}\right]= & {} \zeta \mathfrak {s}P_{+}-P_{+}\mathfrak {s}\zeta +P_{+}\mathfrak {s}\zeta -\zeta P_{+}\mathfrak {s}=\left[ \mathfrak {s}\zeta , P_{+}\right] +\left[ P_{+}, \zeta \right] \mathfrak {s}. \end{aligned}$$
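These two identities are essentially algebraic: the first holds for arbitrary operators in place of \(\mathfrak {s}\), \(P_{+}\) and \(\zeta \), while the second only uses that the multiplication operators \(\mathfrak {s}\) and \(\zeta \) commute. A quick sanity check with random matrices (diagonal matrices standing in for the two commuting multiplication operators):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
P = rng.standard_normal((n, n))        # arbitrary operator in place of P_+
s = np.diag(rng.standard_normal(n))    # multiplication operators: diagonal,
z = np.diag(rng.standard_normal(n))    # hence s and z commute

def comm(A, B):
    return A @ B - B @ A

# [s, P] z = s [P, z] + [s z, P]   (holds for arbitrary P, s, z)
assert np.allclose(comm(s, P) @ z, s @ comm(P, z) + comm(s @ z, P))

# z [s, P] = [s z, P] + [P, z] s   (uses only s z = z s)
assert np.allclose(z @ comm(s, P), comm(s @ z, P) + comm(P, z) @ s)
```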

Furthermore, our choice of \(\zeta \) gives that the product \(\mathfrak {s}\zeta \in C^{\infty }({\mathbb {T}})\), and Lemma 3.3-(iii) together with (3.6) implies that

$$\begin{aligned} \sup _{r<1}\Vert \mathfrak {s}H(\zeta \omega _r)+H(\zeta \omega _r)\mathfrak {s}\Vert _{\mathfrak {S}_{1}}\le \Vert \omega \Vert _{\infty }(\Vert \left[ \mathfrak {s}, P_{+}\right] \zeta \Vert _{\mathfrak {S}_{1}}+\Vert \zeta \left[ P_{+}, \mathfrak {s}\right] \Vert _{\mathfrak {S}_{1}})<\infty . \end{aligned}$$
(3.12)

Putting together (3.10), (3.11) and (3.12) and using the triangle inequality on (3.9) gives the assertion. \(\square \)

3.3 Spectral Density of Our Model Operator: The Hilbert Matrix

An important ingredient in the proofs of all our results is a model operator whose spectral density can be computed explicitly. Following the ideas of previous works [16, 18], a natural candidate is the Hilbert matrix, given by the symbol \(\gamma \) defined in (1.4). Putting together the result of Widom, see [23, Theorem 5.1], and the Invariance Principle, Theorem 2.5, we obtain

Proposition 3.4

Let \(\tau \) satisfy assumptions (B) and (C). Then one has

$$\begin{aligned} \overline{\mathsf {LD}}_{\tau }(t;\varGamma (\widehat{\gamma }))=\underline{\mathsf {LD}}_{\tau }(t;\varGamma (\widehat{\gamma }))=\mathsf {c}(t),\ \ \ t>0, \end{aligned}$$

where \(\mathsf {c}\) has been defined in (1.10). If \(\tau (x,y)=\overline{\tau (y,x)}\), then we also have

$$\begin{aligned}&\overline{\mathsf {LD}}^{+}_{\tau }(t;\varGamma (\widehat{\gamma }))=\underline{\mathsf {LD}}^{+}_{\tau }(t;\varGamma (\widehat{\gamma }))=\mathsf {c}(t),\\&\overline{\mathsf {LD}}^{-}_{\tau }(t;\varGamma (\widehat{\gamma }))=\underline{\mathsf {LD}}^{-}_{\tau }(t;\varGamma (\widehat{\gamma }))=0. \end{aligned}$$

As an immediate consequence of the above, we obtain

Corollary 3.5

Let \(z \in {\mathbb {T}}\) be fixed and let \(\gamma _{z}(v)=-i\gamma (\overline{z}v)\). Then the conclusions of Proposition 3.4 also hold for the operator \(\varGamma (\widehat{\gamma }_{z})\).

Proof of Proposition 3.4

The Invariance Principle, Theorem 2.5, shows that it is sufficient to prove the statement for the multiplier \(\tau _{\square }(x, y)=\mathbb {1}_{\square }(x,y)\) defined in (1.20). This has already been done in [23, Theorem 5.1], and it has already been discussed in the Introduction, see (1.10). Since the Hilbert matrix is a positive-definite operator, it is easy to see that \(\tau _{N}\star \varGamma (\widehat{\gamma })\) is positive-definite, and so

$$\begin{aligned} \overline{\mathsf {LD}}_{\tau }(t;\varGamma (\widehat{\gamma }))=\overline{\mathsf {LD}}^{+}_{\tau }(t;\varGamma (\widehat{\gamma })), \quad \overline{\mathsf {LD}}^{-}_{\tau }(t;\varGamma (\widehat{\gamma }))=0. \end{aligned}$$

The statement can also be proved independently, using the function \(\tau _{1}(x,y)=e^{-(x+y)}\) discussed in the Introduction; however, we postpone this to the Appendix. \(\square \)
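The two properties of the Hilbert matrix used above, positivity and boundedness, are easy to observe on finite sections. The sketch below uses the classical normalisation with entries \(1/(j+k+1)\) (the symbol \(\gamma \) in (1.4) may carry an extra constant factor); the bound \(\Vert \varGamma \Vert <\pi \) is the classical Hilbert inequality.

```python
import numpy as np

# finite sections of the classical Hilbert matrix with entries 1/(j+k+1)
def hilbert_section(N):
    j = np.arange(N)
    return 1.0 / (j[:, None] + j[None, :] + 1.0)

tops = []
for N in (50, 200, 800):
    ev = np.linalg.eigvalsh(hilbert_section(N))
    assert ev.min() > -1e-10     # sections are positive definite (up to roundoff)
    assert ev.max() < np.pi      # Hilbert's inequality: operator norm < pi
    tops.append(ev.max())

# the top eigenvalue of the sections increases towards pi as N grows
assert tops[0] < tops[1] < tops[2] < np.pi
```

The slow approach of the top eigenvalue to \(\pi \) reflects the logarithmic accumulation of eigenvalues quantified by \(\mathsf {c}\).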

Proof of Corollary 3.5

Indeed, note that \(\widehat{\gamma }_{z}(j)=-i\,\overline{z}^{j}\,\widehat{\gamma }(j),\, j\ge 0\). Hence, for any function \(\tau \) one has:

$$\begin{aligned} \tau _{N}\star \varGamma (\widehat{\gamma }_{z})=-iU_{\overline{z}}(\tau _{N}\star \varGamma (\widehat{\gamma }))U_{\overline{z}} \end{aligned}$$

where \(U_{\overline{z}}\) is the unitary operator of multiplication by the sequence \(\{\overline{z}^{j}\}_{ j\ge 0}\), \(z \in {\mathbb {T}}\), acting on \(\ell ^{2}(\mathbb {Z_{+}})\). From this, we immediately see that

$$\begin{aligned} s_{n}(\tau _{N}\star \varGamma (\widehat{\gamma }_{z}))=s_{n}(\tau _{N}\star \varGamma (\widehat{\gamma })),\quad \forall \,n\ge 1 \end{aligned}$$

and so the statement follows immediately from Proposition 3.4. \(\square \)
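The unitary-equivalence argument above can be checked on finite sections; in the sketch below the normalisation \(\widehat{\gamma }(j)=1/(\pi (j+1))\) and the point z are illustrative assumptions.

```python
import numpy as np

# finite section standing in for tau_N * Gamma(gamma_hat)
N = 60
j = np.arange(N)
A = 1.0 / (np.pi * (j[:, None] + j[None, :] + 1.0))

z = np.exp(1j * 0.7)                 # an arbitrary point on the unit circle
U = np.diag(np.conj(z) ** j)         # unitary multiplication by {zbar^j}
B = -1j * U @ A @ U                  # matches the relation for Gamma(gamma_z_hat)

# singular values are invariant under multiplication by unitaries and by -i
s_A = np.linalg.svd(A, compute_uv=False)
s_B = np.linalg.svd(B, compute_uv=False)
assert np.allclose(s_A, s_B)
```

Note that the relation involves \(U_{\overline{z}}\) on both sides, not \(U_{\overline{z}}\) and its adjoint; singular values are nevertheless preserved, since \(s_{n}(UAV)=s_{n}(A)\) for any unitaries U, V.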

4 Proof of Theorem 1.1

The proof of the result will be broken into two steps. For brevity, we denote by \(\varGamma ^{(N)}(\widehat{\omega })\) the operator \(\tau _{N}\star \varGamma (\widehat{\omega })\). We also recall that \(\Omega \) is the set of jump discontinuities of the symbol \(\omega \) and \(\mathsf {c}\) is the function in (1.10).

Step 1. Finitely many jumps. Suppose that \(\Omega \) is finite. Setting \(\gamma _{z}(v)=-i\gamma (\overline{z}v)\), with \(\gamma \) being the symbol defined in (1.4), write

$$\begin{aligned} \omega (v)=\sum _{z \in \Omega }{\varkappa _{z}(\omega )\gamma _{z}(v)}+\eta (v) \end{aligned}$$
(4.1)

where \(\eta \) is continuous on \({\mathbb {T}}\) and let \(\Phi \) denote the symbol

$$\begin{aligned} \Phi (v)=\sum _{z \in \Omega }{\varkappa _{z}(\omega )\gamma _{z}(v)}. \end{aligned}$$

Weyl’s inequality (2.2) shows that for \(0<s<t\) one has

$$\begin{aligned} \mathsf {n}(t+s;\varGamma ^{(N)}(\widehat{\Phi }))-\mathsf {n}(s;\varGamma ^{(N)}(\widehat{\eta }))\le & {} \mathsf {n}(t;\varGamma ^{(N)}(\widehat{\omega }))\\ \mathsf {n}(t;\varGamma ^{(N)}(\widehat{\omega }))\le & {} \mathsf {n}(t-s;\varGamma ^{(N)}(\widehat{\Phi }))+\mathsf {n}(s;\varGamma ^{(N)}(\widehat{\eta })). \end{aligned}$$

Since \(\varGamma (\widehat{\eta })\) is compact, Lemma 2.4 implies \(\mathsf {n}(s;\varGamma ^{(N)}(\widehat{\eta }))=O_{s}(1)\) as \(N\rightarrow \infty \) and so, using the definition of the functionals \(\underline{\mathsf {LD}}_{\tau }, \overline{\mathsf {LD}}_{\tau }\) we deduce that for any \(t>0\)

$$\begin{aligned} \overline{\mathsf {LD}}_{\tau }(t; \varGamma (\widehat{\omega }))\le & {} \overline{\mathsf {LD}}_{\tau }(t-0; \varGamma (\widehat{\Phi })), \end{aligned}$$
(4.2)
$$\begin{aligned} \underline{\mathsf {LD}}_{\tau }(t; \varGamma (\widehat{\omega }))\ge & {} \underline{\mathsf {LD}}_{\tau }(t+0; \varGamma (\widehat{\Phi })). \end{aligned}$$
(4.3)

Integration by parts shows that

$$\begin{aligned} \widehat{\Phi }(j)= & {} \sum _{z \in \Omega }{\varkappa _{z}(\omega )\widehat{\gamma }_{z}(j)}\nonumber \\= & {} \frac{-i}{\pi (j+1)}\sum _{z \in \Omega }{\varkappa _{z}(\omega )\overline{z}^{j}}=O\left( \frac{1}{j+1}\right) ,\ \ \ j\rightarrow \infty \end{aligned}$$
(4.4)

and so by the Invariance Principle 2.5 applied to the operator \(\varGamma (\widehat{\Phi })\), we obtain

$$\begin{aligned}&\overline{\mathsf {LD}}_{\tau }(t; \varGamma (\widehat{\Phi }))\le \overline{\mathsf {LD}}_{\tau _{1}}(t-0; \varGamma (\widehat{\Phi })), \end{aligned}$$
(4.5)
$$\begin{aligned}&\underline{\mathsf {LD}}_{\tau }(t; \varGamma (\widehat{\Phi }))\ge \underline{\mathsf {LD}}_{\tau _{1}}(t+0; \varGamma (\widehat{\Phi })). \end{aligned}$$
(4.6)

where \(\tau _{1}(x,y)=e^{-(x+y)}\) induces the regularisation \(\varGamma _{N}(\widehat{\omega })=(\tau _{1})_{N}\star \varGamma (\widehat{\omega })\), as in (1.13). The Fourier transform \(\mathcal {F}\) on \(L^{2}({\mathbb {T}})\), defined as

$$\begin{aligned} (\mathcal {F}f)(j)=\int _{{\mathbb {T}}}f(z)\overline{z}^{j}d\varvec{m}(z),\quad j\ge 0, \end{aligned}$$

gives (see (3.5)) that, modulo kernels,

$$\begin{aligned} \varGamma _{N}(\widehat{\Phi })=\sum _{z\in \Omega }\varkappa _{z}(\omega )\varGamma _{N}(\widehat{\gamma }_{z})=\sum _{z \in \Omega }{\varkappa _{z}(\omega )\mathcal {F}H((\gamma _{z})_{N})\mathcal {F}^{*}}, \end{aligned}$$

where \((\gamma _{z})_{N}=P_{r}*\gamma _{z}\), with \(P_{r}\) being the Poisson kernel with \(r=e^{-1/N}\). Thus by Lemma 3.1-(i) and the unitary equivalence above, we infer that whenever \(z\ne w\)

$$\begin{aligned} \sup _{N\ge 1}\Vert \varGamma _{N}(\widehat{\gamma }_{z})^{*}\varGamma _{N}(\widehat{\gamma }_{w})\Vert _{\mathfrak {S}_{1}}=\sup _{N\ge 1}\Vert H((\gamma _{z})_{N})^{*}H((\gamma _{w})_{N})\Vert _{\mathfrak {S}_{1}}<\infty . \end{aligned}$$

Using Theorem 2.7, it then follows that for \(t>0\)

$$\begin{aligned}&\overline{\mathsf {LD}}_{\tau _{1}}(t; \varGamma (\widehat{\Phi }))\le \sum _{z \in \Omega }\overline{\mathsf {LD}}_{\tau _{1}}(t-0; \varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})), \end{aligned}$$
(4.7)
$$\begin{aligned}&\underline{\mathsf {LD}}_{\tau _{1}}(t; \varGamma (\widehat{\Phi }))\ge \sum _{z\in \Omega }\underline{\mathsf {LD}}_{\tau _{1}}(t+0; \varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})). \end{aligned}$$
(4.8)

Finally, Corollary 3.5 together with (4.2), (4.3), (4.5), (4.6), (4.7) and (4.8) and the continuity of \(\mathsf {c}\) at \(t\ne 0\) gives that

$$\begin{aligned}&\overline{\mathsf {LD}}_{\tau }(t;\varGamma (\widehat{\omega }))\le \sum _{z \in \Omega }\mathsf {c}\left( \frac{t}{\left| \varkappa _{z}(\omega )\right| }\right) ,\\&\underline{\mathsf {LD}}_{\tau }(t;\varGamma (\widehat{\omega }))\ge \sum _{z\in \Omega }\mathsf {c}\left( \frac{t}{\left| \varkappa _{z}(\omega )\right| }\right) . \end{aligned}$$

The obvious inequality \(\underline{\mathsf {LD}}_{\tau }(t;H(\omega ))\le \overline{\mathsf {LD}}_{\tau }(t;H(\omega ))\) proves the assertion.

Remark 4.1

We note that (4.7) and (4.8) hold for any symbol \(\omega \) which is smooth except for a finite set of jump discontinuities. Together, they are yet another instance of the Localisation principle we referred to in the Introduction.

Step 2. From finitely many to infinitely many jumps. Suppose now that \(\Omega \) is infinite. Define the sets:

$$\begin{aligned} \Omega _{0}= & {} \{z \in {\mathbb {T}}\ |\ \left| \varkappa _{z}(\omega )\right| \ge 2^{-1} \},\\ \Omega _{n}= & {} \{z \in {\mathbb {T}}\ |\ 2^{-n-1}\le \left| \varkappa _{z}(\omega )\right| < 2^{-n} \},\ \ \ n\ge 1. \end{aligned}$$

As we mentioned earlier, these are finite. Let \(\varphi _{n}\) be functions such that \({{\,\mathrm{sing\,supp}\,}}\varphi _{n}=\Omega _{n}\), \(\varkappa _{z}(\varphi _{n})=\varkappa _{z}(\omega )\) for any \(z \in \Omega _{n}\) and such that

$$\begin{aligned} \Vert \varphi _{n}\Vert _{\infty }=\max _{z \in \Omega _{n}}\left| \varkappa _{z}(\omega )\right| . \end{aligned}$$

Let \(\Phi =\sum _{n\ge 0}{\varphi _{n}} \in L^{\infty }({\mathbb {T}})\). Since \(\omega - \Phi \in C({\mathbb {T}})\), the operator \(\varGamma (\widehat{\omega })-\varGamma (\widehat{\Phi }) \in \mathfrak {S}_{\infty }\) and so, by Lemma 2.4 once again we obtain

$$\begin{aligned} \overline{\mathsf {LD}}_{\tau }(t; \varGamma (\widehat{\omega }))\le & {} \overline{\mathsf {LD}}_{\tau }(t-0; \varGamma (\widehat{\Phi })),\\ \underline{\mathsf {LD}}_{\tau }(t; \varGamma (\widehat{\omega }))\ge & {} \underline{\mathsf {LD}}_{\tau }(t+0; \varGamma (\widehat{\Phi })). \end{aligned}$$

For a fixed \(s>0\), let M be so that \(\Vert \Phi -\Phi _{M}\Vert _{\infty }< s\), where \(\Phi _{M}=\sum _{n=0}^{M}{\varphi _{n}}\). The uniform boundedness of \(\tau \) then gives

$$\begin{aligned}&\Vert \tau _{N}\star (\varGamma (\widehat{\Phi })-\varGamma (\widehat{\Phi }_{M}))\Vert \le \left( \sup _{N\ge 1}\Vert \tau _{N}\Vert _{\mathfrak {M}}\right) \Vert \Phi -\Phi _{M}\Vert _{\infty }\\&\quad < \left( \sup _{N\ge 1}\Vert \tau _{N}\Vert _{\mathfrak {M}}\right) s:=s'. \end{aligned}$$

Letting \(\widetilde{\Omega }_{M}=\cup _{n=0}^{M}\Omega _{n}\), we then obtain that:

$$\begin{aligned} \overline{\mathsf {LD}}_{\tau }(t; H(\omega ))\le \overline{\mathsf {LD}}_{\tau }(t-s'; H(\Phi _{M}))= & {} \sum _{z\in \widetilde{\Omega }_{M}}\mathsf {c}\left( \frac{t-s'}{\left| \varkappa _{z}(\omega )\right| }\right) ,\\ \underline{\mathsf {LD}}_{\tau }(t; H(\omega ))\ge \underline{\mathsf {LD}}_{\tau }(t+s'; H(\Phi _{M}))= & {} \sum _{z\in \widetilde{\Omega }_{M}}\mathsf {c}\left( \frac{t+s'}{\left| \varkappa _{z}(\omega )\right| }\right) . \end{aligned}$$

The equalities above follow from Step 1, since \(\Phi _{M}\) has finitely many jumps. Finally, sending \(s \rightarrow 0\) and noting that there are only finitely many \(z \in \Omega \) such that \(t\le \left| \varkappa _{z}(\omega )\right| \), one obtains

$$\begin{aligned} \underline{\mathsf {LD}}_{\tau }(t; H(\omega ))=\overline{\mathsf {LD}}_{\tau }(t; H(\omega )). \end{aligned}$$

5 Proof of Theorem 1.2

Just as in the proof of Theorem 1.1, we break the argument into two steps, and use the same notation as before for the operator \(\tau _{N}\star \varGamma (\widehat{\omega })\) and for the symbols \(\gamma _{z}\). We also set \(\Omega ^{+}=\{z \in \Omega \ \vert \ {{\,\mathrm{Im}\,}}z>0\}\).

Step 1. Finitely many jumps. Just as before, suppose that the symbol \(\omega \) has finitely many jump discontinuities. Let us write the symbol \(\omega \) as

$$\begin{aligned} \omega =\left( \varkappa _{1}(\omega )\gamma _{1}+\varkappa _{-1}(\omega )\gamma _{-1}+\sum _{z \in \Omega ^{+}}{\varkappa _{z}(\omega )\gamma _{z}+\overline{\varkappa _{z}(\omega )}\gamma _{\overline{z}}}\right) +\eta , \end{aligned}$$
(5.1)

where \(\eta \) is continuous on \({\mathbb {T}}\). If \(\omega \) has no jump at \(\pm 1\), the corresponding quantities do not appear in the above. Denoting by \(\Phi \) the sum in brackets, Weyl's inequality (2.3) gives for \(0<s<t\)

$$\begin{aligned} \mathsf {n}_{\pm }(t+s;\varGamma ^{(N)}(\widehat{\Phi }))-\mathsf {n}(s;\varGamma ^{(N)}(\widehat{\eta }))\le & {} \mathsf {n}_{\pm }(t;\varGamma ^{(N)}(\widehat{\omega })),\\ \mathsf {n}_{\pm }(t;\varGamma ^{(N)}(\widehat{\omega }))\le & {} \mathsf {n}_{\pm }(t-s;\varGamma ^{(N)}(\widehat{\Phi }))+\mathsf {n}_{\pm }(s;\varGamma ^{(N)}(\widehat{\eta })). \end{aligned}$$

By Lemma 2.4, we obtain that \(\mathsf {n}_{\pm }(s;\varGamma ^{(N)}(\widehat{\eta }))=O_{s}(1)\), and so, for any \(t>0\), it follows that

$$\begin{aligned} \overline{\mathsf {LD}}_{\tau }^{\pm }(t; \varGamma (\widehat{\omega }))\le & {} \overline{\mathsf {LD}}_{\tau }^{\pm }(t-0; \varGamma (\widehat{\Phi })), \\ \underline{\mathsf {LD}}_{\tau }^{\pm }(t; \varGamma (\widehat{\omega }))\ge & {} \underline{\mathsf {LD}}_{\tau }^{\pm }(t+0; \varGamma (\widehat{\Phi })). \end{aligned}$$

Integration by parts once again shows that

$$\begin{aligned} \widehat{\Phi }(j)=O\left( \frac{1}{j+1}\right) ,\quad j\rightarrow \infty . \end{aligned}$$

Applying Theorem 2.5 to \(\varGamma (\widehat{\Phi })\), we see that it is sufficient to prove the result for the multiplier \(\tau _{1}(x,y)=e^{-(x+y)}.\) Since the symbols

$$\begin{aligned} \varkappa _{1}(\omega )\gamma _{1},\ \varkappa _{-1}(\omega )\gamma _{-1},\ \varkappa _{z}(\omega )\gamma _{z}+\overline{\varkappa _{z}(\omega )}\gamma _{\overline{z}} \end{aligned}$$

have mutually disjoint singular supports for \(z \in \Omega ^{+}\), Lemma 3.1-(ii) and Theorem 2.7 imply that for \(t>0\)

$$\begin{aligned} \overline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t; \varGamma (\widehat{\Phi }))\le & {} \overline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t-0; \varkappa _{1}(\omega )\varGamma (\widehat{\gamma }_{1}))+ \overline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t-0; \varkappa _{-1}(\omega )\varGamma (\widehat{\gamma }_{-1}))\nonumber \\&+\sum _{z \in \Omega ^{+}}\overline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t-0; \varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})+\overline{\varkappa _{z}(\omega )}\varGamma (\widehat{\gamma }_{\overline{z}})) \end{aligned}$$
(5.2)
$$\begin{aligned} \underline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t; \varGamma (\widehat{\Phi }))\ge & {} \underline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t+0; \varkappa _{1}(\omega )\varGamma (\widehat{\gamma }_{1}))+ \underline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t+0; \varkappa _{-1}(\omega )\varGamma (\widehat{\gamma }_{-1})) \nonumber \\&+\sum _{z \in \Omega ^{+}}\underline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t+0; \varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})+\overline{\varkappa _{z}(\omega )}\varGamma (\widehat{\gamma }_{\overline{z}})). \end{aligned}$$
(5.3)

The operators \(\varkappa _{\pm 1}(\omega )\varGamma (\widehat{\gamma }_{\pm 1})\) are sign definite, and furthermore one has that

$$\begin{aligned} \varkappa _{\pm 1}(\omega )\varGamma (\widehat{\gamma }_{\pm 1})\ge 0\ (\text {resp.} \le 0)\ \text {if} \ -i \varkappa _{\pm 1}(\omega )\ge 0\ (\text {resp.} \le 0). \end{aligned}$$

In either case, Proposition 3.4 gives that

$$\begin{aligned} \overline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t;\varkappa _{\pm 1}(\omega )\varGamma (\widehat{\gamma }_{\pm 1}))= & {} \mathbb {1}_{\pm }(-i \varkappa _{\pm 1}(\omega ))\overline{\mathsf {LD}}_{\tau _{1}}(t;\varkappa _{\pm 1}(\omega )\varGamma (\widehat{\gamma }_{\pm 1}))\nonumber \\= & {} \mathbb {1}_{\pm }(-i \varkappa _{\pm 1}(\omega ))\mathsf {c}\left( t\left| \varkappa _{\pm 1}(\omega )\right| ^{-1}\right) \end{aligned}$$
(5.4)

where \(\mathbb {1}_{\pm }\) is the indicator function of \({\mathbb {R}}_{\pm }=(0, \pm \infty )\).

From Lemma 3.1-(ii), Theorems 2.8 and  1.1 above, we get that for any \(z \in \Omega ^{+}\)

$$\begin{aligned} \overline{\mathsf {LD}}^{\pm }_{\tau _{1}}\left( t; \varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})+\overline{\varkappa _{z}(\omega )}\varGamma (\widehat{\gamma }_{\overline{z}})\right)= & {} \frac{1}{2}\overline{\mathsf {LD}}_{\tau _{1}}\left( t; \varkappa _{z}(\omega )\varGamma (\widehat{\gamma }_{z})+\overline{\varkappa _{z}(\omega )}\varGamma (\widehat{\gamma }_{\overline{z}})\right) \nonumber \\= & {} \mathsf {c}\left( t\left| \varkappa _{z}(\omega )\right| ^{-1}\right) . \end{aligned}$$
(5.5)

Using (5.4) and (5.5) in (5.2) and (5.3), the continuity of \(\mathsf {c}\) at \(t\ne 0\) gives that

$$\begin{aligned} \underline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t; \varGamma (\widehat{\Phi }))=\overline{\mathsf {LD}}^{\pm }_{\tau _{1}}(t; \varGamma (\widehat{\Phi })) \end{aligned}$$

and so we arrive at (1.7).

Remark 5.1

As we mentioned in the Introduction, if the symbol has a pair of complex conjugate jumps, then (5.5) shows that the upper and lower logarithmic spectral densities of \(\varGamma (\widehat{\omega })\) contribute equally to the logarithmic spectral density of \(\left| \varGamma (\widehat{\omega })\right| \). This is an effect of the Symmetry Principle we referred to in the Introduction.

Step 2. From finitely many to infinitely many jumps. For fixed \(s > 0\), define the set

$$\begin{aligned} \Omega _{s}^{+}=\{z \in \Omega \ \vert \ \left| \varkappa _{z}(\omega )\right|>s \text { and } {{\,\mathrm{Im}\,}}z> 0\}. \end{aligned}$$

Just as in Step 2. in the proof of Theorem 1.1, we can find a symbol \(\omega _{s} \in PC({\mathbb {T}})\) so that \(\Vert \omega -\omega _{s}\Vert _{\infty }< s\), the set of its discontinuities is precisely \(\Omega _{s}^{+}\cup \{\pm 1\}\) and

$$\begin{aligned} \varkappa _{z}(\omega )=\varkappa _{z}(\omega _{s}),\quad \forall z \in \Omega _{s}^{+}\cup \{\pm 1\}. \end{aligned}$$

The set \(\Omega _{s}^{+}\cup \{\pm 1\}\) is finite, thus from Weyl's inequality (2.3) and Step 1 we obtain

$$\begin{aligned} \overline{\mathsf {LD}}^{\pm }_{\tau }(t; \varGamma (\widehat{\omega }))\le & {} \overline{\mathsf {LD}}^{\pm }_{\tau }(t-s'; \varGamma (\widehat{\omega }_{s}))\\ \underline{\mathsf {LD}}^{\pm }_{\tau }(t; \varGamma (\widehat{\omega }))\ge & {} \underline{\mathsf {LD}}^{\pm }_{\tau }(t+s'; \varGamma (\widehat{\omega }_{s})), \end{aligned}$$

where \(s'=\left( \sup _{N\ge 1}\Vert \tau _{N}\Vert _\mathfrak {M}\right) s.\) Finally, sending \(s \rightarrow 0\) and using the continuity of \(\mathsf {c}\) at \(t\ne 0\) establishes the result in its generality. \(\square \)

Proof of Proposition 1.3

The same reasoning as in Step 1 of both proofs above applies in this case, with only one minor change. Since we assume that \(\tau \) induces a uniformly bounded multiplier on \(\mathfrak {S}_{p}\), \(p>1\), i.e. that (1.25) holds, in (4.1) and (5.1) we need to assume that \(\eta \) is a symbol such that \(\varGamma (\widehat{\eta }) \in \mathfrak {S}_{p}\). Then Lemma 2.4 shows that \(\mathsf {n}(s;\varGamma ^{(N)}(\widehat{\eta }))=O_{s}(1)\) and, in the self-adjoint case, \(\mathsf {n}_{\pm }(s;\varGamma ^{(N)}(\widehat{\eta }))=O_{s}(1)\). The rest follows immediately. \(\square \)

Proof of Proposition 1.4

Exactly the same reasoning as in the proofs of Theorems 1.1 and 1.2 above applies in this case, the only difference being that now \(\tau \) no longer induces a uniformly bounded multiplier on the whole space of bounded operators, but only on Hankel matrices. However, all of the operators appearing in the arguments just presented are bounded Hankel operators, and so the same arguments apply. \(\square \)