The spectral density of Hankel operators with piecewise continuous symbols

In 1966, H. Widom proved an asymptotic formula for the distribution of the eigenvalues of the $N\times N$ truncated Hilbert matrix for large values of $N$. In this paper, we extend this formula to Hankel matrices with symbols in the class of piecewise continuous functions on the unit circle. Furthermore, we show that the distribution of the eigenvalues is independent of the choice of truncation (e.g. square or triangular truncation).

1. Introduction

1.1. General setting and first results. Given an (essentially) bounded function ω, called a symbol, on the unit circle T = {v ∈ C : |v| = 1}, the associated Hankel matrix, Γ(ω̂), is the (bounded) operator on ℓ²(Z₊), Z₊ = {0, 1, 2, …}, whose "matrix" entries are

Γ(ω̂)_{j,k} = ω̂(j + k), j, k ∈ Z₊,

where ω̂ denotes the sequence of Fourier coefficients of ω,

ω̂(n) = ∫_T ω(v) v^{−n} dm(v), n ∈ Z,

with m the Lebesgue measure on T normalised to 1. The matrix Γ(ω̂) is always symmetric. In particular, it is self-adjoint if and only if ω̂ is real-valued. For instance, this is the case when ω satisfies the following symmetry condition:

ω(v̄) = \overline{ω(v)}, v ∈ T. (1.1)

In this paper, we consider symbols in the class of piecewise continuous functions on T, denoted by PC(T), i.e. those symbols ω for which the limits

ω(z+) = lim_{ε→0+} ω(z e^{iε}), ω(z−) = lim_{ε→0+} ω(z e^{−iε}), (1.2)

exist and are finite for all z ∈ T. The points z ∈ T at which the quantity

κ_z(ω) = (ω(z+) − ω(z−))/2 ≠ 0

are called the jump discontinuities of ω, and κ_z(ω) is the half-height of the jump of the symbol at z. Due to the presence of jump discontinuities, Hankel matrices with these symbols are non-compact. The compactness of T and the existence of the limits in (1.2) can be used to show that the sets Ω_s = {z ∈ T : |κ_z(ω)| > s}, s > 0, are finite, and so the set of jump discontinuities of ω, denoted by Ω, is at most countable. Furthermore, if the symbol satisfies (1.1), then Ω is symmetric with respect to the real axis and for any z ∈ Ω

κ_{z̄}(ω) = −\overline{κ_z(ω)},

whereby we obtain that |κ_{z̄}(ω)| = |κ_z(ω)| and, at z = ±1, κ_z(ω) is purely imaginary.
Hankel matrices with piecewise continuous symbols still attract attention in both the operator-theory and spectral-theory communities, see for instance [18,19] and references therein. S. Power, [16], showed that the essential spectrum of such matrices consists of bands depending only on the heights of the jumps of the symbol and gave the following identity:

spec_ess(Γ(ω̂)) = [0, −iκ_1(ω)] ∪ [0, −iκ_{−1}(ω)] ∪ ⋃_{z∈Ω\{±1}} [−i(κ_z(ω)κ_{z̄}(ω))^{1/2}, i(κ_z(ω)κ_{z̄}(ω))^{1/2}], (1.3)

where the notation [a, b], a, b ∈ C, denotes the line segment joining a and b. Assuming that the symbol has finitely many jumps and, say, is Lipschitz continuous on the left and on the right of each jump, a more detailed picture is obtained in [18] for the absolutely continuous (a.c.) spectrum of |Γ(ω̂)| = (Γ(ω̂)*Γ(ω̂))^{1/2}. Furthermore, it is shown there that each band contributes 1 to the multiplicity of the a.c. spectrum.
For N ≥ 1, let Γ^{(N)}(ω̂) be the N × N truncation of Γ(ω̂). We wish to give a description of the relationship between the spectrum of the infinite matrix Γ(ω̂) and that of its truncation Γ^{(N)}(ω̂). More specifically: (i) for a non-self-adjoint Hankel matrix, we study the distribution of the singular values of Γ^{(N)}(ω̂) inside the spectrum of |Γ(ω̂)|; (ii) in the self-adjoint setting, we study the distribution of the eigenvalues of Γ^{(N)}(ω̂) inside the spectrum of Γ(ω̂). To do so, for a non-self-adjoint Hankel matrix Γ(ω̂) we study the asymptotic behaviour of the singular-value counting function

n(t; Γ^{(N)}(ω̂)) = #{n : s_n(Γ^{(N)}(ω̂)) > t}, t > 0,

as N → ∞. Here {s_n(Γ^{(N)}(ω̂))}_{n≥1} is the sequence of singular values of Γ^{(N)}(ω̂). In particular, we study the logarithmic spectral density of |Γ(ω̂)|, defined as

LogDens(t; Γ(ω̂)) := lim_{N→∞} n(t; Γ^{(N)}(ω̂)) / log(N). (1.6)

For a self-adjoint Γ(ω̂), its spectrum, spec(Γ(ω̂)), is a subset of the real line and so we look at how the positive and negative eigenvalues of Γ^{(N)}(ω̂) distribute inside spec(Γ(ω̂)). To this end, we analyze the behaviour of the eigenvalue counting functions

n_±(t; Γ^{(N)}(ω̂)) = #{n : λ^±_n(Γ^{(N)}(ω̂)) > t}, t > 0,

as N → ∞. Here {λ^±_n(Γ^{(N)}(ω̂))}_{n≥1} are the sequences of positive eigenvalues of ±Γ^{(N)}(ω̂) respectively. In this setting, we study the functions

LogDens^±(t; Γ(ω̂)) := lim_{N→∞} n_±(t; Γ^{(N)}(ω̂)) / log(N). (1.7)

Similarly to the non-self-adjoint setting, we call the function LogDens⁺ (resp. LogDens⁻) in (1.7) the positive (resp. negative) logarithmic spectral density of Γ(ω̂). The index appearing in the definitions of the logarithmic spectral densities in (1.6) and (1.7) has been chosen to stress the fact that, a priori, these quantities depend on our choice to truncate the infinite matrix Γ(ω̂) to its upper N × N square.
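To get a feel for the logarithmic normalisation in (1.6), one can compute the counting function for truncations of the Hilbert matrix numerically. This sketch (not from the paper) takes entries 1/(π(j + k + 1)), assuming the π-normalisation suggested by the class B₀ introduced in Section 2, so that the truncations have spectrum inside [0, 1].

```python
import numpy as np

def counting(N, t):
    # n(t; H_N) = number of eigenvalues of the N x N Hilbert section above t
    j, k = np.indices((N, N))
    H = 1.0 / (np.pi * (j + k + 1))     # pi-normalised Hilbert matrix section
    return int(np.sum(np.linalg.eigvalsh(H) > t))

# by Cauchy interlacing the counts are non-decreasing in N, and they grow
# only logarithmically: single digits even for N in the thousands
results = {N: counting(N, 0.1) for N in (100, 400, 1600)}
print(results)
```

The extreme slowness of this growth is exactly why only a logarithmically small portion of the spectrum is described by the densities above.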
Furthermore, the terminology we use for the functions LogDens and LogDens^± comes from the fact that we are only studying a logarithmically small portion of the singular values (or eigenvalues) of the matrix Γ^{(N)}(ω̂). Their definitions are motivated by the results obtained by Widom (see [23, Theorem 4.3]) for the Hilbert matrix Γ(γ̂), where he showed that

n(t; Γ^{(N)}(γ̂)) = c(t) log(N) + o(log(N)), N → ∞. (1.8)

Here c(t) := 0 whenever t ∉ (0, 1), and on (0, 1) the function c is given by (1.10). We note that a factor of 2π is missing in the statement of [23, Theorem 4.3]. The aim of this paper is to extend (1.8) to a general symbol ω ∈ PC(T). In particular, for a non-self-adjoint Hankel matrix, we aim to show the formula (1.11) for LogDens(t; Γ(ω̂)), where c is the function defined in (1.10). Recall that the symbol ψ defined in (1.4) has jumps at ±i whose half-heights are κ_{±i}(ψ) = ∓1, so for the Hankel matrix Γ(ψ̂) the formula (1.11) yields LogDens(t; Γ(ψ̂)) = 2c(t). For self-adjoint Hankel matrices, we extend the result in (1.9) to symbols ω ∈ PC(T) satisfying (1.1) and obtain an analogous formula, where Ω₊ = {z ∈ Ω | Im z > 0} and ½_± is the indicator function of the half-line (0, ±∞).
Again, the function c has been defined in (1.10). In particular, for the symbol ψ in (1.4), we obtain that LogDens^±(t; Γ(ψ̂)) = c(t). A natural question that we also address here is that of the universality of the limits in (1.6) and (1.7). In other words, we investigate whether they depend on the choice of "regularisation" of the matrix Γ(ω̂). For instance, the main results of this paper, Theorems 1.1 and 1.2 below, tell us that the singular values of the matrix Γ^{(N)}(ω̂) and of the regularised matrix in (1.13) distribute in the same way.

1.2. Schur-Hadamard multipliers. For a bounded sequence (τ(j, k))_{j,k≥0}, called a multiplier, and a bounded operator A on ℓ²(Z₊), the Schur-Hadamard multiplication of τ and A is the operator on ℓ²(Z₊), τ ⋆ A, formally defined through the quadratic form

⟨(τ ⋆ A)e_k, e_j⟩ = τ(j, k)⟨Ae_k, e_j⟩, j, k ∈ Z₊, (1.14)

where e_j is the j-th vector of the standard basis of ℓ²(Z₊). Various authors in the literature, [2,4,14], have addressed the issue of establishing how properties of τ translate into the boundedness of this operation on the space of bounded operators and the Schatten classes S_p (for a definition see Section 2 below). To do so, they have studied the operator norms

‖τ‖_M = sup{‖τ ⋆ A‖ : ‖A‖ ≤ 1}, ‖τ‖_{M_p} = sup{‖τ ⋆ A‖_{S_p} : ‖A‖_{S_p} ≤ 1}.

Using the duality of S_p-classes, it is possible to show that the following identities hold:

‖τ‖_M = ‖τ‖_{M_1} = ‖τ‖_{M_∞}, (1.17)

‖τ‖_{M_p} = ‖τ‖_{M_{p′}}, 1/p + 1/p′ = 1, (1.18)

and so it is sufficient to study the boundedness of Schur-Hadamard multiplication on S_p for p ≥ 2. The case p = 2 is somewhat trivial. In fact, the structure of S_2 gives that any bounded sequence τ is a bounded Schur-Hadamard multiplier and, furthermore, ‖τ‖_{M_2} = ‖τ‖_{ℓ∞}. For a general 1 < p < ∞, p ≠ 2, not much is known with regards to the finiteness of ‖τ‖_{M_p}. However, a necessary and sufficient condition for the boundedness of a Schur-Hadamard multiplier on the space of bounded operators (and, as a consequence of (1.17), on S_1 and S_∞) is known and can be found in [5].
For the purposes of this paper, we will consider the Schur-Hadamard multiplier τ in (1.14) as the restriction to Z₊² of a bounded function defined on [0, ∞)². For N ≥ 1, set τ_N(j, k) = τ(jN^{−1}, kN^{−1}). If τ is such that the sequence of τ_N satisfies

sup_{N≥1} ‖τ_N‖_M < ∞, (1.19)

we say that τ induces a uniformly bounded multiplier. An easy example of such a multiplier is the N × N truncation of an infinite matrix. To see this, take the function

τ(x, y) = ½(x, y), (1.20)

where ½ is the characteristic function of the half-open unit square [0, 1)². For any bounded operator A, τ_N ⋆ A is the truncation of A to its upper N × N block, and so we have that ‖τ_N‖_M = 1 for any N ≥ 1. We discuss some more examples of Schur-Hadamard multipliers below.
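In code, the Schur-Hadamard multiplication τ_N ⋆ A is just an entrywise product with the sampled multiplier. The following sketch (an illustration with a small random matrix, not part of the paper) checks that the multiplier (1.20) reproduces the upper N × N truncation.

```python
import numpy as np

def schur_multiply(tau, A, N):
    # (tau_N ⋆ A)_{j,k} = tau(j/N, k/N) * A_{j,k} (entrywise Hadamard product)
    n = A.shape[0]
    j, k = np.indices((n, n))
    return tau(j / N, k / N) * A

# indicator of the half-open unit square [0, 1)^2, as in (1.20)
square = lambda x, y: ((x < 1) & (y < 1)).astype(float)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
N = 5
T = schur_multiply(square, A, N)

# tau_N ⋆ A keeps the upper N x N block of A and zeroes out the rest
B = np.zeros_like(A)
B[:N, :N] = A[:N, :N]
assert np.array_equal(T, B)
```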
1.3. Statement of the main results. As we anticipated, our main results are concerned not only with the existence of the limits in (1.8) and (1.9), but also with their universality. In other words, for a Hankel matrix Γ(ω̂) and a given multiplier τ, we show that, under some mild assumptions on τ, see (A)-(C) below, the function

LogDens_τ(t; Γ(ω̂)) := lim_{N→∞} n(t; τ_N ⋆ Γ(ω̂)) / log(N)

is independent of the choice of τ. Similarly, for a self-adjoint Hankel matrix and a multiplier τ such that τ(x, y) = τ(y, x), we show that the same is true for the functions

LogDens^±_τ(t; Γ(ω̂)) := lim_{N→∞} n_±(t; τ_N ⋆ Γ(ω̂)) / log(N).

Note that when τ is the multiplier in (1.20), the functions LogDens_τ(t; Γ(ω̂)) and LogDens^±_τ(t; Γ(ω̂)) are precisely those defined in (1.8) and (1.9). Let us state the following assumptions on τ:
(A) τ induces a uniformly bounded Schur-Hadamard multiplier, i.e. (1.19) holds;
(B) τ(0, 0) = 1 and, for some ε > 0 and some β > 1/2, there exists C_β > 0 so that |τ(x, y) − 1| ≤ C_β (x + y)^β whenever 0 ≤ x + y < ε;
(C) for some α > 1/2 one can find C_α so that |τ(x, y)| ≤ C_α (log(x + y + 2))^{−α} for all x, y ≥ 0.
Then (1.11) is a particular case of the following:

Theorem 1.1. Let τ be a multiplier satisfying (A)-(C). Let ω ∈ PC(T) and let Ω be the set of its jump discontinuities. Then the identity (1.23) holds, where c(t) is the function defined in (1.10).

Remarks.
(A) It is clear that Theorems 1.1 and 1.2 generalise the result of Widom in [23] mentioned earlier in (1.10) to any multiplier τ and that, in both instances, we only describe the behaviour of a logarithmically small portion of the spectrum of τ_N ⋆ Γ(ω̂), as most of its points lie in a vicinity of 0.
(B) Both Theorems 1.1 and 1.2 deal with a rather general class of symbols, and for this reason we cannot say more about the error term in the asymptotic expansion of the functions n, n_±. In fact, we can only write

n(t; τ_N ⋆ Γ(ω̂)) = LogDens_τ(t; Γ(ω̂)) log(N) + o(log(N)), N → ∞.

If, however, we were to restrict our attention to those symbols with finitely many jumps and some degree of smoothness away from them (say Lipschitz continuity), we would obtain a more precise estimate, see [8]; the trade-off would be that of making our results less general.
(C) Studying the spectral density of operators is common to many areas of spectral analysis. In particular, our results can be put in parallel with well-known results in the spectral theory of Schrödinger operators, where the existence and universality of the density of states is a well-studied problem for a wide class of potentials; see [7] and [10, Section 5] for an introduction and the references therein for more on this subject.
(D) Both Theorems 1.1 and 1.2 assume that the multiplier τ induces a uniformly bounded multiplier on the space of bounded operators. However, this condition can be substantially weakened in two different ways. Firstly, we can weaken Assumption (A) on the multiplier τ by assuming that, for some finite p > 1, τ induces a uniformly bounded Schur-Hadamard multiplier on S_p, or in other words that

sup_{N≥1} ‖τ_N‖_{M_p} < ∞. (1.25)

However, as a trade-off, we need to impose more stringent conditions on the symbol, as the following statement shows:

Proposition 1.3. Suppose τ satisfies (1.25) as well as Assumptions (B) and (C). If the symbol ω can be written as a finite sum of rotated copies of the symbol γ in (1.4), indexed by a finite subset Ω of T, plus a symbol η for which Γ(η̂) ∈ S_p, then (1.23) holds.
Furthermore, if τ (x, y) = τ (y, x) and ω also satisfies (1.1), then (1.24) holds.
Secondly, we can assume that τ only induces a uniformly bounded Schur-Hadamard multiplier on the space of bounded Hankel matrices, i.e. that

sup_{N≥1} sup{‖τ_N ⋆ Γ(ω̂)‖ : Γ(ω̂) bounded, ‖Γ(ω̂)‖ ≤ 1} < ∞. (1.27)

In this case, Theorems 1.1 and 1.2 still hold in their generality and we have the following

Proposition 1.4. Let ω ∈ PC(T) and let τ satisfy (1.27) as well as Assumptions (B) and (C). Then (1.23) holds. Furthermore, if ω satisfies the symmetry condition (1.1) and τ(x, y) = τ(y, x), then (1.24) holds.
We chose to make use of Assumption (A) instead of (1.25) and (1.27), because there are no known necessary and sufficient conditions for a multiplier to satisfy either of them. We give specific examples of multipliers that satisfy these conditions below.
Example 1.5 (Factorisable multipliers). If the function τ can be factorised as

τ(x, y) = f(x)g(y)

for some bounded functions f, g, then it is easy to see that it induces a uniformly bounded Schur-Hadamard multiplier in the sense of (1.19), and furthermore ‖τ_N‖_M ≤ ‖f‖_∞ ‖g‖_∞ for every N ≥ 1. As was pointed out earlier in (1.20), the truncation to the upper N × N square is an example of such a multiplier. Another example is given by the function τ_1(x, y) = e^{−(x+y)} = e^{−x} e^{−y}; this induces the regularisation in (1.13), and it is immediate to see that ‖(τ_1)_N‖_M ≤ 1.

Example 1.6 (Triangular truncation). The multiplier τ_2 induced by the triangular truncation is not uniformly bounded on the bounded operators, see [1,6,12], where it was shown that ‖(τ_2)_N‖_M grows logarithmically in N. However, τ_2 is uniformly bounded on any Schatten class S_p, 1 < p < ∞, see [5], and so Proposition 1.3 holds. Proposition 1.4 shows that Theorems 1.1 and 1.2 still hold in the case that the Schur-Hadamard multiplier is only uniformly bounded on the set of bounded Hankel matrices. An example of such a multiplier is given by the indicator function, τ_{β,γ}, of a region in [0, ∞)² depending on two parameters β and γ. Even though τ_{β,γ} does not induce, in general, a uniformly bounded Schur-Hadamard multiplier, it has been shown in [6, Theorem 1(a)] that this is the case on the set of bounded Hankel matrices for β = 1, 0 and any γ (at β = 1 and γ = 1, τ_{1,1} reduces to the multiplier τ_2 considered above). With this at hand, an appropriate choice of the parameters β and γ gives (1.23) and (1.24).
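The mechanism behind the factorisation bound is that τ_N ⋆ A = D_f A D_g, where D_f, D_g are the diagonal matrices with entries f(j/N), g(j/N); the norm bound then follows from submultiplicativity. A small numerical check (an illustration, not from the paper) for τ_1(x, y) = e^{−(x+y)}:

```python
import numpy as np

f = lambda x: np.exp(-x)            # tau_1(x, y) = e^{-(x+y)} = f(x) f(y)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 40))
N = 10
j = np.arange(40)
D = np.diag(f(j / N))               # diagonal multiplication by f(j/N)

tau_N_A = f(j[:, None] / N) * f(j[None, :] / N) * A   # entrywise product
assert np.allclose(tau_N_A, D @ A @ D)                # tau_N ⋆ A = D A D

# hence ||tau_N ⋆ A|| <= ||f||_inf^2 ||A||, uniformly in N (here ||f||_inf = 1)
assert np.linalg.norm(tau_N_A, 2) <= np.linalg.norm(A, 2) + 1e-9
```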
Example 1.7 (General criterion). For more complicated functions, the following criterion can be of help. Let Σ ⊂ R and let m be a measure on Σ. Suppose that the function τ can be written as

τ(x, y) = ∫_Σ a(x, ξ) b(y, ξ) dm(ξ)

for measurable functions a, b that are bounded in x and y. It is not hard to check that τ then induces a uniformly bounded Schur-Hadamard multiplier, with

‖τ_N‖_M ≤ ∫_Σ sup_{x≥0} |a(x, ξ)| sup_{y≥0} |b(y, ξ)| dm(ξ),

provided the right-hand side is finite. Using this, it is possible to show that a number of natural functions induce uniformly bounded Schur-Hadamard multipliers via an explicit representation of the above form. Among them are the multipliers related to the Abel-Poisson, Dirichlet and Cesàro summation methods, which share some of the properties of the operators of convolution with the respective kernels; see Section 3 for more on the Poisson kernel.
1.6. Outline of the proofs. To prove Theorems 1.1 and 1.2 we use a similar approach to the one in [16] and [20] and combine abstract results concerned with the general properties of the functions n, n ± (see Section 2) and more hands-on function theoretic ones that are specific to the theory of Hankel matrices, see Section 3.
To prove Theorem 1.1, we first assume that the set of jump discontinuities, Ω, of the symbol ω is finite, and we write ω as in (1.31), where γ is the symbol in (1.4) and η is a continuous function on T. The analysis of LogDens_τ(t; Γ(ω̂)) then proceeds with the study of each summand appearing in (1.31) and of the interactions it has with all the others. In particular, Assumption (A) allows us to disregard the contribution coming from the matrix Γ(η̂). The invariance of the functions LogDens_τ with respect to the choice of multiplier, proved in Theorem 2.5, allows us to work with the multiplier τ_1(x, y) = e^{−(x+y)} given in Example 1.5 above, which is shown to induce the regularisation in (1.13), i.e. (τ_1)_N ⋆ Γ(ω̂) = Γ_N(ω̂). For the multiplier τ_1, we explicitly show that the operators Γ(γ̂_z) are mutually "almost orthogonal", in the sense that if z ≠ w ∈ Ω, then both Γ_N(γ̂_z)* Γ_N(γ̂_w) and Γ_N(γ̂_z) Γ_N(γ̂_w)* are trace-class, uniformly in N. From here, Theorem 2.7 gives that each jump contributes independently; in other words, LogDens_τ(t; Γ(ω̂)) can be written as a sum over z ∈ Ω of the contributions of the individual jumps. We note here that this is another instance of the general fact that jumps occurring at different points of the unit circle contribute independently to the spectral properties of the operator Γ(ω̂). For this reason, we follow the terminology used by the authors of [20] and refer to this fact as the "Localisation Principle". Finally, using once again the Invariance Principle, Theorem 2.5, and the result of Widom in (1.10), we obtain the identity (1.11) for a symbol ω with finitely many jumps. The proof of Theorem 1.2 roughly follows the same outline. However, instead of writing the symbol ω as in (1.31), we make use of the symmetry of the set of jump discontinuities, Ω, to decompose it as a sum over Ω₊ = {z ∈ Ω | Im z > 0}, where, as before, γ_z(v) = −iγ(zv) and η is a continuous symbol on T.
The same strategy used in the proof of Theorem 1.1 leads to the identity (1.33). The fact that the jumps of ω are arranged symmetrically with respect to the real axis can be used to show that the positive and negative eigenvalues of the compact operator are arranged almost symmetrically around 0, in a sense that we will specify in Lemma 3.1-(ii). Using Theorem 2.8, we conclude that each pair of conjugate jumps contributes equally to the positive and negative densities. Using once again the result of Widom in (1.10), we arrive at (1.24). It is worth noting here that (1.33) shows that if ω has jumps occurring at a pair of complex conjugate points, then the upper and lower logarithmic spectral densities, LogDens^±(t; Γ(ω̂)), contribute equally to the logarithmic spectral density of |Γ(ω̂)|; we refer to this as the "Symmetry Principle", following the terminology used by the authors of [21]. Both Theorems 1.1 and 1.2 are then extended to the case of a symbol with infinitely many jump discontinuities using an approximation argument first presented by Power in [16] and subsequently in [15, Ch. 10, Thm. 1.10]; see Section 4 below.
2. Abstract properties of the spectral density

2.1. First definitions and results. Let S_∞ denote the ideal of compact operators. For any p > 0, S_p denotes the ideal of compact operators whose singular values are p-summable, and let S_0 = ∩_{p>0} S_p. For p ≥ 1, the S_p-norm is defined as

‖A‖_{S_p} = ( Σ_{n≥1} s_n(A)^p )^{1/p}.

Here {s_n(A)}_{n≥1} is the sequence of singular values of A, ordered decreasingly with multiplicities taken into account. All operators in this section are bounded operators acting on the space of square-summable sequences ℓ²(Z₊).
The functions n, n_± were defined in the Introduction. It is clear that n(t; A) = n(t; A*), as the non-zero singular values of A and A* coincide. For a self-adjoint operator, the functions n and n_± are linked via the following:

Lemma 2.1. For any self-adjoint operator A and any t > 0, one has n(t; A) = n₊(t; A) + n₋(t; A).

The singular-value counting function of K ∈ S_p satisfies the following simple estimate:

n(t; K) ≤ t^{−p} ‖K‖^p_{S_p}, t > 0. (2.1)

If K is self-adjoint, the same holds for the functions n_±(t; K).
We will also use the following inequalities, known as Weyl's inequalities, see [3, Thm. 9, Ch. 9]:

Lemma 2.2 (Weyl inequality). Let A, B be compact operators and 0 < s < t. Then

n(t; A + B) ≤ n(t − s; A) + n(s; B), (2.2)

n_±(t; A + B) ≤ n_±(t − s; A) + n_±(s; B), (2.3)

with the last inequality holding for self-adjoint operators.
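The form of (2.2) used repeatedly below can be sanity-checked numerically; a toy example (not part of the proofs):

```python
import numpy as np

def n_count(t, M):
    # singular-value counting function n(t; M) = #{ n : s_n(M) > t }
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > t))

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
B = rng.standard_normal((30, 30))

# Weyl's inequality: n(t; A + B) <= n(t - s; A) + n(s; B) for 0 < s < t
t, s = 2.0, 0.8
assert n_count(t, A + B) <= n_count(t - s, A) + n_count(s, B)
```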
For a bounded τ on [0, ∞)² we have already defined in the Introduction the meaning of τ_N ⋆ A. We have the following simple

Lemma 2.3. Let τ be continuous at (0, 0) with τ(0, 0) = 1 and suppose it satisfies Assumption (A). Then for any bounded operator A, τ_N ⋆ A → A as N → ∞ in the strong operator topology. Furthermore, if A is compact, the same is true in the operator norm.
Proof of Lemma. Recall that Assumption (A) implies that, for any operator A, one has

‖τ_N ⋆ A‖ ≤ C‖A‖, C = sup_{N≥1} ‖τ_N‖_M. (2.4)

Let e_j, j ≥ 0, be the standard basis vectors of ℓ²(Z₊). Using the continuity of τ at (0, 0) and the fact that τ(0, 0) = 1, a simple calculation shows that (τ_N ⋆ A)e_j → Ae_j in ℓ²(Z₊), and so we obtain that (τ_N ⋆ A)x → Ax as N → ∞ for any finite sequence x ∈ ℓ²(Z₊).
For arbitrary x ∈ ℓ²(Z₊), the result follows from a standard ε/3 argument. In particular, for ε > 0, we find a finite sequence x_ε so that ‖x − x_ε‖ < ε/3. Using the triangle inequality and (2.4), together with the fact that (τ_N ⋆ A)x_ε → Ax_ε, we obtain the assertion.
If A is compact, then for any given ε > 0 we can find a finite matrix B so that ‖A − B‖ < ε. For any finite matrix B, the convergence τ_N ⋆ B → B in the strong operator topology implies convergence in the operator norm, so for N large we have ‖(τ_N ⋆ B) − B‖ < ε. The triangle inequality now yields

‖τ_N ⋆ A − A‖ ≤ ‖τ_N ⋆ (A − B)‖ + ‖(τ_N ⋆ B) − B‖ + ‖B − A‖ ≤ (C + 2)ε,

which proves the claim.

As a consequence, we have the following

Lemma 2.4. Let K ∈ S_∞ and let τ be as in Lemma 2.3. Then for any t > 0 one has

n(t; τ_N ⋆ K) = O(1), N → ∞.

If τ(x, y) = τ(y, x) and K is self-adjoint, the same holds for the functions n_±.
Proof of Lemma. From Lemma 2.3, we have that τ_N ⋆ K → K in the operator norm; in particular, for ε > 0 we can find N suitably large so that ‖τ_N ⋆ K − K‖ < ε, whereby it follows that n(ε; τ_N ⋆ K − K) = 0. Using (2.2), we obtain, for 0 < ε < t,

n(t; τ_N ⋆ K) ≤ n(t − ε; K) + n(ε; τ_N ⋆ K − K) = n(t − ε; K) < ∞.

The proof in the self-adjoint case follows exactly the same reasoning.
Define B₀ as the set of operators on ℓ²(Z₊) whose matrix entries decay at the rate of those of the Hilbert matrix:

B₀ = { A : sup_{j,k≥0} (j + k + 1)|⟨Ae_k, e_j⟩| < ∞ }.

Clearly, A ∈ B₀ if and only if there exists a sequence a ∈ ℓ^∞(Z₊²) so that

A_{j,k} = a_{j,k} / (π(j + k + 1)), ∀ j, k ≥ 0.
From the Hilbert inequality one obtains the estimate ‖A‖ ≤ ‖a‖_{ℓ∞}, and so A is also bounded. If the multiplier τ satisfies Assumption (C), i.e. if for some α > 1/2 one has

|τ(x, y)| ≤ C_α (log(x + y + 2))^{−α}, ∀ x, y ≥ 0,

it is not difficult to see that for A ∈ B₀ one has τ_N ⋆ A ∈ S₂, since we have the following estimate:

‖τ_N ⋆ A‖²_{S₂} ≤ (C_α ‖a‖_{ℓ∞}/π)² Σ_{j,k≥0} (j + k + 1)^{−2} (log((j + k)/N + 2))^{−2α} < ∞.

In particular, τ_N ⋆ A is a compact operator for any given N, and so it makes sense to study how the functions n(t; τ_N ⋆ A) and n_±(t; τ_N ⋆ A) (whenever A is self-adjoint and τ(x, y) = τ(y, x)) behave for large N. To this end, it is useful to define the following two functionals:

\overline{LogDens}_τ(t; A) := limsup_{N→∞} n(t; τ_N ⋆ A) / log(N), (2.6)

\underline{LogDens}_τ(t; A) := liminf_{N→∞} n(t; τ_N ⋆ A) / log(N). (2.7)

If \overline{LogDens}_τ(t; A) = \underline{LogDens}_τ(t; A), we denote by LogDens_τ(t; A) their common value. For a self-adjoint operator A ∈ B₀, we define the functionals \overline{LogDens}^±_τ(t; A), \underline{LogDens}^±_τ(t; A) with the functions n_± replacing n in (2.6) and (2.7) respectively, and denote by LogDens^±_τ(t; A) their common value, if it exists.

2.2.
Invariance of spectral densities. For a fixed operator A ∈ B₀, we wish to study the relation between the asymptotic behaviour of n(t; τ_N ⋆ A) for large N and the Schur-Hadamard multiplier τ. In particular, the result below tells us that the function n(t; τ_N ⋆ A) (as well as n_±(t; τ_N ⋆ A)) asymptotically behaves independently of the multiplier τ. We refer to this phenomenon as the Invariance Principle and state it as follows.

Theorem 2.5 (Invariance Principle). Suppose τ₁, τ₂ are multipliers satisfying Assumptions (B) and (C). Then for A ∈ B₀ and for t > 0 one has that

\overline{LogDens}_{τ₁}(t + 0; A) ≤ \overline{LogDens}_{τ₂}(t; A) ≤ \overline{LogDens}_{τ₁}(t − 0; A),

and the same chain of inequalities holds for \underline{LogDens}. Similarly, for a self-adjoint A ∈ B₀ and τ_i(x, y) = τ_i(y, x), the same is true for the functionals \overline{LogDens}^±, \underline{LogDens}^±.

Before proving the result, let us prove the following auxiliary lemma.

Lemma 2.6. Let σ satisfy Assumption (C) and be such that σ(0, 0) = 0 and, for some ε > 0 and some β > 1/2, there exists C_β > 0 so that

|σ(x, y)| ≤ C_β (x + y)^β, 0 ≤ x + y < ε. (2.8)

For any A ∈ B₀, one has σ_N ⋆ A ∈ S₂ and, furthermore, there exists C > 0, independent of N, such that ‖σ_N ⋆ A‖_{S₂} ≤ C.

Proof of Lemma. We need to estimate the quantity

I_N := ‖σ_N ⋆ A‖²_{S₂} = Σ_{j,k≥0} |σ(j/N, k/N)|² |A_{j,k}|².

A modification of the integral test and the assumption that A ∈ B₀ show that one can find C > 0 so that

I_N ≤ C ∫∫_{R₊²} |σ(s, t)|² (s + t)^{−2} ds dt,

where the last inequality follows from the change of variables x = Ns, y = Nt. Let Ω_ε = {(s, t) ∈ R₊² | s² + t² < ε} and Ω_ε^c = R₊² \ Ω_ε; then the integral splits into the contributions of Ω_ε and Ω_ε^c, and we will show that each summand is uniformly bounded. Since σ satisfies (2.8), the integral over Ω_ε is bounded by a constant multiple of ∫₀^{√ε} r^{2β−1} dr; the second inequality is a consequence of writing the integral in polar coordinates and, since β > 1/2, the last integral is finite. Using (C), it follows that the integral over Ω_ε^c is finite as well. We have thus obtained that I_N is uniformly bounded in N, whereby the assertion follows.

Proof of Theorem 2.5. Set σ = τ₁ − τ₂. Then σ(0, 0) = 0 and σ satisfies (2.8) as well as Assumption (C), so Lemma 2.6 together with the estimate (2.1) gives, for any s > 0, n(s; σ_N ⋆ A) ≤ s^{−2}‖σ_N ⋆ A‖²_{S₂} = O_s(1) as N → ∞. Combined with the Weyl inequality (2.2), this yields

\overline{LogDens}_{τ₁}(t + s; A) ≤ \overline{LogDens}_{τ₂}(t; A) ≤ \overline{LogDens}_{τ₁}(t − s; A).

Sending s → 0 gives the desired inequalities. In the self-adjoint setting, the same reasoning carries through once we replace the function n with the functions n_±.
2.3. Almost symmetric and almost orthogonal operators. As mentioned in the Introduction, we will use the following two results, which are similar, at least in spirit, to Theorem 2.2 in [20] and Theorem 2.7 in [19], and whose proofs follow the same scheme. From now on, we make no assumptions on the uniform boundedness and smoothness of our multiplier τ and write A^{(N)} = τ_N ⋆ A.

The first of the two results is about the interaction, at the level of their spectral densities, between two operators. Namely, if A, B are bounded operators whose truncations A^{(N)}, B^{(N)} are almost orthogonal, in the sense that A^{(N)*}B^{(N)} ∈ S_p and A^{(N)}B^{(N)*} ∈ S_p for some p ≥ 1 uniformly in N, then each of the logarithmic spectral densities of |A| and |B| contributes independently to the logarithmic spectral density of |A + B|. Let us state the result as follows.

Theorem 2.7. Let A₁, …, A_L be bounded operators such that, for some p ≥ 1 and all j ≠ k,

sup_{N≥1} ‖A_j^{(N)*} A_k^{(N)}‖_{S_p} < ∞, sup_{N≥1} ‖A_j^{(N)} A_k^{(N)*}‖_{S_p} < ∞.

Then, for A = Σ_{j=1}^L A_j and for any t > 0:

Σ_{j=1}^L \underline{LogDens}_τ(t + 0; A_j) ≤ \underline{LogDens}_τ(t; A), (2.9)

\overline{LogDens}_τ(t; A) ≤ Σ_{j=1}^L \overline{LogDens}_τ(t − 0; A_j), (2.10)

and, if the A_j are self-adjoint and τ(x, y) = τ(y, x),

Σ_{j=1}^L \underline{LogDens}^±_τ(t + 0; A_j) ≤ \underline{LogDens}^±_τ(t; A), (2.11)

\overline{LogDens}^±_τ(t; A) ≤ Σ_{j=1}^L \overline{LogDens}^±_τ(t − 0; A_j). (2.12)

Proof of Theorem 2.7. We will prove only (2.9), since (2.10), (2.11) and (2.12) follow the same line of reasoning. Put H = ⊕_{i=1}^L ℓ²(Z₊) and define the block-diagonal operator A_N = diag{A₁^{(N)}, …, A_L^{(N)}} on H.

Since the operator A_N is block-diagonal, its singular values are those of its blocks, and so n(t; A_N) = Σ_{j=1}^L n(t; A_j^{(N)}). Furthermore, since the operators A_j are such that sup_N ‖A_j^{(N)}‖ < ∞, the Weyl inequality (2.2) applies, where in the second line we use the fact that A_N* A_N is diagonal and so its counting function splits into the sum of those of its blocks. Just as in the proof of Theorem 2.5, we swap the roles of J A_N and A_N and, using (2.1), we obtain the reverse inequality for \underline{LogDens}, which we record as (2.13). Recall now that we set A = Σ_{j=1}^L A_j. Writing D_N for the sum of the off-diagonal products A_j^{(N)*} A_k^{(N)}, j ≠ k, from our assumptions it follows that sup_{N≥1} ‖D_N‖_{S_p} < ∞, and using (2.1) in conjunction with the Weyl inequality (2.2), we obtain that the contribution of D_N to the counting function is O(1) as N → ∞. Whereby we obtain that the logarithmic spectral densities of A and of A_N coincide up to the substitutions t ± 0. The above, in conjunction with (2.13), gives the result.
The second result applies to a self-adjoint operator A and establishes a relation between \overline{LogDens}^+_τ(t; A) (resp. \underline{LogDens}^+_τ(t; A)) and \overline{LogDens}^−_τ(t; A) (resp. \underline{LogDens}^−_τ(t; A)). More precisely, if a self-adjoint operator A is such that its truncation A^{(N)} is almost symmetric under reflection around 0, in the sense that for some unitary operator U one has UA^{(N)} + A^{(N)}U ∈ S_p for some p ≥ 1 uniformly in N, then its upper and lower logarithmic spectral densities contribute equally to the logarithmic spectral density of |A|. In other words, the positive and negative eigenvalues of τ_N ⋆ A accumulate to the spectrum of A in the same way. We can formulate this as follows.

Theorem 2.8. Let A be a self-adjoint operator and let τ be such that τ(x, y) = τ(y, x). Suppose there exists a unitary operator U for which

sup_{N≥1} ‖UA^{(N)} + A^{(N)}U‖_{S_p} < ∞

for some p ≥ 1. Then the functionals \overline{LogDens}^±_τ and \overline{LogDens}^∓_τ (and likewise \underline{LogDens}^±_τ and \underline{LogDens}^∓_τ) coincide up to the substitutions t ± 0.

Proof. Write A^{(N)} = −U*A^{(N)}U + U*(UA^{(N)} + A^{(N)}U); both summands are self-adjoint, and the second lies in S_p uniformly in N. In particular, we get from the Weyl inequality (2.3) and the estimate (2.1) that

n_+(t; A^{(N)}) ≤ n_+(t − s; −U*A^{(N)}U) + n_+(s; U*(UA^{(N)} + A^{(N)}U)),

where 0 < s < t. In particular, since −U*A^{(N)}U is unitarily equivalent to −A^{(N)}, this gives that n_+(t; A^{(N)}) ≤ n_−(t − s; A^{(N)}) + O_s(1), together with the symmetric inequality with n_+ and n_− interchanged. The result follows once we divide through by log(N), send N → ∞ and use Lemma 2.1.
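A minimal exact instance of the mechanism of Theorem 2.8 (a toy example, with the S_p error term equal to zero): for a block off-diagonal self-adjoint A, the unitary U = diag(I, −I) anti-commutes with A, so UAU* = −A and the spectrum is exactly symmetric, giving n₊(t) = n₋(t) for every t > 0.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6))
# self-adjoint block off-diagonal operator A = [[0, B], [B^T, 0]]
A = np.block([[np.zeros((6, 6)), B], [B.T, np.zeros((6, 6))]])
U = np.diag(np.concatenate([np.ones(6), -np.ones(6)]))   # unitary, U^2 = I

assert np.allclose(U @ A + A @ U, 0)     # exact anti-commutation: error term 0
eigs = np.sort(np.linalg.eigvalsh(A))
assert np.allclose(eigs, -eigs[::-1])    # eigenvalues come in ± pairs
```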

3. Hankel operators and the Abel summation method
3.1. Hankel operators. In the Introduction, we defined Hankel matrices acting on ℓ²(Z₊); equivalently, they can also be realised as operators acting on L²(T). Let T be the unit circle in the complex plane, and m the Lebesgue measure normalised to 1, i.e. dm(z) = (2πiz)^{−1} dz. Define the Riesz projection as

(P₊f)(v) = Σ_{n≥0} f̂(n) vⁿ, f ∈ L²(T).

For a symbol ω, the Hankel operator H(ω) is

H(ω) = P₊ ω J P₊,

where J is the involution Jf(v) = f(v̄) and, by a slight abuse of notation, ω denotes both the symbol and the induced operator of multiplication on L²(T). We can immediately see that if ω satisfies (1.1), H(ω) is self-adjoint. Furthermore, it is easy to see that, for any non-negative integers j, k, one has

⟨H(ω) v^k, v^j⟩ = ω̂(j + k),

so that H(ω) has the same matrix as Γ(ω̂). For 0 < r < 1, let P_r be the Poisson kernel, defined as

P_r(e^{iθ}) = Σ_{n∈Z} r^{|n|} e^{inθ} = (1 − r²) / (1 − 2r cos θ + r²).

For ω_r = P_r * ω, we have the identity

H(ω_r) = C_r H(ω) C_r, (3.4)

where C_r is the operator of convolution by P_r on L²(T). Furthermore, H(ω_r) is unitarily equivalent (modulo kernels) to the Hankel matrix Γ(ω̂_r), ω̂_r(n) = r^{|n|}ω̂(n). Note that for r = e^{−1/N}, the above reduces to the regularisation considered in (1.13). Let us also note: (i) C_r is a contraction on L²(T); (ii) if H(ω) ∈ S_p for some 1 ≤ p ≤ ∞, then (3.4) implies ‖H(ω_r)‖_{S_p} ≤ ‖H(ω)‖_{S_p}.
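The damping effect of the Poisson kernel can be checked numerically: convolving a symbol with P_r multiplies its n-th Fourier coefficient by r^{|n|}, which for r = e^{−1/N} produces the exponential weights of the regularisation (1.13). A discrete sketch (not from the paper), using the closed form of P_r and FFT-based circular convolution:

```python
import numpy as np

M = 4096
theta = 2 * np.pi * np.arange(M) / M
v = np.exp(1j * theta)
omega = np.sign(np.real(v)).astype(complex)    # piecewise continuous symbol

r = np.exp(-1 / 32.0)                          # r = e^{-1/N} with N = 32
P = (1 - r**2) / (1 - 2 * r * np.cos(theta) + r**2)   # Poisson kernel P_r

# circular convolution omega_r = P_r * omega via the FFT
# (the measure dm is normalised, hence the extra 1/M factor)
omega_r = np.fft.ifft(np.fft.fft(omega) * np.fft.fft(P)) / M

# convolution with P_r damps the n-th Fourier coefficient by r^{|n|}
coeff = lambda f, n: np.mean(f * v ** (-n))
for n in range(1, 40):
    assert abs(coeff(omega_r, n) - r**n * coeff(omega, n)) < 1e-8
```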

3.2. Almost orthogonal and almost symmetric Hankel operators.
Recall that for a function η : T → C, its singular support, denoted sing supp η, is defined as the smallest closed subset, M, of T such that η ∈ C ∞ (T\ M).
To prove the statements in Lemma 3.1, we use the following:

Lemma 3.3. (i) Any integral operator on L²(T) with an integral kernel of class C^∞(T²) is trace-class. (ii) If ω ∈ C²(T), then H(ω) ∈ S₁ and, furthermore, there exists C > 0, independent of ω, such that

‖H(ω)‖_{S₁} ≤ C (‖ω‖_{L^∞} + ‖ω′′‖_{L^∞}). (3.3)

(iii) If ω ∈ C²(T), the commutator [P₊, ω] is trace-class.
Proof of Lemma 3.3. (i) is folklore; it can be proved by approximating the kernel by trigonometric polynomials. Let us prove (ii). First, recall two facts: (a) any ω ∈ C²(T) is the uniform limit of a sequence ω_N of trigonometric polynomials of degree N; (b) for any N and v ∈ T, the Cauchy-Schwarz inequality together with the Plancherel identity controls the tail of the Fourier series of ω in terms of ‖ω′′‖_{L^∞}. Putting these two facts together and noting that rank(H(ω_N)) ≤ N, we obtain, for N ≥ 2, a bound on the singular values s_N(H(ω)). Thus we see that H(ω) ∈ S₁ and, furthermore, the estimate (3.3) holds.

(iii): Write P₋ = I − P₊, where I is the identity operator. Since P₊ is a projection and P₊P₋ = P₋P₊ = 0, one has [P₊, ω] = P₊ωP₋ − P₋ωP₊. Using the identity P₋ = JP₊J − P₊JP₊, it follows that

[P₊, ω] = H(ω)J − JH(ω)* − P₊ωP₊JP₊ + P₊JP₊ωP₊.
Since P + JP + is a rank-one operator (projection onto constants), [P + , ω] is trace-class if and only if H(ω)J − JH(ω) * is, which follows immediately from (ii).
With these facts at hand, we are now ready to prove Lemma 3.1.
Proof of Lemma 3.1. (i): we will only show the first inequality, as the second can be proved in the same way. From the assumptions on ω₁, ω₂, we can find ζ₁, ζ₂ ∈ C^∞(T) such that supp ζ₁ ∩ supp ζ₂ = ∅ and such that (1 − ζ_i)ω_i vanishes identically in a neighbourhood of sing supp ω_i. We will repeatedly use the following two facts:
(a) for any ϕ ∈ L^∞(T), Young's inequality holds, i.e. one has the estimate

‖P_r * ϕ‖_{L^∞} ≤ ‖ϕ‖_{L^∞}; (3.6)

(b) one has that P_r * ω ∈ C^∞(T) and, furthermore, (P_r * ω) → ω as r → 1− locally uniformly on T \ sing supp ω. The same is true for its derivatives (P_r * ω)^{(n)}.
We set ζ̃_i = 1 − ζ_i, i = 1, 2, and use the triangle inequality, from which we see that it is sufficient to find uniform bounds for each summand in the resulting decomposition.
Recall that H(ω_i) = P₊ω_iJP₊ and that P₊ is a projection. Since ζ₁ and ζ₂ have disjoint supports, the operator ζ₁P₊ζ₂ has a C^∞(T²) integral kernel, and Lemma 3.3-(i) shows that ζ₁P₊ζ₂ ∈ S₁. Furthermore, using the Hölder inequality for the Schatten classes and (3.6), we deduce a bound, uniform in r, for the corresponding summand. By Lemma 3.3-(ii), we also have that H(ζ̃₁(ω₁)_r) ∈ S₁ and, furthermore,

sup_{r<1} ‖H(ζ̃₁(ω₁)_r)‖_{S₁} ≤ C (3.7)

for some C > 0 independent of r. In (3.7) we used once more the Hölder inequality for Schatten classes together with the estimates (3.3) and (3.6). From (b) and the fact that ζ̃_iω_i vanishes identically in a neighbourhood of sing supp ω_i, we conclude that (ζ̃_i(ω_i)_r)′′ → (ζ̃_iω_i)′′ uniformly on the whole of T, and the bound follows. Similarly one can bound the remaining summands.

(ii): Since ±1 ∉ sing supp ω, we can write ω = ϕ + η for some η ∈ C^∞(T) and some ϕ vanishing identically in a neighbourhood U of ±1. With this decomposition of ω, we see that H(ω_r) = H(ϕ_r) + H(η_r). Since η is smooth, H(η) ∈ S₁, and so the triangle inequality and the Hölder inequality for Schatten classes imply that

sup_{r<1} ‖sH(ω_r) + H(ω_r)s‖_{S₁} ≤ 2‖H(η)‖_{S₁} + sup_{r<1} ‖sH(ϕ_r) + H(ϕ_r)s‖_{S₁}.
So it is sufficient to consider those symbols ω vanishing on a neighbourhood, U, of ±1. Fix a smooth function ζ such that 0 ≤ ζ ≤ 1, ζ vanishes identically on some open V ⊂ U with ±1 ∈ V, ζ ≡ 1 on T \ U, and ζ(v̄) = ζ(v) for v ∈ T. We decompose ω_r accordingly and study the resulting operators more closely. Using the triangle inequality and then (b) together with the fact that (1 − ζ)ω ≡ 0 on T, we conclude that ((1 − ζ)ω_r)′′ → 0 on T, and so Lemma 3.3-(ii) gives

sup_{r<1} ‖sH((1 − ζ)ω_r) + H((1 − ζ)ω_r)s‖_{S₁} < ∞.

For the operators appearing in the second line of (3.9), write

sH(ζω_r) + H(ζω_r)s = ([s, P₊]ζ)ω_rJP₊ + P₊ω_rJ(ζ[P₊, s]).

3.3.
Spectral density of our model operator: the Hilbert matrix. An important ingredient in the proofs of all our results is a model operator for which it is possible to explicitly compute the spectral density. Following the ideas of previous works, [16,18], a natural candidate is the Hilbert matrix, given by the symbol γ defined in (1.4).

Proposition 3.4. Let τ satisfy Assumptions (B) and (C). Then for t > 0,

LogDens_τ(t; Γ(γ̂)) = c(t),

where c has been defined in (1.10). If τ(x, y) = τ(y, x), then we also have

LogDens^+_τ(t; Γ(γ̂)) = c(t), LogDens^−_τ(t; Γ(γ̂)) = 0.

Proof of Proposition. The Invariance Principle, Theorem 2.5, shows that it is sufficient to prove the statement for the multiplier τ(x, y) = ½(x, y) defined in (1.20). This has already been done in [23, Theorem 5.1] and has been discussed in the Introduction in (1.10). Since the Hilbert matrix is a positive-definite operator, it is easy to see that τ_N ⋆ Γ(γ̂) is positive-definite as well, and so n₋(t; τ_N ⋆ Γ(γ̂)) = 0 for all t > 0. The statement can be independently proved using the multiplier τ₁(x, y) = e^{−(x+y)} discussed in the Introduction; we postpone this to the Appendix.
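A quick numerical check (outside the paper's argument) of the positivity used in the proof: taking entries 1/(π(j + k + 1)) as a stand-in for the Hilbert matrix, both the square truncation from (1.20) and the exponential regularisation by τ₁ are Gram matrices of monomials on (0, 1), hence positive semi-definite, so the negative counting functions vanish.

```python
import numpy as np

N = 300
j, k = np.indices((N, N))
H = 1.0 / (np.pi * (j + k + 1))      # square truncation (multiplier (1.20))
H_exp = np.exp(-(j + k) / N) * H     # exponential regularisation by tau_1

# both matrices are Gram matrices of monomials, hence PSD: up to numerical
# noise there are no negative eigenvalues, i.e. n_-(t) = 0 for every t > 0
min_eigs = [np.linalg.eigvalsh(Mtx).min() for Mtx in (H, H_exp)]
print(min_eigs)
```

(The smallest eigenvalues of Hilbert sections are exponentially small, so floating-point noise of size ~1e-16 around zero is expected.)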

4. Proof of Theorem 1.1
The proof of the result is broken down into two steps. For brevity, we denote by $\Gamma^{(N)}(\hat\omega)$ the operator $\tau_N \star \Gamma(\hat\omega)$. We also recall that $\Omega$ is the set of jump discontinuities of the symbol $\omega$ and that $c$ is the function in (1.10).
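The truncation $\tau_N \star \Gamma(\hat\omega)$ is an entrywise (Hadamard) product with a multiplier matrix; assuming the scaling convention $(\tau_N)_{jk} = \tau(j/N, k/N)$ (an assumption of this sketch, not a quotation from the paper), the following contrasts square and triangular truncation, with the Hilbert matrix as a stand-in Hankel matrix:

```python
import numpy as np

M, N = 60, 25
j = np.arange(M)
G = 1.0 / (j[:, None] + j[None, :] + 1.0)   # Hilbert matrix as stand-in Hankel matrix

x, y = j[:, None] / N, j[None, :] / N
tau_square = ((x <= 1) & (y <= 1)).astype(float)   # square truncation multiplier
tau_triangle = (x + y <= 1).astype(float)          # triangular truncation multiplier

G_sq = tau_square * G    # Hadamard product tau_N * Gamma
G_tr = tau_triangle * G

# square truncation keeps the (N+1) x (N+1) corner unchanged
assert np.allclose(G_sq[:N + 1, :N + 1], G[:N + 1, :N + 1])
# a symmetric multiplier preserves self-adjointness
assert np.allclose(G_tr, G_tr.T)
```

The theorem's point, that the eigenvalue distribution is asymptotically the same for both choices of $\tau$, is of course not something a finite experiment proves; the sketch only fixes the objects involved.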

5. Proof of Theorem 1.2
Just as in the proof of Theorem 1.1, we break the argument into two steps, and we use the same notation as before for the operator $\tau_N \star \Gamma(\hat\omega)$ and for the symbols $\gamma_z$. We also set $\Omega_+ = \{z \in \Omega \mid \operatorname{Im} z > 0\}$.
Step 1. Finitely many jumps. Just as before, suppose that the symbol $\omega$ has finitely many jump discontinuities, and decompose it as the sum of a linear combination of the model symbols $\gamma_z$, $z \in \Omega$, and a remainder $\eta$ continuous on $\mathbb{T}$. If $\omega$ has no jump at $\pm1$, the corresponding terms do not appear in the decomposition. Denoting by $\Phi$ the sum of the model terms, the Weyl inequality (2.3) gives the corresponding bound for $0 < s < t$. By Lemma 2.4, we obtain that $n_\pm(s; \Gamma^{(N)}(\hat\eta)) = O_s(1)$, and so the bound follows for any $t > 0$; integration by parts once again yields the asymptotics. Applying Theorem 2.5 to $\Gamma(\hat\Phi)$, we see that it is sufficient to prove the result for the multiplier $\tau(x, y) = e^{-(x+y)}$. Since the symbols $\gamma_z$ have mutually disjoint singular supports for $z \in \Omega_+$, Lemma 3.1-(ii) and Theorem 2.7 apply for $t > 0$. The operators $\kappa_{\pm1}(\omega)\Gamma(\hat\gamma_{\pm1})$ are sign-definite; in either case, Proposition 3.4 gives the corresponding density, where $\mathbb{1}_\pm$ is the indicator function of $\mathbb{R}_\pm = (0, \pm\infty)$.
From Lemma 3.1-(ii), Theorem 2.8 and Theorem 1.1 above, we obtain, for any $z \in \Omega_+$, the corresponding asymptotics.
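The Weyl-type counting inequality (2.3) invoked in Step 1, in the form $n_+(s+t;\,A+B) \le n_+(s; A) + n_+(t; B)$ for self-adjoint operators, can be sanity-checked on finite symmetric matrices; a small numerical sketch (illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

def n_plus(s, A):
    # n_+(s; A): number of eigenvalues of the symmetric matrix A exceeding s
    return int((np.linalg.eigvalsh(A) > s).sum())

for _ in range(20):
    X = rng.standard_normal((25, 25)); A = (X + X.T) / 2
    Y = rng.standard_normal((25, 25)); B = (Y + Y.T) / 2
    s, t = 0.6, 0.3
    assert n_plus(s + t, A + B) <= n_plus(s, A) + n_plus(t, B)
```

The inequality is deterministic (it follows from the min-max principle); the random matrices merely exercise it on many instances.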
Remark 5.1. As noted in the Introduction, if the symbol has a pair of complex-conjugate jumps, then (5.5) shows that the upper and lower logarithmic spectral densities of $\Gamma(\hat\omega)$ contribute equally to the logarithmic spectral density of $|\Gamma(\hat\omega)|$. This is an effect of the Symmetry Principle referred to in the Introduction.
Proof of Proposition 1.3. The same reasoning as in Step 1 of both proofs above applies in this case, with only one minor change. Since we assume that $\tau$ induces a uniformly bounded multiplier on $S_p$, $p > 1$, i.e. that (1.25) holds, in (4.1) and (5.1) we need to assume that $\eta$ is a symbol such that $\Gamma(\hat\eta) \in S_p$. Then Lemma 2.4 shows that $n(s; \Gamma^{(N)}(\hat\eta)) = O_s(1)$ and, in the self-adjoint case, $n_\pm(s; \Gamma^{(N)}(\hat\eta)) = O_s(1)$. The rest follows immediately.
Proof of Proposition 1.4. Exactly the same reasoning as in the proofs of Theorems 1.1 and 1.2 applies, the only difference being that $\tau$ no longer induces a uniformly bounded multiplier on the whole space of bounded operators, but only on Hankel matrices. However, all of the terms appearing in the arguments just presented are bounded Hankel operators, and so the same arguments go through.
Its boundedness can be established using the Schur test. A simple calculation yields the identity $\Gamma(\hat\gamma) = LL^*$, from which it follows that, with $\Gamma^{(r)}(\hat\gamma) = \Gamma(\hat\gamma_r)$ and $\mathbb{1}_r$ the characteristic function of the interval $(0, r)$,
$$r^m \operatorname{Tr}\big(\Gamma^{(r)}(\hat\gamma)\big)^m = \operatorname{Tr}\big(\mathbb{1}_r L^* L\, \mathbb{1}_r\big)^m, \tag{A.1}$$
so we only need to compute the latter trace. Recall now that for any bounded operator $X$ there is a unitary equivalence between $XX^*|_{\ker(XX^*)^\perp}$ and $X^*X|_{\ker(X^*X)^\perp}$. Hence, the traces of $(L\mathbb{1}_r L^*)^m$ and of $(\mathbb{1}_r L^* L\, \mathbb{1}_r)^m$ coincide. Note, however, that $L^*L$ is an operator acting on $L^2(0, 1)$ whose integral kernel is
$$k(t, s) = \frac{1}{\pi(1 - ts)}, \qquad t, s \in (0, 1).$$
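Assuming the normalization $\hat\gamma(n) = \frac{1}{\pi(n+1)}$ (an assumption of this sketch, consistent with the kernel $k(t,s) = \frac{1}{\pi(1-ts)}$ above), one may take $(Lf)(j) = \pi^{-1/2}\int_0^1 t^j f(t)\,dt$, and the identity $\Gamma(\hat\gamma) = LL^*$ can then be verified numerically by Gauss-Legendre quadrature:

```python
import numpy as np

# Gauss-Legendre nodes/weights transplanted from (-1, 1) to (0, 1)
x, w = np.polynomial.legendre.leggauss(40)
t = (x + 1) / 2
w = w / 2

J = 10
j = np.arange(J)
# discretized L: (Lf)(j) = pi^{-1/2} * int_0^1 t^j f(t) dt  (assumed normalization)
L = (t[None, :] ** j[:, None]) * np.sqrt(w)[None, :] / np.sqrt(np.pi)

G = L @ L.T   # should reproduce the Hankel entries 1/(pi*(j+k+1))
target = 1.0 / (np.pi * (j[:, None] + j[None, :] + 1.0))
assert np.allclose(G, target)
```

The quadrature is exact here because the integrands $t^{j+k}$ are polynomials of degree well below the rule's exactness threshold; the nonzero spectra of $LL^*$ and $L^*L$ coincide, which is the unitary-equivalence fact used in the text.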
In this way, we have reduced our problem to evaluating the trace of the integral operator $(\widetilde{\mathbb{1}}_r B\, \widetilde{\mathbb{1}}_r)^m$, where $\widetilde{\mathbb{1}}_r$ is the characteristic function of the interval $(0, \operatorname{arctanh}(r))$. By adding $0$ to its spectrum, we may also consider $\widetilde{\mathbb{1}}_r B\, \widetilde{\mathbb{1}}_r$ as an integral operator acting on $L^2(\mathbb{R})$, with integral kernel
$$\frac{\widetilde{\mathbb{1}}_r(s)\,\widetilde{\mathbb{1}}_r(t)}{\pi \cosh(s - t)}, \qquad s, t \in \mathbb{R}.$$
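The passage from the kernel $\frac{1}{\pi(1-ts)}$ on $(0,1)$ to the convolution kernel $\frac{1}{\pi\cosh(s-t)}$ rests on the substitution $t = \tanh x$, $s = \tanh y$ (which maps $(0, r)$ to $(0, \operatorname{arctanh} r)$) together with the hyperbolic identity $1 - \tanh x \tanh y = \cosh(x-y)/(\cosh x \cosh y)$. A quick numerical check of the identity:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 100)
y = rng.uniform(-3, 3, 100)

# identity behind the change of variables t = tanh(x), s = tanh(y):
# 1 - tanh(x)tanh(y) = cosh(x - y) / (cosh(x) cosh(y))
lhs = 1 - np.tanh(x) * np.tanh(y)
rhs = np.cosh(x - y) / (np.cosh(x) * np.cosh(y))
assert np.allclose(lhs, rhs)
```

Combined with the Jacobian factors $\operatorname{sech}^2$, this is exactly what turns the Cauchy-type kernel into the translation-invariant one used above.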
Whence we can estimate $\operatorname{Tr}(\widetilde{\mathbb{1}}_r B\, \widetilde{\mathbb{1}}_r)^m$ accordingly. We also have that $[\widetilde{\mathbb{1}}_r, B] \in S_2$; thus we need an estimate for the Hilbert-Schmidt norm of the integral operator $[\widetilde{\mathbb{1}}_r, B]$, whose integral kernel is given by
$$k(t, s) = \frac{\widetilde{\mathbb{1}}_r(t) - \widetilde{\mathbb{1}}_r(s)}{\pi \cosh(t - s)}, \qquad t, s \in \mathbb{R}.$$
Finally, an application of the Dominated Convergence Theorem gives the result.
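The uniform boundedness in $r$ of the Hilbert-Schmidt norm above can also be seen numerically: the commutator kernel is supported near the endpoints of the interval and decays exponentially off the diagonal, so $\|[\mathbb{1}_{(0,R)}, B]\|_{S_2}^2$ stabilizes as $R$ grows. A rough Riemann-sum sketch (illustration only; grid size and box are ad hoc choices):

```python
import numpy as np

def hs_norm_sq(R, T=12.0, n=2000):
    # Riemann-sum estimate of ||[1_(0,R), B]||_{S_2}^2, where B has
    # convolution kernel 1/(pi*cosh(t - s)); integration box (-T, T)^2
    u = np.linspace(-T, T, n)
    h = u[1] - u[0]
    ind = ((u > 0) & (u < R)).astype(float)
    K = (ind[:, None] - ind[None, :]) / (np.pi * np.cosh(u[:, None] - u[None, :]))
    return float((K ** 2).sum() * h * h)

v1, v2 = hs_norm_sq(3.0), hs_norm_sq(6.0)
assert v1 < 1.0 and v2 < 1.0    # uniformly bounded in the cutoff R
assert abs(v1 - v2) < 0.05      # essentially independent of R once R >> 1
```

Each endpoint of the interval contributes a fixed boundary-layer amount to the squared norm, which is why the value is insensitive to $R$; this is the quantitative content behind the dominated-convergence step.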