Can one hear a matrix? Recovering a real symmetric matrix from its spectral data

The spectrum of a real symmetric $N\times N$ matrix determines the matrix only up to unitary equivalence. More spectral data, together with some sign indicators, is needed to remove the unitary ambiguities. In the first part of this work we specify the spectral and sign information required for a unique reconstruction of general matrices. More specifically, the spectral information consists of the spectra of the $N$ nested main minors of the original matrix, of sizes $1,2,\dots,N$. However, due to the complicated nature of the required sign data, improvements are needed in order to make the reconstruction procedure feasible. With this in mind, the second part is restricted to banded matrices, where the amount of spectral data exceeds the number of unknown matrix entries. It is shown that one can take advantage of this redundancy to guarantee unique reconstruction of {\it generic} matrices; in other words, this subset of matrices is open, dense and of full measure in the set of real, symmetric, banded matrices. It is also shown that one can optimize the ratio between redundancy and genericity by using the freedom of choice of the spectral information input. We demonstrate our constructions in detail for pentadiagonal matrices.


Introduction
The answer to the question posed in the title is definitely: No! The spectrum determines the matrix only up to unitary equivalence. The question is then: what additional spectral information is required for this purpose? This question has accompanied Mathematical Physics for a long time, and it is usually referred to as the spectral inversion or spectral reconstruction problem [1,2]. The related reconstruction methods have a wide scope of applications in different areas of Physics, Materials Science and Engineering. They appear, e.g., in studies of mechanical systems near equilibrium, where one tries to construct a quadratic Hamiltonian model [3], or in atomic, molecular and nuclear physics, where spectra are measured with the hope that they will provide information on the underlying interactions. In most cases the Hamiltonian operator is expressed as an ODE or PDE acting on an appropriate function space; reduction schemes such as the Lanczos algorithm bring the reconstructed matrix to a $(2d+1)$-banded form [17][18][19][20]. Importantly, the Lanczos algorithm changes the spectra of all remaining main minors, and thus this method differs from the approach presented in this work.

Similar problems have been considered for matrices with complex entries. A reconstruction algorithm using the spectra of all $2^N-1$ main minors has been developed [21]. The algorithm produces an example of a complex matrix whose main minors have the given spectra and, under some regularity assumptions, the result is unique up to similarity transformations by nonsingular diagonal matrices and the transposition operation [22]. Even more general settings, concerning matrices with entries from an algebraically closed field, have been considered. In particular, it has been shown that there exists a finite number of square $N\times N$ matrices with a prescribed spectrum and prescribed off-diagonal entries [23]. A different approach to the problem is presented in [24], where any Hermitian matrix is reconstructed from its spectrum and the spectra of a suitable number of its perturbations.

Recently, there has been some revived interest in the so-called eigenvector-eigenvalue identity [9], which allows one to find the amplitudes of the eigenvector entries of a Hermitian matrix using the spectra of its $(N-1)\times(N-1)$ minors and the spectrum of the full matrix itself. As the authors of [9] point out, the eigenvector-eigenvalue identity appears in different guises in many places in the literature (see e.g. [10]; the authors of [9] also provide a thorough review of the relevant literature). In the present paper we derive (Appendix B) an identity which bears some similarity to the eigenvector-eigenvalue identity. This identity was proved previously [17][18][19] using a different method.
The present paper consists of three sections. The first deals with the inverse problem for full, real symmetric matrices of dimension $N$, where the number of unknown entries is $\frac{1}{2}N(N+1)$. The spectral data to be used is the union of the spectra of the $N$ nested main minors of the matrix, of sizes $1,2,\dots,N$, together with the $\frac{1}{2}N(N-1)$ sign indicators needed for the complete reconstruction. The precise definition of the sign indicators will be given below. The actual construction is inductive: given a matrix $A$ and a minor $A^{(n)}$ of dimension $n$, its next neighbor $A^{(n+1)}$ is obtained by computing the $(n+1)$'th column from the given spectra and sign indicators. The uniqueness of the resulting solution is proved. Thus, the matrix unfolds like a telescope, hence the name: the telescopic construction. The fly in the ointment is that the computation of the sign indicators is rather cumbersome.
The second section uses the telescopic method restricted to banded matrices with band width $D=2d+1$ much smaller than $N$. The spectral input exceeds the number of unknowns, but this redundancy makes it possible to circumvent the computation of the sign indicators; the only sign information required consists of the signs of the off-diagonal matrix elements in the diagonal which is $d$ steps away from the main diagonal, just as is the case for Jacobi matrices. The proposed method is proved to provide a unique solution only for generic $D$-diagonal matrices; namely, this subset is open, dense and of full measure in the set of real, symmetric, banded matrices specified by entries in $\mathbb{R}^{(N-\frac{d}{2})(d+1)}$. An explicit criterion which distinguishes the non-generic cases is proposed.
Finally, in the last section it is shown that the large redundancy present in the telescopic approach can be reduced appreciably by studying a different construction: one considers only the $(d+1)$-dimensional principal minors $M^{(n)}$, which are distinguished by the position of their upper corner along the diagonal. Their spectra and appropriate sign indicators are used to compute successive minors in a recursive process. This method will be referred to as the sliding-minor construction. Also in the sliding-minor construction we are able to introduce a certain redundancy by considering $(d+2)$-dimensional sliding minors. Then, we show that the required sign data typically reduces to the signs of the off-diagonal matrix elements in the first row. The application of the last two methods to banded matrices requires a special treatment of the upper $d\times d$ main minor, as will be explained in detail for each case. To allow a smooth flow of the main ideas, some of the more technical proofs of lemmas and theorems stated in the text are deferred to the appendices.
We believe that turning the focus to generic rather than to unconditionally applicable methods increases the domain of practical application and the scope of the study of inverse problems.

Notations and preliminaries
Before turning to the main body of the paper, the present subsection introduces notations and conventions that will be used throughout.
Let $A$ be a symmetric real $N\times N$ matrix. Denote by $A^{(n)}$ its $n\times n$ upper main diagonal minor, so that $A^{(1)}=A_{1,1}$ and $A^{(N)}=A$. The upper suffix $(n)$ will be used throughout for the dimension of the subspace under discussion.
The Dirac notation for vectors is used. That is, given a list $x^{(n)}=\{x^{(n)}_j\}_{j=1}^n$, then $|x^{(n)}\rangle$ denotes the column vector in dimension $n$ with entries $x^{(n)}_j$. Its transpose (row) will be denoted by $\langle x^{(n)}|$, so that the scalar product is $\langle x^{(n)}|y^{(n)}\rangle$. It is convenient also to introduce the unit vectors $|e^{(n)}_{(j)}\rangle$, $1\le j\le n$, whose entries all vanish but for the $j$'th, which is unity. Thus, clearly $\langle e^{(n)}_{(j)}|x^{(n)}\rangle = x^{(n)}_j$, and $A^{(n)}_{i,j} := \langle e^{(n)}_{(i)}|A^{(n)}|e^{(n)}_{(j)}\rangle$. For every minor $A^{(n)}$, $1\le n\le N-1$, define the upper off-diagonal part of the next column, $|a^{(n)}\rangle$, with entries $a^{(n)}_j := A_{j,n+1}$, $1\le j\le n$. The spectra of $A^{(n)}$ for $1\le n\le N$ will be denoted by $\sigma^{(n)}=\{\lambda^{(n)}_j\}_{j=1}^n$, ordered so that $\lambda^{(n)}_k \ge \lambda^{(n)}_j$ for all $k\ge j$, and the corresponding normalized eigenvectors (determined up to an overall sign) are $\{|v^{(n)}_{(j)}\rangle\}_{j=1}^n$. In general, the choice of eigenvectors is ambiguous. In the following, we will often remove this ambiguity by fixing the choice of eigenvectors of $A^{(n)}$. If the spectrum is non-degenerate, this is done, for instance, by demanding the last entry of each $|v^{(n)}_{(j)}\rangle$ to be positive. For a fixed choice of eigenvectors, the projections of $|a^{(n)}\rangle$ on the eigenvectors of $A^{(n)}$ will be denoted by $\xi^{(n)}_j := |\langle v^{(n)}_{(j)}|a^{(n)}\rangle|$, and their signs by
$$s^{(n)}_j := \operatorname{sign}\,\langle v^{(n)}_{(j)}|a^{(n)}\rangle. \tag{1.1}$$
The $s^{(n)}_j$ will play the role of the sign indicators in what follows. However, the arbitrary choice of the overall signs of the vectors $|v^{(n)}_{(j)}\rangle$ will not have any effect on the results, as long as the chosen signs are not altered throughout the proof.
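The following numerical sketch (Python with numpy; the function names are ours, for illustration only) sets up the spectral data and the sign indicators defined above, with the eigenvector signs fixed by a positive last entry:

```python
import numpy as np

# Spectral data and sign indicators of the nested main minors of a random
# real symmetric matrix; function names are illustrative.
def spectral_data(A):
    """Spectra sigma^(n) of the upper-left minors A^(1), ..., A^(N)."""
    N = A.shape[0]
    return [np.linalg.eigvalsh(A[:n, :n]) for n in range(1, N + 1)]

def sign_indicators(A, n):
    """Signs s_j^(n) of the projections of |a^(n)> = A[:n, n] on the
    eigenvectors of A^(n), with eigenvector signs fixed by a positive
    last entry."""
    lam, V = np.linalg.eigh(A[:n, :n])   # columns of V are |v^(n)_(j)>
    V = V * np.sign(V[-1, :])            # fix the overall signs
    return np.sign(V.T @ A[:n, n])

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
print(spectral_data(A)[2])     # sigma^(3)
print(sign_indicators(A, 3))   # s_1^(3), s_2^(3), s_3^(3)
```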

Inversion by the telescopic construction
Given the spectra $\sigma^{(n)}$ for $1\le n\le N$, one can use very simple arguments to derive the following information on the matrix $A$, without relying on any sign information. These facts are known in different guises, see e.g. references [17][18][19][20]. We bring them here for completeness and as background necessary for the ensuing developments. Theorem 1. The spectra $\sigma^{(n)}$ for $1\le n\le N$ suffice to determine the diagonal elements of $A$ and the norms of the vectors $|a^{(n)}\rangle$.
Proof. Consider the minor $A^{(n+1)}$, its lowest diagonal entry $A^{(n+1)}_{n+1,n+1}$, to be denoted by $h$, and the off-diagonal column $|a^{(n)}\rangle$. Clearly $\operatorname{Tr} A^{(n+1)} = \operatorname{Tr} A^{(n)} + h$, so that
$$h = \sum_{k=1}^{n+1}\lambda^{(n+1)}_k - \sum_{k=1}^{n}\lambda^{(n)}_k. \tag{2}$$
Hence $h$ is deduced directly from the spectral data. Also, $\operatorname{Tr}\big(A^{(n+1)}\big)^2 = \operatorname{Tr}\big(A^{(n)}\big)^2 + h^2 + 2\,\big\||a^{(n)}\rangle\big\|^2$, so that
$$\big\||a^{(n)}\rangle\big\|^2 = \frac{1}{2}\Big(\sum_{k=1}^{n+1}\big(\lambda^{(n+1)}_k\big)^2 - \sum_{k=1}^{n}\big(\lambda^{(n)}_k\big)^2 - h^2\Big). \tag{3}$$
Thus $h$ and the value of $\big\||a^{(n)}\rangle\big\|^2$ can be computed for all $n$.
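A minimal numerical check of Theorem 1, with Equations (2) and (3) read as the trace identities above:

```python
import numpy as np

# Check of Theorem 1: h and ||a^(n)||^2 from the two spectra, Eqs. (2), (3).
rng = np.random.default_rng(1)
N = 6
A = rng.standard_normal((N, N)); A = (A + A.T) / 2

for n in range(1, N):
    lam_n = np.linalg.eigvalsh(A[:n, :n])
    lam_n1 = np.linalg.eigvalsh(A[:n + 1, :n + 1])
    h = lam_n1.sum() - lam_n.sum()                                 # Eq. (2)
    a_norm2 = ((lam_n1**2).sum() - (lam_n**2).sum() - h**2) / 2    # Eq. (3)
    assert np.isclose(h, A[n, n])
    assert np.isclose(a_norm2, np.sum(A[:n, n]**2))
```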
Remark 1. This result can be considered as an inverse of the Gerschgorin theorem [28], which for symmetric matrices states that given a symmetric matrix $A$, its spectrum lies in the union of real intervals centered at the points $A_{n,n}$ and of lengths $2\sum_{k\neq n}|A_{n,k}|$. Theorem 1 states that given the spectra of the successive minors, the diagonal elements of the matrix are determined precisely, and the vectors of off-diagonal elements $|a^{(n)}\rangle$ are restricted to a sphere of a radius determined by Equation (3).

Remark 2. If $A$ is a Jacobi (symmetric, tridiagonal) matrix, only the $n$th component of $|a^{(n)}\rangle$ is different from zero, and therefore the off-diagonal entries are determined up to a sign. In other words, one has to provide their signs so that the Jacobi matrix can be heard.
The following identities are similar in spirit to the former ones, and can be proved in similar ways. They will be used in the sequel:
$$\langle a^{(n)}|A^{(n)}|a^{(n)}\rangle = \frac{1}{3}\Big(\sum_{k=1}^{n+1}\big(\lambda^{(n+1)}_k\big)^3-\sum_{k=1}^{n}\big(\lambda^{(n)}_k\big)^3-h^3\Big)-h\,\big\||a^{(n)}\rangle\big\|^2, \tag{4}$$
and
$$\langle a^{(n)}|\big(A^{(n)}\big)^{-1}|a^{(n)}\rangle = h-\frac{\det A^{(n+1)}}{\det A^{(n)}} = h-\frac{\prod_{k=1}^{n+1}\lambda^{(n+1)}_k}{\prod_{k=1}^{n}\lambda^{(n)}_k}, \tag{5}$$
which is valid if $0$ is not in the spectrum of $A^{(n)}$. (If $0\in\sigma^{(n)}$ one can add to $A$ a constant multiple of the identity, which renders all the spectra non-vanishing, without affecting any of the proofs or the results derived in the sequel.) The identities (3,4,5) are quadratic in the $n$ components of the vector $|a^{(n)}\rangle$. When $A$ is assumed to be pentadiagonal ($D=5$), only the last two components of $|a^{(n)}\rangle$ do not vanish, and the quadratic forms describe three concentric curves: a circle and two conic sections in $\mathbb{R}^2$. Their mutual intersections are candidates for the solution of the inversion problem. This is discussed in detail in section (3.2.2).
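A minimal numerical sketch, assuming the Schur-complement form of Equation (5) given above:

```python
import numpy as np

# Check of the identity (5) in its Schur-complement form:
# det A^(n+1) = det A^(n) * (h - <a|(A^(n))^{-1}|a>),
# with the determinants given by the products of the eigenvalues.
rng = np.random.default_rng(2)
N = 5
A = rng.standard_normal((N, N)); A = (A + A.T) / 2

n = 3
An, a, h = A[:n, :n], A[:n, n], A[n, n]
lhs = h - a @ np.linalg.solve(An, a)        # <a|(A^(n))^{-1}|a> via a solve
rhs = np.prod(np.linalg.eigvalsh(A[:n+1, :n+1])) / np.prod(np.linalg.eigvalsh(An))
assert np.isclose(lhs, rhs)
```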

A general algorithm
Unlike the case of tridiagonal matrices, where the sign indicators required are just the signs of the off-diagonal entries (see Remark 2), in the general case one requires the sign indicators $s^{(n)}_j$ defined in (1.1). In the following, it will often be convenient to assume certain regularity properties of the matrix $A$.
Definition (regularity). A real symmetric matrix $A$ is called regular if: (i) the spectra $\sigma^{(n)}$ are non-degenerate for $n=1,\dots,N$; (ii) the successive spectra do not share a common value, $\sigma^{(n)}\cap\sigma^{(n+1)}=\emptyset$ for $n=1,\dots,N-1$.
Remark 3. The following theorem was kindly provided by Percy Deift [29], and it is quoted without proof.
Let $A$ be an $N\times N$ real symmetric and regular matrix. We prove:
Theorem 4. Given the spectra $\{\sigma^{(n)}\}_{n=1}^N$ and the sign indicators $\{s^{(n)}\}_{n=1}^{N}$ as defined in (1.1), these data suffice to construct the original matrix $A$ uniquely.
(Irregular matrices will be discussed in a subsequent subsection.)
The inductive step assumes as given: (i) the full spectral data of $A^{(n)}$, i.e. its spectrum $\sigma^{(n)}$ and eigenvectors $\{|v^{(n)}_{(j)}\rangle\}_{j=1}^n$; (ii) the spectrum $\sigma^{(n+1)}$ and the sign indicators $s^{(n)}$.
Proof. The value of $h$ is determined by (2). The next steps are based on the relation between the two spectra $\sigma^{(n)}$ and $\sigma^{(n+1)}$ which is used in the proof of the interlacing property (6). Define an auxiliary matrix $\tilde A^{(n+1)}$ with $A^{(n)}$ being its main $n\times n$ minor, $\tilde A^{(n+1)}_{n+1,n+1}=h$, and whose off-diagonal entries of the $(n+1)$'th column and row all vanish. Then, the two matrices of dimension $n+1$, $A^{(n+1)}$ and $\tilde A^{(n+1)}$, differ by a matrix of rank 2:
$$A^{(n+1)} = \tilde A^{(n+1)} + |a^{(n)},0\rangle\langle e^{(n+1)}_{(n+1)}| + |e^{(n+1)}_{(n+1)}\rangle\langle a^{(n)},0|. \tag{7}$$
To express the spectrum and eigenvectors of $A^{(n+1)}$, expand any of its eigenvectors in the basis consisting of the eigenvectors of $\tilde A^{(n+1)}$, namely $\{|v^{(n)}_{(r)},0\rangle\}_{r=1}^n$ and $|e^{(n+1)}_{(n+1)}\rangle$:
$$|\tilde v^{(n+1)}_{(k)}\rangle = \sum_{r=1}^{n} b_{k,r}\,|v^{(n)}_{(r)},0\rangle + b_{k,n+1}\,|e^{(n+1)}_{(n+1)}\rangle. \tag{8}$$
Replacing $A^{(n+1)}$ by its form (7), one easily finds that the expansion coefficients $\{b_{k,r}\}_{r=1}^{n+1}$ of the $k$th eigenvector satisfy
$$b_{k,r}\,\big(\lambda^{(n+1)}_k-\lambda^{(n)}_r\big) = b_{k,n+1}\,\langle v^{(n)}_{(r)}|a^{(n)}\rangle,\qquad r=1,\dots,n, \tag{10}$$
$$b_{k,n+1}\,\big(\lambda^{(n+1)}_k-h\big) = \sum_{r=1}^{n} b_{k,r}\,\langle v^{(n)}_{(r)}|a^{(n)}\rangle. \tag{11}$$
From these relations and the definitions above we conclude that
$$b_{k,r} = b_{k,n+1}\,\frac{s^{(n)}_r\,\xi^{(n)}_r}{\lambda^{(n+1)}_k-\lambda^{(n)}_r}. \tag{12}$$
Since, by the conditions of the theorem, $\lambda^{(n+1)}_k\neq\lambda^{(n)}_r$ for all the pairs $1\le r\le n$, $1\le k\le n+1$, one can write
$$\sum_{r=1}^{n}\frac{\big(\xi^{(n)}_r\big)^2}{\lambda^{(n+1)}_k-\lambda^{(n)}_r} = \lambda^{(n+1)}_k-h,\qquad k=1,\dots,n+1. \tag{13}$$
Moreover, all minors are assumed to be of full rank, and both $\sigma^{(n+1)}$ and $\sigma^{(n)}$ are given. Therefore, one can consider (13) as a set of linear equations from which the unknown $n$ parameters $(\xi^{(n)}_r)^2 := \langle a^{(n)}|v^{(n)}_{(r)}\rangle^2$, $r=1,\dots,n$, can be computed, and none vanishes, as will be shown in (15) below. However, the number of equations exceeds by 1 the number of unknowns, and therefore there exists a solution only if the equations are consistent. To prove consistency, we start by obtaining an explicit solution of the first $n$ equations in (13). Introducing the Cauchy matrix
$$C_{k,r} := \frac{1}{\lambda^{(n+1)}_k-\lambda^{(n)}_r},\qquad 1\le k,r\le n, \tag{14}$$
the unknown parameters $\{(\xi^{(n)}_r)^2\}_{r=1}^n$ then read (see Appendix B):
$$\big(\xi^{(n)}_r\big)^2 = -\,\frac{\prod_{k=1}^{n+1}\big(\lambda^{(n)}_r-\lambda^{(n+1)}_k\big)}{\prod_{k\neq r}\big(\lambda^{(n)}_r-\lambda^{(n)}_k\big)}. \tag{15}$$
Formula (15) can also be found in earlier works [17][18][19], and the recursive relation (12) for the eigenvectors has also been used in [18]. Similarly, one can combine Equation (12) with Equation (15) to obtain (after requiring the eigenvectors of $A^{(n+1)}$ to be normalized to one) the following expression for the coefficients $|b_{k,n+1}|^2$:
$$|b_{k,n+1}|^2 = \frac{\prod_{r=1}^{n}\big(\lambda^{(n+1)}_k-\lambda^{(n)}_r\big)}{\prod_{r\neq k}\big(\lambda^{(n+1)}_k-\lambda^{(n+1)}_r\big)}. \tag{16}$$
Equation (16) is an instance of the eigenvector-eigenvalue identity from [9], which allows one to compute the amplitudes $\langle e^{(n+1)}_{(r)}|\tilde v^{(n+1)}_{(k)}\rangle$ directly from the spectra of the main minors of $A^{(n+1)}$ of size $n\times n$. Here, we are using only one $n\times n$ minor (namely, $A^{(n)}$), thus we recover only the last squared entry of each eigenvector. By combining Equation (16) with Equation (12) we can express the coefficients $b_{k,r}$ in terms of the spectra of $A^{(n)}$ and $A^{(n+1)}$, but the resulting formulae will be different from the eigenvector-eigenvalue identity [9], as we use different spectral data and the entries refer to the basis of the eigenvectors of $A^{(n)}$ only. The consistency of the $n+1$ linear equations (10) and (11) can be proved by substituting the $(\xi^{(n)}_r)^2$ computed above in the last equation in (13), and showing that the resulting equation is satisfied identically. An explicit expression for $(C^{-1})_{r,k}$ is given in [33]:
$$(C^{-1})_{r,k} = \frac{\prod_{m=1}^{n}\big(\lambda^{(n+1)}_k-\lambda^{(n)}_m\big)\prod_{m=1}^{n}\big(\lambda^{(n+1)}_m-\lambda^{(n)}_r\big)}{\big(\lambda^{(n+1)}_k-\lambda^{(n)}_r\big)\prod_{m\neq k}\big(\lambda^{(n+1)}_k-\lambda^{(n+1)}_m\big)\prod_{m\neq r}\big(\lambda^{(n)}_r-\lambda^{(n)}_m\big)}. \tag{17}$$
The identities proved in Appendix A show that the consistency equation is satisfied identically for interlacing spectra, as is the case in the present context. To recapitulate, the solution of equations (13) exists and it provides the $(\xi^{(n)}_r)^2$ in terms of the spectra in a unique way, as shown in Equation (15). This information together with the sign indicators determines $\langle a^{(n)}|v^{(n)}_{(r)}\rangle = s^{(n)}_r\,\xi^{(n)}_r$.
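The following sketch verifies Equation (15) numerically: the squared projections computed from the two spectra agree with the direct projections of the column on the eigenvectors (the setup is illustrative):

```python
import numpy as np

# Check of Eq. (15): the squared projections (xi_r)^2 computed from the two
# spectra coincide with the direct projections of the column |a^(n)>.
rng = np.random.default_rng(3)
N = 6
A = rng.standard_normal((N, N)); A = (A + A.T) / 2

n = 4
lam, V = np.linalg.eigh(A[:n, :n])           # sigma^(n) and eigenvectors
mu = np.linalg.eigvalsh(A[:n + 1, :n + 1])   # sigma^(n+1)
a = A[:n, n]                                 # the column |a^(n)>

for r in range(n):
    num = np.prod(lam[r] - mu)                   # over k = 1, ..., n+1
    den = np.prod(np.delete(lam[r] - lam, r))    # over k != r
    assert np.isclose(-num / den, (V[:, r] @ a) ** 2)
```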

Remark 4. A complementary result to the above was recently reported in [30]. Using another approach, it is proved that given a real symmetric matrix $\tilde A^{(n)}$ whose spectrum $\tilde\sigma^{(n)}$ is simple (no multiplicities) and a list $\mu^{(n+1)}$ of $n+1$ arbitrary numbers which interlace with $\tilde\sigma^{(n)}$, one can construct a column vector $|\tilde a^{(n)}\rangle$ and a real $\tilde h$ which, when used to complete $\tilde A^{(n)}$ to $\tilde A^{(n+1)}$, yield $\tilde A^{(n+1)}$ with the spectrum $\mu^{(n+1)}$. Moreover, $|\tilde a^{(n)}\rangle$ can be constructed uniquely once an orthant (whose choice provides the sign indicators) is prescribed in the $n$-dimensional space spanned by the eigenvectors of $\tilde A^{(n)}$. It is also proved that the assignment of $|\tilde a^{(n)}\rangle$ and $\tilde h$ to $\mu^{(n+1)}$ is a homeomorphism onto its image and a diffeomorphism.
The knowledge of the $\langle a^{(n)}|v^{(n)}_{(r)}\rangle$ with their signs enables the use of (12) to obtain the coefficients of the eigenvectors of $A^{(n+1)}$ up to the constant factor $b_{k,n+1}$, which can be determined by requiring a proper normalization.
Proof. (of Theorem 4) The proof proceeds by a recursive construction. Start with a known $A^{(1)}=\lambda^{(1)}_1$ and with the known spectrum of $A^{(2)}$. Clearly, $\lambda^{(2)}_1+\lambda^{(2)}_2=\lambda^{(1)}_1+h$, so $h$ follows from (2). Hence, Equation (15) gives
$$\big(\xi^{(1)}_1\big)^2 = -\big(\lambda^{(1)}_1-\lambda^{(2)}_1\big)\big(\lambda^{(1)}_1-\lambda^{(2)}_2\big). \tag{19}$$
Therefore, we get two possible solutions for $A^{(2)}$, corresponding to choosing $s^{(1)}_1=\pm 1$:
$$A^{(2)} = \begin{pmatrix}\lambda^{(1)}_1 & s^{(1)}_1\,\xi^{(1)}_1\\ s^{(1)}_1\,\xi^{(1)}_1 & h\end{pmatrix}. \tag{20}$$
The normalized eigenvectors can be computed from Equation (12). Thus the theorem is valid for $n=1,2$. Note also that one can show that (13) is satisfied, which is not clear at first sight but can be easily checked. The lemma provides the final stage of the inductive proof.
The above inductive procedure is optimal for generic matrices, all of whose entries are typically nonzero. However, the requirement of providing the signs of the projections of the unknown column on the eigenvectors of $A^{(n)}$ seems awkward for practical use. When one wants to apply this result to banded matrices, it turns out that there is a large redundancy which can be used to our advantage. In particular, as we argue in the following sections, the inductive reconstruction procedure for banded matrices can be improved so that the amount of required sign data is tremendously reduced.

2.1.1. Explicit reconstruction of $A^{(3)}$
It is instructive to work out the above inductive procedure explicitly for $n=3$. The results will prove useful in the following sections. Consider the problem of reconstructing a regular $3\times 3$ real symmetric matrix $A^{(3)}$, given its spectrum $\{\lambda^{(3)}_r\}_{r=1}^3$ and its $2\times 2$ top-left minor $A^{(2)}$. The latter has a spectral decomposition
$$A^{(2)} = \sum_{r=1}^{2}\lambda^{(2)}_r\,|v^{(2)}_{(r)}\rangle\langle v^{(2)}_{(r)}|, \tag{21}$$
and the projections $\xi^{(2)}_r$, $r=1,2$, are given by Equation (15). Thus, there are four possible solutions for the last column of $A^{(3)}$, corresponding to the different choices of the signs $s_1, s_2$. Next, let us show how to determine $s_1$ and $s_2$ directly from the signs of certain expressions involving only matrix elements of $A^{(3)}$. Using Equation (15) and Equation (21), one expresses $s_1\xi_1$ and $s_2\xi_2$ as linear combinations of the entries $A_{1,3}$ and $A_{2,3}$, with coefficients given by the eigenvector components of $A^{(2)}$. Because the denominators of the resulting expressions are positive, the signs $s_1, s_2$ can be read off directly (Equation (23)). The main advantage of these expressions is that they provide the signatures $s_1, s_2$ in terms of data directly available from the matrix.
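A minimal sketch of this $2\times 2\to 3\times 3$ step, enumerating the four sign choices; exactly one candidate reproduces the original column:

```python
import numpy as np

# The 2x2 -> 3x3 step: four sign choices, one of which reproduces the column.
rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)); A = (A + A.T) / 2

lam2, V2 = np.linalg.eigh(A[:2, :2])   # spectral decomposition (21) of A^(2)
lam3 = np.linalg.eigvalsh(A)           # sigma^(3)

xi = np.sqrt(np.array([-np.prod(lam2[r] - lam3)
                       / np.prod(np.delete(lam2[r] - lam2, r))
                       for r in range(2)]))               # Eq. (15)

candidates = [V2 @ (np.array(s) * xi)  # |a^(2)> = s_1 xi_1 |v_1> + s_2 xi_2 |v_2>
              for s in [(1, 1), (1, -1), (-1, 1), (-1, -1)]]
print([np.allclose(c, A[:2, 2]) for c in candidates])    # exactly one True
```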

Non-regular matrices
In this subsection we show that non-regular matrices, which have a degeneracy in $\sigma^{(n)}$ or for which the overlaps $\sigma^{(n)}\cap\sigma^{(n+1)}$ are nonempty, can be effectively treated using the methods developed above for regular matrices. We consider the situation where there is a single block of degeneracy in $\sigma^{(n)}$, i.e. for some $1\le l\le n$ and $m\ge 0$ we have $\lambda^{(n)}_l=\lambda^{(n)}_{l+1}=\dots=\lambda^{(n)}_{l+m}:=\lambda$, and the other eigenvalues of $A^{(n)}$ are non-degenerate. Let us denote the respective degeneracy indices by $D^{(n)}(l,m):=\{l,l+1,\dots,l+m\}$. The interlacing property (6) dictates four possibilities, $D^{(n+1)}_{I},\dots,D^{(n+1)}_{IV}$, for the respective degeneracy indices $D^{(n+1)}(l,m)$ (see Fig. 1). In what follows we assume that the eigenvalues $\lambda^{(n+1)}_k$ with $k\notin D^{(n+1)}(l,m)$ are non-degenerate. Furthermore, the degenerate eigenspaces of $A^{(n)}$ and $A^{(n+1)}$ will be denoted by $V^{(n)}(\lambda)$ and $V^{(n+1)}(\lambda)$ respectively, and their respective orthogonal complements in $\mathbb{R}^n$ and $\mathbb{R}^{n+1}$ by $V^{(n)}(\lambda)^\perp$ and $V^{(n+1)}(\lambda)^\perp$. We will also embed $\mathbb{R}^n$ into $\mathbb{R}^{n+1}$ by appending a zero at the end of every vector. With a slight abuse of notation we will also denote by $V^{(n)}(\lambda)$ and $V^{(n)}(\lambda)^\perp$ the images of these spaces under the above embedding.
Theorem 6 below shows how to reduce each of the above four degenerate cases to a regular problem, where Theorem 4 can be applied. Interestingly, each case requires a different procedure.
Theorem 6. Assume that the spectra of $A^{(n)}$ and $A^{(n+1)}$, as well as the eigenvectors $\{|v^{(n)}_{(r)}\rangle\}_{r=1}^n$ of $A^{(n)}$, are given and fixed. Assume that the spectra have single degeneracy blocks with the eigenvalue $\lambda$, given by degeneracy indices $D^{(n)}(l,m)$ and $D^{(n+1)}(l,m)=D^{(n+1)}_i(l,m)$ for some $i\in\{I,II,III,IV\}$, respectively. The matrix $A^{(n+1)}$ is reconstructed from the above spectral data as follows.
I. The vector $|a^{(n)}\rangle$, as well as the non-degenerate eigenvectors of $A^{(n+1)}$, belong to the space $V^{(n)}(\lambda)^\perp$. They are constructed by applying Theorem 4 to the truncated regular matrices obtained by restricting to this subspace.
II. The remaining coefficients $b_{k,n+1}$ and $b_{r,k}$ with $k\in D^{(n+1)}(l,m)$ are fixed by the orthonormality of the eigenvectors.
III. The same as case II above.
IV. The degenerate eigenvectors of $A^{(n+1)}$ corresponding to $\lambda$ can be freely chosen as any orthonormal subset of $V^{(n)}(\lambda)$. They determine the embedding $\varphi: V^{(n+1)}(\lambda)\to\mathbb{R}^{n+1}$. The vector $|a^{(n)},0\rangle$, as well as the remaining eigenvectors of $A^{(n+1)}$, belong to the orthogonal complement of the image of the embedding $\varphi$ in $\mathbb{R}^{n+1}$, $\varphi(V^{(n+1)}(\lambda))^\perp$. They are constructed using Theorem 4 applied to the regular matrices $\tilde A^{(n-m+1)}:=A^{(n+1)}\big|_{\varphi(V^{(n+1)}(\lambda))^\perp}$ and its main minor. The proof of Theorem 6 is deferred to Appendix C.

Applications to banded matrices
In the present section we discuss the application of the above formalism to banded matrices. The fact that the spectral data exceeds the number of unknowns will be shown to provide stringent constraints on the sign distribution of the off-diagonal elements, so that generically their signs are determined up to an overall sign per column. If not stated otherwise, we will assume that the considered matrices are regular.

Tridiagonal matrices
In Remark 2 above we have shown that the present construction applies to the tridiagonal case. However, it is certainly not efficient, since the two spectra and the signs of the off-diagonal matrix elements suffice for the purpose, as is well known [27].

Pentadiagonal matrices
The next simplest case, which nevertheless illustrates the main ingredients of the general case, is that of $D=5$ banded matrices, a.k.a. pentadiagonal matrices. Here $a^{(n)}_j=0$ for $j<n-1$, hence the inductive step requires solving a number of equations in the two real variables $a^{(n)}_n$ and $a^{(n)}_{n-1}$. We shall address the subject using two different approaches. In the first we apply the method based on Theorem 4. In the second we use the quadratic forms (4,5), which give an alternative point of view that applies exclusively to pentadiagonal matrices.
Note that for both approaches, one should compute the upper 3 × 3 minor using the procedure outlined in subsection (2.1.1). Alternatively, it suffices to provide as input the spectrum and the eigenvectors of the upper 3 × 3 minor, and use this data as the starting data for the ensuing inductive steps.
3.2.1. Applying Theorem 4 to the pentadiagonal case
In this subsection, we will focus on answering the following two questions:
(i) Given that the spectra $\sigma^{(n)}$, $n=1,\dots,N$, correspond to a pentadiagonal matrix, what additional information is needed to uniquely reconstruct the matrix?
(ii) What are the necessary conditions on a matrix $A^{(n)}$ and the spectrum $\sigma^{(n+1)}$ of $A^{(n+1)}$ so that $A^{(n+1)}$ is a pentadiagonal matrix?
Starting with the first question, the general answer is given by Theorem 4. However, we will make use of the redundant information to show that, typically, one can do away with the computation of the sign indicators as prescribed in the theorem. It will be shown that the vectors $|a^{(n)}\rangle$ are determined up to an overall sign, which is uniquely provided by the easily available signs of the entries in the upper diagonal.
Consider the inductive step, where $A^{(n)}$ and the spectrum $\sigma^{(n+1)}$ are known. All vectors $|a^{(n)}\rangle$ have the same norm, determined by Equation (3). Hence,
$$\big(a^{(n)}_{n-1}\big)^2+\big(a^{(n)}_n\big)^2 = \big(R^{(n)}\big)^2.$$
Furthermore, the solutions of Equation (15) imply that either $\langle v^{(n)}_{(r)}|a^{(n)}\rangle = \xi^{(n)}_r$ or $\langle v^{(n)}_{(r)}|a^{(n)}\rangle = -\xi^{(n)}_r$ for every $r=1,\dots,n$, i.e.
$$a^{(n)}_{n-1}\,v^{(n)}_{(r),n-1}+a^{(n)}_n\,v^{(n)}_{(r),n} = \pm\,\xi^{(n)}_r. \tag{24}$$
Thus, every $|a^{(n)}\rangle$ belongs to the intersection of one of the above lines (for every $r$) with the circle of radius $R^{(n)}$ (recall also that $\xi^{(n)}_r$ were defined to be positive). By the assumption of Theorem 4, $\sigma^{(n)}$ is non-degenerate and none of the elements of $\sigma^{(n)}$ belongs to $\sigma^{(n+1)}$. By Equation (15), this implies that $\xi^{(n)}_r>0$ for all $r=1,\dots,n$. Then, Equation (24) defines two distinct parallel lines for each $r$. Moreover, for a fixed $r$, the intersection of the two lines with the circle occurs at two distinct pairs of points, where a single pair consists of a point and its antipode (see Fig. 2). Since the spectral data comes from a pentadiagonal matrix, each of the lines from Equation (24) must have an intersection point $|\tilde a^{(n)}\rangle$ or $-|\tilde a^{(n)}\rangle$, where $|\tilde a^{(n)}\rangle$ is the $n$th column (with only two nonzero entries) of the original matrix (see Fig. 3).
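A sketch of this step for a random pentadiagonal matrix: the first two lines of Equation (24) are intersected for each sign choice, and the candidates are filtered by the remaining lines (the setup and tolerances are illustrative):

```python
import numpy as np

# Sketch of the pentadiagonal step: intersect the lines (24) built from the
# first two eigenvectors, then filter by the remaining lines.
rng = np.random.default_rng(5)
N, d = 7, 2
A = rng.standard_normal((N, N)); A = (A + A.T) / 2
A[np.abs(np.subtract.outer(np.arange(N), np.arange(N))) > d] = 0.0  # pentadiagonal

n = 5
lam, V = np.linalg.eigh(A[:n, :n])
mu = np.linalg.eigvalsh(A[:n + 1, :n + 1])
xi = np.sqrt(np.array([-np.prod(lam[r] - mu) / np.prod(np.delete(lam[r] - lam, r))
                       for r in range(n)]))          # Eq. (15)

M = V[-2:, :2].T                                     # tails of the first two eigenvectors
candidates = [np.linalg.solve(M, [s0 * xi[0], s1 * xi[1]])
              for s0 in (1, -1) for s1 in (1, -1)]
sols = [c for c in candidates
        if np.allclose(np.abs(V[-2:, :].T @ c), xi, atol=1e-8)]
print(sols)           # typically a point and its antipode
print(A[n - 2:n, n])  # the true nonzero entries of |a^(n)>
```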
However, there exist special situations when such intersections with the circle give more than just one solution up to a sign. These are highly degenerate cases where all lines are determined by a fixed pair of parallel lines $(l_+,l_-)$. More precisely, for every $r$, the lines from Equation (24) are either equal to $(l_+,l_-)$ or to another fixed pair of lines $(l'_+,l'_-)$, where $l'_{\pm}\perp l_{\pm}$ (see Fig. 4). The precise conditions for this to happen are as follows. Let us denote the slope of $l_{\pm}$ by $\alpha$; then the slope of $l'_{\pm}$ is $-\frac{1}{\alpha}$. Thus, for every $r$, the slope $-v^{(n)}_{(r),n-1}/v^{(n)}_{(r),n}$ of the lines from Equation (24) equals either $\alpha$ or $-\frac{1}{\alpha}$, which splits the index range into the two sets
$$I:=\Big\{r:\ -\tfrac{v^{(n)}_{(r),n-1}}{v^{(n)}_{(r),n}}=\alpha\Big\},\qquad J:=\Big\{r:\ -\tfrac{v^{(n)}_{(r),n-1}}{v^{(n)}_{(r),n}}=-\tfrac{1}{\alpha}\Big\}. \tag{25}$$
We refer to these requirements as the $\alpha$-condition. Note that we can never have $I=\emptyset$ or $J=\emptyset$, because this would imply that the eigenvectors of $A^{(n)}$ are linearly dependent, which is a contradiction. Furthermore, because the eigenvectors of $A^{(n)}$ are orthonormal, the sets $I$ and $J$ are constrained further, and using $\alpha$ one can express the last two components of every eigenvector in terms of the set to which its index belongs. Finally, note that conditions (25) readily imply that intersecting the lines (24) with the circle results in only four points, and it is not necessary to look at the free coefficients of the lines. This is because we assume that the spectral data comes from an existing pentadiagonal matrix, and hence all lines must intersect in at least one common point.
To sum up, we have the following theorem.
Theorem 7. For a regular pentadiagonal matrix, the inductive step leads to at most four solutions for $|a^{(n)}\rangle$. Typically, intersecting the lines from Equation (24) with the circle of radius $R^{(n)}$ yields a unique solution up to an overall sign; more than one solution (up to a sign) occurs only if the slopes of the lines satisfy the $\alpha$-condition described above.
Remark 5. The set of matrices satisfying the $\alpha$-condition from Theorem 7 is a subset of all pentadiagonal symmetric matrices of codimension at least one. This explains the results of simulations in which sets of a few million randomly chosen pentadiagonal matrices of dimensions up to $n=10$ were tested, and none satisfied the $\alpha$-condition. This fact can be understood by noting that the matrices satisfying the $\alpha$-condition satisfy certain polynomial equations, as explained in the following Section 3.2.2. Thus, the set of matrices satisfying the $\alpha$-condition is of positive codimension, and its complement is a full-measure set.
Another outcome of the above analysis provides the answer to the second question stated at the beginning of the subsection: it gives a set of necessary conditions ensuring that the spectral data indeed belongs to a pentadiagonal matrix. The conditions are that the distance of each line from Equation (24) from the origin must be no greater than $R^{(n)}$, i.e.
$$\frac{\xi^{(n)}_r}{\sqrt{\big(v^{(n)}_{(r),n-1}\big)^2+\big(v^{(n)}_{(r),n}\big)^2}}\ \le\ R^{(n)},\qquad r=1,\dots,n.$$
3.2.2. Using the conic sections for recovering pentadiagonal matrices
Before proceeding to the general case, we show that the three quadratic forms from Equation (3), Equation (4) and Equation (5) typically allow one to reconstruct a pentadiagonal matrix from its spectral data, up to an overall sign of each of its columns. For a pentadiagonal matrix, $|a^{(n)}\rangle$ has only two nonzero entries, and the quadratic forms from Equation (4) and Equation (5) describe either ellipses or hyperbolae (a.k.a. conic sections). Their explicit forms read
$$\big(a^{(n)}_{n-1}\big)^2+\big(a^{(n)}_n\big)^2 = \big(R^{(n)}\big)^2, \tag{28}$$
$$\big(a^{(n)}_{n-1},\,a^{(n)}_n\big)\begin{pmatrix}A^{(n)}_{n-1,n-1}&A^{(n)}_{n-1,n}\\ A^{(n)}_{n-1,n}&A^{(n)}_{n,n}\end{pmatrix}\begin{pmatrix}a^{(n)}_{n-1}\\ a^{(n)}_n\end{pmatrix} = c_4, \tag{29}$$
$$\big(a^{(n)}_{n-1},\,a^{(n)}_n\big)\begin{pmatrix}\big(A^{(n)}\big)^{-1}_{n-1,n-1}&\big(A^{(n)}\big)^{-1}_{n-1,n}\\ \big(A^{(n)}\big)^{-1}_{n-1,n}&\big(A^{(n)}\big)^{-1}_{n,n}\end{pmatrix}\begin{pmatrix}a^{(n)}_{n-1}\\ a^{(n)}_n\end{pmatrix} = c_5, \tag{30}$$
where $c_4$ and $c_5$ denote the right-hand sides of Equation (4) and Equation (5), which are determined by the spectral data. The three conic sections are centered at $(0,0)$; thus each of the curves from Equation (29) and Equation (30) typically intersects the circle given by Equation (28) at exactly four points (or none). There may also be only two intersection points in certain degenerate cases. If any of the intersections is empty, or if the two intersections have empty overlap, then there is no pentadiagonal matrix $A^{(n+1)}$ whose top-left main minor is $A^{(n)}$. If such a pentadiagonal matrix $A^{(n+1)}$ exists, then the three conic sections intersect in at least two points, which yield $|a^{(n)}\rangle$ up to an overall sign. However, in certain non-generic cases it may happen that the three conic curves intersect at four points (see Fig. 5). Then, one cannot decide which of the intersection points gives a correct solution. This happens if and only if the principal axes of the two conic sections are identical. This can be written as a condition on the eigenvectors of the quadratic forms defining the conic sections; namely, both sets of eigenvectors must be the same (up to a scalar factor) [26]. This in turn happens if and only if the matrices of both quadratic forms commute, which boils down to the following algebraic condition on the coefficients defining the conic sections (29)-(30):
$$\bigg[\begin{pmatrix}A^{(n)}_{n-1,n-1}&A^{(n)}_{n-1,n}\\ A^{(n)}_{n-1,n}&A^{(n)}_{n,n}\end{pmatrix},\ \begin{pmatrix}\big(A^{(n)}\big)^{-1}_{n-1,n-1}&\big(A^{(n)}\big)^{-1}_{n-1,n}\\ \big(A^{(n)}\big)^{-1}_{n-1,n}&\big(A^{(n)}\big)^{-1}_{n,n}\end{pmatrix}\bigg] = 0. \tag{31}$$
In the special case $n=2$, Equation (31) is automatically satisfied and all four points of intersection give correct solutions for $|a^{(n)}\rangle$. For $n>2$, the set of pentadiagonal $n\times n$ matrices satisfying condition (31) is a strict subset, of codimension one, of the set of all $n\times n$ pentadiagonal matrices. Thus, a typical (generic) pentadiagonal matrix can be completely reconstructed using the above conic-curve method. However, considering non-generic cases brings in a few subtleties. In particular, satisfying the $\alpha$-condition from Theorem 7 implies satisfying Equation (31), but the converse statement is not true. The set of $n\times n$ pentadiagonal matrices satisfying the $\alpha$-condition is of codimension at least one, and it is a strict subset of the set of matrices satisfying Equation (31). Thus, in non-generic cases only Theorem 7 can give a definite answer about the possible solutions (see Fig. 6).
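A small sketch of the genericity test, assuming the reading of Equations (4)-(5) given above, under which the conic matrices are the trailing $2\times 2$ blocks of $A^{(n)}$ and $(A^{(n)})^{-1}$; for a random matrix the commutator in Equation (31) is generically nonzero:

```python
import numpy as np

# Genericity check behind Eq. (31): the 2x2 matrices of the two quadratic
# forms, read off as the trailing blocks of A^(n) and (A^(n))^{-1}, commute
# only in non-generic cases.
rng = np.random.default_rng(6)
n = 5
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

Q4 = B[-2:, -2:]                     # form of Eq. (29) on (a_{n-1}, a_n)
Q5 = np.linalg.inv(B)[-2:, -2:]      # form of Eq. (30)
print(np.linalg.norm(Q4 @ Q5 - Q5 @ Q4))   # generically nonzero
```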

A finite algorithm for reconstructing a D-diagonal matrix.
Given that the spectra $\sigma^{(n)}$, $n=1,\dots,N$, correspond to a $D$-diagonal real symmetric matrix, we provide a finite algorithm which produces the set of possible $D$-diagonal matrices whose main minors have the given spectra $\{\sigma^{(n)}\}_{n=1}^N$. By definition, the $n$th column of a $D$-diagonal matrix satisfies
$$a^{(n)}_j = 0 \quad\text{for } j\le n-d, \tag{32}$$
$$\big\||a^{(n)}\rangle\big\| = R^{(n)}, \tag{33}$$
where $R^{(n)}$ depends only on the spectral input data via Equation (3). Thus, we have the following algorithm for realizing the inductive step of constructing $A^{(n+1)}$ from $A^{(n)}$ (Algorithm 1).

Algorithm 1. Constructing $A^{(n+1)}$ from $A^{(n)}$ for a $D$-banded matrix.
1: Input: $A^{(n)}$ and its spectral decomposition, $\sigma^{(n+1)}$, $\epsilon$ (numerical accuracy).
2: Compute $h$ from Equation (2) and $R^{(n)}$ from Equation (3).
3: Compute $\xi^{(n)}_r$, $r=1,\dots,n$, from Equation (15).
4: solutions := empty list
5: for every sign vector $s\in\{-1,+1\}^n$ do
6: solve $\langle v^{(n)}_{(r)}|a^{(n)}(s)\rangle = s_r\,\xi^{(n)}_r$, $r=1,\dots,n$, for the components of $|a^{(n)}(s)\rangle$ allowed by Equation (32)
7: if the system is consistent and Equation (33) holds, both to accuracy $\epsilon$, then solutions.append($|a^{(n)}(s)\rangle$)
8: end if
9: end for
10: return solutions

One can see that the main contribution to the numerical complexity of the above inductive step comes from the loop going through all $2^n$ choices of the possible signs. Thus, the complexity scales like $\mathrm{poly}_{n,d}\,2^n$, where the polynomial factor $\mathrm{poly}_{n,d}$ comes from the necessity of evaluating the expressions from Equation (32) and Equation (33) in each step of the loop. What is more, heuristics shows that the required numerical accuracy grows with $d$ and $n$, as the potential solutions tend to bunch close to the surface of the sphere. Let us next argue that, for generic input data and sufficiently small $\epsilon$, the result of the above Algorithm 1 is just a pair of antipodal solutions for $|a^{(n)}\rangle$. To this end, consider the hyperplanes
$$H^{(n),\pm}_r:\quad a^{(n)}_{n-d+1}\,v^{(n)}_{(r),n-d+1}+\dots+a^{(n)}_n\,v^{(n)}_{(r),n} = \pm\,|\xi^{(n)}_r|.$$
The procedure of intersecting lines with the circle from Subsection 3.2 generalizes, for $d>2$, to a procedure involving the study of intersections of the hyperplanes $H^{(n),\pm}_r$ with the (hyper)sphere of radius $R^{(n)}$. As before, as a result of such a procedure one typically obtains only a single pair of antipodal solutions for $|a^{(n)}\rangle$. Non-typical situations arise when some hyperplanes are perpendicular to each other. This requires the existence of certain relations between the eigenvectors of $A^{(n)}$, analogous to the relations stated in Theorem 7. However, the precise form of these conditions seems to be much more complicated, and we leave this problem open for future research. For an illustration of the intersection procedure for a typical heptadiagonal matrix ($d=3$), see Fig. 7.
Fig. 7. The hyperplanes $H^{(n),\pm}_r$, $r=1,2,3,4$, for a typical random heptadiagonal matrix. Intersecting one of the hyperplanes with the sphere of radius $R^{(n)}$ results in a single circle. The common points where $n$ circles meet (here $n=4$) determine the solutions for the vector $|a^{(n)}\rangle$ (the red dots); typically there are just two such points, lying on opposite sides of the sphere. The orange and blue colors mark the relevant groups of intersecting circles.
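A compact implementation of Algorithm 1 (a sketch; the function name, tolerances and the least-squares consistency test are our choices, not the paper's):

```python
import numpy as np
from itertools import product

# A sketch of Algorithm 1: enumerate all 2^n sign vectors, solve for the d
# possibly nonzero components of |a^(n)>, and keep the consistent candidates.
def inductive_step(An, sigma_next, d, eps=1e-8):
    n = An.shape[0]
    lam, V = np.linalg.eigh(An)                      # spectral decomposition of A^(n)
    h = sigma_next.sum() - lam.sum()                 # Eq. (2)
    R2 = ((sigma_next**2).sum() - (lam**2).sum() - h**2) / 2     # Eq. (3)
    xi = np.sqrt(np.array([-np.prod(lam[r] - sigma_next)
                           / np.prod(np.delete(lam[r] - lam, r))
                           for r in range(n)]))      # Eq. (15)
    W = V[max(n - d, 0):, :].T                       # last d eigenvector entries
    solutions = []
    for s in product((1.0, -1.0), repeat=n):
        rhs = np.array(s) * xi
        a_tail, *_ = np.linalg.lstsq(W, rhs, rcond=None)
        # keep candidates satisfying all n sign equations and Eq. (33)
        if (np.linalg.norm(W @ a_tail - rhs) < eps
                and abs(a_tail @ a_tail - R2) < eps):
            a = np.zeros(n)
            a[max(n - d, 0):] = a_tail
            solutions.append((a, h))
    return solutions

# Usage: recover the possible 6th columns of a random heptadiagonal matrix (d = 3).
rng = np.random.default_rng(10)
N, d = 8, 3
A = rng.standard_normal((N, N)); A = (A + A.T) / 2
A[np.abs(np.subtract.outer(np.arange(N), np.arange(N))) > d] = 0.0
n = 5
sols = inductive_step(A[:n, :n], np.linalg.eigvalsh(A[:n + 1, :n + 1]), d)
print(len(sols))   # typically 2: the true column and its antipode
```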
Similarly to the pentadiagonal case, one can find necessary conditions for the spectral data to correspond to a $D$-diagonal matrix: every hyperplane $H^{(n),\pm}_r$ must have a nontrivial intersection with the sphere of radius $R^{(n)}$, hence its distance from the origin must be no greater than $R^{(n)}$.
Remark 6. Using Theorem 6 we can apply the above methodology, mutatis mutandis, to non-regular banded matrices. In particular, whenever the dimension of the smaller truncated matrix from Theorem 6 is greater than $d$, we can use the same redundancy effects to our advantage when dealing with typical matrices.

Improved strategies for D-diagonal matrices: the sliding-minor construction
The number of independent non-vanishing entries of a $D$-diagonal real symmetric matrix is $N_D=(d+1)\big(N-\frac{d}{2}\big)$. The spectral data for all the main minors of a $D$-diagonal $N\times N$ matrix consists of $\frac{1}{2}N(N+1)$ numbers, which exceeds $N_D$ by far. In this section, we provide two alternative methods for reconstructing $D$-diagonal matrices which use much less spectral data. The first utilizes the minimum number of spectral parameters needed for the purpose, together with the necessary sign indicators. The second method makes use of $N-d-1$ more spectral data. This introduces enough redundancy to render the method much more straightforward to use, at the cost of being applicable only to generic matrices.

Inverse method with minimal spectral input
The number of non-vanishing data needed to write a $D$-diagonal matrix, $N_D$, can also be written as $N_D=\frac{1}{2}d(d+1)+(N-d)(d+1)$. It can be interpreted as the sum of two terms: (i) the spectra of the minors $A^{(1)}=A_{1,1}$, $A^{(2)}$, \dots, $A^{(d)}$, which give a total of $\frac{1}{2}d(d+1)$ numbers, needed to reconstruct the first $d\times d$ minor; (ii) the spectra of the $N-d$ sliding minors of dimension $d+1$, which give the remaining $(N-d)(d+1)$ numbers.
The reconstruction algorithm consists of the following steps.
(1) Reconstruct $A^{(d+1)}$ from the spectral data of $A^{(1)}$ through $A^{(d)}$ and the sign indicators ($\frac{1}{2}d(d+1)$ signs in total), as described in the inductive procedure from the proof of Theorem 4.
(2) Proceed along the diagonal: the $d\times d$ minor shared by the consecutive sliding minors $M^{(d+1)}_n$ and $M^{(d+1)}_{n+1}$ is known from the previous step, so the spectrum of $M^{(d+1)}_{n+1}$, together with the corresponding sign indicators, determines its remaining column, as in the telescopic step. This is repeated for $M^{(d+1)}_1,\cdots,M^{(d+1)}_{N-d}$. The main drawback of this method is the need to provide the sign indicators at each step. To overcome this, one can also introduce minimal redundancy into the above sliding-minor method, so that for a typical matrix the required sign data reduces only to the overall signs of the columns of $A^{(N)}$. This applies only for generic matrices, as explained in the previous section.
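The bookkeeping behind the sliding-minor recursion, in a minimal sketch (indexing conventions assumed):

```python
import numpy as np

# The sliding-minor bookkeeping: consecutive (d+1)-dimensional sliding minors
# share a d x d block, so only one new column appears at each step.
def sliding_minor(A, n, size):
    """The size x size principal minor with upper-left corner at (n, n),
    0-indexed; the notation is ours, not the paper's."""
    return A[n:n + size, n:n + size]

rng = np.random.default_rng(7)
N, d = 8, 2
A = rng.standard_normal((N, N)); A = (A + A.T) / 2
M0 = sliding_minor(A, 0, d + 1)
M1 = sliding_minor(A, 1, d + 1)
assert np.allclose(M0[1:, 1:], M1[:-1, :-1])   # the shared d x d block
```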

Inverse method with optimal spectral input
Here, the sliding minors are the $(d+2)$-dimensional minors $M^{(d+2)}_n$ (see (36)). The increase of their dimension amounts also to the vanishing of the $(1,d+2)$ and $(d+2,1)$ entries. This enables turning the redundant information into constraints, which for generic matrices remove the need for computing the sign indicators required in the previous construction.
The inversion algorithm consists of the following steps.
(1) Reconstruct $A^{(d+1)}$ from the spectral data of $A^{(1)}$ through $A^{(d)}$ and the sign indicators ($\frac{1}{2}d(d+1)$ signs in total), as described in the inductive procedure from the proof of Theorem 4.
(2) The matrix elements of the $(d+2)\times(d+2)$ minor $M^{(d+2)}_n$ overlap with those of $M^{(d+2)}_{n+1}$ up to its last column, which is determined from the spectrum of $M^{(d+2)}_{n+1}$ together with the constraint that the $(1,d+2)$ entry vanishes; this is repeated for $n=1,\dots,N-d-1$. In order to minimize further the set of exceptional matrices where this strategy fails, one could choose a larger dimension for the sliding minor. In choosing an "optimal" strategy, one has to weigh computational effort against minimizing the exceptional set. This is an individual decision left to the discretion of the practitioner.

Pentadiagonal matrices
The first inversion strategy for regular pentadiagonal matrices ($d=2$) is a combination of two steps which we have already analyzed in detail in the previous sections. The first step, reconstructing $A^{(2)}$, has been described in the proof of Theorem 4 (see Equation (19) and Equation (20)). The second, as well as all subsequent steps, relies on reconstructing $3\times 3$ matrices from their top-left $2\times 2$ minors. This has been done explicitly in subsection 2.1.1. In total, the $2\times 2\to 3\times 3$ steps require the following sign data.
(i) The sign of the matrix element $A_{1,2}$.
(ii) A pair of signs from Equation (23) applied to each minor $M^{(3)}_n$, $n=1,\dots,N-2$.
The second inversion strategy, which uses the larger $4\times 4$ sliding minor, gives extra redundancy and, as explained above, typically reduces the required sign data to the overall signs of the columns of $A^{(N)}$. Here, the notion of typicality is particularly straightforward to state: all the sliding minors of a typical matrix do not satisfy the $\alpha$-condition from Theorem 7. The computation of the spectral decomposition of the $3\times 3$ minors can be carried out analytically.

Summary
We revisited here the question of how much spectral data is needed to determine a real symmetric matrix. In principle, a matrix is determined by its spectrum only up to unitary equivalence. Hence, more spectral information is needed in order to reconstruct a non-diagonal matrix. Heuristically, the number of spectral data should match the number of unknowns. Hence, for a generic real symmetric $N\times N$ matrix, we need $N(N+1)/2$ numbers coming from the spectral data. In our work, these numbers come from the spectra of the main minors. Given such spectra, we provide a finite algorithm that reconstructs the matrix up to a finite number of possibilities. The construction can be made unique by supplementing additional sign data, as stated in Theorem 4, which comprises $N(N-1)/2$ signs in total. However, the signs are not easily deduced from the matrix itself; therefore the requirement of supplementing the signs is a notable limitation of our procedure. Thus, in further parts of this paper, we focus on ways of reducing the required sign data in the case of banded matrices. For $(2d+1)$-banded matrices, there are fewer unknowns, hence one can find a smaller optimal set of $(d+1)\times(d+1)$ minors whose spectra allow one to reconstruct the matrix up to a finite number of possibilities. As before, some additional sign information is required to make the construction unique. By analysing the case of pentadiagonal matrices, we compare both methods of reconstruction. We find that the redundant information coming from the spectra of all main minors in a typical case (where the notion of typicality has been made precise in Subsection 3.2) allows one to reduce the number of required signs to $N-1$, namely the overall signs of the columns of the upper-triangular part of $A^{(N)}$, while using the optimal amount of spectral information requires $2N-1$ signs. In Section 4 we argue that this is a general fact, i.e. the redundancy coming from all main minors typically allows one to uniquely reconstruct a general banded matrix from the spectral data using only the overall signs of the columns of the upper-triangular part of $A^{(N)}$.

Acknowledgment
We would like to thank Professor Raphael Loewy for suggestions and critical comments during the first stages of the present work. Professor Percy Deift accompanied the work since its beginning and offered critique and advice on many occasions. We are obliged for his invaluable help, and in particular for his permission to quote the unpublished theorem in Remark 3. Professor Carlos Meite is acknowledged for bringing to our attention his work (Remark 4) before its publication. We also thank Professor Jon Keating for useful suggestions and Doctor Yotam Smilansky for helpful comments on the manuscript. The inspiration for this work came from a seminar by Professor Christiane Tretter within the webinar series "Spectral Geometry in the Clouds". Thanks, Christiane. The seminar series is organized by Doctors Jean Lagacé and Alexandre Girouard, whom we thank for initiating and running such a successful seminar during the miserable Corona days.

Appendix A. Identities for Cauchy Matrices
For two sequences of real numbers, $x:=\{x_i\}_{i=1}^n$ and $y:=\{y_i\}_{i=1}^n$, define the Cauchy matrix $C(x,y)$ as $(C(x,y))_{i,j}:=\frac{1}{x_i-y_j}$. The matrix $C(x,y)$ has the following symmetry:
$$(C(x,y))^T = -C(y,x). \tag{A.1}$$
$C$ will be used to denote $C(x,y)$ by default. An explicit expression for $C^{-1}$ reads [33]
$$(C^{-1})_{j,i} = \frac{\prod_{k=1}^{n}(x_i-y_k)\prod_{k=1}^{n}(x_k-y_j)}{(x_i-y_j)\prod_{k\neq i}(x_i-x_k)\prod_{k\neq j}(y_j-y_k)}. \tag{A.2}$$
The sum of the matrix elements of $C^{-1}$ is known to be [35]
$$\sum_{i,j=1}^{n}(C^{-1})_{i,j} = \sum_{i=1}^{n}x_i-\sum_{i=1}^{n}y_i. \tag{A.3}$$
We aim to prove identities for the action of $C^{-1}$ on a few special vectors. Note first that the LHS identities imply the RHS identities by the symmetry (A.1); hence, it is enough to prove the LHS identities. To this end, we write them down as matrix-vector products (A.6) involving the vectors $|1^{(n)}\rangle$, $|\frac{1}{x}^{(n)}\rangle$ and $|y^{(n)}\rangle$, defined as
$$|1^{(n)}\rangle := (1,\dots,1)^T,\qquad \Big|\tfrac{1}{x}^{(n)}\Big\rangle := \Big(\tfrac{1}{x_1},\dots,\tfrac{1}{x_n}\Big)^T,\qquad |y^{(n)}\rangle := (y_1,\dots,y_n)^T.$$
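The symmetry (A.1) and the sum identity (A.3) are easy to check numerically; a minimal sketch:

```python
import numpy as np

# Checks of (A.1) and of the sum identity for the inverse Cauchy matrix.
rng = np.random.default_rng(8)
n = 5
x = np.sort(rng.standard_normal(n)) + 3.0   # keep x_i - y_j away from zero
y = np.sort(rng.standard_normal(n)) - 3.0

C = 1.0 / (x[:, None] - y[None, :])         # C(x, y)_{ij} = 1/(x_i - y_j)
assert np.allclose(C.T, -1.0 / (y[:, None] - x[None, :]))        # (A.1)
assert np.isclose(np.linalg.inv(C).sum(), x.sum() - y.sum())     # (A.3)
```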
We will use the following lemma.
Lemma 8 (Vandermonde Matrix Identity for the Cauchy Matrix). $C(x,y)$ has the following decomposition:
$$C(x,y) = -\,P\,V_x^{-1}\,V_y\,Q^{-1},$$
where $P$ and $Q$ are the diagonal matrices defined as
$$P_{ii} := \prod_{k\neq i}(x_i-x_k),\qquad Q_{jj} := \prod_{k=1}^{n}(y_j-x_k),$$
and $V_x$, $V_y$ are the Vandermonde matrices
$$(V_x)_{m,i} := x_i^{\,m-1},\qquad (V_y)_{m,j} := y_j^{\,m-1},\qquad 1\le m,i,j\le n.$$
By Lemma 8, we have $C^{-1} = -\,Q\,V_y^{-1}\,V_x\,P^{-1}$. Next, we define the vector $|\phi_x\rangle := V_x\,P^{-1}\big|\tfrac{1}{x}^{(n)}\big\rangle$, and the vectors $|\phi_1\rangle$, $|\phi_y\rangle$ as the solutions of
$$V_y^{T}\,|\phi_1\rangle = Q\,|1^{(n)}\rangle,\qquad V_y^{T}\,|\phi_y\rangle = Q\,|y^{(n)}\rangle. \tag{A.7}$$
The calculation of (A.6) then boils down to computing the overlaps $\langle\phi_1|\phi_x\rangle$ and $\langle\phi_y|\phi_x\rangle$. Let us start with finding $|\phi_x\rangle$. A straightforward calculation using Equation (A.7) shows
$$\big(|\phi_x\rangle\big)_m = \sum_{i=1}^{n}\frac{x_i^{\,m-2}}{\prod_{k\neq i}(x_i-x_k)},\qquad m=1,\dots,n. \tag{A.8}$$
In order to compute the sums in Equation (A.8), we use the following Lemma [34].
Lemma 9 (Summation of Powers over a Product of Differences). Let $(t_1,t_2,\dots,t_n)$ be a sequence of distinct real numbers. Then
$$\sum_{i=1}^{n}\frac{t_i^{\,m}}{\prod_{k\neq i}(t_i-t_k)} = \begin{cases}0, & 0\le m\le n-2,\\ 1, & m=n-1.\end{cases} \tag{A.9}$$
By Lemma 9, the entries of $|\phi_x\rangle$ with $2\le m\le n$ vanish. The only nonzero entry is
$$\big(|\phi_x\rangle\big)_1 = \sum_{i=1}^{n}\frac{1}{x_i\prod_{k\neq i}(x_i-x_k)},$$
which is evaluated in the following Proposition.
Proposition 10. Let $(t_1,t_2,\dots,t_n)$ be a sequence of distinct nonzero real numbers. Then
$$\sum_{i=1}^{n}\frac{1}{t_i\prod_{k\neq i}(t_i-t_k)} = \frac{(-1)^{n-1}}{\prod_{i=1}^{n}t_i}.$$
Proof. Consider the following complex integral over a circle of radius $R>\max\{|t_i| : 1\le i\le n\}$:
$$I_R(t_1,\dots,t_n) := \frac{1}{2\pi i}\oint_{|z|=R}\frac{dz}{z\prod_{k=1}^{n}(z-t_k)}.$$
Because the integrand is a regular function at $z=\infty$, we have $I_R(t_1,\dots,t_n)=0$. On the other hand, the integrand has simple poles at $z=0$, $z=t_1$, \dots, $z=t_n$. Hence, the residue theorem asserts that
$$\frac{(-1)^n}{\prod_{i=1}^{n}t_i}+\sum_{i=1}^{n}\frac{1}{t_i\prod_{k\neq i}(t_i-t_k)} = 0,$$
which finishes the proof.
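Both Lemma 9 and Proposition 10 can be checked numerically; a minimal sketch for a random sequence:

```python
import numpy as np

# Checks of Lemma 9 and Proposition 10 for a random sequence t.
rng = np.random.default_rng(9)
n = 6
t = rng.standard_normal(n) + 1.5            # distinct and nonzero, generically

den = np.array([np.prod([t[i] - t[k] for k in range(n) if k != i])
                for i in range(n)])

for m in range(n):                          # Lemma 9: 0 for m <= n-2, 1 for m = n-1
    assert np.isclose(np.sum(t**m / den), 1.0 if m == n - 1 else 0.0)

assert np.isclose(np.sum(1.0 / (t * den)),  # Proposition 10
                  (-1.0)**(n - 1) / np.prod(t))
```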
Next, let us compute the vectors $|\phi_1\rangle$ and $|\phi_y\rangle$. Note first that, because the first entry of $|\phi_x\rangle$ is its only nonzero entry, we only need to find the first entries of $|\phi_1\rangle$ and $|\phi_y\rangle$. By Equation (A.7), the desired vectors are solutions of linear equations, and we now use Cramer's rule. It follows that
$$\frac{\det\big(V(y_1,\dots,\hat y_j,\dots,y_n)\big)^T}{\det V_y^T} = \frac{(-1)^{n-j}\prod_{k\neq j}y_k}{\prod_{k\neq j}(y_k-y_j)} = \frac{(-1)^{n-j}\prod_{k\neq j}y_k}{p_j(y_j)},$$
where $p_j(z):=\prod_{k\neq j}(y_k-z)$.
The result now follows directly by multiplying the above expressions by the first entry of $|\phi_x\rangle$.

Appendix B. Derivation of the formula for $(\xi^{(n)}_r)^2$
The aim is to prove the identity used in Equation (15),
$$\big(\xi^{(n)}_r\big)^2 = \Big(C^{-1}\,|x-h\rangle\Big)_r = -\,\frac{\prod_{k=1}^{n+1}\big(y_r-x_k\big)}{\prod_{k\neq r}\big(y_r-y_k\big)},$$
where $x_k:=\lambda^{(n+1)}_k$, $y_r:=\lambda^{(n)}_r$, $C=C(x,y)$ is the Cauchy matrix built from the first $n$ of the $x_k$, $|x-h\rangle$ is the vector with entries $x_k-h$, and $h:=\sum_{k=1}^{n+1}x_k-\sum_{k=1}^{n}y_k$. Note first that the first $n$ equations of (13) read $C\,|\xi^2\rangle = |x-h\rangle$. Next, let us use Lemma 8 to find that $C^{-1} = -\,Q\,V_y^{-1}\,V_x\,P^{-1}$. By a reasoning relying on the use of Lemma 9 (analogous to the computation in Equation (A.8)), we obtain the vector $V_x\,P^{-1}\,|x-h\rangle$ in closed form. The next step is the multiplication by $V_y^{-1}$. To this end, we apply the following result concerning the inverse of a Vandermonde matrix [34], whose entries are expressed through the elementary symmetric polynomials $e_m$ (the $m$th Viète sums).
We only need the last two columns of $(V_y)^{-1}$, because using Equation