A theorem on the number of distinct eigenvalues

A theorem on the number of distinct eigenvalues of diagonalizable matrices is obtained. Some applications are discussed, related to the problem of separating close eigenvalues, to triangular defective matrices, and to the adjacency and walk matrices of graphs. Further ideas and examples are provided.


Introduction
Most of the work done in recent years on the number of distinct eigenvalues of matrices revolves around the relationships between graphs and matrices. In particular, the concept of the minimum number of distinct eigenvalues of graphs has been studied in articles such as [1,3,6,8,14,17]. Other recent articles about the number of distinct eigenvalues are related to low-rank perturbations of matrices, see [11,15,19,20]. In this paper, the question about distinct eigenvalues is formulated as follows. Is there a way to find the number of distinct eigenvalues of a given complex matrix without actually calculating the eigenvalues?
In the next section, we provide an affirmative answer to this question in the general case of diagonalizable matrices. Then we present some applications in the third section.

Main result
We start this section by stating and proving the following lemma, which will be used in the proof of our main result.

Lemma 2.1 Let H be a subspace of C^n of dimension k ≥ 1 and let v be a nonzero vector in C^n. Then H has an orthonormal basis B in which every vector, except possibly one, is orthogonal to v.

Proof We prove the lemma for C^n; the proof extends naturally to every vector space endowed with an inner product and having dimension ≥ 2.
The trivial case is where v is orthogonal to H, in which case any orthonormal basis of H will do. If v ∈ H, then the existence of B is ensured by the Gram-Schmidt process. A description of this process can be found in [13].
Suppose now that v is neither orthogonal to H nor contained in it, and let h_1, …, h_k be a basis of H with v* h_1 ≠ 0. For j = 2, …, k, set

u_j = h_j − (v* h_j / v* h_1) h_1,

which is orthogonal to v. Moreover, u_2, u_3, …, u_k are linearly independent, belong to H and span a subspace G ⊂ H of dimension k − 1. Therefore, the Gram-Schmidt process is applied to obtain an orthonormal basis {w_2, …, w_k} of G from u_2, u_3, …, u_k. Note that the vectors w_2, …, w_k are orthogonal to v since G itself is orthogonal to v. Let w_1 be a unit vector in H orthogonal to G. Then B = {w_1, w_2, …, w_k} is an orthonormal basis of H in which only w_1 may fail to be orthogonal to v. □

We can now state our main result.

Theorem 2.2 Let M be an n × n complex diagonalizable matrix and let v be a nonzero vector in C^n that is not orthogonal to exactly t of the left eigenspaces of M and orthogonal to the remaining ones. Then

rank([v Mv ⋯ M^{n−1}v]) = t.

Proof Let β_1, …, β_n be the eigenvalues of M (not necessarily distinct). Since M is diagonalizable, it has a spectral decomposition of the form

M = Σ_{i=1}^{n} β_i x_i y_i*, (1)

where x_i and y_i are, respectively, right and left eigenvectors of M associated with β_i and satisfying

Σ_{i=1}^{n} x_i y_i* = I_n, (2)

the n × n identity matrix.
To take into account the multiplicities of the eigenvalues of M, we use the following notation:
1. λ_1, …, λ_r are the distinct eigenvalues of M, for some r ≤ n.
2. g_i is the multiplicity of λ_i.
3. In (1), the right and left eigenvectors of M associated with λ_i are denoted, respectively, by x_{i1}, …, x_{ig_i} and y_{i1}, …, y_{ig_i}.
4. The eigenspace of M associated with λ_i is denoted by H_i. Note that x_{i1}, …, x_{ig_i} form a basis of H_i.
Then (1) can be written as

M = Σ_{i=1}^{r} λ_i Σ_{j=1}^{g_i} x_{ij} y_{ij}* (3)

and (2) as

Σ_{i=1}^{r} Σ_{j=1}^{g_i} x_{ij} y_{ij}* = I_n. (4)

Note also that

y_{ij}* x_{kl} = δ_{ik} δ_{jl}. (5)

Using (3), (4), (5) and the notation M^0 = I_n, we have

M^k = Σ_{i=1}^{r} λ_i^k Σ_{j=1}^{g_i} x_{ij} y_{ij}*, k ∈ {0, 1, 2, …}. (6)

Let t ∈ {1, …, r} and let v be a nonzero vector in C^n not orthogonal to H_i for i = 1, …, t and orthogonal to H_i for i = t + 1, …, r. Then (6) implies

v* M^k = Σ_{i=1}^{t} λ_i^k Σ_{j=1}^{g_i} (v* x_{ij}) y_{ij}*. (7)

By Lemma 2.1, the eigenvectors x_{i1}, …, x_{ig_i} associated with λ_i can be chosen to form an orthonormal set in which v* x_{i1} ≠ 0 and v* x_{ij} = 0 for j = 2, …, g_i. It follows from (7) that

v* M^k = Σ_{i=1}^{t} λ_i^k (v* x_{i1}) y_{i1}*. (8)

Now, let a_0, …, a_{t−1} be complex numbers such that

Σ_{k=0}^{t−1} a_k v* M^k = 0. (9)

From (8) and (9) we have

Σ_{i=1}^{t} (Σ_{k=0}^{t−1} a_k λ_i^k) (v* x_{i1}) y_{i1}* = 0. (10)

Since v* x_{i1} ≠ 0 and the eigenvectors {y_{i1} | 1 ≤ i ≤ t} are linearly independent, (10) implies

Σ_{k=0}^{t−1} a_k λ_i^k = 0, i = 1, …, t. (11)

If at least one of the coefficients a_k is different from zero, then (11) implies that the polynomial f(z) = Σ_{k=0}^{t−1} a_k z^k has t distinct roots λ_1, …, λ_t, while its degree does not exceed t − 1. This is a contradiction. Hence, a_0 = a_1 = ⋯ = a_{t−1} = 0. We then deduce from (9) that the vectors v, M*v, …, (M*)^{t−1}v are linearly independent. Since there are t of them, these vectors span the subspace spanned by the set of vectors {y_{i1} | 1 ≤ i ≤ t}, as can be seen from (8). This subspace contains all vectors of the form (M*)^k v, k ∈ {0, 1, 2, …}, as can also be seen from (8); hence rank([v M*v ⋯ (M*)^{n−1}v]) = t. Now we complete the proof by applying this conclusion to M*, using the fact that the eigenvalues of M* are the complex conjugates of those of M and that the eigenspace of M* associated with λ_i* is the left eigenspace of M associated with λ_i. □
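Theorem 2.2 can be illustrated numerically. The sketch below (an assumed example, not from the paper) uses a diagonal, hence diagonalizable, matrix, for which left and right eigenspaces coincide with the coordinate axes, so a vector v orthogonal to all but t of them is easy to write down.

```python
import numpy as np

# Hypothetical example: a diagonal matrix with r = 4 distinct eigenvalues;
# its eigenspaces are the coordinate axes.
M = np.diag([1.0, 2.0, 3.0, 4.0])

# v is not orthogonal to the eigenspaces of 1 and 2 (so t = 2)
# and orthogonal to those of 3 and 4.
v = np.array([1.0, 1.0, 0.0, 0.0])

# Krylov matrix [v, Mv, M^2 v, M^3 v].
K = np.column_stack([np.linalg.matrix_power(M, k) @ v for k in range(4)])

t = np.linalg.matrix_rank(K)
print(t)  # -> 2, the number of eigenspaces not orthogonal to v
```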
The practical aspect of Theorem 2.2 is reflected in the following two corollaries.

Corollary 2.3 Let M be an n × n complex diagonalizable matrix and let q(M) be the number of its distinct eigenvalues. Then

q(M) = max_{v ∈ C^n} rank([v Mv ⋯ M^{n−1}v]). (12)
Proof It is always possible to find a vector v_0 that is not orthogonal to any left eigenspace of M, since the set of vectors orthogonal to at least one left eigenspace is a union of finitely many proper subspaces of C^n, which cannot cover C^n. Then the corollary follows from Theorem 2.2. □
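Corollary 2.3 lends itself to a quick numerical check. In the sketch below (an assumed example), M is built by similarity from a diagonal matrix with eigenvalues {1, 1, 2, 3}, so q(M) = 3; a randomly chosen v is almost surely not orthogonal to any left eigenspace, so the Krylov rank attains the maximum in (12).

```python
import numpy as np

rng = np.random.default_rng(0)

# Diagonalizable M with eigenvalues {1, 1, 2, 3}, hence q(M) = 3.
P = rng.standard_normal((4, 4))
M = P @ np.diag([1.0, 1.0, 2.0, 3.0]) @ np.linalg.inv(P)

# Krylov matrix [v, Mv, M^2 v, M^3 v] for a random v.
v = rng.standard_normal(4)
K = np.column_stack([np.linalg.matrix_power(M, k) @ v for k in range(4)])

# Numerical rank of K recovers q(M) without computing the eigenvalues.
q = np.linalg.matrix_rank(K, tol=1e-8)
print(q)
```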

Corollary 2.4 An n × n complex diagonalizable matrix M has at least k distinct eigenvalues if and only if there exists a nonzero vector v ∈ C^n such that the n × k matrix [v Mv ⋯ M^{k−1}v] has rank k.
Proof Suppose that M has at least k distinct eigenvalues. Following the same notation as in Theorem 2.2, we can use (5) to verify that the vector v = y_{11} + y_{21} + ⋯ + y_{k1} is not orthogonal to any of the k eigenspaces of M associated with λ_1, …, λ_k, but orthogonal to each of its eigenspaces associated with λ_{k+1}, …, λ_r. This tells us that it is always possible to find a vector v that is not orthogonal to exactly k left eigenspaces of M and orthogonal to the remaining ones (if any). Both directions of the corollary then follow from Theorem 2.2. □

The diagonalizability assumption cannot be dropped. Consider, for instance, a 3 × 3 defective matrix M for which the choice of the canonical vector v = [1 0 0]^T makes the matrix [v Mv M^2v] nonsingular, its determinant being equal to 1. If Corollary 3.1 could be applied, the matrix M would have 3 distinct eigenvalues. However, this is not the case, since M is defective, as can be seen from its Jordan form.
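A minimal sketch of how Corollary 2.4 can be used in practice, on an assumed diagonal example: a rank-k Krylov matrix with k columns certifies at least k distinct eigenvalues without computing them.

```python
import numpy as np

# Assumed example: M has eigenvalues {1, 2, 2, 5}, i.e. 3 distinct values.
M = np.diag([1.0, 2.0, 2.0, 5.0])
n, k = 4, 3

# For this diagonal M, the all-ones vector is orthogonal to no eigenspace.
v = np.ones(n)
K = np.column_stack([np.linalg.matrix_power(M, j) @ v for j in range(k)])

# Full column rank k certifies that M has at least k distinct eigenvalues.
rank_K = np.linalg.matrix_rank(K)
print(rank_K)  # -> 3
```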

Remark 2.7 The inequality

q(M) ≤ rank(M) + 1 (13)

holds for every n × n complex matrix M, since rank(M) is at least the number of nonzero eigenvalues of M counted with multiplicity, which is at least q(M) − 1. If, in addition, M is diagonalizable, then it follows from (12) and (13) that

rank([v Mv ⋯ M^{n−1}v]) ≤ rank(M) + 1

for every v ∈ C^n. Note also that, replacing y_i* by y_i^T in the proof of Theorem 2.2, we see that the theorem still applies when the spectral decomposition (1) of M is written with transposed, rather than conjugated, left eigenvectors.
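A small numerical check of inequality (13) on an assumed example: M = diag(0, 0, 2, 3) has rank 2 and three distinct eigenvalues, so the bound q(M) ≤ rank(M) + 1 is attained with equality.

```python
import numpy as np

M = np.diag([0.0, 0.0, 2.0, 3.0])

rank_M = np.linalg.matrix_rank(M)   # rank 2
q_M = len(set(np.diag(M)))          # 3 distinct eigenvalues: {0, 2, 3}

# Inequality (13): q(M) <= rank(M) + 1, attained here with equality.
assert q_M == rank_M + 1
print(rank_M, q_M)  # -> 2 3
```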

Diagonalizable matrices with distinct eigenvalues
It is possible to check that all the eigenvalues of a given diagonalizable matrix are simple using the following corollary.

Corollary 3.1 An n × n complex diagonalizable matrix M has n distinct eigenvalues if and only if there exists a vector v ∈ C^n for which the matrix A = [v Mv ⋯ M^{n−1}v] is nonsingular.

Since det(A) = 5 ≠ 0, Corollary 3.1 implies that the eigenvalues of M are distinct.

Separation of close eigenvalues
If we use approximation to one decimal digit, then the eigenvalues of M are going to look equal to each other. The eigenvalue λ = 2 has algebraic multiplicity 2, but its geometric multiplicity is hidden, so we do not know whether M is diagonalizable or not. We choose, for example, v = [0 0 0 1]^T to form the matrix A = [v Mv M^2v M^3v], which is nonsingular since its determinant is nonzero, det(A) = 504. It follows from Corollary 3.4 that M is defective and that the geometric multiplicity of the eigenvalue λ = 2 is equal to 1. In fact, M admits a factorization in which the matrix in the middle is a Jordan canonical form of M.
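The matrices of the examples above are not reproduced here, but the mechanism behind Corollary 3.4 can be sketched on a hypothetical triangular matrix: if the Krylov matrix built from some v has rank larger than the number of distinct diagonal entries, then by (12) M cannot be diagonalizable.

```python
import numpy as np

# Hypothetical upper triangular matrix: the diagonal {1, 2, 2, 3} gives
# 3 distinct eigenvalues, and the entry in position (2, 3) makes the
# eigenvalue 2 defective (geometric multiplicity 1).
M = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 1.0],
              [0.0, 0.0, 0.0, 3.0]])

v = np.array([0.0, 0.0, 0.0, 1.0])
K = np.column_stack([np.linalg.matrix_power(M, k) @ v for k in range(4)])

# Rank 4 exceeds the 3 distinct eigenvalues, so by (12) M is not
# diagonalizable.
rank_K = np.linalg.matrix_rank(K)
print(rank_K)  # -> 4
```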
In the next two subsections, we show how two existing theorems in graph theory become consequences of our main result.

Application to the adjacency matrix of an undirected graph
The following theorem is a well-known result on the adjacency matrix of an undirected graph. It has an elegant proof based on the fact that the roots of the minimal polynomial of a symmetric matrix are distinct. Here we prove the theorem based on Corollary 2.3.

Theorem 3.6 Let G be a connected undirected graph with diameter d and adjacency matrix A. Then A has at least d + 1 distinct eigenvalues.
Proof Let a_1, …, a_n be the vertices of G. Without loss of generality, we assume that the diameter of G occurs between a_1 and a_{d+1} for some d ∈ {1, 2, 3, …, n − 1} and consists of the edges a_1a_2, a_2a_3, …, a_da_{d+1}. Hence, for i = 2, …, d + 1, the shortest path between a_i and a_1 has length i − 1. It is known that the (i, j)th entry of A^t is equal to the number of walks of length t between a_i and a_j, for t = 1, 2, …. Therefore,

(A^t)_{i1} = 0 for t < i − 1 and (A^{i−1})_{i1} ≠ 0, i = 2, …, d + 1. (14)

Denote by (A^t)_1 the first column of the matrix A^t and let

B = [b_{ij}] = [e_1 (A)_1 (A^2)_1 ⋯ (A^d)_1],

where e_1 = [1, 0, …, 0]^T. Then, by (14), b_{jj} ≠ 0 and b_{ij} = 0 for j < i ≤ d + 1; hence the top (d + 1) × (d + 1) submatrix of B is upper triangular with nonzero diagonal entries, which implies that the columns of B are linearly independent. Since the columns of B are the first d + 1 columns of [e_1 Ae_1 ⋯ A^{n−1}e_1], it follows from Corollary 2.3 that the matrix A has at least d + 1 distinct eigenvalues. □
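The proof above can be checked numerically. The sketch below uses the path graph on 5 vertices (an assumed example): its diameter is d = 4, and the matrix B built from e_1 already has rank d + 1 = 5, matching the number of distinct adjacency eigenvalues.

```python
import numpy as np

# Path graph P5: a1 - a2 - a3 - a4 - a5, diameter d = 4.
n, d = 5, 4
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# B = [e1, A e1, ..., A^d e1] as in the proof of Theorem 3.6.
e1 = np.zeros(n)
e1[0] = 1.0
B = np.column_stack([np.linalg.matrix_power(A, k) @ e1 for k in range(d + 1)])

rank_B = np.linalg.matrix_rank(B)                       # lower bound on q(A)
q = len(np.unique(np.round(np.linalg.eigvalsh(A), 8)))  # distinct eigenvalues
print(rank_B, q)  # -> 5 5
```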
It is known that Theorem 3.6 extends to all nonnegative symmetric matrices associated with G (matrices in which the (i, j)th off-diagonal entry is zero if and only if there is no edge between a_i and a_j). Theorem 3.2 in [1] is another extension of Theorem 3.6. Both of these extensions can be proved from Corollary 2.3, as is done above in the proof of Theorem 3.6. If we let S(G) be the set of all symmetric matrices associated with G and q(G) be the minimum number of distinct eigenvalues a matrix in S(G) can have, then it follows from (12) that

q(G) = min_{B ∈ S(G)} max_{v ∈ C^n} rank([v Bv ⋯ B^{n−1}v]).

The question that arises now is the following: for a given graph G with adjacency matrix A, how can we find a vector v that leads to the best possible lower bound for q(A) or q(G)? Extensive research has been done on graph eigenvalues, their number and their multiplicities. Besides the articles cited in the introduction, we invite the reader to look at [5,9] and [10].

The walk matrix and main eigenvalues of a graph
Let G be an undirected graph with n vertices a_1, …, a_n and let A be its adjacency matrix. Denote by q(A) the number of distinct eigenvalues of A. Let e = [1, …, 1]^T, the all-ones vector with n components. An eigenvalue λ of A is said to be a main eigenvalue of G if it is associated with at least one eigenvector v of A that is not orthogonal to e. Some work done on this matter can be found in [2,7,12,16,18] and [21]. The walk matrix W of the graph G is given by

W = [w_{ij}] = [e Ae ⋯ A^{n−1}e].

The walk matrix acquires its importance from the fact that w_{ij} is equal to the number of walks of length j − 1 that start at vertex a_i, with 1 ≤ i ≤ n and 2 ≤ j ≤ n. It follows from Corollary 2.4 that q(A) ≥ rank(W).
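The bound q(A) ≥ rank(W) can be checked on a small assumed example, the star K_{1,3}: its spectrum is {−√3, 0, 0, √3}, only ±√3 have eigenvectors not orthogonal to e, and the walk matrix has rank 2.

```python
import numpy as np

# Assumed example: the star K_{1,3} (vertex a1 joined to a2, a3, a4).
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
n = A.shape[0]
e = np.ones(n)

# Walk matrix W = [e, Ae, ..., A^{n-1} e].
W = np.column_stack([np.linalg.matrix_power(A, k) @ e for k in range(n)])
rank_W = np.linalg.matrix_rank(W)

# Main eigenvalues: eigenvalues with an eigenvector not orthogonal to e.
vals, vecs = np.linalg.eigh(A)
main = {round(float(vals[i]), 8) for i in range(n) if abs(vecs[:, i] @ e) > 1e-8}

print(rank_W, len(main))  # -> 2 2
```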
The following theorem, which relates the walk matrix to the main eigenvalues, is obtained in [12].

Theorem 3.7 [12, Theorem 2.1] The rank of the walk matrix of a graph G is equal to the number of its main eigenvalues.

In this theorem, the graph G is simple, as explained in the introduction of [12]. Saying that λ is associated with an eigenvector v that is not orthogonal to e is the same as saying that e is not orthogonal to the eigenspace of A associated with λ. Since A is symmetric, the right and left eigenspaces of A associated with the same eigenvalue are equal. Therefore, the above theorem is a consequence of Theorem 2.2.

Conclusion
We have shown how the rank of the matrix [v Mv ⋯ M^{n−1}v], v ∈ C^n, can give information about the number of distinct eigenvalues of a diagonalizable matrix M. Some applications have been discussed briefly throughout the paper, and it seems that this idea has further applications in linear algebra, combinatorics and numerical analysis. It therefore deserves further exploration.