
A theorem on the number of distinct eigenvalues

Abstract

A theorem on the number of distinct eigenvalues of diagonalizable matrices is obtained. Applications related to the problem of separating close eigenvalues, to defective triangular matrices, and to the adjacency and walk matrices of graphs are discussed, along with further ideas and examples.

Introduction

Most of the recent work on the number of distinct eigenvalues of matrices revolves around the relationship between graphs and matrices. In particular, the minimum number of distinct eigenvalues of a graph has been studied in articles such as [1, 3, 6, 8, 14, 17]. Other recent articles on the number of distinct eigenvalues concern low-rank perturbations of matrices; see [11, 15, 19, 20].

In this paper, the question about distinct eigenvalues is formulated as follows.

Is there a way to find the number of distinct eigenvalues of a given complex matrix without actually calculating the eigenvalues?

In the next section, we provide an affirmative answer to this question in the general case of diagonalizable matrices. Then we present some applications in the third section.

Main result

We start this section by stating and proving the following lemma which will be used in the proof of our main result.

Lemma 2.1

Let V be a vector space of dimension \(n \ge 2\) and let H be a subspace of V of dimension k with \(2\le k \le n\). For every vector \(v \in V\), there exists an orthonormal basis \(B = \{w_1, \dots , w_k\}\) of H such that at least \(k-1\) elements of B are orthogonal to v.

Proof

We prove the lemma for \({\mathbb {C}}^n\) and the proof extends naturally to every vector space endowed with an inner product and having dimension \(\ge 2\).

The trivial case is where v is orthogonal to H, since then any orthonormal basis of H works. If \(v \in H\) and \(v \ne 0\), the existence of B is ensured by the Gram–Schmidt process: apply it to a basis of H whose first element is v, and the resulting vectors \(w_2, \dots , w_k\) are orthogonal to \(v = \Vert v\Vert w_1\). A description of this process can be found in [13, Sect. 0.6.4]. Suppose now that \(v \notin H\) and v is not orthogonal to H. Then there exists a vector \(v_1 \in H\) such that \(v_1^*v \ne 0\). Complete \(v_1\) to a basis \(\{v_1, \dots , v_k\}\) of H.

For \(i=2, 3, \dots , k\), we use \(v, v_1\) and \(v_i\) to form the vector

$$\begin{aligned} u_i = -\displaystyle {\frac{v^*v_i}{v^*v_1}}v_1 + v_i, \end{aligned}$$

which is orthogonal to v. Moreover, \(u_2, u_3, \dots , u_k\) are linearly independent, belong to H and span a subspace \(G \subset H\) of dimension \(k-1\). We can therefore apply the Gram–Schmidt process to obtain an orthonormal basis \(\{w_2, \dots , w_k\}\) of G from \(u_2, u_3, \dots , u_k\). Note that the vectors \(w_2, \dots , w_k\) are orthogonal to v since G itself is orthogonal to v. Let

$$\begin{aligned} w_1 = \frac{v_1-{\sum _{i=2}^{k} (w_i^*v_1)w_i}}{\left\| v_1-{\sum _{i=2}^{k}(w_i^*v_1)w_i}\right\| }. \end{aligned}$$

Then \(w_1 \in H, ~\Vert w_1\Vert =1\) and \(\{w_1, \dots , w_k\}\) forms an orthonormal basis of H. \(\square \)
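The construction in this proof is easy to carry out numerically. The following sketch (in Python with NumPy; the function name and the use of a QR factorization in place of an explicit Gram–Schmidt loop are our own choices) returns an orthonormal basis of H of which at least \(k-1\) columns are orthogonal to v.

```python
import numpy as np

def lemma_basis(H, v):
    """Orthonormal basis of span(H) with at least k-1 columns
    orthogonal to v, following the construction in Lemma 2.1."""
    n, k = H.shape
    ip = v.conj() @ H                 # the inner products v^* v_i
    if np.allclose(ip, 0):            # trivial case: v orthogonal to H
        return np.linalg.qr(H)[0]
    j = np.argmax(np.abs(ip))         # a column v_1 with v^* v_1 != 0
    v1 = H[:, j]
    rest = np.delete(H, j, axis=1)
    # u_i = v_i - (v^* v_i / v^* v_1) v_1 is orthogonal to v
    U = rest - np.outer(v1, v.conj() @ rest) / (v.conj() @ v1)
    W = np.linalg.qr(U)[0]            # orthonormal w_2, ..., w_k span G
    w1 = v1 - W @ (W.conj().T @ v1)   # complete to a basis of H
    return np.column_stack([w1 / np.linalg.norm(w1), W])
```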

Now we state our main theorem.

Theorem 2.2

Let M be an \(n \times n~\) complex diagonalizable matrix with \(n\ge 2\) and let \(\lambda _1, \dots , \lambda _r\) be the distinct eigenvalues of M with \(r\in \{1, \dots , n\}\).

For every nonzero vector \(v \in {\mathbb {C}}^n\), let \(R_M(v)\) be the subspace of \({\mathbb {C}}^n\) defined by

$$\begin{aligned} R_M(v) = \text {span}~\left\{ v,~ Mv,~ M^2v, \dots \right\} . \end{aligned}$$

If v is not orthogonal to t left eigenspaces of M but is orthogonal to the \(r-t\) remaining ones for some \(t\in \{1, \dots , r\}\), then the vectors \(v,~ Mv,~ \dots ,~ M^{t-1}v\) are linearly independent and span \(R_M(v)\).

Proof

Let \(\beta _1, \dots , \beta _n\) be the eigenvalues of M (not necessarily distinct). Since M is diagonalizable, it has a spectral decomposition of the form

$$\begin{aligned} M = \sum _{i=1}^n\beta _ix_iy_i^*, \end{aligned}$$
(1)

where \(x_i\) and \(y_i\) are, respectively, right and left eigenvectors of M associated with \(\beta _i\) and satisfying

$$\begin{aligned} \sum _{i=1}^n x_iy_i^* = I_n, ~\text { the}\ n \times n~\ \text {identity matrix.} \end{aligned}$$
(2)

To take into account the multiplicities of the eigenvalues of M, we use the following notation:

  1. \(\lambda _1, \dots , \lambda _r\) are the distinct eigenvalues of M for some \(r\le n\).

  2. \(g_i\) is the multiplicity of \(\lambda _i\).

  3. In (1), the right and left eigenvectors of M associated with \(\lambda _i\) are denoted, respectively, by \(x_{i_1}, \dots , x_{i_{g_i}}\) and \(y_{i_1}, \dots , y_{i_{g_i}}\).

  4. The eigenspace of M associated with \(\lambda _i\) is denoted by \(H_i\). Note that \(x_{i_1}, \dots , x_{i_{g_i}}\) form a basis of \(H_i\).

Then (1) can be written as

$$\begin{aligned} M&= \sum _{i=1}^r \sum _{j=1}^{g_i} \lambda _i x_{i_j} y_{i_j}^* \nonumber \\&= \sum _{i=1}^r \lambda _i \sum _{j=1}^{g_i} x_{i_j} y_{i_j}^* \end{aligned}$$
(3)

and (2) as

$$\begin{aligned} \sum _{i=1}^r \sum _{j=1}^{g_i} x_{i_j} y_{i_j}^* = I_n. \end{aligned}$$
(4)

Note also that

$$\begin{aligned} y_{i_j}^*x_{s_t} = {\left\{ \begin{array}{ll} ~1 ~~ \text {if}~~ i=s ~~\text {and}~~ j=t,\\ ~0 ~~\text {otherwise}. \end{array}\right. } \end{aligned}$$
(5)

Using (3), (4), (5) and the notation \(M^0=I_n\), we have

$$\begin{aligned} M^k = \sum _{i=1}^r \lambda _i^k \sum _{j=1}^{g_i}x_{i_j} y_{i_j}^* ~~\text {for every}~~ k\in {\mathbb {N}}\cup \{0\}. \end{aligned}$$
(6)

Let \(t \in \{1, \dots , r\}\) and v be a nonzero vector in \({\mathbb {C}}^n\) not orthogonal to \(H_i\) for \(i=1, \dots , t\) and orthogonal to \(H_i\) for \(i=t+1, \dots , r\). Then (6) implies

$$\begin{aligned} v^*M^k&= \sum _{i=1}^r \lambda _i^k \sum _{j=1}^{g_i} (v^*x_{i_j}) y_{i_j}^* \nonumber \\&= \sum _{i=1}^t \lambda _i^k \sum _{j=1}^{g_i}(v^*x_{i_j}) y_{i_j}^*, ~~~ k \in {\mathbb {N}}\cup \{0\}. \end{aligned}$$
(7)

By Lemma 2.1, for each \(i\in \{1, \dots , t\}\), the eigenvectors \(x_{i_1}, \dots , x_{i_{g_i}}\) associated with \(\lambda _i\) can be chosen to form an orthonormal basis of \(H_i\) with \(v^*x_{i_j} = 0\) for \(j=2, \dots , g_i\) (if \(g_i \ge 2\)); since v is not orthogonal to \(H_i\), necessarily \(v^*x_{i_1} \ne 0\). It follows from (7) that

$$\begin{aligned} v^*M^k = \sum _{i=1}^t \lambda _i^k (v^*x_{i_1}) y_{i_1}^*, ~~~ k \in {\mathbb {N}}\cup \{0\}. \end{aligned}$$
(8)

Now, let \(a_0, \dots , a_{t-1}\) be complex numbers such that

$$\begin{aligned} a_0v^*+a_1v^*M+\dots +a_{t-1}v^*M^{t-1}=0. \end{aligned}$$
(9)

From (8) and (9) we have

$$\begin{aligned} \sum _{k=0}^{t-1}a_k\left[ \sum _{i=1}^t \lambda _i^k (v^*x_{i_1}) y_{i_1}^*\right] =0 \end{aligned}$$

or equivalently,

$$\begin{aligned} \sum _{i=1}^t (v^*x_{i_1}) \left[ \sum _{k=0}^{t-1} a_k \lambda _i^k\right] y_{i_1}^* = 0. \end{aligned}$$
(10)

Since \(v^*x_{i_1} \ne 0\) and the eigenvectors \(\{y_{i_1} ~|~ 1\le i \le t\}\) are linearly independent, (10) implies

$$\begin{aligned} \sum _{k=0}^{t-1} a_k \lambda _i^k = 0 ~~\text {for}~~ i=1, \dots , t. \end{aligned}$$
(11)

If at least one of the coefficients \(a_0, \dots , a_{t-1}\) is different from zero, then (11) implies that the polynomial \(f(z) =\sum _{k=0}^{t-1} a_kz^k\) has t distinct roots \(\lambda _1, \dots , \lambda _t\), while its degree does not exceed \(t-1\). This is a contradiction. Hence, \(a_0=a_1=\dots =a_{t-1}=0\), and we deduce from (9) that the vectors \(v,~ M^*v,~ \dots ,~ (M^*)^{t-1}v\) are linearly independent. Since there are t of them and, by (8), they all lie in the t-dimensional subspace spanned by \(\{y_{i_1} ~|~ 1\le i \le t\}\), they span that subspace; by (8) again, this subspace contains all the vectors of the form \((M^*)^kv,~ k\in {\mathbb {N}}\cup \{0\}\). Thus the conclusion of the theorem holds for \(M^*\). Now we complete the proof using the facts that the eigenvalues of \(M^*\) are the complex conjugates of those of M and that the eigenspace of \(M^*\) associated with \(\overline{\lambda _i}\) is the left eigenspace of M associated with \(\lambda _i\): applying the above argument to \(M^*\) in place of M shows that \(v,~ Mv,~ \dots ,~ M^{t-1}v\) are linearly independent and span \(R_M(v)\). \(\square \)
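Here is a small numerical illustration of the theorem (a sketch in Python with NumPy; the diagonal example and the vector v are our own choices). For a diagonal matrix, the left eigenspaces are spanned by canonical basis vectors, so the orthogonality pattern of v is visible at a glance.

```python
import numpy as np

# M has r = 3 distinct eigenvalues; v is orthogonal to the left
# eigenspace of the eigenvalue 5 (spanned by e_4) and to no other,
# so t = 2 and the Krylov rank equals 2, as Theorem 2.2 predicts.
M = np.diag([1.0, 2.0, 2.0, 5.0])
v = np.array([1.0, 1.0, 1.0, 0.0])
K = np.column_stack([np.linalg.matrix_power(M, j) @ v for j in range(4)])
print(np.linalg.matrix_rank(K))   # 2
```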

The practical aspect of Theorem 2.2 is reflected in the following two corollaries.

Corollary 2.3

Let M be an \(n \times n~\) complex diagonalizable matrix and q(M) be the number of its distinct eigenvalues. Then

$$\begin{aligned} q(M) = \underset{v \in {\mathbb {C}}^n }{\max } \Big \{ \text {rank}\left( \left[ v ~~ Mv ~~ \dots ~~ M^{n-1}v\right] \right) \Big \}. \end{aligned}$$
(12)

Proof

It is always possible to find a vector \(v_0\) that is not orthogonal to any left eigenspace of M (an explicit choice is exhibited in the proof of Corollary 2.4 below, with \(k = r\)). By Theorem 2.2, such a \(v_0\) gives \(\text {rank}\left( \left[ v_0 ~~ Mv_0 ~~ \dots ~~ M^{n-1}v_0\right] \right) = r = q(M)\), while the same theorem shows that no vector can give a rank larger than r. \(\square \)
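In floating point, a randomly drawn vector is, with probability 1, orthogonal to no left eigenspace, so a single Krylov matrix typically attains the maximum in (12). The following is a minimal numerical sketch (Python with NumPy; the number of trials and the rank tolerance are our own choices, and Krylov matrices can be very ill-conditioned, so this is conceptual rather than a robust algorithm).

```python
import numpy as np

def estimate_q(M, trials=5, tol=1e-8, seed=0):
    """Estimate q(M) for a diagonalizable M via (12): the rank of
    the Krylov matrix [v, Mv, ..., M^{n-1}v] for random vectors v."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    best = 0
    for _ in range(trials):
        v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        K = np.empty((n, n), dtype=complex)
        K[:, 0] = v
        for j in range(1, n):
            K[:, j] = M @ K[:, j - 1]      # next Krylov column
        best = max(best, np.linalg.matrix_rank(K, tol=tol))
    return best

# Example: a diagonalizable matrix with eigenvalues 1, 2, 2 has q = 2.
print(estimate_q(np.diag([1.0, 2.0, 2.0])))   # 2
```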

Corollary 2.4

An \(n \times n~\) complex diagonalizable matrix M has at least k distinct eigenvalues if and only if there exists a nonzero vector \(v \in {\mathbb {C}}^n\) such that the matrix

$$\begin{aligned} A = \left[ v ~~ Mv ~~ \dots ~~ M^{n-1}v\right] \end{aligned}$$

has rank k.

Proof

Suppose that M has at least k distinct eigenvalues. Following the same notation as in Theorem 2.2, we can use (5) to verify that the vector \(v = x_{1_1}+x_{2_1}+\dots +x_{k_1}\) is not orthogonal to any of the k left eigenspaces of M associated with \(\lambda _1, \dots , \lambda _k\), but is orthogonal to each of the left eigenspaces associated with \(\lambda _{k+1}, \dots , \lambda _r\). Hence, it is always possible to find a vector v that is not orthogonal to exactly k left eigenspaces of M and orthogonal to the remaining ones (if any). It follows from Theorem 2.2 that A has rank k. Conversely, suppose that the rank of A is k. Then by Corollary 2.3, M cannot have fewer than k distinct eigenvalues. \(\square \)

Remark 2.5

Theorem 2.2 and its corollaries do not hold in the general case of defective (non-diagonalizable) matrices. Here is a counterexample.

Example 2.6

Consider the following matrix:

$$\begin{aligned} M =\left[ \begin{array}{rrr} 1 &{} 0 &{} -1 \\ 2 &{} -1 &{} 3 \\ 1 &{} -1 &{} 3 \end{array}\right] . \end{aligned}$$

If we choose the canonical vector \(v = [1 ~~ 0 ~~ 0]^T\), then the matrix

$$\begin{aligned} A = [v ~~ Mv ~~ M^2v] =\left[ \begin{array}{rrr} 1 &{} 1 &{} 0 \\ 0 &{} 2 &{} 3 \\ 0 &{} 1 &{} 2 \end{array}\right] \end{aligned}$$

is nonsingular since its determinant equals 1. According to Corollary 2.4, the matrix M should then have at least 3 distinct eigenvalues. However, this is not the case: M is defective, with the single eigenvalue 1, as can be seen from its Jordan form

$$\begin{aligned} M =\left[ \begin{array}{rrr} 1 &{} 1 &{} 0 \\ 1 &{} -1 &{} -1 \\ 0 &{} -1 &{} -1 \end{array}\right] \left[ \begin{array}{rrr} 1 &{} 1 &{} 0 \\ 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 1 \end{array}\right] \left[ \begin{array}{rrr} 0 &{} 1 &{} -1 \\ 1 &{} -1 &{} 1 \\ -1 &{} 1 &{} -2 \end{array}\right] . \end{aligned}$$
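This failure is easy to confirm numerically; here is a minimal check (a sketch in Python with NumPy, using NumPy's default rank tolerances).

```python
import numpy as np

# The Krylov matrix has full rank, yet M has the single (defective)
# eigenvalue 1 with geometric multiplicity 1.
M = np.array([[1.0, 0.0, -1.0],
              [2.0, -1.0, 3.0],
              [1.0, -1.0, 3.0]])
v = np.array([1.0, 0.0, 0.0])
A = np.column_stack([v, M @ v, M @ M @ v])
print(np.linalg.matrix_rank(A))              # 3
print(np.linalg.eigvals(M))                  # all approximately 1
print(np.linalg.matrix_rank(M - np.eye(3)))  # 2, so geometric mult. 1
```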

Remark 2.7

The inequality

$$\begin{aligned} q(M) \le \text {rank}(M) +1 \end{aligned}$$
(13)

holds for every \(n \times n~\) complex matrix M, since each distinct nonzero eigenvalue contributes at least one to the rank of M, while the eigenvalue 0, if present, accounts for the additional 1. If, in addition, M is diagonalizable, then it follows from (12) and (13) that

$$\begin{aligned} \text {rank}(M) \ge \underset{v \in {\mathbb {C}}^n }{\max } \Big \{\text {rank}\left( \left[ v~~Mv~~\dots ~~M^{n-1}v\right] \right) \Big \}-1. \end{aligned}$$

Remark 2.8

Theorem 2.2 extends to the general case of diagonalizable matrices over an algebraically closed field \({\mathbb {K}}\). In fact, every diagonalizable matrix M in \(M_n({\mathbb {K}})\), the set of \(n \times n~\) matrices with elements in \({\mathbb {K}}\), has the form

$$\begin{aligned} M = \sum _{i=1}^n \lambda _i x_i y_i^T, \end{aligned}$$

where \(\{x_i\}\) and \(\{y_i\}\) satisfy

$$\begin{aligned} \sum _{i=1}^n x_i y_i^T = I_n, ~~\text {the identity matrix in} ~M_n({\mathbb {K}}). \end{aligned}$$

Replacing the conjugate transposes \(y_i^*\) and \(v^*\) by the transposes \(y_i^T\) and \(v^T\) in the proof of Theorem 2.2, we can see that the theorem applies to M.

Applications

Diagonalizable matrices with distinct eigenvalues

It is possible to check that all the eigenvalues of a given diagonalizable matrix are simple by using the following corollary.

Corollary 3.1

Let M be an \(n \times n~\) complex diagonalizable matrix. Then M has n distinct eigenvalues if and only if there exists a vector \(v \in {\mathbb {C}}^n\) such that \(v,~ Mv,~ \dots ,~ M^{n-1}v\) are linearly independent.

Proof

Follows immediately from Corollary 2.4. \(\square \)

Example 3.2

Consider the \(3\times 3\) diagonalizable matrix

$$\begin{aligned} M = \left[ \begin{array}{rrr} 1 &{} 2 &{} 1 \\ 1 &{} -1 &{} 2 \\ 0 &{} 1 &{} -1 \end{array}\right] . \end{aligned}$$

Let us choose v to be the canonical vector \(v = [0, ~1, ~0]^T\) and form the matrix

$$\begin{aligned} A = \left[ v ~~~ Mv ~~~ M^2v \right] =\left[ \begin{array}{rrr} 0 &{} 2 &{} 1 \\ 1 &{} -1 &{} 5 \\ 0 &{} 1 &{} -2 \end{array}\right] . \end{aligned}$$

Since \(\det (A) = 5 \ne 0\), Corollary 3.1 implies that the eigenvalues of M are distinct.

Separation of close eigenvalues

Example 3.3

Consider the diagonalizable matrix

$$\begin{aligned} M = \left[ \begin{array}{rrr} -8.98 &{} -9.99 &{} -10.09\\ 12 &{} 13.01 &{} 12.11\\ -2 &{} -2 &{} -1 \end{array} \right] . \end{aligned}$$

Let \(v = [ 0~~ 0~~ 3]^T\) and

$$\begin{aligned} A&= [v ~~ Mv ~~ M^2v] \\&= \left[ \begin{array}{rrr} 0 &{} -30.27 &{} -60.8421 \\ 0 &{} 36.33 &{} 73.0833 \\ 3 &{} -3 &{} -9.12 \\ \end{array}\right] . \end{aligned}$$

If we round to one decimal digit, the eigenvalues of M all look equal: \(\lambda _1 \approx \lambda _2 \approx \lambda _3 \approx 1.0\). However, the determinant of A is clearly different from zero, \(\det (A) = -5.514\), which implies by Corollary 3.1 that the eigenvalues of M are indeed distinct. In fact, the eigenvalues of M are exactly \(\lambda _1 = 1, ~~ \lambda _2 =1.01~\) and \(~\lambda _3 = 1.02\). This example shows that Corollary 3.1 can be used as a supporting test to check whether close eigenvalues of large diagonalizable matrices are truly distinct.
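A quick numerical confirmation of this example (a sketch in Python with NumPy):

```python
import numpy as np

# Eigenvalue routines return three values that round to 1.0, while
# det(A) cleanly certifies that the eigenvalues are distinct.
M = np.array([[-8.98, -9.99, -10.09],
              [12.0, 13.01, 12.11],
              [-2.0, -2.0, -1.0]])
v = np.array([0.0, 0.0, 3.0])
A = np.column_stack([v, M @ v, M @ M @ v])
print(np.linalg.det(A))                   # -5.514 (nonzero)
print(np.round(np.linalg.eigvals(M), 1))  # all 1.0 at one decimal
```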

Criterion for a triangular matrix to be defective

Corollary 3.4

Let M be an \(n \times n~\) triangular complex matrix and suppose that some of the diagonal elements of M repeat. If there exists a vector \(v \in {\mathbb {C}}^n\) such that the matrix \(\left[ v ~~ Mv ~~ M^2v ~~ \dots ~~ M^{n-1}v \right] \) is nonsingular, then M is defective.

Proof

Follows from Corollary 3.1 and the fact that the diagonal elements of a triangular matrix are its eigenvalues. \(\square \)

Example 3.5

Consider the upper-triangular matrix

$$\begin{aligned} M = \left[ \begin{array}{rrrr} 1 &{} -2 &{} 5 &{} 14 \\ 0 &{} 2 &{} -1 &{} -5 \\ 0 &{} 0 &{} 2 &{} 6 \\ 0 &{} 0 &{} 0 &{} -1 \end{array}\right] . \end{aligned}$$

The eigenvalue \(\lambda =2\) has algebraic multiplicity 2, but its geometric multiplicity is not apparent, so we do not know offhand whether M is diagonalizable. We choose, for example, \(v = [0 ~~ 0 ~~ 0 ~~ 1]^T\) to form the matrix

$$\begin{aligned} A = [v ~~ Mv ~~ M^2v ~~ M^3v] = \left[ \begin{array}{rrrr} 0 &{} 14 &{} 40 &{} 106 \\ 0 &{} -5 &{} -11 &{} -33 \\ 0 &{} 6 &{} 6 &{} 18 \\ 1 &{} -1 &{} 1 &{} -1 \end{array}\right] , \end{aligned}$$

which is nonsingular since \(\det (A) =504 \ne 0\). It follows from Corollary 3.4 that M is defective and that the geometric multiplicity of the eigenvalue \(\lambda = 2\) is equal to 1. In fact,

$$\begin{aligned} M=\left[ \begin{array}{rrrr} 1&{} 2&{} 1&{} 1\\ 0&{} -1&{} 1&{} -1\\ 0&{} 0&{} 1&{} 2\\ 0&{} 0&{} 0&{} -1 \end{array}\right] \left[ \begin{array}{rrrr} 1&{} 0&{} 0&{} 0\\ 0&{} 2&{} 1&{} 0\\ 0&{} 0&{} 2&{} 0\\ 0&{} 0&{} 0&{} -1 \end{array}\right] \left[ \begin{array}{rrrr} 1&{} 2&{} -3&{} -7\\ 0&{} -1&{} 1&{} 3\\ 0&{} 0&{} 1&{} 2\\ 0&{} 0&{} 0&{} -1 \end{array}\right] , \end{aligned}$$

where the matrix in the middle is a Jordan canonical form of M.
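The test in this example takes only a few lines to reproduce numerically (a sketch in Python with NumPy):

```python
import numpy as np

# Corollary 3.4 applied to the triangular matrix of Example 3.5.
M = np.array([[1.0, -2.0, 5.0, 14.0],
              [0.0, 2.0, -1.0, -5.0],
              [0.0, 0.0, 2.0, 6.0],
              [0.0, 0.0, 0.0, -1.0]])
v = np.array([0.0, 0.0, 0.0, 1.0])
K = np.column_stack([v, M @ v, M @ M @ v, M @ M @ M @ v])
print(np.linalg.det(K))   # 504: nonsingular, so M is defective
```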

In the next two subsections, we show how two existing theorems in graph theory become consequences of our main result.

Application to the adjacency matrix of an undirected graph

The following theorem is a well-known result on the adjacency matrix of an undirected graph. It has an elegant proof based on the fact that the roots of the minimal polynomial of a symmetric matrix are distinct, see [4, Theorem 2.2.1].

Theorem 3.6

Let A be the adjacency matrix of an undirected graph G. The number of distinct eigenvalues q(A) of A is at least one more than the diameter d of G. That is,

$$\begin{aligned} q(A) \ge d+1. \end{aligned}$$

Here we give a proof of this theorem based on Corollary 2.3.

Proof

Let \(a_1, \dots , a_n\) be the vertices of G. Without loss of generality, we assume that the diameter of G occurs between \(a_1\) and \(a_{d+1}\) for some \(d\in \{1, 2, 3, \dots , n-1\}\) and consists of the edges \(a_1a_2, a_2a_3, \dots \), \(a_da_{d+1}\). Hence, for \(i=2, \dots , d+1\), the shortest path between \(a_i\) and \(a_1\) has length \(i-1\). It is known that the (i, j)th entry of \(A^t\) is equal to the number of walks of length t between \(a_i\) and \(a_j\) for \(t=1, 2, \dots \). Therefore,

$$\begin{aligned} (A^{i-1})_{i1} \ne 0 ~~\text {and}~~ (A^t)_{i1} =0 ~~\text {if}~~ t\le i-2. \end{aligned}$$
(14)

Denote by \((A^t)_1\) the first column of the matrix \(A^t\) and let

$$\begin{aligned} B = [b_{ij}] = \left[ e_1 ~~ A_1 ~~ (A^2)_1 ~~ \dots ~~ (A^{d})_1\right] , \end{aligned}$$

where \(e_1 = [1, 0, \dots , 0]^T\). Then by (14),

$$\begin{aligned} ~b_{jj} \ne 0 ~~\text {and}~~ b_{j+1,j} = b_{j+2,j} =\dots = b_{n j} =0, \end{aligned}$$

which implies that the columns of B are linearly independent.

Since

$$\begin{aligned} B = \left[ e_1 ~~ Ae_1 ~~ A^2e_1 ~~ \dots ~~ A^{d}e_1\right] , \end{aligned}$$

it follows from Corollary 2.3 that the matrix A has at least \(d+1\) distinct eigenvalues. \(\square \)
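As an illustration (the choice of graph is ours), the path \(P_5\) has diameter \(d=4\), and the matrix B from the proof, built from \(e_1\), already has rank \(d+1=5\). A sketch in Python with NumPy:

```python
import numpy as np

# Theorem 3.6 on the path graph P_5: diameter d = 4, and the matrix
# B = [e_1, A e_1, ..., A^4 e_1] from the proof already has rank 5.
n = 5
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
e1 = np.eye(n)[:, 0]
B = np.column_stack([np.linalg.matrix_power(A, t) @ e1 for t in range(n)])
print(np.linalg.matrix_rank(B))                        # 5, so q(A) >= 5
print(np.unique(np.linalg.eigvalsh(A).round(8)).size)  # exactly 5
```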

It is known that Theorem 3.6 extends to all nonnegative symmetric matrices associated with G (matrices in which the (i, j)th off-diagonal entry is zero if and only if there is no edge between \(a_i\) and \(a_j\)). Theorem 3.2 in [1] is another extension of Theorem 3.6. Both of these extensions can be proved from Corollary 2.3, as is done above in the proof of Theorem 3.6.

If we let S(G) be the set of all symmetric matrices associated with G and q(G) be the minimum number of distinct eigenvalues a matrix in S(G) can have, then it follows from (12) that

$$\begin{aligned} q(G) = \underset{M \in S(G)}{~~ \min ~~} \left\{ \underset{v \in {\mathbb {C}}^n}{~~ \max ~~} \Big \{ \text {rank}\left( \left[ v ~~ Mv ~~\dots ~~ M^{n-1}v\right] \right) \Big \} \right\} . \end{aligned}$$

The question that arises now is the following: for a given graph G with adjacency matrix A, how can we find a vector v that leads to the best possible lower bound for q(A) or q(G)?

Extensive research has been done on graph eigenvalues, their number and their multiplicities. Besides the articles cited in the introduction, we invite the reader to look at [5, 9] and [10].

The walk matrix and main eigenvalues of a graph

Let G be an undirected graph with n vertices \(a_1, \dots , a_n\) and let A be its adjacency matrix. Denote by q(A) the number of distinct eigenvalues of A, and let \(e = [1, \dots , 1]^T\) be the all-ones vector with n components. An eigenvalue \(\lambda \) of A is said to be a main eigenvalue of G if it is associated with at least one eigenvector of A that is not orthogonal to e. Some work on this matter can be found in [2, 7, 12, 16, 18] and [21]. The walk matrix W of the graph G is given by

$$\begin{aligned} W = [w_{ij}] = \left[ e ~~ Ae ~~ \dots ~~ A^{n-1}e \right] . \end{aligned}$$

The walk matrix acquires its importance from the fact that, for \(1\le i \le n\) and \(2\le j \le n\), the entry \(w_{ij}\) is equal to the number of walks of length \(j-1\) that start at vertex \(a_i\). It follows from Corollary 2.4 that

$$\begin{aligned} q(A) \ge \text {rank}(W). \end{aligned}$$
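For instance, for the path \(P_3\) (our choice of example), the walk matrix has rank 2, certifying \(q(A)\ge 2\); a sketch in Python with NumPy:

```python
import numpy as np

# Walk matrix of the path P_3: rank 2 certifies q(A) >= 2.  P_3 has
# the three distinct eigenvalues -sqrt(2), 0, sqrt(2), of which the
# two nonzero ones are main (cf. Theorem 3.7 below).
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
e = np.ones(3)
W = np.column_stack([np.linalg.matrix_power(A, j) @ e for j in range(3)])
print(np.linalg.matrix_rank(W))   # 2
print(np.linalg.eigvalsh(A))      # [-1.4142..., 0., 1.4142...]
```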

The following theorem that relates the walk matrix and the main eigenvalues is obtained in [12].

Theorem 3.7

[12, Theorem 2.1] The rank of the walk matrix of a graph G is equal to the number of its main eigenvalues.

In this theorem, the graph G is simple, as explained in the introduction of [12]. Saying that \(\lambda \) is associated with an eigenvector that is not orthogonal to e is the same as saying that e is not orthogonal to the eigenspace of A associated with \(\lambda \). Since A is symmetric, the right and left eigenspaces of A associated with the same eigenvalue are equal. Therefore, the above theorem is a consequence of Theorem 2.2.

Conclusion

We have shown how the rank of the matrix \(\left[ v~~ Mv~~ \dots ~~ M^{n-1}v\right] \), \(v \in {\mathbb {C}}^n\), can give information about the number of distinct eigenvalues of a diagonalizable matrix M. Some applications have been discussed briefly throughout the paper, and it seems that this idea has further applications in linear algebra, combinatorics and numerical analysis. It therefore deserves further exploration.

References

  1. Ahmadi, B.; et al.: Minimum number of distinct eigenvalues of graphs. Electron. J. Linear Algebra 26, 673–691 (2013)

  2. Andelic, M.; et al.: Some new considerations about double nested graphs. Linear Algebra Appl. 483, 323–341 (2015)

  3. Barrett, W.; et al.: Generalization of the strong Arnold property and the minimum number of distinct eigenvalues of a graph. Electron. J. Combin. 24, 2–40 (2017)

  4. Brualdi, R.; Ryser, H.J.: Combinatorial Matrix Theory. Cambridge University Press, Cambridge (1991)

  5. Bu, C.; Zhang, X.; Zhou, J.: A note on the multiplicities of graph eigenvalues. Linear Algebra Appl. 422, 69–74 (2014)

  6. da Fonseca, C.M.: A lower bound for the number of distinct eigenvalues of some real symmetric matrices. Electron. J. Linear Algebra 21, 3–11 (2010)

  7. Deng, H.; Huang, H.: On the main signless Laplacian eigenvalues of a graph. Electron. J. Linear Algebra 26, 381–393 (2013)

  8. Duarte, A.L.; Johnson, C.R.: On the minimum number of distinct eigenvalues for a symmetric matrix whose graph is a given tree. Math. Inequal. Appl. 5, 175–180 (2002)

  9. Erić, A.; da Fonseca, C.M.: Some consequences of an inequality on the spectral multiplicity of graphs. Filomat 27, 1455–1461 (2013)

  10. Erić, A.; da Fonseca, C.M.: Unordered multiplicity lists of wide double paths. Ars Math. Contemp. 6, 279–288 (2013)

  11. Farrell, P.E.: The number of distinct eigenvalues of a matrix after perturbation. SIAM J. Matrix Anal. Appl. 37, 572–576 (2016)

  12. Hagos, E.M.: Some results on graph spectra. Linear Algebra Appl. 356, 103–111 (2002)

  13. Horn, R.A.; Johnson, C.R.: Matrix Analysis, 2nd edn. Cambridge University Press, Cambridge (2013)

  14. Levene, R.H.; Oblak, P.; Šmigoc, H.: A Nordhaus–Gaddum conjecture for the minimum number of distinct eigenvalues of a graph. Linear Algebra Appl. 564, 236–263 (2019)

  15. Moon, S.; Park, S.: Upper bound for the number of distinct eigenvalues of a perturbed matrix. Electron. J. Linear Algebra 34, 115–124 (2018)

  16. Rowlinson, P.: The main eigenvalues of a graph: a survey. Appl. Anal. Discrete Math. 1, 455–471 (2007)

  17. Saiago, C.M.: Diagonalizable matrices whose graph is a tree: the minimum number of distinct eigenvalues and the feasibility of eigenvalue assignment. Spec. Matrices 7, 316–326 (2019)

  18. Stanic, Z.: Main eigenvalues of real symmetric matrices with application to signed graphs. Czechoslovak Math. J. 70, 1091–1102 (2020)

  19. Wang, Y.; Wu, G.: Refined bound on the number of distinct eigenvalues of a matrix after perturbation. Linear Multilinear Algebra 68, 903–914 (2020)

  20. Xu, X.: An improved upper bound for the number of distinct eigenvalues of a matrix after perturbation. Linear Algebra Appl. 523, 109–117 (2017)

  21. Zhou, H.: The main eigenvalues of the Seidel matrix. Math. Moravica 12, 111–116 (2008)


Acknowledgements

The author very much appreciates the comments and suggestions made by Professor Frank J. Hall.

Funding

No external funds or grants were received for this work.


Ethics declarations

Conflict of interest

I declare that I have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Marsli, R. A theorem on the number of distinct eigenvalues. Arab. J. Math. (2022). https://doi.org/10.1007/s40065-022-00377-x


Mathematics Subject Classification

  • 15A18
  • 15A42
  • 15B51