Global properties of eigenvalues of parametric rank one perturbations for unstructured and structured matrices

General properties of eigenvalues of $A+\tau uv^*$ as functions of $\tau\in\mathbb{C}$ or $\tau\in\mathbb{R}$ or $\tau=e^{i\theta}$ on the unit circle are considered. In particular, the problem of existence of global analytic formulas for eigenvalues is addressed. Furthermore, the limits of eigenvalues as $\tau\to\infty$ are discussed in detail. The following classes of matrices are considered: complex (without additional structure), real (without additional structure), complex $H$-selfadjoint and real $J$-Hamiltonian.


Introduction
The eigenvalues of matrices of the form $A+\tau uv^*$, viewed as a rank one parametric perturbation of the matrix $A$, have been discussed in a vast literature. We mention the classical works of Lidskii [22], Vishik and Lyusternik [39], as well as the more general treatment of eigenvalues of perturbations of a matrix in the books by Kato [17] and Baumgärtel [3]. Recently, Moro, Burke and Overton returned to the results of Lidskii in a more detailed analysis [31], while Karow obtained a detailed analysis of the situation for small values of the parameter [16] in terms of structured pseudospectra. Obviously, parametric perturbations appear in many different contexts. The works most closely related to the current one concern rank two perturbations by Kula, Wojtylak and Wysoczański [20], matrix pencils by De Terán, Dopico and Moro [6] and Mehl, Mehrmann and Wojtylak [28,29], and matrix polynomials by De Terán and Dopico [7].
While the local behaviour of eigenvalues is fully understood, the global picture still has open ends, cf. e.g. the recent paper by C.K. Li and F. Zhang [21]. The main problem here is that the eigenvalues can be defined neither analytically nor uniquely, even if we restrict the parameter $\tau$ to real numbers. As is well known, the problem does not occur in the case of Hermitian matrices, where an analytic function of $\tau$ with Hermitian values has eigenvalues and eigenvectors which can be arranged so that they are analytic as functions of $\tau$ (Rellich's theorem) [33]. Other cases where the difficulty is circumvented appear, e.g., in a paper by Gingold and Hsieh [13], where it is assumed that all eigenvalues are real, or in the series of papers of de Snoo (with different coauthors) [8,9,36,37], where only one distinguished eigenvalue (the so-called eigenvalue of nonpositive type) is studied for all real values of $\tau$.
Let us review now our current contribution. To understand the global properties with respect to the complex parameter $\tau$ we will consider parametric perturbations of two kinds: $A + tuv^*$, where $t\in\mathbb{R}$, or $A + e^{i\theta}uv^*$, where $\theta\in[0,2\pi)$. The former case was investigated already in our earlier paper [32]; we review the basic notions in Section 2. However, we have not found the latter perturbations in the literature. We study them in Section 3, providing elementary results for further analysis.
Joining these two pictures together leads to new results on the global behaviour of the eigenvalues in Section 4. Our main interest lies in the generic behaviour of the eigenvalues, i.e., we address the question of what happens when a matrix $A$ (possibly very untypical and strange) is fixed and two vectors $u, v$ are chosen numerically (we intentionally do not use the word 'randomly' here). One of our main results (Theorem 11) shows that in this situation the eigenvalues of $A + \tau uv^*$ can be defined globally as analytic functions for real $\tau$. On the contrary, if one restricts to real vectors $u, v$ only, this is no longer possible (Theorem 13).
In Section 5 we study the second main problem of the paper: the limits of eigenvalues for large values of the parameter. Although similar results can be found in the literature, we have decided to provide a full description, for all possible (not only generic) vectors $u, v$. This is motivated by our research in the following Section 6, where we apply these results to various classes of structured matrices. We also note there the classes for which a global analytic definition of eigenvalues is not possible (see Theorem 24). In Section 7 we apply the general results to the class of matrices with nonnegative entries.
Although we focus on parametric rank one perturbations, we mention here that the influence of a possibly non-parametric rank one perturbation on the invariant factors of a matrix has a rich history as well, see, e.g., the papers by Thompson [38] and M. Krupnik [19]. Together with the works by Hörmander and Melin [15], Dopico and Moro [11], Savchenko [34,35] and Mehl, Mehrmann, Ran and Rodman [23,24,25], they constitute a linear algebra basis for our research, developed in our previous paper [32]. What we add to these methods is a portion of complex analysis, by using the function $Q(\lambda) = v^*(\lambda I_n - A)^{-1}u$ and its holomorphic properties. This idea came to us through multiple contacts and collaborations with Henk de Snoo (cf. in particular the line of papers [14,36,37]), for which we express our gratitude here.

Preliminaries
If $X$ is a complex matrix (in particular, a vector) then by $\overline{X}$ we denote the entrywise complex conjugate of $X$; further, we set $X^* = \overline{X}^\top$. We will deal with rank one perturbations with $A\in\mathbb{C}^{n\times n}$, $u, v\in\mathbb{C}^n$. The parameter $\tau$ is a complex variable; we will often write it as $te^{i\theta}$ and fix either one of $t$ and $\theta$. We review now some necessary background and fix the notation. Let a matrix $A$ be given. We say that a property (of a triple $A, u, v$) holds for generic vectors $u, v\in\mathbb{C}^n$ if there exists a finite set of nonzero complex polynomials of $2n$ variables which are zero on all $u, v$ not enjoying the property. Note that the polynomials might depend on the matrix $A$. In some places below a certain property will hold for generic $u,\overline{v}$. This happens as in the current paper we consider the perturbations $uv^*$, while in [32] $uv^\top$ was used (even for complex vectors $u, v$). In any case, i.e., either $u, v$ generic or $u,\overline{v}$ generic, the closure of the set of 'wrong' vectors has an empty interior.
By $m_A(\lambda)$ we denote the minimal polynomial of $A$. Define
$$p_{uv}(\lambda) := m_A(\lambda)\, v^*(\lambda I_n - A)^{-1}u$$
and observe that it is a polynomial, due to the formula for the inverse of a Jordan block (cf. [32]). Let $\lambda_1,\dots,\lambda_r$ be the (mutually different) eigenvalues of $A$, and, corresponding to the eigenvalue $\lambda_j$, let $n_{j,1}\geq n_{j,2}\geq\dots\geq n_{j,\kappa_j}$ be the sizes of the Jordan blocks of $A$. We shall denote the degree of the polynomial $m_A(\lambda)$ by $l$, so $l = \sum_{j=1}^{r} n_{j,1}$.

Then
$$\deg p_{uv}(\lambda) \leq l - 1 \tag{2}$$
and equality holds for generic vectors $u, v\in\mathbb{C}^n$, see [32]. It can also be easily checked (see [34] or [32]) that the characteristic polynomial of $B(\tau) = A + \tau uv^*$ satisfies
$$\det\bigl(\lambda I_n - B(\tau)\bigr) = \frac{\det(\lambda I_n - A)}{m_A(\lambda)}\bigl(m_A(\lambda) - \tau\, p_{uv}(\lambda)\bigr). \tag{3}$$
Therefore, the eigenvalues of $A + \tau uv^*$ which are not eigenvalues of $A$ are roots of the polynomial
$$m_A(\lambda) - \tau\, p_{uv}(\lambda). \tag{4}$$
Note that some eigenvalues of $A$ may be roots of this polynomial as well. Saying this differently, we have the following inclusions of spectra,
$$\Bigl\{\lambda\in\mathbb{C} : \tfrac{\det(\lambda I_n - A)}{m_A(\lambda)} = 0\Bigr\} \subseteq \sigma\bigl(B(\tau)\bigr) \subseteq \sigma(A)\cup\bigl\{\lambda\in\mathbb{C} : m_A(\lambda) - \tau p_{uv}(\lambda) = 0\bigr\}, \qquad \tau\in\mathbb{C}, \tag{5}$$
but each of these inclusions may be strict. Further, let us call an eigenvalue of $A$ frozen (by $u, v$) if it is an eigenvalue of $B(\tau)$ for every complex $\tau$. Directly from (3) we see that each frozen eigenvalue is either a zero of $\det(\lambda I_n - A)/m_A(\lambda)$, in which case we call it structurally frozen, or a common root of $m_A(\lambda)$ and $p_{uv}(\lambda)$, in which case we call it accidentally frozen. Note that, due to a rank argument, $\lambda_j$ is structurally frozen if and only if it has more than one Jordan block in the Jordan canonical form. Although being structurally frozen obviously does not depend on $u, v$, the Jordan form of $B(\tau)$ at these eigenvalues may vary for different $u, v$, which was a topic of many papers, see, e.g., [15,34,32].
In contrast, generically m A (λ) and p uv (λ) do not have a common zero [32], i.e., a slight change of u, v leads to defrosting of λ j (which explains the name accidentally). In spite of this, we still need to tackle such eigenvalues in the course of the paper. The main technical problem is shown by the following, almost trivial, example.
Example 1. Let $A$ be a nilpotent matrix in Jordan canonical form, so that $A$ has a single eigenvalue at $\lambda_1 = 0$ with a possibly nontrivial Jordan structure, and let $u = v = e_1$. The eigenvalues of $B(\tau)$ are clearly $\tau$ and $\lambda_1$, and the eigenvalue $\lambda_1$ is accidentally frozen. Observe that if we define $\lambda_0(\tau) = \tau$ then for $\tau = \lambda_1$ there is a sudden change in the Jordan structure of $B(\tau)$ at $\lambda_0(\tau)$.
To handle the evolution of eigenvalues of $B(\tau)$ without getting into the trouble indicated above we introduce the rational function
$$Q(\lambda) = v^*(\lambda I_n - A)^{-1}u = \frac{p_{uv}(\lambda)}{m_A(\lambda)}. \tag{6}$$
It will play a central role in the analysis. Note that $Q(\lambda)$ is a rational function with poles in the set of eigenvalues of $A$, but not each eigenvalue is necessarily a pole of $Q(\lambda)$. More precisely, if $\lambda_j$ ($j\in\{1,\dots,r\}$) is an accidentally frozen eigenvalue of $A$, then $Q(\lambda)$ does not have a pole at $\lambda_j$ of the same order as the multiplicity of $\lambda_j$ as a root of $m_A(\lambda)$, i.e., in the quotient $Q(\lambda) = \frac{p_{uv}(\lambda)}{m_A(\lambda)}$ there is pole-zero cancellation.
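The characterization of the non-frozen eigenvalues as solutions of $Q(\lambda)=1/\tau$ can be checked numerically; in the following sketch the data $A$, $u$, $v$, $\tau$ are arbitrary illustrative choices (not from the text).

```python
import numpy as np

# Numerical check of the eigenvalue equation Q(lambda) = 1/tau.  The data
# A, u, v, tau below are arbitrary (an assumption for illustration only).
rng = np.random.default_rng(0)
n, tau = 4, 2.0 + 1.0j
A = rng.standard_normal((n, n))
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

def Q(lam):
    # Q(lambda) = v^* (lambda I - A)^{-1} u
    return (v.conj().T @ np.linalg.solve(lam * np.eye(n) - A, u)).item()

B = A + tau * (u @ v.conj().T)
eigs_A = np.linalg.eigvals(A)
moved = [lam for lam in np.linalg.eigvals(B)
         if min(abs(lam - eigs_A)) > 1e-8]   # eigenvalues of B that are not eigenvalues of A
assert moved                                 # generically, all eigenvalues moved
for lam in moved:
    assert abs(Q(lam) - 1.0 / tau) < 1e-8
```

For generic data all $n$ eigenvalues of $B(\tau)$ differ from those of $A$, and each satisfies the scalar equation above.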
Proposition 2. Let $A\in\mathbb{C}^{n\times n}$, let $\tau_0\in\mathbb{C}$, let $u, v\in\mathbb{C}^n$ and assume that $\lambda_0\in\mathbb{C}$ is not an eigenvalue of $A$. Then $\lambda_0$ is an eigenvalue of $A + \tau_0uv^*$ of algebraic multiplicity $\kappa\in\{1,2,\dots\}$ if and only if
$$Q(\lambda_0) = \frac{1}{\tau_0}, \qquad Q^{(j)}(\lambda_0) = 0 \ \text{ for } j = 1,\dots,\kappa-1, \qquad Q^{(\kappa)}(\lambda_0)\neq 0. \tag{7}$$
If this happens, then $\lambda_0$ has geometric multiplicity one, i.e., $A + \tau_0uv^*$ has a Jordan chain of size $\kappa$ at $\lambda_0$. Finally, $\lambda_0$ is not an eigenvalue of $A + \tau_1uv^*$ for all $\tau_1\in\mathbb{C}\setminus\{\tau_0\}$.
Remark 4. One may also be tempted to define the eigenvalues via solving the equation $Q(\lambda) = 1/\tau$ at $\lambda_0$ being an accidentally frozen eigenvalue of $A$ for which $Q(\lambda)$ does not have a pole at $\lambda_0$. This would be, however, a dangerous procedure, as $\lambda_0$ might get involved in a larger Jordan chain. For example, let
$$B(\tau) = \begin{bmatrix} 1 & 1 \\ 0 & \tau \end{bmatrix}$$
with an accidentally frozen eigenvalue $1$ and $Q(\lambda) = 1/\lambda$. Here for $\tau = 1$ we get a Jordan block of size $2$, but clearly the eigenvalues in a neighbourhood of $\lambda_0 = 1$ and $\tau_0 = 1$ do not behave as $1$ plus the square roots of $\tau - 1$. For this reason we will avoid the accidentally frozen eigenvalues.
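The $2\times 2$ matrix of Remark 4 can be examined directly; the sketch below confirms that the eigenvalues $1$ and $\tau$ are analytic in $\tau$ even though a Jordan block of size two forms at $\tau = 1$.

```python
import numpy as np

# B(tau) = [[1, 1], [0, tau]] from Remark 4: the accidentally frozen
# eigenvalue 1 stays put and the other eigenvalue is exactly tau, so both
# depend analytically on tau, although at tau = 1 a Jordan block forms.
def B(tau):
    return np.array([[1.0, 1.0], [0.0, tau]])

for tau in (0.5, 0.9, 1.1, 2.0):
    assert np.allclose(sorted(np.linalg.eigvals(B(tau)).real),
                       sorted([1.0, tau]))

# At tau = 1 the matrix is similar to a single 2x2 Jordan block:
assert np.linalg.matrix_rank(B(1.0) - np.eye(2)) == 1
```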
Remark 5. Note that in case $m_A$ and $p_{uv}$ have no common zeroes, i.e., there are no accidentally frozen eigenvalues, $Q'(\lambda)$ can be expressed in terms of $m_A$ and $p_{uv}$ as follows:
$$Q'(\lambda) = \frac{p_{uv}'(\lambda)m_A(\lambda) - p_{uv}(\lambda)m_A'(\lambda)}{m_A(\lambda)^2},$$
where cancellation of roots between numerator and denominator occurs at an eigenvalue of $A$ precisely when corresponding to that eigenvalue there is a Jordan block of size bigger than one.
Proof of Proposition 2. For the proof of the first statement we start from the definition of $Q(\lambda)$. Note that $m_A(\lambda_0)$ is necessarily nonzero, and so $p_{uv}(\lambda_0)$ is nonzero as well. If $\lambda_0$ is an eigenvalue of $B(\tau_0)$ which is not an eigenvalue of $A$, then, since $p_{B(\tau_0)}(\lambda_0) = 0$, we have from (6) that $Q(\lambda_0) = \frac{1}{\tau_0}$, which proves the first equation in (7).
For the proof of the second statement, note that as $\lambda_0 I_n - A$ is invertible, any rank one perturbation of $\lambda_0 I_n - A$ can have only a one dimensional kernel. Therefore, the Jordan structure of the perturbation at $\lambda_0$ is fixed. The last statement for $\tau_1 = 0$ follows from the assumption that $\lambda_0\notin\sigma(A)$, and for $\tau_1\notin\{0,\tau_0\}$ directly from (7).
The statements in Proposition 2 can also be seen by viewing $1 - \tau Q(\lambda) = 1 - \tau v^*(\lambda I_n - A)^{-1}u$ as a realization of the (scalar) rational function $1 - \tau Q(\lambda)$. From that point of view the connection between poles of the function and eigenvalues of $A$, respectively zeroes of the function and eigenvalues of $B(\tau) = A + \tau uv^*$, is well-known. For an in-depth analysis of this connection, even for matrix-valued rational functions, see [1], Chapter 8. We provided above an elementary proof of the scalar case for the reader's convenience.
Note the following example, now more involved than the one in Remark 4.
Example 6. In this example we return to the consideration of accidentally frozen eigenvalues. Let
$$A = \begin{bmatrix} 1 & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & 2 \end{bmatrix}, \qquad u = v = e_1.$$
Then we have
$$m_A(\lambda) = (\lambda-1)^2(\lambda-2), \qquad p_{uv}(\lambda) = (\lambda-1)(\lambda-2), \qquad Q(\lambda) = \frac{1}{\lambda-1}, \tag{8}$$
and
$$B(\tau) = \begin{bmatrix} 1+\tau & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & 2 \end{bmatrix},$$
which has eigenvalues $1$, $2$ and $\tau+1$.
Note that both $1$ and $2$ are, by definition, accidentally frozen eigenvalues, although their character is rather different. Let us consider Proposition 2 for this example. Note that $Q'(\lambda)$ has no zeroes, which tells us that there are no double eigenvalues of $B(\tau)$ which are not eigenvalues of $A$. However, note that the zeros of $m_A(\lambda)$ and $p_{uv}(\lambda)$ are not disjoint. In particular,
$$p_{uv}'(\lambda)m_A(\lambda) - p_{uv}(\lambda)m_A'(\lambda) = -(\lambda-1)^2(\lambda-2)^2,$$
which detects the double eigenvalue of $B(0)$ at $\lambda_1 = 1$ and the double semisimple eigenvalue of $B(1)$ at $\lambda_2 = 2$; however, as can be seen from (8), the roots of this polynomial are cancelled by the roots of $m_A^2(\lambda)$.

Angular parameter
In this section we will study the perturbations of the form
$$B(te^{i\theta}) = A + te^{i\theta}uv^*, \qquad \theta\in[0,2\pi),$$
where $t > 0$ is a parameter. More precisely, we will be interested in the evolution of the sets
$$\sigma(A, u, v; t) := \bigcup_{\theta\in[0,2\pi)}\Bigl(\sigma\bigl(A + te^{i\theta}uv^*\bigr)\setminus\sigma(A)\Bigr)$$
with the parameter $t > 0$. It should be noted that the sets $\sigma(A, u, v; t)$ are strongly related to the pseudospectral sets as introduced in, e.g., [16], Definition 2.1. In fact, they can be viewed as the boundaries of pseudospectral sets for the special case of rank one perturbations. The interest in [16], see in particular the beautiful result in Theorem 4.1 there, is in the small $t$ asymptotics of these sets. Our interest below is hence more in the intermediate values of $t$ and in the large $t$ asymptotics of these sets.
By $z_1,\dots,z_d$ we denote the (mutually different) zeroes of $Q'(\lambda)$; note that some of them might happen to be accidentally frozen eigenvalues (a slight modification of Example 6 is left to the reader, see also Remark 9 below). We define $t_j$ as
$$t_j = \frac{1}{|Q(z_j)|}, \qquad j = 1,\dots,d.$$
We group some properties of the sets $\sigma(A, u, v; t)$ in one theorem. Below, by a smooth closed curve we mean a $C^\infty$-diffeomorphic image of a circle.
Theorem 7. Let A ∈ C n×n and let u, v ∈ C n be two nonzero vectors, then the following holds.
(i) For $t > 0$, $t\neq t_j$ ($j = 1,\dots,d$) the set $\sigma(A, u, v; t)$ consists of a union of smooth closed algebraic curves that do not intersect mutually. (ii) For $t = t_j$ ($j = 1,\dots,d$) the set $\sigma(A, u, v; t)$ is locally diffeomorphic to an interval, except at the intersection points at those $z_i$ for which $t_j = 1/|Q(z_i)|$ (possibly there are several such $z_i$'s). (iii) For generic $u, v\in\mathbb{C}^n$ and for all $j = 1,\dots,d$ the point $z_j$ is a double eigenvalue of $A + \tau uv^*$ for $\tau = 1/Q(z_j)$. Two of the curves $\sigma(A, u, v; t)$ meet for $t = t_j$ at the point $z_j$. At the point $z_j$ these curves are not differentiable: they make a right angle corner, and meet each other at right angles as well. (iv) Each $\lambda_0\in\mathbb{C}$ which is neither an eigenvalue of $A$ nor a zero of $Q(\lambda)$ belongs to precisely one of the sets $\sigma(A, u, v; t)$, $t > 0$, namely the one with $t = 1/|Q(\lambda_0)|$.

Proof. Statements (i) and (ii) become clear if one observes that
$$\sigma(A, u, v; t) = \bigl\{z\in\mathbb{C}\setminus\sigma(A) : |1/Q(z)| = t\bigr\},$$
i.e., each of these sets is a level set of the modulus of the rational function $1/Q(z)$. Since these level sets can also be written as the sets of all points $z\in\mathbb{C}$ for which $|m_A(z)|^2 = t^2|p_{uv}(z)|^2$, it is clear that for each $t$ they are algebraic curves. For $t\neq t_j$ ($j = 1,\dots,d$) the curves have no self-intersection and hence are smooth.
Let us now prove (iii). First note that for generic $u, v\in\mathbb{C}^n$ there are no accidentally frozen eigenvalues, as remarked at the end of Section 2. Hence, every eigenvalue of $A + \tau uv^*$ of multiplicity $\kappa$ which is not an eigenvalue of $A$ is necessarily a zero of $Q(\lambda) - 1/\tau$ of multiplicity $\kappa$, see Proposition 2. However, by Theorem 5.1 of [32], for generic $u, v\in\mathbb{C}^n$ all eigenvalues of $A + \tau uv^*$ which are not eigenvalues of $A$ are of multiplicity at most two, and by Proposition 2 the geometric multiplicity is one. Therefore the meeting points are at those $z_j$ with $Q'(z_j) = 0$, $Q''(z_j)\neq 0$. The behaviour of the eigenvalue curves concerning right angle corners follows from the local theory of the perturbation of an eigenvalue of geometric multiplicity one and algebraic multiplicity two for small values of $t - t_j$ (see, e.g., the results of [22], but in particular, because of the connection with pseudospectra, see [16]).
To see (iv), let $\lambda_0\in\mathbb{C}$ be neither an eigenvalue of $A$ nor a zero of $Q(\lambda)$. Then $\lambda_0$ belongs to $\sigma(A, u, v; t)$ precisely for $t = 1/|Q(\lambda_0)|$, cf. [16]. To see (vi), note that $1/|Q(\lambda)|$, as the absolute value of a holomorphic function, does not have any local extreme points on $\mathbb{C}\setminus p_{uv}^{-1}(0)$, and it converges to infinity as $|\lambda|\to\infty$.
In Section 5 we will study in detail the rate of convergence in point (v) above.
In Figure 1 one may find the graph of the corresponding function $|1/Q(\lambda)|$ and a couple of curves $\sigma(A, u, v; t)$ at values of $t$ where double eigenvalues occur. Observe that these curves are often called level curves or contour plots of the function $|1/Q(\lambda)|$.
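The level-set description of $\sigma(A,u,v;t)$ can be sampled numerically; in the sketch below $A$, $u$, $v$ and $t$ are arbitrary illustrative choices (not the data behind Figure 1).

```python
import numpy as np

# Sampling sigma(A, u, v; t): for each theta, the eigenvalues of
# A + t e^{i theta} u v^* that are not eigenvalues of A lie on the level set
# |1/Q(z)| = t.  The data A, u, v, t are arbitrary (illustration only).
rng = np.random.default_rng(1)
n, t = 3, 0.7
A = rng.standard_normal((n, n))
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
eigs_A = np.linalg.eigvals(A)

def Q(z):
    return (v.T @ np.linalg.solve(z * np.eye(n) - A, u)).item()

for theta in np.linspace(0.0, 2 * np.pi, 24, endpoint=False):
    Bt = A + t * np.exp(1j * theta) * (u @ v.T)
    for lam in np.linalg.eigvals(Bt):
        if min(abs(lam - eigs_A)) > 1e-8:     # lam is not an eigenvalue of A
            assert abs(abs(1.0 / Q(lam)) - t) < 1e-6
```

Plotting the sampled points for several values of $t$ reproduces level curves of the kind shown in Figure 1.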
Remark 9. Observe that one may easily construct examples with $z_1,\dots,z_d$ given. Let $A$, $u$ and $v$ be chosen, with $a_1,\dots,a_n\in\mathbb{C}\setminus\{0\}$, in such a way that for $\tau = 1$ the matrix $B(\tau) = A + \tau uv^*$ is equal to the $n\times n$ Jordan block with eigenvalue zero, and hence has an eigenvalue of multiplicity $n$. By a construction similar to Example 6 we may also make this eigenvalue accidentally frozen.

Eigenvalues as global functions of the parameter
We return now to the problem of defining the eigenvalues as functions of the parameter. Recall that l stands for the degree of the minimal polynomial of A. We start with the case where we consider the parameter τ to be real.
Theorem 11. Let A ∈ C n×n and u ∈ C n \ {0} be fixed. Then for all v ∈ C n except some closed set with empty interior the following holds.
(i) The eigenvalues of $B(\tau) = A + \tau uv^*$ which are not eigenvalues of $A$ can be defined uniquely (up to ordering) as analytic functions $\lambda_1(\tau),\dots,\lambda_l(\tau)$ of the parameter $\tau\in(0,+\infty)$. (ii) The remaining part of the spectrum of $B(\tau)$ consists of structurally frozen eigenvalues of $A$, i.e., there are no accidentally frozen eigenvalues (see formula (5) and the paragraphs following it for definitions).

Proof. First let us write explicitly for which $u, v$ all the statements will hold. Due to Proposition 2 and Remark 3 the necessary and sufficient condition for this is the following: there are no accidentally frozen eigenvalues and $Q(z_j)\notin\mathbb{R}$ for all zeros $z_j$ of $Q'(\lambda)$. We will now show that given arbitrary $u_0, v_0$ which do not satisfy the above condition one may construct $u, v$, lying arbitrarily close to $u_0, v_0$, such that the condition holds on some open neighbourhood of $u, v$. We will do this in two steps. First let us choose $\tilde u, \tilde v$ such that there are no accidentally frozen eigenvalues, i.e., there are no common roots of $m_A(\lambda)$ and $p_{\tilde u\tilde v}(\lambda)$. By [32] one may pick $\tilde u$ and $\tilde v$ arbitrarily close to $u_0, v_0$ and the desired property will hold in some small neighbourhood of $\tilde u, \tilde v$. Furthermore, $\tilde u, e^{i\theta}\tilde v$ will also enjoy this property for all $\theta\in(-\pi,\pi)$. Note that one may find $\tilde\theta\neq 0$ arbitrarily small, so that with $v = e^{i\theta}\tilde v$ ($|\theta - \tilde\theta|$ small enough) and $u = \tilde u$ one has $Q(z_i)\notin\mathbb{R}$ for $i = 1,\dots,d$.
Observe that the statement is essentially stronger and the proof is much easier than in Theorem 6.2 of [32].
Proof. The equivalent condition for all the statements is in this case: there are no accidentally frozen eigenvalues and $|Q(z_j)|\neq 1$ for all zeros $z_j$ of $Q'(\lambda)$. Hence, in the last step of the proof we need to replace $\tilde v$ by $t\tilde v$ with $t > 0$ small enough.
However, note that if we replace the complex numbers by the real numbers the statement is false, as the following theorem shows.
Theorem 13. Let $A\in\mathbb{R}^{n\times n}$ and $u, v\in\mathbb{R}^n$ be such that for some $\tau_0 > 0$ an analytic definition of eigenvalues of $A + \tau uv^\top$ is not possible due to
$$Q(x) = \frac{1}{\tau_0}, \qquad Q'(x) = 0, \qquad Q''(x)\neq 0$$
for some $x\in\mathbb{R}$ which is not an eigenvalue of $A$, cf. Remark 3. Then for all $\tilde A\in\mathbb{R}^{n\times n}$, $\tilde u\in\mathbb{R}^n$, $\tilde v\in\mathbb{R}^n$ with $\|\tilde v - v\|$, $\|\tilde u - u\|$ and $\|\tilde A - A\|$ sufficiently small the analytic definition of eigenvalues of $\tilde A + \tau\tilde u\tilde v^\top$ is not possible due to the existence of $\tilde x\in\mathbb{R}$, $\tilde\tau_0 > 0$, depending continuously on $\tilde A, \tilde u, \tilde v$, with
$$\tilde Q(\tilde x) = \frac{1}{\tilde\tau_0}, \qquad \tilde Q'(\tilde x) = 0, \qquad \tilde Q''(\tilde x)\neq 0,$$
where $\tilde Q(z)$ corresponds to the perturbation $\tilde A + \tau\tilde u\tilde v^\top$ as in (6).

Remark 14.
To give a punch line to Theorem 13 we make the obvious remark that $A, u, v$ satisfying the assumptions do exist. For each such $A$ the set of pairs of vectors $u, v\in\mathbb{R}^n$ for which a double eigenvalue appears has a nonempty interior in $\mathbb{R}^n\times\mathbb{R}^n$, contrary to the complex case discussed in Theorem 11.
We note another reason why the eigenvalues cannot be defined globally analytically for real matrices.
Proposition 15. Assume that the matrix $A\in\mathbb{R}^{n\times n}$ has no real eigenvalues and let $u, v$ be two arbitrary nonzero real vectors. Then for some $\tau_0\in\mathbb{R}\setminus\{0\}$ an analytic definition of eigenvalues of $A + \tau uv^\top$ is not possible due to
$$Q(x) = \frac{1}{\tau_0}, \qquad Q'(x) = 0$$
for some $x\in\mathbb{R}$, cf. Remark 3.
Proof. Note that $Q(\lambda)$ is real and differentiable on the real line, due to the assumptions on $A$. As $Q(x)\to 0$ for $|x|\to\infty$, $x\in\mathbb{R}$, the function $Q$ attains a nonzero local extremum at some $x\in\mathbb{R}$ (provided $Q$ does not vanish identically on the real line), where $Q'(x) = 0$, and one may take $\tau_0 = 1/Q(x)$.
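A concrete instance of Proposition 15 (an assumed example, not from the text): for the rotation generator $A$ and $u = v = e_1$ one has $Q(x) = x/(x^2+1)$, with a real extremum at $x = 1$, $Q(1) = 1/2$, so $\tau_0 = 2$ produces a double eigenvalue at $1$.

```python
import numpy as np

# Proposition 15 in action for A = [[0, 1], [-1, 0]] (no real eigenvalues)
# and u = v = e_1; these data are an assumed illustration.  Here
# Q(x) = x / (x^2 + 1), with a real extremum at x = 1 where Q(1) = 1/2,
# so tau_0 = 2 yields a double (defective) eigenvalue at 1.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
u = v = np.array([[1.0], [0.0]])

def Q(x):
    return (v.T @ np.linalg.solve(x * np.eye(2) - A, u)).item()

assert abs(Q(1.0) - 0.5) < 1e-12
h = 1e-6
assert abs((Q(1.0 + h) - Q(1.0 - h)) / (2 * h)) < 1e-6   # Q'(1) = 0

B = A + 2.0 * (u @ v.T)                # tau_0 = 1/Q(1) = 2
assert max(abs(np.linalg.eigvals(B) - 1.0)) < 1e-6       # double eigenvalue 1
```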

The eigenvalues of A + τ uv * for large |τ |
We shall also be concerned with the asymptotic shape of the curves σ(A, u, v; t). The proof of the following result was given in [5]: let A be an n × n complex matrix, let u, v be generic complex n-vectors. Asymptotically, as t → ∞, these curves are circles, one with radius going to infinity centered at the origin, and the others with radius going to zero, and centers at the roots of p uv (λ). The result will be restated in a more precise form below, in Theorem 17, part (v). For this we first prove the following lemma.
Proof. Recall that $p_{uv}(\lambda) = m_A(\lambda)\,v^*(\lambda I_n - A)^{-1}u$. Writing $m_A(\lambda) = \sum_{j=0}^{l}m_j\lambda^j$ and expanding $(\lambda I_n - A)^{-1}$ in a Laurent series for $|\lambda| > \|A\|$ we obtain
$$p_{uv}(\lambda) = \sum_{j=0}^{l} m_j\lambda^j \sum_{k=0}^{\infty}\frac{v^*A^ku}{\lambda^{k+1}} = \sum_{j=0}^{l}\sum_{k=0}^{\infty} m_j\, v^*A^ku\, \lambda^{j-k-1}.$$
Put $k - j - 1 = -i - 1$, i.e., $k = j - i - 1$, and interchange the order of summation to see that
$$p_{uv}(\lambda) = \sum_{i=-\infty}^{l-1}\lambda^i \sum_{j=\max(i+1,0)}^{l} m_j\, v^*A^{j-i-1}u.$$
However, $p_{uv}(\lambda)$ is a polynomial in $\lambda$, hence the sum from $i = -\infty$ to $-1$ vanishes, and we arrive at formula (13).
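This coefficient expansion, namely $p_{uv}(\lambda) = \sum_{i=0}^{l-1}\lambda^i\sum_{j=i+1}^{l} m_j\,v^*A^{j-i-1}u$ with $m_A(\lambda)=\sum_j m_j\lambda^j$, can be verified numerically; the sketch below assumes a generic random $A$ (hence nonderogatory, so $m_A$ coincides with the characteristic polynomial).

```python
import numpy as np

# Check of p_uv(lambda) = sum_{i=0}^{l-1} lambda^i sum_{j=i+1}^{l} m_j v^T A^{j-i-1} u,
# where m_A(lambda) = sum_j m_j lambda^j.  A random (generic) A is assumed,
# so that A is nonderogatory and m_A equals the characteristic polynomial.
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

char = np.poly(A)                # coefficients, highest degree first, leading 1
m = char[::-1]                   # m[j] = coefficient of lambda^j; here l = n
l = n
coeffs = [sum(m[j] * (v.T @ np.linalg.matrix_power(A, j - i - 1) @ u).item()
              for j in range(i + 1, l + 1))
          for i in range(l)]     # coefficients of p_uv, lowest degree first

def p_uv(lam):
    # p_uv(lambda) = m_A(lambda) * v^T (lambda I - A)^{-1} u
    return np.polyval(char, lam) * (v.T @ np.linalg.solve(lam * np.eye(n) - A, u)).item()

for lam in (0.3, -1.2, 2.5):
    assert abs(p_uv(lam) - sum(c * lam**i for i, c in enumerate(coeffs))) < 1e-8
```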
Next, we analyze the roots of the polynomial $m_A(\lambda) - \tau\,p_{uv}(\lambda)$ as $\tau\to\infty$. We have already shown in [32] that if $u^*v\neq 0$, then $l-1$ of these roots will approximate the roots of $p_{uv}(\lambda)$, while one goes to infinity. The condition $u^*v\neq 0$ obviously holds for generic $u, v$; however, the next theorem presents the full picture in view of later applications to structured matrices.
Dividing by $\lambda^{l-\kappa-1}$ in numerator and denominator we arrive at the desired limit, which concludes the proof of part (iii).
Note that also part (iv) follows directly from part (iii) combined with the continuity of the eigenvalues as a function of θ (even when taking θ ∈ R, rather than restricting to θ ∈ [0, 2π)).
For part (v) and the remaining parts of part (iii) in the general case, recall that the eigenvalues of $A + \tau uv^*$ which are not eigenvalues of $A$ are (among the) roots of $m_A(\lambda) - \tau p_{uv}(\lambda)$, which are the same as the roots of the polynomial $\frac{1}{\tau}m_A(\lambda) - p_{uv}(\lambda)$. First we take $\theta = 0$, so $\tau = t$.
Put $s = 1/t$, and consider the polynomial $sm_A(\lambda) - p_{uv}(\lambda)$ as a small perturbation of the polynomial $-p_{uv}(\lambda)$. By the general theory concerning the behavior of the roots of a polynomial under such a perturbation (see, e.g., [18], [3] Appendix, or Puiseux's original 1850 paper), for small $s$ the roots of $sm_A(\lambda) - p_{uv}(\lambda)$ near the roots of $p_{uv}(\lambda)$ are locally described by a Puiseux series of the form
$$\zeta_j + c_{1j}s^{\frac{1}{k_j}} + c_{2j}s^{\frac{2}{k_j}} + \cdots, \qquad j = 1,\dots,\nu,$$
with $c_{1j}\neq 0$. Here $k_j$ is the multiplicity of $\zeta_j$ as a root of $p_{uv}(\lambda)$, and there are $k_j$ roots of $sm_A(\lambda) - p_{uv}(\lambda)$ near $\zeta_j$.
Next we do not consider $t\in\mathbb{R}$ but $\tau = te^{i\theta}\in\mathbb{C}$. Replacing $u$ by $e^{i\theta}u$ we see that the roots of $m_A(\lambda) - \tau p_{uv}(\lambda)$ for large $\tau$ near $\zeta_j$ behave as $\zeta_j + c_{1j}\tau^{-1/k_j}$. For fixed $|\tau| = r$ these $k_j$ roots near $\zeta_j$ trace out a curve
$$\Gamma_j(\theta) = \zeta_j + c_{1,j}\,r^{-1/k_j}\exp(i\theta) + o\bigl(r^{-1/k_j}\bigr), \qquad r\to\infty.$$
We shall make these arguments much more precise as follows. Remember that an eigenvalue $\lambda_0$ of $A + \tau uv^*$ which is not also an eigenvalue of $A$ is a solution of $Q(\lambda) = 1/\tau$. Consider large values of $|\tau|$ and consider also the large eigenvalues of $A + \tau uv^*$; for instance, for $\tau$ large enough there is at least one eigenvalue with $|\lambda| > \|A\|$. Then $\lambda$ satisfies
$$\frac{1}{\tau} = Q(\lambda) = \sum_{k=\kappa}^{\infty}\frac{v^*A^ku}{\lambda^{k+1}} \tag{15}$$
by the definition of $\kappa$. Hence
$$\lambda^{\kappa+1} = \tau\, v^*A^\kappa u\,\bigl(1 + O(\lambda^{-1})\bigr),$$
and so
$$\lambda = \bigl(\tau\, v^*A^\kappa u\bigr)^{\frac{1}{\kappa+1}}\bigl(1 + o(1)\bigr).$$
Again, we can be much more precise than this: we know that $\lambda$ as a function of $\tau$ has a Puiseux series expansion, and inserting it into the equation (15) one checks that its coefficients are determined recursively. This completes the proof of part (iii), and gives the precise form of $\Gamma_{\nu+1}(\theta)$ for $r = |\tau|$ large enough.

Next we consider the eigenvalues of $A + \tau uv^*$ which are close to $\zeta_j$ for large $\tau$. Recall that $\zeta_j$ is a root of $p_{uv}(\lambda)$, so $m_A(\zeta_j)\,v^*(\zeta_jI_n - A)^{-1}u = 0$. If $\zeta_j$ were a zero of $m_A(\lambda)$, then $\zeta_j$ would be an accidentally frozen eigenvalue for $A, u, v$ and so an eigenvalue of $B(\tau)$ for all $\tau$. Otherwise, $\zeta_j$ is not an eigenvalue of $A$, and we have
$$Q(\lambda) = v^*(\lambda I_n - A)^{-1}u = \sum_{k=0}^{\infty}(\zeta_j - \lambda)^k\, v^*(\zeta_jI_n - A)^{-k-1}u.$$
Since the first term is zero, we have
$$Q(\lambda) = \sum_{k=1}^{\infty}(\zeta_j - \lambda)^k\, v^*(\zeta_jI_n - A)^{-k-1}u.$$
Again use the fact that any eigenvalue of $A + \tau uv^*$ which is not an eigenvalue of $A$ satisfies $Q(\lambda) = 1/\tau$. For the moment, let us denote $v^*(\zeta_jI_n - A)^{-k}u$ by $a_{j,k}$. So, if the root $\zeta_j$ of $p_{uv}(\lambda)$ has multiplicity $k_j$, then
$$a_{j,1} = \dots = a_{j,k_j} = 0, \qquad a_{j,k_j+1}\neq 0,$$
and
$$\frac{1}{\tau} = a_{j,k_j+1}(\zeta_j - \lambda)^{k_j} + a_{j,k_j+2}(\zeta_j - \lambda)^{k_j+1} + \cdots.$$
We know from the considerations in an earlier paragraph of the proof that $\lambda$ can be expressed as a Puiseux series in $\tau^{-1}$, let us say
$$\lambda = \zeta_j + c_{1,j}\tau^{-\frac{1}{k_j}} + c_{2,j}\tau^{-\frac{2}{k_j}} + \cdots.$$
Inserting this in the above equation and equating terms of equal powers in $\tau$ gives
$$(-1)^{k_j}c_{1,j}^{k_j}\,a_{j,k_j+1} = 1,$$
and using this we can derive a formula for $c_{2,j}$, which after some computation becomes
$$c_{2,j} = \frac{a_{j,k_j+2}\,c_{1,j}^2}{k_j\,a_{j,k_j+1}}.$$
Let us denote $v^*(\zeta_jI_n - A)^{-(k_j+1)}u = \rho_je^{i\theta_j}$, $j = 1,\dots,\nu$. Then $|c_{1,j}| = \rho_j^{-1/k_j}$, and the $k_j$ eigenvalues near $\zeta_j$ trace out a curve $\Gamma_j(\theta)$ which is, up to an error of order $O\bigl(r^{-2/k_j}\bigr)$, a circle around $\zeta_j$ of radius $(\rho_j r)^{-1/k_j}$. This completes the proof of part (v).
Consider $\tilde B(s) := sA + uv^*$, $s = 1/\tau$, as a perturbation of $uv^*$. Note that $uv^*$ has the eigenvalues $v^*u$ and $0$, the latter with multiplicity $n-1$, and that generically, when $v^*u\neq 0$, $uv^*$ is diagonalizable. In the non-generic case, when $v^*u = 0$, the matrix $uv^*$ has only the eigenvalue $0$, with one Jordan block of size two and $n-2$ Jordan blocks of size one.
First we take $\theta = 0$, so $\tau = t$ and $s = \frac{1}{\tau} = \frac{1}{t}$. For $j = 2,\dots,n$ we have an expansion
$$\lambda_j(t) = k_{1,j} + \frac{1}{t}k_{2,j} + \cdots,$$
and the same works for $j = 1$, where
$$\lambda_1(t) = t\,v^*u + k_{1,1} + \frac{1}{t}k_{2,1} + \cdots.$$
Now consider the limit of $\lambda_j(t)$ as $t\to\pm\infty$ for $j = 2,\dots,n$. By (ii) this is one of the roots of $p_{uv}(\lambda)$ or an eigenvalue of $A$. Generically, the roots of $p_{uv}(\lambda)$ will be simple. After possibly rearranging the eigenvalues we may assume that for $j = 2,\dots,l$ the eigenvalue $\lambda_j(t)$ converges to one of the roots of $p_{uv}(\lambda)$, while for $j = l+1,\dots,n$ the eigenvalue $\lambda_j(t)$ is constantly equal to an eigenvalue of $A$. Thus
$$\lambda_j(t) = k_{1,j} + \frac{1}{t}k_{2,j} + O\Bigl(\frac{1}{t^2}\Bigr), \qquad j = 2,\dots,l,$$
where $k_{1,j}$ is either one of the roots of $p_{uv}(\lambda)$ or an eigenvalue of $A$. Next we do not consider $t\in\mathbb{R}$ but $\tau = te^{i\theta}\in\mathbb{C}$. Make the following transformation: $\sigma(A + \tau uv^*) = \sigma(A + te^{i\theta}uv^*) = \sigma(A + t\tilde uv^*)$, where $\tilde u = e^{i\theta}u$. Note that $p_{uv}(\lambda)$ and $p_{\tilde uv}(\lambda)$ only differ by the constant factor $e^{i\theta}$ and so they have the same roots. Applying the arguments from the previous paragraphs we obtain
$$\lambda_j(te^{i\theta}) = k_{1,j} + \frac{e^{-i\theta}}{t}k_{2,j} + O\Bigl(\frac{1}{t^2}\Bigr), \qquad j = 2,\dots,l.$$
Consider for fixed $t$ the curve $\zeta_{2,t} = \{\lambda_j(te^{i\theta}) \mid j = 2,\dots,n,\ 0\leq\theta<2\pi\}$. The arguments above show that asymptotically $\zeta_{2,t}$ is a circle with radius $\frac{1}{t}|k_{2,j}|$ centered at $k_{1,j}$, which is a root of $p_{uv}(\lambda)$.
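The two regimes described above (one eigenvalue of order $\tau\,v^*u$, the rest stabilising at rate $1/t$) can be observed numerically; the data below are arbitrary generic choices.

```python
import numpy as np

# Large-tau behaviour for generic data (so v^T u != 0): one eigenvalue of
# A + tau u v^T grows like tau v^T u, the other n-1 eigenvalues stabilise.
# The data below are arbitrary illustrative choices.
rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
vtu = (v.T @ u).item()
assert abs(vtu) > 1e-3                       # generic: v^T u != 0

def eigs(tau):
    return np.linalg.eigvals(A + tau * (u @ v.T))

big = max(eigs(1e7), key=abs)
assert abs(big / (1e7 * vtu) - 1.0) < 1e-2   # the escaping eigenvalue ~ tau v^T u

def finite(tau):                             # the bounded eigenvalues
    return sorted((lam for lam in eigs(tau) if abs(lam) < 1e3),
                  key=lambda z: (z.real, z.imag))

f1, f2 = finite(1e7), finite(1e8)
assert len(f1) == n - 1 == len(f2)
assert all(abs(a - b) < 1e-3 for a, b in zip(f1, f2))   # the limits exist
```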

Remark 19.
Note that if (14) holds for $\kappa = l-1$ then $p_{uv}(\lambda)\equiv 0$ and by (4) the characteristic polynomial of the perturbed matrix coincides with the characteristic polynomial of $A$.
Remark 20. Consider the case $\kappa = 2$. Then the eigenvalues that go to infinity will trace out a circle, but each of them only traces out half of it. In addition, the speed with which the eigenvalues go to infinity is considerably slower than when $\kappa = 1$.
Example 21. As an extreme example, consider $A = J_n(0)$, the $n\times n$ Jordan block with zero eigenvalue, and let $u = e_n$ and $v = e_1$, where $e_j$ is the $j$-th unit vector. Then $p_{uv}(\lambda) = 1$, and the eigenvalues of $A + \tau uv^*$ are the $n$-th roots of $\tau$: writing $\tau = re^{i\theta}$,
$$\lambda_k(\tau) = \sqrt[n]{r}\,e^{i\frac{\theta + 2k\pi}{n}}, \qquad k = 1,\dots,n,$$
and
$$\frac{d\lambda}{d\tau} = \frac{1}{n}\tau^{\frac{1}{n}-1} = \frac{e_1^*A^{n-1}e_n}{n}\cdot\frac{1}{\lambda^{n-1}},$$
as predicted by the theorem.
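A direct check of Example 21, here for $n = 5$ and an arbitrarily chosen complex $\tau$:

```python
import numpy as np

# Example 21 for n = 5: A = J_5(0), u = e_5, v = e_1, so A + tau u v^* is a
# companion-type matrix and its eigenvalues are the n-th roots of tau.
n, tau = 5, 3.0 + 4.0j
A = np.diag(np.ones(n - 1), 1)        # Jordan block J_n(0)
B = A.astype(complex)
B[n - 1, 0] = tau                     # adds tau * e_n e_1^*
lams = np.linalg.eigvals(B)
assert np.allclose(lams**n, tau)      # every eigenvalue satisfies lambda^n = tau
assert np.allclose(abs(lams), abs(tau) ** (1.0 / n))   # all on one circle
```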
Example 22. Consider $A = I_2$, and the same $u$ and $v$ as in the previous example. In this case $v^*A^ku = 0$ for all $k$. Consequently, as is also immediate by looking at the matrix, none of the eigenvalues moves. More generally, this happens as soon as $A$ is triangular and $uv^*$ is strictly triangular of the same orientation.
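Example 22 in matrix form:

```python
import numpy as np

# Example 22: A = I_2, u = e_2, v = e_1, so v^* A^k u = 0 for every k and no
# eigenvalue moves, whatever tau is.
for tau in (0.5, 10.0, -3.0):
    B = np.eye(2)
    B[1, 0] = tau                     # B = I_2 + tau * e_2 e_1^T
    assert np.allclose(np.linalg.eigvals(B), [1.0, 1.0])
```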

Structured matrices
Generic rank one perturbations for several classes of structured matrices were studied intensively over the last decade. We refer the reader to: [23] for complex $J$-Hamiltonian and complex $H$-symmetric matrices, [24] for complex $H$-selfadjoint matrices including the effects on the sign characteristic, [26] for complex $H$-orthogonal and complex $J$-symplectic as well as complex $H$-unitary matrices, [27] for the real cases including the effects on the sign characteristic, and [12] for the case of $H$-positive real matrices. In [2] higher rank perturbations of structured matrices were considered. Another type of structure was treated in [4], where nonnegative rank one perturbations of $M$-matrices are discussed. Finally, the quaternionic case was discussed in [30].
In the present section we will treat the classes of complex H-selfadjoint and real J-Hamiltonian matrices, analysing the global definition of eigenvalues and the convergence for large values of the parameter. First let us recall the definitions.
We say that an $n\times n$ matrix $A$ is
(H) $H$-selfadjoint if $A\in\mathbb{C}^{n\times n}$, $HA = A^*H$, where $H\in\mathbb{C}^{n\times n}$ is some Hermitian nonsingular matrix;
(J) $J$-Hamiltonian if $A\in\mathbb{R}^{n\times n}$, $JA = -A^\top J$, where $J\in\mathbb{R}^{n\times n}$ is a nonsingular real matrix satisfying $J^\top = -J$.
Note that rank one matrices in these classes are, respectively, of the form
(H) $uu^*H$, for some $u\in\mathbb{C}^n\setminus\{0\}$;
(J) $uu^\top J$, for some $u\in\mathbb{R}^n\setminus\{0\}$.
Consequently, the function $Q(\lambda)$ takes, respectively, the form
(H) $Q(\lambda) = u^*H(\lambda I_n - A)^{-1}u$;
(J) $Q(\lambda) = u^\top J(\lambda I_n - A)^{-1}u$.
It appears that in both these classes a global analytic definition of eigenvalues is not a generic property, similarly to Theorem 13 in the real unstructured case. By inspection one sees that the proof remains almost the same; the key issue is that all polynomials involved are real on the real line and $x$ is a simple real zero of $q_0(\lambda)$.
Theorem 24. Assume one of the following:
(H) $A\in\mathbb{C}^{n\times n}$ is $H$-selfadjoint with respect to some nonsingular Hermitian $H$, and $u\in\mathbb{C}^n\setminus\{0\}$;
(J) $A\in\mathbb{R}^{n\times n}$ is $J$-Hamiltonian with respect to some nonsingular skew-symmetric $J$, and $u\in\mathbb{R}^n\setminus\{0\}$.
If for some $\tau_0 > 0$ an analytic definition of the eigenvalues of the perturbed matrix ($A + \tau uu^*H$, respectively $A + \tau uu^\top J$) is not possible due to
$$Q(x) = \frac{1}{\tau_0}, \qquad Q'(x) = 0, \qquad Q''(x)\neq 0$$
for some $x\in\mathbb{R}$, cf. Remark 3, then for all
(H) $\tilde A\in\mathbb{C}^{n\times n}$ being $H$-selfadjoint, $\tilde u\in\mathbb{C}^n$,
(J) $\tilde A\in\mathbb{R}^{n\times n}$ being $J$-Hamiltonian, $\tilde u\in\mathbb{R}^n$
(respectively) with $\|\tilde u - u\|$ and $\|\tilde A - A\|$ sufficiently small the analytic definition of the eigenvalues is not possible due to the existence of $\tilde x\in\mathbb{R}$, $\tilde\tau_0 > 0$, depending continuously on $\tilde A, \tilde u$, with
$$\tilde Q(\tilde x) = \frac{1}{\tilde\tau_0}, \qquad \tilde Q'(\tilde x) = 0, \qquad \tilde Q''(\tilde x)\neq 0,$$
where $\tilde Q(z)$ corresponds to the perturbation of $\tilde A$ as described above the theorem.
Remark 25. We remark here that Proposition 15 holds for $J$-Hamiltonian matrices as well, and also for complex $H$-selfadjoint matrices.
We continue the section with corollaries from Theorem 17. While statement (ii) below is not surprising if one takes into account the symmetry of the spectrum of a J-Hamiltonian matrix with respect to both axes, statement (i) cannot be derived using symmetry principles only.

Corollary 26.
(i) Let $A\in\mathbb{R}^{n\times n}$ and consider the perturbation $A + \tau uu^\top J$, where $J$ is real, nonsingular and skew-symmetric, $u\in\mathbb{R}^n\setminus\{0\}$ and $\tau\in\mathbb{R}$. Then there are (at least) two eigenvalues of $A + \tau uu^\top J$ going to infinity as described by part (ii) of Theorem 17.
(ii) If, additionally to (i), $A$ is also $J$-Hamiltonian, then the number of such eigenvalues is even.
(iii) In case $A$ is $J$-Hamiltonian and $u^\top JAu > 0$, there are two real eigenvalues converging to infinity as $\tau$ goes to $+\infty$, and two purely imaginary eigenvalues going to infinity as $\tau$ goes to $-\infty$. In case $u^\top JAu < 0$ the situation is reversed.
(iv) In case $A$ is $J$-Hamiltonian and $u^\top JAu = 0$, there are at least four eigenvalues going to infinity as $\tau$ goes to $+\infty$ and as $\tau$ goes to $-\infty$. More precisely, let $\kappa$ be the first (necessarily odd) integer for which $u^\top JA^\kappa u\neq 0$. If $u^\top JA^\kappa u > 0$ then for $\tau\to+\infty$ there are at least two real eigenvalues going to infinity, and two purely imaginary eigenvalues going to infinity. If $u^\top JA^\kappa u < 0$ then for $\tau\to-\infty$ there are at least two real eigenvalues going to infinity, and two purely imaginary eigenvalues going to infinity.
Proof. Part (i) follows from Theorem 17 and the fact that for any vector $u$ we have $u^\top Ju = 0$ by the skew-symmetry of $J$. Part (ii) follows from the same reasoning, taking into account that for any even $k$ the matrix $JA^k$ is skew-symmetric, from which one has $u^\top JA^ku = 0$ for even $k$.
Parts (iii) and (iv) follow from Theorem 17, part (iii), using the fact that $Q(\lambda)$ is real on both the real and the imaginary axis.

Then one checks easily that $A$ is $J$-Hamiltonian, and that $u^\top JAu = 0$, while $u^\top JA^3u = -4\neq 0$. The polynomial $p_{uv}(\lambda)$ for $v = -Ju$ is constant, equal to $-4$. Hence all four eigenvalues of $A + \tau uu^\top J$ go to infinity, as is shown in the following figure. Note also that the rate of convergence to infinity in this example should be as the fourth root of $\tau$, which is confirmed by the graph (the fourth root of $125000$ is about $19$).
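The behaviour in part (iii) of Corollary 26 can be illustrated in the smallest possible dimension; the data $J$, $A$, $u$ below are an assumed example, not the one behind the figure.

```python
import numpy as np

# A 2x2 sketch in the spirit of Corollary 26(iii); the data J, A, u are an
# assumed example.  A = diag(1, -1) is J-Hamiltonian for J = [[0, 1], [-1, 0]],
# and u = (1, 1)^T gives u^T J A u = -2 < 0, so for tau -> +infinity the two
# escaping eigenvalues of A + tau u u^T J are purely imaginary.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = np.diag([1.0, -1.0])
u = np.array([[1.0], [1.0]])

assert np.allclose(J @ A, -A.T @ J)            # A is J-Hamiltonian
assert (u.T @ J @ A @ u).item() == -2.0        # u^T J A u < 0

tau = 1e6
lams = np.linalg.eigvals(A + tau * (u @ u.T @ J))
assert np.allclose(lams.real, 0.0, atol=1e-3)  # purely imaginary eigenvalues
assert min(abs(lams)) > 1e2                    # both escape to infinity
```

For $\tau\to-\infty$ the same matrix has two real eigenvalues $\pm\sqrt{1-2\tau}$, in line with the "reversed" case of part (iii).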

Nonnegative matrices
We will apply Theorem 17 to the setting of nonnegative matrices. Recall that a nonnegative matrix $A$ is called irreducible if there is no permutation matrix $P$ such that $P^\top AP$ is of the block form
$$P^\top AP = \begin{bmatrix} X & Y \\ 0 & Z \end{bmatrix}$$
with $X$ and $Z$ nontrivial square matrices. By the graph associated with the matrix $A = [a_{ij}]_{i,j=1}^n$ we understand the directed graph with vertices $1,\dots,n$ and with the set of edges consisting of exactly those pairs $(i,j)$ for which $a_{ij} > 0$. By a cycle we understand a directed path from a vertex $i$ to itself.
Theorem 28. Let $A = [a_{ij}]_{i,j=1}^n\in\mathbb{R}^{n\times n}$ be a nonnegative, irreducible matrix. Let also $l$ denote the length of the shortest cycle in the graph of the matrix $A + e_{i_0}e_{j_0}^\top$ containing the edge $(i_0,j_0)$. Then the matrix $A + \tau e_{i_0}e_{j_0}^\top$, $\tau > 0$, has precisely $l$ eigenvalues converging to infinity as $\tau\to+\infty$.

Proof. Note that $e_{j_0}^\top A^ke_{i_0} = (A^k)_{j_0i_0} = 0$ for $k < l-1$, while $(A^{l-1})_{j_0i_0}\neq 0$, as $l$ is the length of the shortest cycle going through the edge $(i_0,j_0)$: a path of length $k$ from $j_0$ to $i_0$ together with this edge forms a cycle of length $k+1$. By Theorem 17 the matrix $A + \tau e_{i_0}e_{j_0}^\top$ has precisely $l$ eigenvalues converging to infinity.
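Theorem 28 can be illustrated numerically; the directed $4$-cycle below is an assumed example.

```python
import numpy as np

# Theorem 28 on an assumed example: A is the directed 4-cycle
# 1 -> 2 -> 3 -> 4 -> 1, perturbed in the entry (i0, j0) = (1, 3).  The
# shortest cycle through the new edge 1 -> 3 is 1 -> 3 -> 4 -> 1, of length
# l = 3, and indeed exactly 3 eigenvalues escape to infinity (like tau^{1/3}).
n = 4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = 1.0           # one-step cyclic shift: vertex k -> k+1

tau = 1e9
B = A.copy()
B[0, 2] += tau                        # tau * e_{i0} e_{j0}^T with (i0, j0) = (1, 3)
lams = np.linalg.eigvals(B)
assert sum(abs(lams) > 1e2) == 3      # l = 3 escaping eigenvalues
```

Here the characteristic polynomial of the perturbed matrix is $\lambda^4 - \tau\lambda - 1$, so three roots grow like $\tau^{1/3}$ while the fourth tends to zero.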
Note that the number $l$ of eigenvalues converging to infinity may be greater than the number of eigenvalues of $A$ on the spectral circle, i.e., the imprimitivity index. However, by the theory of nonnegative matrices $l$, as the length of a (shortest) cycle, is always a multiple of the imprimitivity index, see, e.g., Theorem 1.6 of [10]. For example, let
$$A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \qquad u = e_1, \quad v = e_2.$$
Then $v^\top u = 0$, while $v^\top Au\neq 0$. So both eigenvalues of $B(\tau) = A + \tau uv^\top$ will go to infinity. For $\tau\geq 0$ the matrix $B(\tau)$ is an entrywise positive matrix, so one of the eigenvalues will be the spectral radius. By Theorem 17, both eigenvalues go to infinity at the same rate, but as the eigenvalues are $1\pm\sqrt{1+\tau}$ their moduli are not equal.