Decompositions and coalescing eigenvalues of symmetric definite pencils depending on parameters

In this work, we consider symmetric positive definite pencils depending on two parameters. That is, we are concerned with the generalized eigenvalue problem (A(x) − λB(x))v = 0, where A and B are symmetric matrix valued functions in R^{n×n}, smoothly depending on parameters x ∈ Ω ⊂ R^2; furthermore, B is also positive definite. In general, the eigenvalues of this multiparameter problem will not be smooth, the lack of smoothness resulting from eigenvalues being equal at some parameter values (conical intersections). Our main goal is precisely that of locating parameter values where eigenvalues are equal. We first give general theoretical results for the present generalized eigenvalue problem, and then introduce and implement numerical methods apt at detecting conical intersections. Finally, we perform a numerical study of the statistical properties of coalescing eigenvalues for pencils where A and B are either full or banded, for several bandwidths.


Introduction
An important and well studied problem in structural engineering is the second order problem (1) M ÿ + D ẏ + Cy = F(t), where the matrices M, D, C ∈ R^{n×n} are all symmetric and positive definite and typically arise from finite element discretizations of beam structures and the like. A main interest of mechanical engineering is to study the way a structure responds to specific external solicitations (the forcing F above), in particular to sinusoidal forcing at specified frequencies; e.g., see [12]. There are at least two outstanding difficulties in dealing with (1). The first is that the problem typically depends, smoothly, on one or more parameters (that is, the matrices M, D, C do), and this parameter dependence should be accounted for, both in the development of algorithms and in the theoretical implications of the dependence itself. The second difficulty is that typically n ≫ 1, and directly dealing with the second order problem (1) is not computationally feasible. For this reason, a widely adopted technique consists in projecting (1) onto a subspace spanned by a restricted set of eigenvectors v of the generalized (frictionless) eigenproblem (2) (M − λC)v = 0.
For example, in [13] the projection is taken with respect to the m eigenvectors v_1, . . ., v_m, associated to the m smallest eigenvalues of (2), with m ≪ n. That is, writing V = [v_1, . . ., v_m] ∈ R^{n×m}, instead of (1) one considers (3) M̄ z̈ + D̄ ż + C̄ z = F̄(t), where y = V z, z ∈ R^m, and M̄ = V^T M V, D̄ = V^T D V, C̄ = V^T C V, F̄ = V^T F; recall that the eigenvectors V can be chosen so that M̄ and C̄ are diagonal.
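The projection step can be sketched with SciPy's generalized symmetric-definite eigensolver; in the sketch below, random SPD matrices stand in for the finite element M and C, and the sizes are purely illustrative:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, m = 50, 4

# Illustrative SPD matrices M, C (in practice, from a finite element model).
def spd(n):
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

M, C = spd(n), spd(n)

# Generalized eigenproblem (2): (M - lambda*C) v = 0; eigh handles C > 0 directly.
lam, V = eigh(M, C)                 # ascending eigenvalues; V.T @ C @ V = I
Vm = V[:, :m]                       # eigenvectors of the m smallest eigenvalues

# Projected matrices of the reduced problem (3).
M_red = Vm.T @ M @ Vm
C_red = Vm.T @ C @ Vm

# As noted in the text, both projected matrices are diagonal for this choice of V.
assert np.allclose(C_red, np.eye(m), atol=1e-8)
assert np.allclose(M_red, np.diag(lam[:m]), atol=1e-6)
```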
Remark 1.1. We are interested in carrying out the above plan in the parameter dependent case; more precisely, we will consider the eigendecomposition of the generalized eigenproblem (2) when the matrices depend on two parameters. See below. The motivation, as above, is being able to perform a dimension reduction by projecting onto a desired dominant eigenspace. However, in this case, there is another very important aspect to consider: in general, the projection is well defined only if there is a gap between the eigenvalues associated to the eigenspace onto which we are projecting and the other eigenvalues. (Note that this difficulty persists regardless of the smoothness of the function of eigenvectors.) For this reason, our specific emphasis will be on locating parameter values where the eigenvalues coalesce: these are (and will be) called conical intersections, and are the parameter values where the projection is not well defined.
A plan of our paper follows. In the remainder of this introduction, we review basic results on theory and techniques for the static generalized eigenproblem (that is, when the given matrices do not depend on parameters). Section 2 contains smoothness and periodicity results for the parameter dependent case. In particular, we give smoothness results about square roots (and Cholesky factors) when the matrices are smooth functions of several parameters, and specialized results when they are analytic functions of one real parameter. We give a block-diagonalization result, discuss the codimension of having equal eigenvalues, and finally give some periodicity results for the 1-parameter case. All of these results are needed for the development in Section 3 about detection of parameter values where the eigenvalues of the generalized eigenproblem coalesce. In Section 4, we discuss algorithmic developments for the detection of coalescing eigenvalues. Finally, in Section 5 we give a collection of results on locating conical intersections for ensembles of random functions, and give evidence of a power law distribution in terms of the size of the problem. The concern of how to build appropriate random models is addressed as well. Throughout this work, the norm is always the 2-norm.
1.1. Model problem and eigenvalue continuity. Motivated by the discussion above, in particular (3), the basic problem we will consider is the following: (4) [A(x) − λB(x)]v = 0, where x ∈ R^p represents p parameters varying in an open and connected subset Ω of R^p, and we are interested in the cases p = 1, 2. The functions A and B in R^{n×n} will always be symmetric, and B will also be positive definite, a fact that we will indicate with B ≻ 0. Moreover, both A and B will be C^k functions of the parameters with k ≥ 1; for the case of one parameter, we will also give some results in the case of A and B being real analytic functions of the parameter. We recall that the eigenvalues of (4) are roots of the characteristic polynomial det(A − λB) = 0.
Note that the leading coefficient of this polynomial is given by det(B) (up to sign), and thus it is not 0, since B is positive definite; therefore, the polynomial is of exact degree n. As a consequence, there are n eigenvalues of (4), and it is well known (see below) that they are all real. Furthermore, since the coefficients of the polynomial det(A − λB) are as smooth as the entries of A and B, and the roots of a polynomial of (exact) degree n depend continuously on the coefficients, the n eigenvalues of (4) can be labeled so to be continuous functions of the parameter x. In particular, we can label them in decreasing order: λ_1(x) ≥ λ_2(x) ≥ · · · ≥ λ_n(x), for all x ∈ Ω. Numerical methods for the "static" problem (that is, when A, B ∈ R^{n×n} are given constant matrices, not depending on parameters), A = A^T, B = B^T ≻ 0 (cf. (2)), are quite well developed; see [14] for a review. In essence, the standard techniques pass either through taking the square root of B or its Cholesky factorization, the latter technique being the method of choice in the numerical community (e.g., it is the method implemented in Matlab). For convenience, we review these below for this static problem: (5) [A − λB]v = 0.
(a) Square root. It is always possible to reduce the problem (5) to a standard eigenvalue problem, as follows: (Ã − λI)w = 0, w = B^{1/2}v, where B^{1/2} is the unique symmetric positive definite square root of B, and Ã = B^{−1/2}AB^{−1/2}. Clearly, from the eigenvalues/eigenvectors of this last problem, we can get those of (5). Since Ã = Ã^T, we note that the eigenvalues of (5) (and those of (4) for any given value of x) are real, as previously stated. (b) Cholesky. Similarly, since B is positive definite, it admits a Cholesky factorization (6) B = LL^T, L lower triangular, from which it is immediate to obtain (L^{−1}AL^{−T} − λI)w = 0, w = L^T v, and again from the eigenvalues/eigenvectors of this last problem, we can get those of (5). We note that the Cholesky factor is not unique, but it can be made unique by fixing the signs of L_ii, the standard choice being L_ii > 0. In this work, we will always restrict to this choice L_ii > 0. The following simple result will come in handy later on. Lemma 1.2. For (5), the eigenvector matrix V ∈ R^{n×n}, V = [v_1, . . ., v_n], can be chosen so to satisfy the relation (7) V^T BV = I. If the eigenvalues λ_i's are distinct, then, for a given ordering of the eigenvalues, the matrix V in (7) is unique up to the sign of its columns.
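Both reductions can be checked numerically; the sketch below (with illustrative random A and B) confirms that the square-root and Cholesky reductions produce the same spectrum as a direct generalized solver:

```python
import numpy as np
from scipy.linalg import eigh, sqrtm, cholesky, solve_triangular

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2      # symmetric
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)  # SPD

# (a) Square root: A_tilde = B^{-1/2} A B^{-1/2}, a standard symmetric problem.
Bh = sqrtm(B).real                      # unique SPD square root of B
At = np.linalg.solve(Bh, np.linalg.solve(Bh, A).T).T    # B^{-1/2} A B^{-1/2}
mu_a = np.linalg.eigvalsh(At)

# (b) Cholesky: B = L L^T, reduce to L^{-1} A L^{-T}.
L = cholesky(B, lower=True)             # diagonal of L positive
Y = solve_triangular(L, A, lower=True)          # L^{-1} A
At2 = solve_triangular(L, Y.T, lower=True).T    # L^{-1} A L^{-T}
mu_b = np.linalg.eigvalsh(At2)

mu = eigh(A, B, eigvals_only=True)      # reference generalized eigenvalues
assert np.allclose(mu_a, mu) and np.allclose(mu_b, mu)
```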
Proof. Regardless of having used the square root of B or its Cholesky factor, we saw that the problem reduces to a standard symmetric eigenproblem (Ã − λI)w = 0. Since Ã = Ã^T, then Ã has an orthogonal matrix of eigenvectors W: W^T W = I, and thus V = B^{−1/2}W (respectively, V = L^{−T}W) satisfies V^T BV = W^T W = I. In case the eigenvalues λ_i's are distinct, then it is well understood that, for a given ordering of the eigenvalues, the orthogonal W is unique up to the sign of its columns; that is, if W_1 and W_0 are two orthogonal matrices giving the same ordered eigendecomposition of Ã, we must have W_1 = W_0 S, with S = diag(±1). With the notation of the proof of Lemma 1.2, we also have V^T AV = Λ, where Λ = diag(λ_1, . . ., λ_n).

A collection of smoothness results
Here we give several results for the generalized eigenvalue problem (4) that extend known results from the standard eigenvalue problem (that is, B = I in (4)): (8) (A(x) − λI)v = 0. It is well known (e.g., see [10,2,5]) that even for this standard eigenvalue problem (8) the eigenvalues/eigenvectors cannot be expected to inherit the smoothness of A, unless the eigenvalues are distinct. In the case of 2 parameters, in general there is a total loss of smoothness when the eigenvalues coalesce (e.g., take A(x) = [x_1, x_2; x_2, −x_1], whose eigenvalues ±√(x_1^2 + x_2^2) are continuous but not differentiable at the origin, where they coalesce), and even in the 1 parameter case there is a potential loss of smoothness of the eigenvectors when eigenvalues coalesce. Our goal in this section is to generalize these, and similar, results for (4). At a high level, one may argue that our results follow from the fact that B (being positive definite) induces an inner product, and hence a related concept of orthogonality, see Definition 2.1; as a consequence, with the appropriate modifications with respect to this inner product, many results from the standard case (symmetric eigenproblem and Euclidean inner product) should follow. Yet, these "modifications" are both non-trivial and of theoretical interest; moreover, our results have practical engineering implications (e.g., see the discussion on smoothness of the projection in the Introduction). Indeed, given the relevance in engineering applications of the generalized eigenproblem, we believe that our study is both needed and timely. Definition 2.1 (B-orthogonality). Let B ∈ C^k(R^p, R^{n×n}) be a symmetric positive definite matrix valued function of p parameters. Two vector valued functions v(x), w(x) ∈ R^n are called B-orthogonal if v^T Bw = 0, and further B-orthonormal if also v^T Bv = 1 and w^T Bw = 1, for all x.
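The loss of smoothness is easy to observe numerically; a small sketch of a standard 2 × 2 example (the matrix family and sample points below are purely illustrative):

```python
import numpy as np

# Eigenvalues of A(x) = [[x1, x2], [x2, -x1]] are +-sqrt(x1^2 + x2^2):
# continuous everywhere, but not differentiable where they coalesce (the origin).
def eigs(x1, x2):
    return np.linalg.eigvalsh(np.array([[x1, x2], [x2, -x1]]))

for x in [(1.0, 0.0), (0.3, -0.4), (0.0, 0.0)]:
    lam = eigs(*x)
    r = np.hypot(*x)                    # sqrt(x1^2 + x2^2), nonsmooth at 0
    assert np.allclose(lam, [-r, r])    # eigvalsh returns ascending order
```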
2.1. Square root. Before proceeding, we point out the following simple but important result, whose proof we give for completeness. Lemma 2.2. Let a ∈ C^k(Ω, R), k ≥ 0 an integer, be a strictly positive function of p real parameters x ∈ Ω, where Ω is an open, bounded, and connected subset of R^p, and let a be continuous and uniformly bounded in Ω: a(x) < α, α finite, for all x ∈ Ω. Then, the function s(x) = √a(x), where √a(x) is the unique positive square root of a(x), is also a C^k function of x. Furthermore, if a ∈ C^ω(J, R) is analytic in the parameter x ∈ J, where J is an open and bounded interval of the real line, then so is its square root.
Proof. The C^k result follows from Theorem 2.5 below.

For the case of an analytic function of one parameter, we recall that the composition of two analytic functions is analytic, and thus what we need to prove is that the function √x is analytic, for x in some interval (c, b) with 0 < c < b < ∞. This follows from the following argument.

(i) Choose α > b and write √x = √α e^{(1/2) ln(x/α)}, so that it suffices to prove analyticity of ln(x/α) on (c, b). (ii) Now, let y = x/α, so that 0 < y < 1. Writing ln y = ∫_0^{y−1} du/(1 + u), we expand the integrand in the power series 1/(1 + u) = Σ_{k≥0} (−1)^k u^k, which converges for |u| < 1; this holds here, since u ranges between y − 1 ∈ (−1, 0) and 0. (iii) Integrating the series term by term, we obtain a convergent series expansion of ln(x/α), and hence its analyticity. The end result then follows since the exponential is also an analytic function.
Next, we observe that it is easy to infer that the Cholesky factor of B is as smooth as B itself (for functions of one parameter, this result, and the argument of proof, are known; see [1]).
Theorem 2.3. Let B ∈ C^k(Ω, R^{n×n}) be symmetric positive definite for all x ∈ Ω. Then, its Cholesky factor L in (6) with L_ii > 0 is also a C^k function for x ∈ Ω. Further, if B ∈ C^ω(J, R^{n×n}) is analytic in the parameter x ∈ J, where J is an open and bounded interval of the real line, then so is the Cholesky factor.
Proof. The proof is by induction on n, using the standard recursive construction of the Cholesky factor: partitioning B = [b_11, b^T; b, B̂], the first column of L is (√b_11, b^T/√b_11)^T, and the trailing block of L is the Cholesky factor of the Schur complement B_1 = B̂ − bb^T/b_11. Obviously, B_1 is symmetric, positive definite, and as smooth as B, and the result follows by induction, using Lemma 2.2. The analytic case also follows in the same way, since in this case B_1 and √b_11 are analytic.
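The recursive construction behind this argument can be sketched as follows (an illustrative implementation, not a library algorithm; the point is that each step uses only smooth operations on SPD data):

```python
import numpy as np

def chol_recursive(B):
    """Cholesky factor (lower triangular, positive diagonal) via the recursion
    in the text: first column from b11 and b, then recurse on the Schur
    complement B1 = Bhat - b b^T / b11."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    L = np.zeros_like(B)
    b11 = B[0, 0]                       # > 0 since B is SPD
    L[0, 0] = np.sqrt(b11)
    if n == 1:
        return L
    b = B[1:, 0]
    L[1:, 0] = b / L[0, 0]
    B1 = B[1:, 1:] - np.outer(b, b) / b11   # Schur complement, again SPD
    L[1:, 1:] = chol_recursive(B1)
    return L

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 5))
B = X @ X.T + 5 * np.eye(5)
L = chol_recursive(B)
assert np.allclose(L, np.linalg.cholesky(B))
```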
The next question is whether the square root B^{1/2} is also a C^k, respectively C^ω, function. Below, we prove that the answer is yes.
We begin by showing that a symmetric positive definite function B, depending continuously on parameters x ∈ R^p, has a unique symmetric positive definite square root S, which depends continuously on x. We note that continuity of the square root function can be inferred from the general result [7, Proposition 2.1], but for completeness we give a different and more constructive proof.
Theorem 2.5. Let Ω be an open, bounded, connected subset of R^p, p ≥ 1, and let B ∈ C(Ω̄, R^{n×n}). Further, let B be symmetric positive definite, and uniformly bounded, for all x ∈ Ω̄: sup_{x∈Ω̄} ∥B(x)∥ < γ. Then, there exists a unique symmetric positive definite square root S(x), for any x ∈ Ω̄. Moreover, S(x) is a continuous function of x.
Proof. We begin by scaling B so that it will have norm less than 1. Namely, we define the function C(x) = B(x)/γ, so that ∥C∥ < 1 in Ω̄; note that C is positive definite and as smooth as B. Then, we consider the function Y(x) = C(x) − I. Observe that Y is also symmetric, but negative definite, and its eigenvalues are −1 + µ, where µ are the eigenvalues of C. Therefore, the eigenvalues of Y are all in (−1, 0), and thus those of I + Y are all positive, and I + Y has a unique positive definite square root for any given value of the parameters x.
Next, consider the following series expansion: (9) (I + Y)^{1/2} = Σ_{k≥0} binom(1/2, k) Y^k, and observe that all terms are symmetric, and smooth functions of the parameters. Further, for any given parameter value x, and any associated (unit norm) eigenvector of Y with eigenvalue ν ∈ (−1, 0), the series sums to √(1 + ν), as defined by the right hand side of (9) with ν replacing Y there. Therefore, the right hand side defines a positive definite matrix, for any given value of the parameters. Now, let y = max_{Ω̄} ∥Y∥, note that 0 < y < 1, and consider the numerical series Σ_{k≥0} |binom(1/2, k)| y^k. By the ratio test, this numerical series converges if y < 1, which is the case. Therefore, from the Weierstrass M-test, we conclude that the series in (9) converges uniformly. As a consequence, the sum of the series is a continuous function of the parameters. Finally, we observe that (I + Y)^{1/2} = C^{1/2}, and from C(x) = B(x)/γ, we get that also S(x) = B^{1/2}(x) = √γ C^{1/2}(x) is continuous. Next, suppose B ∈ C^1, and formally differentiate the relation B = S^2 with respect to x_i, to obtain the Lyapunov equations (10) B_{x_i} = X_i S + S X_i, i = 1, . . ., p. The linear systems given by the Lyapunov equations in (10) are uniquely solvable, since S is positive definite (the spectra of S and −S are disjoint). Now, the unique solution of an invertible linear system Cz = b, with C and b continuously depending on parameters, obviously defines a continuous solution z, from which we conclude that the unique solutions X_i of (10) are continuous functions of the parameters x. Finally, we observe that S_{x_i} = X_i, i = 1, . . ., p.
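The series (9) lends itself to a numerical check; a minimal sketch, where the SPD matrix B, the choice of γ, and the truncation length are illustrative assumptions:

```python
import numpy as np
from scipy.special import binom
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4))
B = X @ X.T + 4 * np.eye(4)             # illustrative SPD matrix

gamma = 1.1 * np.linalg.norm(B, 2)      # scale so C = B/gamma has spectrum in (0, 1)
Y = B / gamma - np.eye(4)               # symmetric, eigenvalues in (-1, 0)

# Binomial series (9): (I + Y)^{1/2} = sum_k binom(1/2, k) Y^k, convergent for ||Y|| < 1.
S = np.zeros_like(B)
Yk = np.eye(4)
for k in range(400):                    # fixed truncation, enough here
    S += binom(0.5, k) * Yk
    Yk = Yk @ Y

B_half = np.sqrt(gamma) * S             # undo the scaling: B^{1/2} = sqrt(gamma) C^{1/2}
assert np.allclose(B_half, sqrtm(B).real, atol=1e-8)
```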
At this point, we can look at higher derivatives. We detail the situation for the second derivatives, from which the general argument will be evident.
Rewrite (10) as B_{x_i} = S_{x_i} S + S S_{x_i}, i = 1, . . ., p, and consider the second partial derivatives obtained from formally differentiating this relation. We get: (11) B_{x_i x_j} = X_{ij} S + S_{x_i} S_{x_j} + S_{x_j} S_{x_i} + S X_{ij}, i, j = 1, . . ., p.
Rearranging terms in (11), we obtain X_{ij} S + S X_{ij} = B_{x_i x_j} − S_{x_i} S_{x_j} − S_{x_j} S_{x_i}, which is again a uniquely solvable Lyapunov equation and gives a continuous solution X_{ij}, and we observe that S_{x_i x_j} = X_{ij}. Finally, observe that the right hand side of this relation is symmetric in i and j, so that, by uniqueness, X_{ij} = X_{ji}, that is S_{x_i x_j} = S_{x_j x_i}, and thus the order of differentiation of the second partial derivatives does not matter. Continuing to formally differentiate, we obtain continuous higher derivatives, and the result follows.
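The Lyapunov-equation characterization of the derivatives lends itself to computation; in the sketch below, the one-parameter family B(t) is an illustrative assumption, and the Lyapunov solution for the derivative of the square root is checked against a finite difference:

```python
import numpy as np
from scipy.linalg import sqrtm, solve_continuous_lyapunov

# Illustrative one-parameter family B(t) = B0 + t*B1, SPD near t = 0.
rng = np.random.default_rng(4)
X = rng.standard_normal((4, 4))
B0 = X @ X.T + 4 * np.eye(4)
B1 = rng.standard_normal((4, 4)); B1 = (B1 + B1.T) / 2

def B(t):
    return B0 + t * B1                  # so that dB/dt = B1

# Differentiating B = S^2 gives the Lyapunov equation (10): X S + S X = dB/dt.
S = sqrtm(B(0.0)).real
X_dot = solve_continuous_lyapunov(S, B1)    # unique solution since S is SPD

# Check against a centered finite difference of the square root.
h = 1e-6
fd = (sqrtm(B(h)).real - sqrtm(B(-h)).real) / (2 * h)
assert np.allclose(X_dot, fd, atol=1e-5)
```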
Finally, we specialize Theorem 2.5 to the case of B analytic.
Theorem 2.6. Let B ∈ C^ω(R, R^{n×n}) be symmetric and positive definite for all x. Then, the unique positive definite square root B^{1/2} is analytic in x as well.
Proof. The proof rests on a fundamental theorem of Kato, see [10], whereby an analytic Hermitian function admits an analytic eigendecomposition. Thus, we can write B(x) = Q(x)D(x)Q^T(x), where Q and D are analytic, Q is orthogonal, and D is diagonal with D_ii(x) > 0 (we note that the eigenvalues in D are not necessarily ordered). Then, we have B^{1/2}(x) = Q(x)D^{1/2}(x)Q^T(x), where D^{1/2}(x) = diag(√(D_ii(x)), i = 1, . . ., n). The result now follows from Lemma 2.2.
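The eigendecomposition-based formula for the square root used in this proof is straightforward to check numerically (the matrix below is illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(5)
X = rng.standard_normal((5, 5))
B = X @ X.T + 5 * np.eye(5)             # illustrative SPD matrix

# B = Q D Q^T  =>  B^{1/2} = Q D^{1/2} Q^T, with D^{1/2} = diag(sqrt(D_ii)).
d, Q = np.linalg.eigh(B)
B_half = Q @ np.diag(np.sqrt(d)) @ Q.T

assert np.allclose(B_half @ B_half, B)      # it is a square root of B
assert np.allclose(B_half, sqrtm(B).real)   # and agrees with the unique SPD one
```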
Remark 2.7. Recalling that the positive definite square root of a positive definite matrix is unique, from the numerical point of view we infer that, in principle, any desired algebraic technique can be used to compute the square root of B.
Next, we look at a general block-diagonalization result for the parameter dependent generalized eigenproblem, extending a result given by Hsieh-Sibuya and Gingold (see [9] and [6]) for the standard eigenvalue case. Then, we give more refined results for the case of one parameter, and further specialize some results to the case of periodic pencils. All of these results will form the justification for our algorithms to locate conical intersections.
2.2. General block diagonalization results. The following result is quite useful in order to simplify the problem under consideration. It highlights that the correct transformations for the pencil under study are "inertia transformations".
Since our interest in this work, for reasons which will be clarified below, is for the case where A and B depend on two (real) parameters, this is the case on which we focus in the theorem below.
Theorem 2.8. Let A, B ∈ C^k(R^2, R^{n×n}), k ≥ 0, with A symmetric and B symmetric positive definite, and suppose that the eigenvalues of the pencil (A, B) can be labeled so that they belong to two disjoint sets for all x: λ_1(x), . . ., λ_p(x) in Λ_1(x) and λ_{p+1}(x), . . ., λ_n(x) in Λ_2(x), with Λ_1(x) ∩ Λ_2(x) = ∅. Then, there exists an invertible V ∈ C^k(R^2, R^{n×n}) such that V^T AV = diag(A_1, A_2) and V^T BV = diag(B_1, B_2), with A_1, B_1 ∈ R^{p×p} and A_2, B_2 ∈ R^{(n−p)×(n−p)}, so that the eigenvalues of the pencil (A_1, B_1) are those in Λ_1, and the eigenvalues of the pencil (A_2, B_2) are those in Λ_2, for all x. Furthermore, the function V can be chosen to be B-orthogonal (V^T BV = I for all x). Proof. We show directly that the transformation V can be chosen so that V^T BV = I, from which the general result will follow.
One way to proceed is by using the unique smooth positive definite square root of B, B^{1/2}, so that the eigenvalues of the pencil are the same as those of the standard eigenvalue problem with the function Ã = B^{−1/2}AB^{−1/2}. Because of Theorem 2.5, the function Ã is as smooth as A, and it is clearly symmetric. Therefore, from the cited results in [9,6], there exists a smooth, orthogonal, W such that W^T ÃW = diag(Ã_1, Ã_2), with the eigenvalues of Ã_i being those in Λ_i, and V = B^{−1/2}W then satisfies V^T BV = I. We notice that the function V of Theorem 2.8 is clearly not unique, not even if we select one for which V^T BV = I.
The block diagonalization result of Theorem 2.8 can easily be extended to several blocks. In the case of n distinct eigenvalues, one ends up with a full diagonalization. (Of course, having distinct eigenvalues is a sufficient, but not necessary, condition.) Because of its relevance in what follows, we give this fact as a separate result, with proof, in the next subsection.
2.3. One parameter case: smoothness. We say that the pencil (equivalently, the generalized eigenproblem) is diagonalizable if there are n linearly independent eigenvectors v_1, . . ., v_n, associated to the eigenvalues λ_1, . . ., λ_n. Assembling these eigenvectors in a matrix V = [v_1, . . ., v_n], and the associated eigenvalues along the diagonal of a matrix Λ = diag(λ_1, . . ., λ_n), we express the condition of diagonalizability in matrix form as (12) AV = BVΛ.
Theorem 2.9. Let A, B ∈ C^k(R, R^{n×n}), k ≥ 1, with A = A^T and B = B^T ≻ 0 for all t, and assume that the eigenvalues of (4) are distinct for all t. Then, the eigenvalues can be chosen to be C^k functions of t. Moreover, we can also choose the corresponding eigenvector function V to be a C^k function of t and to satisfy the relation V^T BV = I, for all t.
Proof. The proof puts together known results, and we show an argument using the square root of B.
From Theorem 2.5, we know that the unique positive definite square root of B, call it B^{1/2}, is as smooth as B. Then, we rewrite (4) as (Ã − λI)w = 0, with Ã = B^{−1/2}AB^{−1/2} and w = B^{1/2}v, and so we are left to show that we can choose a smooth function W of orthogonal eigenvectors of Ã, from which the result will follow. But, obviously, the eigenvalues of the pencil are the same as those of Ã, and under the assumption of distinct eigenvalues it is known from [2, Proposition 2.4] that the eigenvalues can be chosen smooth, and W can be chosen smooth and orthogonal, from which the result follows (note that V = B^{−1/2}W satisfies V^T BV = I). We can further refine the eigendecomposition result Theorem 2.9, even allowing for coalescing eigenvalues, obtaining the following results about smoothness of the eigenvalues/eigenvectors of the generalized eigenproblem (4).
Theorem 2.10. Let A, B ∈ C^k(J, R^{n×n}), with A = A^T and B = B^T ≻ 0, where J is some interval of the real line.

(i) (Finite order of coalescing) Suppose that the continuous eigenvalues λ_1, . . ., λ_n satisfy the following: whenever λ_i(x̄) = λ_j(x̄) for some x̄ ∈ J and i ≠ j, the coalescing has finite order, λ_i(x) − λ_j(x) = (x − x̄)^e h(x) with h(x̄) ≠ 0, for some e ≤ k. Then, there exists a B-orthogonal function of eigenvectors V ∈ C^{k−e}(J, R^{n×n}). The eigenvalues can be labeled so to be C^k functions. (ii) (Analytic case) Moreover, if A, B ∈ C^ω, then the eigenvalues can be labeled so to be analytic functions, and there is an associated B-orthogonal analytic function of eigenvectors.
Proof. As above, we reduce the problem to that of a symmetric eigenproblem (Ã − λI)w = 0, with smooth, respectively analytic, function Ã. At this point, the stated results are a direct application of known results in the literature for symmetric functions of 1 parameter. See [2, Theorems 3.3 and 3.4] for statement (i), and see [10] for statement (ii).
Remark 2.11. The use of the square root of B in the proof above is not necessary, and other possibilities exist. For example, one can use the Cholesky factor of B: B = LL^T, where L is lower triangular with positive diagonal; recall that, from Theorem 2.3, we know that L is as smooth as B. Using this, we get (Â − λI)w = 0, with Â = L^{−1}AL^{−T} and w = L^T v. As before, Â is smooth and symmetric, and its orthogonal eigenvector function W_C is smooth and orthogonal. This leads to an interesting consequence. Assume that the distinct eigenvalues are arranged along the diagonal of Λ in a fixed way, say in increasing fashion, for both the eigendecompositions of Ã and of Â. Then, the functions Ã and Â are two symmetric isospectral functions, and are orthogonally similar. Indeed, calling W_S the orthogonal factor of Ã (that is, using the square root of B) and W_C the orthogonal factor of Â (that is, using the Cholesky factor of B), we have Â = (W_C W_S^T) Ã (W_S W_C^T).
It is important to stress that, in spite of the differences between the orthogonal factors arising from the two reductions, the end result on V is essentially unique; see Corollary 2.12 below.
Next is the uniqueness result for V. Corollary 2.12. Under the assumptions of Theorem 2.9, call V a smooth function of eigenvectors satisfying V^T BV = I and rendering a certain ordering for the diagonal of Λ. Such V is essentially unique: any other possible (smooth) function of eigenvectors yielding the same ordering of eigenvalues is obtained from V by sign changes of V's columns.
Proof. Since the eigenvalues are distinct, the eigenvectors are uniquely determined up to scaling. In other words, the only freedom in specifying V is given by V → VS, where S = diag(s_i, i = 1, . . ., n) with s_i ≠ 0. By requiring that V^T BV = I, we get that we must have S^2 = I, that is s_i^2 = 1, i = 1, . . ., n, as claimed.
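Numerically, the corollary suggests a simple way to remove the residual sign freedom; in the sketch below, the sign convention (largest-magnitude entry of each column made positive) is an illustrative choice, not one prescribed in the text:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
X = rng.standard_normal((n, n)); B = X @ X.T + n * np.eye(n)

lam, V = eigh(A, B)                     # columns are B-orthonormal: V.T @ B @ V = I
assert np.allclose(V.T @ B @ V, np.eye(n), atol=1e-8)

# Remaining freedom is V -> V S with S = diag(+-1); fixing the sign of the
# largest-magnitude entry of each column removes it and makes V unique.
def fix_signs(V):
    idx = np.argmax(np.abs(V), axis=0)
    return V * np.sign(V[idx, np.arange(V.shape[1])])

W = fix_signs(V)
assert np.allclose(W.T @ B @ W, np.eye(n), atol=1e-8)   # still B-orthonormal
assert np.allclose(fix_signs(-V), W)                    # column sign flips removed
```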
2.3.1. Differential Equations for the factors. Our goal in this section is to derive differential equations satisfied by the smooth factors V and Λ, under the assumption of distinct eigenvalues. So doing, we will generalize known results in [2] for the standard eigenproblem (i.e., when B = I). As it turns out, the generalization is not entirely trivial. Consider (4), with A, B ∈ C^1(R, R^{n×n}), A = A^T, B = B^T ≻ 0, and assume that the eigenvalues of (4) are distinct for all t. As seen in Theorem 2.9, we can choose V and Λ smooth as well, satisfying (12), and V satisfies the relation V^T BV = I, for all t.
As seen in Corollary 2.12, we must fix a choice for V. So, suppose we have an eigendecomposition at t = 0, that is, we have V_0 and Λ_0 so that A(0)V_0 = B(0)V_0Λ_0 and V_0^T B(0)V_0 = I. We want to obtain differential equations satisfied by the factors V and Λ for all t ≥ 0, with the initial conditions V(0) = V_0 and Λ(0) = Λ_0.
Since the factors are smooth, we can formally differentiate the two relations (13) (a) AV = BVΛ, (b) V^T BV = I. Differentiation of (13)-(a) gives ȦV + AV̇ = ḂVΛ + BV̇Λ + BVΛ̇, from which, multiplying on the left by V^T and using AV = BVΛ (hence V^T A = ΛV^T B, by symmetry of A and B), and V^T BV = I, we obtain (14) Λ̇ = V^T ȦV + Λ(V^T BV̇) − (V^T BV̇)Λ − V^T ḂVΛ. Now, using the structure of Λ (diagonal), we observe that relative to the diagonal entries we have (using that the diagonal of Λ(V^T BV̇) − (V^T BV̇)Λ is zero): (15) λ̇_i = (V^T ȦV)_ii − λ_i (V^T ḂV)_ii, i = 1, . . ., n; that is, the eigenvalues in general satisfy a linear non-homogeneous differential equation. Next, differentiating (13)-(b), we obtain V̇^T BV + V^T ḂV + V^T BV̇ = 0, from which we get (16) (V^T BV̇) + (V^T BV̇)^T = −V^T ḂV; hence we can obtain an expression for the symmetric part of (V^T BV̇), and in particular in (15) we can use (V^T BV̇)_ii = −(V^T ḂV)_ii/2. What we are missing is an expression for the anti-symmetric part of (V^T BV̇). To arrive at this, we use (14) relative to the off-diagonal entries. This gives the following for the (i, j) and (j, i) entries, i ≠ j: (17) 0 = (V^T ȦV)_ij − λ_j (V^T ḂV)_ij + (λ_i − λ_j)(V^T BV̇)_ij, (18) 0 = (V^T ȦV)_ij − λ_i (V^T ḂV)_ij + (λ_j − λ_i)(V^T BV̇)_ji, where we have used the symmetry of V^T ȦV and V^T ḂV, and the fact that the (i, j)-th entry of a matrix is the (j, i)-th entry of its transpose. Now, adding the two expressions (17) and (18), we obtain (19) (V^T BV̇)_ij − (V^T BV̇)_ji = [(λ_i + λ_j)(V^T ḂV)_ij − 2(V^T ȦV)_ij]/(λ_i − λ_j), i ≠ j, and thus we can obtain an expression for the antisymmetric part of (V^T BV̇), upon using (16) for its symmetric part.
So, finally, using (16) and (19), we can obtain a formula for the term V^T BV̇, which depends on Ȧ, Ḃ, Λ, and V. Let us formally set C = V^T BV̇, and summarize the sought differential equations for V and Λ: (20) V̇ = VC, with C_ii = −(V^T ḂV)_ii/2 and C_ij = [λ_j (V^T ḂV)_ij − (V^T ȦV)_ij]/(λ_i − λ_j) for i ≠ j, together with (15) for the λ_i's; here we used that V^T BV = I implies V^{−1} = V^T B, so that V̇ = VC. Example 2.13 (Standard Eigenproblem). The most important special case of the previous analysis is of course the case B = I, the standard eigenproblem. In this case, since Ḃ = 0, we obtain major simplifications. For one thing, (15) is a simple integral, not a linear differential equation, for the eigenvalues: (21) λ_i(t) = λ_i(0) + ∫_0^t (V^T ȦV)_ii ds, i = 1, . . ., n. Further, from (16) we observe that C = V^T V̇ must be anti-symmetric, and thus we have that (22) C_ii = 0, C_ij = −(V^T ȦV)_ij/(λ_i − λ_j), i ≠ j. Formulas (21) and (22) of course match those derived for the standard eigenproblem in [2].
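The differential equations for Λ and V can be integrated numerically and checked against a direct eigensolver at the final time. In the sketch below, the matrix families, sizes, and tolerances are illustrative, and it is assumed that the eigenvalues stay distinct along the path, as the derivation requires:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.integrate import solve_ivp

rng = np.random.default_rng(7)
n = 4
sym = lambda M: (M + M.T) / 2
# Illustrative families with well-separated eigenvalues on [0, 1].
A0 = np.diag([0.0, 2.0, 4.0, 6.0]) + 0.05 * sym(rng.standard_normal((n, n)))
A1 = 0.1 * sym(rng.standard_normal((n, n)))
B1 = 0.05 * sym(rng.standard_normal((n, n)))

A = lambda t: A0 + np.sin(t) * A1        # dA/dt = cos(t) * A1
B = lambda t: np.eye(n) + t * B1         # dB/dt = B1, SPD on [0, 1]

def rhs(t, y):
    V = y[:n * n].reshape(n, n)
    lam = y[n * n:]
    Wa = V.T @ (np.cos(t) * A1) @ V      # V^T Adot V
    Wb = V.T @ B1 @ V                    # V^T Bdot V
    lam_dot = np.diag(Wa) - lam * np.diag(Wb)          # eq. (15)
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                C[i, i] = -0.5 * Wb[i, i]              # symmetric part, eq. (16)
            else:                                      # eqs. (16) + (19)
                C[i, j] = (lam[j] * Wb[i, j] - Wa[i, j]) / (lam[i] - lam[j])
    return np.concatenate([(V @ C).ravel(), lam_dot])  # Vdot = V C, eq. (20)

lam0, V0 = eigh(A(0.0), B(0.0))          # consistent initial data: V0^T B(0) V0 = I
sol = solve_ivp(rhs, (0.0, 1.0), np.concatenate([V0.ravel(), lam0]),
                rtol=1e-10, atol=1e-12)
lam_end = np.sort(sol.y[n * n:, -1])
assert np.allclose(lam_end, eigh(A(1.0), B(1.0), eigvals_only=True), atol=1e-6)
```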
2.4. Periodicity. To justify our algorithms to locate conical intersections, we will need to be able to smoothly find eigenvalues of the pencil (under the assumption that the eigenvalues are distinct) along a closed loop in parameter space. For this reason, we next give some results on periodicity of the square root and the Cholesky factor of a positive definite periodic function, as well as some general results on periodicity.
To begin with, let us properly define what we mean by a periodic function, and give an elementary result on periodicity of the square root of a function.
Definition 2.14. We say that a function f (scalar or matrix valued) is periodic of period 1 if f(t + 1) = f(t), for all t. Moreover, we say that 1 is the minimal period of f if there is no τ < 1 for which f(t + τ) = f(t), for all t. In the same way, we say that the pencil (A, B) is periodic of period 1 if A(t + 1) = A(t) and B(t + 1) = B(t), and further of minimal period 1 if either A or B is such.
Lemma 2.15. Let the real valued function f ∈ C^k(R, R), k ≥ 0, be strictly positive for all t, and let f be periodic of minimal period 1. Let s(t) = √f(t), t ∈ R. Then, also s is C^k and periodic of minimal period 1.
Proof. The smoothness result is in Lemma 2.2. For the periodicity, we argue by contradiction. Observe that surely s(t + 1) = √f(t + 1) = √f(t) = s(t), for all t. Then, if there were τ < 1 such that s(t + τ) = s(t), for all t, then also s(t)·s(t) = f(t) would be τ-periodic; that is, f would be τ-periodic, a contradiction.
Finally, we show that the Cholesky factor and the positive definite square root of a 1-periodic positive definite function are also 1-periodic.
Theorem 2.16. Let the function A ∈ C^k(R, R^{n×n}), k ≥ 0, be symmetric positive definite and of minimal period 1.

(a) Let L be the unique Cholesky factor of A: A(t) = L(t)L^T(t), where L is lower triangular with positive diagonal, for all t. Then, also L has minimal period 1. (b) Let S = A^{1/2} be the unique positive definite square root of A: S = S^T ≻ 0, S^2 = A. Then, also S has minimal period 1.
Proof. First, consider the case of the Cholesky factor. From Theorem 2.3, we know that L is as smooth as A, and L(t)L^T(t) = A(t) for all t. Now, since A(t + 1) = A(t), then A(t + 1) = L(t)L^T(t) as well as A(t + 1) = L(t + 1)L^T(t + 1). From uniqueness of the Cholesky factor, we then must have L(t) = L(t + 1). Finally, if L had minimal period τ < 1, then necessarily so would A, but this contradicts that the minimal period of A is 1.
The proof for the square root is quite similar. Using Theorem 2.5, we know that S is as smooth as A, and S^2(t) = A(t) for all t. Since A(t + 1) = A(t), then also S(t + 1) is a positive definite square root of A(t). Since the square root is unique, we then have S(t + 1) = S(t). As before, if S had minimal period τ < 1, then necessarily so would A, contradicting that the minimal period of A is 1.
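A quick numerical check of these periodicity statements (the 1-periodic SPD family below is an illustrative choice):

```python
import numpy as np
from scipy.linalg import cholesky, sqrtm

# A 1-periodic SPD family; its Cholesky factor (positive diagonal) and its
# SPD square root inherit the period, by uniqueness of each factorization.
S0 = np.array([[0.5, 0.2], [0.2, -0.1]])
def A(t):
    return 2.0 * np.eye(2) + np.cos(2 * np.pi * t) * S0   # SPD for all t

for t in [0.13, 0.4, 0.77]:
    assert np.allclose(cholesky(A(t), lower=True), cholesky(A(t + 1.0), lower=True))
    assert np.allclose(sqrtm(A(t)).real, sqrtm(A(t + 1.0)).real)
```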
The next result is a corollary to Theorem 2.8 and will come in handy.
Corollary 2.17. Let V ∈ C^k be the function of Theorem 2.8. Let Γ be a simple closed curve in parameter space, parametrized as a C^p (p ≥ 0) function γ of the variable t, so that the function γ : t ∈ R → R^2 is C^p and of (minimal) period 1. Let m = min(k, p), and let V_γ be the function V(γ(t)), t ∈ R. Then, V_γ is C^m and 1-periodic.
Proof. The result is immediate upon considering the composite function V_γ and using the stated smoothness and periodicity results.
Remark 2.18. In case the eigenvalues of the pencil (A, B) are distinct for all x, then the B-orthogonal function V has diagonalized (A, B). For a given ordering of the eigenvalues, as we already remarked, V is essentially unique: the degree of non-uniqueness is given only by the signs of the columns of V. Naturally, in this case Corollary 2.17 gives that a smooth V_γ will be a 1-periodic function.
The last result we give is a generalization of [5, Lemma 1.7] and it essentially states that if the pencil (A, B) has minimal period 1, then there cannot coexist continuous eigendecompositions of minimal periods 1 and 2.

Lemma 2.19. Let the functions A, B ∈ C^0(R, R^{n×n}), with A = A^T and B = B^T ≻ 0, be of minimal period 1, and let the pencil (A, B) have distinct eigenvalues for all t. Suppose that there exist V ∈ C^0, invertible, and diagonal Λ such that A(t)V(t) = B(t)V(t)Λ(t) and V(t + 1) = V(t)D, for all t, where D is diagonal with D_ii = ±1 for all i, but D ≠ I_n. Then, there cannot exist an invertible continuous matrix function T diagonalizing the pencil and of period 1.
Proof. By contradiction, suppose that there exists continuous T of period 1 such that A(t)T^{−1}(t) = B(t)T^{−1}(t)Λ(t), for all t ∈ R. Therefore, we must have A = BT^{−1}ΛT and A = BVΛV^{−1}, from which Λ(TV) = (TV)Λ. But Λ(t) has distinct diagonal entries for all t ∈ R, so that T(t)V(t) must be diagonal for all t ∈ R. Denote its diagonal entries by c_1(t), . . ., c_n(t); since TV is invertible, c_i(t) ≠ 0 for all t. But T(t + 1)V(t + 1) = T(t)V(t)D, for all t ∈ R, hence there must exist an index i for which c_i(t + 1) = −c_i(t), which is a contradiction, since the functions c_i are continuous and nonzero for t ∈ R.

Coalescing eigenvalues of (4)
In this section, we study the occurrence of equal eigenvalues for (4) when A and B depend on two (real) parameters. We follow the skeleton of the arguments given in [5] for the symmetric eigenproblem, using somewhat similar techniques to those adopted there. Still, the extension to the symmetric positive definite pencil is not automatic and needs to be done carefully.
First, we consider the case of a single pair of eigenvalues coalescing, then generalize to several pairs coalescing at the same parameter values.To begin with, we show that having a pair of coalescing eigenvalues is a codimension 2 property.
3.1. One generic coalescing in Ω. First, consider the 2 × 2 case. The following simple result is the key to relating a generic coalescing to the transversal intersection of two curves.
Theorem 3.1. Consider a 2 × 2 symmetric-definite pencil (A, B), with A = [a, b; b, c] and B = [α, β; β, γ] ≻ 0. Then, the generalized eigenproblem has identical eigenvalues at x if and only if aγ − αc = 0 and bγ − βc = 0. We observe that the signs of the entries of v and of w are the same, and we also note that d² < 1 (since B is positive definite). We further have a chain of equalities reducing the problem to one for which A has zero trace. Now, an explicit computation shows that we have identical eigenvalues µ (hence λ) if and only if a = 0 and b = 0. Rephrasing in terms of the original entries, this is precisely what we wanted to verify.
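To make the 2 × 2 case concrete, the following minimal sketch (Python with NumPy; the matrices are illustrative choices of ours, not taken from the text) computes the generalized eigenvalues of a small symmetric definite pencil via the Cholesky reduction, and checks that a pencil of the form (µB, B) has the double eigenvalue µ:

```python
import numpy as np

def pencil_eigs(A, B):
    """Eigenvalues of the symmetric definite pencil (A, B), via the
    Cholesky factor of B: (A, B) and L^{-1} A L^{-T} have the same spectrum."""
    L = np.linalg.cholesky(B)
    Li = np.linalg.inv(L)
    return np.linalg.eigvalsh(Li @ A @ Li.T)

# a pencil with distinct eigenvalues (entries chosen for illustration only)
B = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam = pencil_eigs(A, B)          # two distinct generalized eigenvalues

# a coalescing pencil: A = mu * B has the double eigenvalue mu
mu = 2.0
lam_co = pencil_eigs(mu * B, B)
```

Here the coalescing case is built in by hand (A proportional to B); the point is only that the reduction to a standard symmetric problem preserves the eigenvalues.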
Theorem 3.2. Consider the pencil (A, B) as above, and let λ_1 and λ_2 be the two continuous eigenvalues of the pencil (A, B), labeled so that λ_1(x) ≥ λ_2(x) for all x in Ω. Assume that there exists a unique point ξ_0 ∈ Ω where the eigenvalues coincide, and assume that 0 is a regular value for both functions aγ − αc and bγ − βc. Then, consider the two C^k curves Γ_1 and Γ_2 through ξ_0, given by the zero-sets of the components of F. Assume that Γ_1 and Γ_2 intersect transversally at ξ_0. Let Γ be a simple closed curve enclosing the point ξ_0, and let it be parametrized as a C^p (p ≥ 0) function γ in the variable t, so that the function γ : t ∈ R → Ω is C^p and 1-periodic. Let m = min(k, p), and let A_γ, B_γ be the C^m functions A(γ(t)), B(γ(t)), for all t ∈ R. Then, for all t ∈ R, the pencil (A_γ, B_γ) has an eigendecomposition A_γ(t)V_γ(t) = B_γ(t)V_γ(t)Λ_γ(t) such that Λ_γ(t + 1) = Λ_γ(t) and V_γ(t + 1) = −V_γ(t).
Proof. The proof follows closely the one used in [5, Theorem 2.2] for the symmetric eigenproblem, with the necessary changes due to dealing with the generalized eigenproblem, and also fixes some imprecisions in the proof of [5, Theorem 2.2]. Because of Theorem 3.1, and by hypothesis, ξ_0 is the unique root of F(x) in Ω. Moreover, under the assumption of ξ_0 being the only root of F in Ω, just like in the proof of Theorem 3.1 (see (25)), we can also rewrite the problem in the simpler form G(x) = 0. Further, 0 is a regular value for both functions a and b, and therefore G(x) = 0 continues to define smooth curves intersecting transversally at ξ_0; call them Γ̃_1 and Γ̃_2 (these are just rescalings and shiftings of the curves Γ_1 and Γ_2). Moreover, we let (Ã, B̃) be the pencil associated to these simpler functions. At this point, we will prove the asserted results by first showing that they hold true along a small circle C around ξ_0, and then showing that the same results hold when we continuously deform C into Γ.
Since Γ_1 and Γ_2 intersect transversally at ξ_0, we let C be a circle centered at ξ_0, of radius small enough that the circle crosses each of Γ_1 and Γ_2 at exactly two distinct points; see Figure 1.
Consider the pencil (Ã(ρ(t)), B̃(ρ(t))), t ∈ R, which is thus a smooth (and 1-periodic) pencil with distinct eigenvalues, so that its smooth eigenvalues µ_{1,2} in (26) (where all functions a, b, d are evaluated along C) necessarily satisfy µ_j(t + 1) = µ_j(t), j = 1, 2. The smooth eigenvectors of (Ã(ρ(t)), B̃(ρ(t))), call them W_ρ(t), are uniquely determined (for each t) up to sign. Call (u_1, u_2) the eigenvector relative to µ_2. From this, a direct computation (recall that, presently, all functions are computed along C) shows that u_1 (respectively, u_2) changes sign if and only if b goes through zero and a > 0 (respectively, a < 0). Therefore, each of the two functions u_1 and u_2 changes sign only once over any interval of length 1, and since no continuous function of period 1 can change sign only once over one period, it follows that u_1 and u_2 must be 2-periodic functions, and the periodicity assertions of the theorem follow, relative to the curve ρ(t), for the eigenvector function W. That is, along C, W has period 2. Finally, we note that the eigenvector function V has columns whose entries have the same signs as those of W (see the third line in the proof of Theorem 3.1), so that the periodicity assertion holds for V. The extension from the circle C to the curve Γ enclosing the point ξ_0 follows in the same way as was done in [5]; in particular, see the final part of the proof of Theorem 2.2 and Remark 2.5 there.
The assumption of transversality for the curves Γ_1 and Γ_2 at ξ_0 is generic within the class of smooth curves intersecting at a point. For this reason, we say that ξ_0 is a generic coalescing point of eigenvalues of (23) when Γ_1 and Γ_2 intersect transversally at ξ_0. As a consequence, within the class of C^k functions A, B, generically we will need two parameters to observe coalescing of the eigenvalues of (23); such coalescings will occur at isolated points in parameter space and persist as a phenomenon (though the parameter value will typically change) under generic perturbations. Using Theorem 3.2 and Theorem 2.8, we can characterize the case of a symmetric-definite pencil in R^{n×n} whose eigenvalues coalesce at a unique point ξ_0.
) and the pencil (Ã, B̃) has eigenvalues λ_k(x), λ_{k+1}(x) for each x ∈ R; (2) for all x ∈ R, write Ã. Assume that 0 is a regular value for the functions aγ − αc and bγ − βc, and define the function F and the curves Γ_1 and Γ_2 as in Theorem 3.2. Then, we call ξ_0 a generic coalescing point of eigenvalues in Ω if the curves Γ_1 and Γ_2 intersect transversally at ξ_0.
Remark 3.5. Arguing in a similar way to [5, Theorem 2.7], it is a (lengthy, but simple) computation to verify that Definition 3.4 is independent of the transformation V used to bring the pencil (A, B) to block-diagonal form.
Corollary 3.6. Having exactly a pair of equal eigenvalues of (4) is a codimension 2 phenomenon.
Proof. This is because coalescence is expressed by the two relations in (24), or, as seen in the proof of Theorem 3.1, by the two relations a = 0, b = 0. This, coupled with Definition 3.4, gives the claim.
As a consequence of its definition and of Corollary 3.6, for a coalescing point of eigenvalues of a two-parameter symmetric-definite pencil, being a generic coalescing point is a generic property.
Remark 3.7. Although the above reasoning on the codimension is done relative to matrices A and B that are "full", the stated codimension does not change when A and B are banded functions, both with bandwidth b ≥ 1. For example, this fact can be appreciated by pointing out that the pencil (A − λB)v = 0 has the same eigenvalues as the symmetric eigenproblem (Ã − λI)w = 0, with Ã = L^{-1}AL^{-T}, w = L^T v, and B = LL^T. Although L is banded when B is, the function Ã is full; hence the codimension of having a pair of equal eigenvalues is the same as that of a symmetric eigenproblem having a pair of equal eigenvalues, which is 2.
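The reduction in Remark 3.7 can be checked numerically. The sketch below (Python/NumPy; the banded random SPD matrices are our own illustrative construction) verifies that the Cholesky factor L of a banded B is itself banded, that Ã = L^{-1}AL^{-T} is in general full, and that Ã has the same eigenvalues as the pencil (A, B):

```python
import numpy as np

rng = np.random.default_rng(0)

def banded_spd(n, b):
    """Random symmetric positive definite matrix of bandwidth b (illustrative)."""
    M = np.zeros((n, n))
    for k in range(b + 1):
        M += np.diag(rng.standard_normal(n - k), -k)   # lower triangular, banded
    return M @ M.T + n * np.eye(n)                     # SPD, bandwidth b

n, b = 8, 1                                            # tridiagonal case
A, B = banded_spd(n, b), banded_spd(n, b)

L = np.linalg.cholesky(B)                              # L inherits B's bandwidth
Li = np.linalg.inv(L)
Atil = Li @ A @ Li.T                                   # full, in general

# the pencil (A, B) and the symmetric matrix Atil have the same eigenvalues
lam_pencil = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
lam_til = np.linalg.eigvalsh(Atil)
```

Entries of L outside the band of B are exactly zero, while L^{-1} (and hence Ã) fills in, which is the structural point behind the codimension argument.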
As already exemplified by Example 3.3, at a point where eigenvalues of the pencil coalesce, there is a complete loss of smoothness of the eigenvalues.In fact, the situation of Example 3.3 is fully general, as the next example shows.
Example 3.8. Without loss of generality (see the proof of Theorem 3.1), take the symmetric-definite pencil (A, B) with A = [a, b; b, −a] and B = [1, d; d, 1], and let ξ_0 be such that a(ξ_0) = b(ξ_0) = 0; because of transversality, the Jacobian of (a, b) at ξ_0 is invertible. The eigenvalues µ_{1,2} of the pencil are given by (26). Now, expand the function h(x) at ξ_0. A simple computation gives h(ξ_0) = 0, ∇h(ξ_0) = 0, and, at ξ_0: H_11 > 0, H_22 > 0, and det(H(ξ_0)) = (1 − d²)(b_x a_y − a_x b_y)², which is positive because of the previously remarked transversality. Therefore, H(ξ_0) is positive definite, and in the vicinity of ξ_0 the leading behavior of the eigenvalues is governed by ±‖z‖, where z = H^{1/2}(ξ_0)(x − ξ_0). As a consequence, the eigenvalue surfaces have a double cone structure at the coalescing point. This justifies calling the coalescing point a conical intersection, or CI for short.
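The conical structure can be observed numerically: along any ray from the CI, the eigenvalue gap grows (to leading order) linearly with the distance. A minimal sketch, using the 2 × 2 model pencil of this example with an illustrative value d = 0.3:

```python
import numpy as np

d = 0.3                                      # illustrative value, d^2 < 1
B = np.array([[1.0, d], [d, 1.0]])
Li = np.linalg.inv(np.linalg.cholesky(B))

def gap(x, y):
    """Eigenvalue gap of the model pencil ([[x, y], [y, -x]], B) at (x, y)."""
    A = np.array([[x, y], [y, -x]])
    mu = np.linalg.eigvalsh(Li @ A @ Li.T)   # generalized eigenvalues
    return mu[1] - mu[0]

# along any ray from the CI at the origin, the gap scales linearly in r
theta, r = 0.7, 1e-3
g1 = gap(r * np.cos(theta), r * np.sin(theta))
g2 = gap(2 * r * np.cos(theta), 2 * r * np.sin(theta))
```

Doubling the radius doubles the gap, and the gap vanishes exactly at the origin, which is the double cone seen in the expansion above.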
Obviously, there is a total loss of differentiability through a CI point. Recall that the application motivating our study is dimension reduction through a projection approach; CIs are then particularly bothersome, since the projection loses uniqueness at a CI point. For this reason, in this work we emphasize detecting parameter values where CIs occur; in particular, we give criteria that enable detection of generic CIs. The case of a (2, 2) pencil was dealt with in Theorem 3.1. The case of an (n, n) pencil with only a single generic coalescing of eigenvalues in Ω is dealt with in the next theorem.
Theorem 3.9.
Let Γ be a simple closed curve in Ω enclosing the point ξ 0 , and let it be parametrized as a C p (p ≥ 0) function γ in the variable t, so that the function γ : t ∈ R → Ω is C p and 1-periodic.Let m = min(k, p), and let A γ , B γ be the C m restrictions of A, B, to γ(t), t ∈ R.
Then, for all t ∈ R, the pencil (A_γ, B_γ) admits an eigendecomposition with periodicity properties analogous to those of Theorem 3.2.
Proof. The proof combines the block-diagonalization result of Theorem 2.8 with the (2, 2) case. So, we consider a rectangle R ⊆ Ω around ξ_0, and consider a B-orthogonal function V ∈ C^k(R, R^{n×n}) giving the block decomposition of Definition 3.4. Let C be a circle enclosing ξ_0 and contained in R, parametrized by a continuous 1-periodic function ρ, and let Ã_ρ(t) = Ã(ρ(t)), B̃_ρ(t) = B̃(ρ(t)), t ∈ R. Let V_ρ be the orthogonal function of Theorem 3.1 associated to the pencil (Ã_ρ, B̃_ρ), so that V_ρ(t + 1) = −V_ρ(t) for all t, and moreover V_ρ^T B̃_ρ V_ρ = I_2. Now, using the continuous function obtained by combining V and V_ρ, the result follows relative to the circle C. The argument that the same periodicity properties hold relative to the simple closed curve Γ follows similarly to what we did in the proof of [5, Theorem 2.8].
It is worth emphasizing that, for the eigenvectors associated to eigenvalues which do not coalesce inside Ω, we have v_γ(t + 1) = v_γ(t). In other words, a continuous eigendecomposition V along a simple closed curve Γ with no coalescing points inside (or on) it satisfies V(t + 1) = V(t). This consideration, coupled with the uniqueness up to sign of a B-orthogonal function eigendecomposing a pencil with distinct eigenvalues, gives the following.
Corollary 3.10. Let (A, B) be a C^k symmetric-positive definite pencil for all x ∈ Ω. Let Γ be a simple closed curve in Ω, parametrized by the C^p and 1-periodic function γ. Let m = min(k, p), and let (A_γ, B_γ) be the smooth pencil restricted to Γ. If there are no coalescing points inside Γ (nor on it), then any C^m eigendecomposition V of the pencil (A_γ, B_γ) satisfies V(t + 1) = V(t) for all t ∈ R.

3.2. Several generic coalescing points in Ω. Here we consider the case when several eigenvalues of the pencil coalesce inside a closed curve Γ. In line with our previous analysis of generic cases, we only consider the case when coalescing points are isolated and generic, as characterized next.

Definition 3.11. Consider the pencil (A, B), with A = A^T ∈ C^k(Ω, R^{n×n}), B = B^T ∈ C^k(Ω, R^{n×n}) ≻ 0. A parameter value ξ_0 ∈ Ω is called a generic coalescing point of eigenvalues if there is a pair of equal eigenvalues at ξ_0, no other pair of eigenvalues coalesces inside an open simply connected region Ω_0 ⊆ Ω containing ξ_0, and ξ_0 is a generic coalescing point of eigenvalues in Ω_0.
In these cases, we have the following result.
Theorem 3.12. Consider the pencil (A, B), where A = A^T ∈ C^k(Ω, R^{n×n}), B = B^T ∈ C^k(Ω, R^{n×n}) ≻ 0, and let λ_1(x) ≥ ... ≥ λ_n(x) be its continuous eigenvalues. Assume that, for every i = 1, ..., n − 1, the eigenvalues λ_i and λ_{i+1} coalesce at d_i distinct generic coalescing points in Ω, so that there are Σ_{i=1}^{n−1} d_i such points. Let Γ be a simple closed curve in Ω enclosing all of these distinct generic coalescing points of eigenvalues, and let it be parametrized as a C^p (p ≥ 0) function γ in the variable t, so that the function γ : t ∈ R → Ω is C^p and 1-periodic. Let m = min(k, p) and let A_γ and B_γ be the C^m restrictions of A and B to γ(t). Then there exists V diagonalizing the pencil (A_γ, B_γ), A_γ(t)V(t) = B_γ(t)V(t)Λ(t) for all t ∈ R, with Λ(t + 1) = Λ(t) and V(t + 1) = V(t)D, where D is a diagonal matrix of ±1 entries determined by the parities of the d_i. In particular, if D = I, then V is 1-periodic; otherwise, it is 2-periodic with minimal period 2.
Proof. Since the eigenvalues are distinct on Γ, we know that there is a C^m eigendecomposition V of the pencil (A_γ, B_γ), and that V is B_γ-orthogonal. The issue is to establish the periodicity of V. Our proof is by induction on the number of coalescing points. Because of Theorem 3.9, we know that the result is true for one coalescing point. So, we assume that the result holds for N − 1 distinct generic coalescing points, and we will show it for N distinct generic coalescing points; note that N = Σ_{i=1}^{n−1} d_i. Since the coalescing points are distinct, we can always separate one of them, call it ξ_N, from the other N − 1 points with a curve α not containing coalescing points, which stays inside the region bounded by Γ and joins two distinct points on Γ, y_0 = γ(t_0) and y_1 = γ(t_1), with t_0, t_1 ∈ [0, 1), so that α leaves ξ_N and all the other coalescing points ξ_i on opposite sides (see Figure 2). Let j, 1 ≤ j ≤ n − 1, be the index for which λ_j(ξ_N) = λ_{j+1}(ξ_N). Now consider the following construction. Take a smooth eigendecomposition of (A_γ, B_γ) along Γ, starting at y_0 and returning to it; the loop is done once and, to fix ideas, we traverse it in the counterclockwise direction. Denote the continuous matrix of eigenvectors of (A_γ, B_γ) at the beginning of this loop by V_0 and that at the end of the loop by V_1.
Since the curve α does not contain any coalescing point, the matrix V_1 would be the same as if, instead of following the curve Γ, we were to follow Γ_0 from y_0 to y_1, then go from y_1 to y_0 along α, back from y_0 to y_1 along α in the opposite direction, and then from y_1 to y_0 along Γ_1: (Γ_0 ∪ α) ∪ ((−α) ∪ Γ_1). Denote by V_2^1 the matrix of eigenvectors of (A_γ, B_γ) at the end of the first loop (Γ_0 ∪ α). Using the induction hypothesis along the closed curve Γ_0 ∪ α, we have V_2^1 = V_0 D̃, where D̃ = diag(D̃_11, ..., D̃_nn) is the diagonal sign matrix determined by the counts d̃_i, with d̃_i = d_i for all i ≠ j, and d̃_j = d_j − 1. Now, by looking at what happens on the second loop, by virtue of Theorem 3.9 all columns of V_2^1 coincide with those of V_1, except for the j-th and (j + 1)-st ones, which have changed sign. Putting everything together, we have V_0 = V_1 D with D as given in the statement of the theorem.
We do not study nongeneric coalescings, since they are not robust under perturbation; see [5] for considerations on these cases for the symmetric eigenproblem. With this in mind, in the final result we give, all coalescings should be thought of as generic CIs. This theorem gives a sufficient condition for the existence of CIs inside a certain region, and it is the result on which we base our numerical algorithm to detect CIs.
Theorem 3.13. Consider the pencil (A, B), where A = A^T ∈ C^k(Ω, R^{n×n}), B = B^T ∈ C^k(Ω, R^{n×n}) ≻ 0, and let λ_1(x) ≥ ... ≥ λ_n(x) be its continuous eigenvalues. Let Γ be a simple closed curve in Ω with no coalescing points for the eigenvalues on it, and let it be parametrized as a C^p (p ≥ 0) function γ in the variable t, so that the function γ : t ∈ R → Ω is C^p and 1-periodic. Let m = min(k, p), let A_γ and B_γ be the C^m restrictions of A and B to γ(t), and let V diagonalize the pencil (A_γ, B_γ). Let V_0 = V(0) and V_1 = V(1), and define D by V_1 = V_0 D. Next, let 2q be the number (necessarily even, since det(D) = 1; see Remarks 3.14) of indices i_1 < i_2 < ... < i_{2q} for which D_{i_k i_k} = −1, and group these indices in pairs (i_1, i_2), ..., (i_{2q−1}, i_{2q}). Then, λ_i and λ_{i+1} coalesced at least once inside the region encircled by Γ whenever i_{2j−1} ≤ i < i_{2j} for some j = 1, ..., q.
Remarks 3.14.Some comments are in order.
(i) Relative to generic CIs, suppose that from Theorem 3.13 we have a D with D_11 = −1 = D_44, all other D_ii being 1. Then, we expect that, inside the region encircled by Γ, the pairs (λ_1, λ_2), (λ_2, λ_3), and (λ_3, λ_4) have coalesced. Moreover, relative to generic CIs, and in the notation of Theorem 3.13, we can say that there is an odd number of CI points for λ_i and λ_{i+1} inside the region encircled by Γ. The reason why the number of indices with D_ii = −1 is even is the following: since A_γ(t)V(t) = B_γ(t)V(t)Λ(t) and V is continuous and invertible, its determinant is always positive or always negative; but V(1) = V(0)D, so we must have det(D) = 1.
(ii) Theorem 3.13 cannot distinguish whether, inside Γ, some pair of eigenvalues coalesced an even number of times or not at all.
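The detection criterion of Theorem 3.13 can be illustrated on a 2 × 2 model: continue a B-orthogonal eigenvector matrix around a closed loop and compare V at the start and the end of the loop. The sketch below (Python/NumPy; the model pencil ([[x, y], [y, −x]], B) with constant B is our own illustrative choice, with its only CI at the origin) recovers D = −I for a loop enclosing the CI and D = I otherwise:

```python
import numpy as np

d = 0.3                                   # illustrative value, d^2 < 1
B = np.array([[1.0, d], [d, 1.0]])
Li = np.linalg.inv(np.linalg.cholesky(B))

def eigvecs(x, y):
    """B-orthonormal eigenvectors (columns) of the model pencil at (x, y)."""
    Amat = np.array([[x, y], [y, -x]])
    _, Q = np.linalg.eigh(Li @ Amat @ Li.T)
    return Li.T @ Q                       # V = L^{-T} Q, so V^T B V = I

def loop_D(center, radius, steps=400):
    """Continue V around a circle and return round(diag(V(0)^T B V(1)))."""
    V0, Vprev = None, None
    for t in np.linspace(0.0, 1.0, steps + 1):
        x = center[0] + radius * np.cos(2 * np.pi * t)
        y = center[1] + radius * np.sin(2 * np.pi * t)
        V = eigvecs(x, y)
        if Vprev is not None:             # enforce continuity: fix column signs
            s = np.sign(np.diag(Vprev.T @ B @ V))
            s[s == 0] = 1.0
            V = V * s
        if V0 is None:
            V0 = V
        Vprev = V
    return np.round(np.diag(V0.T @ B @ Vprev))

D_enclosing = loop_D((0.0, 0.0), 1.0)     # loop around the CI at the origin
D_empty = loop_D((3.0, 0.0), 1.0)         # loop enclosing no CI
```

This is a crude fixed-step version of the idea; the actual procedure of Section 4 adapts the stepsize and handles veering.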
A final remark pertains to the case when A and B are both tridiagonal. This case is quite difficult to handle for the algorithms of Section 4 that locate coalescing eigenvalues. The reasons for the difficulties have already been explained in our work on the symmetric eigenproblem; see the discussion on veering and mingap in [4, Section 1.2]. Because of this, in [4, Section 2.3] we devised ad-hoc techniques for the tridiagonal case, based upon the fact that, for a symmetric tridiagonal matrix with off-diagonal entries b_i, a necessary condition to have repeated eigenvalues is that b_i = 0 for some i. Unfortunately, in the case of a tridiagonal pencil (A, B), there is no such simple necessary condition for having repeated eigenvalues. For these reasons, the case of A and B tridiagonal is left open for future study, and the results in Section 5 do not include the tridiagonal case.

Algorithms to locate coalescing eigenvalues
The procedure we implemented to locate coalescing generalized eigenvalues is based on Theorem 3.13 and on the smooth generalized eigendecomposition A(t)V(t) = B(t)V(t)Λ(t) along 1-d paths, as stated in Theorem 2.9. Our goal is to obtain a sampling of these smooth V and Λ at some values of t. Given a 1-parameter pencil (A(t), B(t)), for t ∈ [0, 1], with A = A^T ∈ C^k([0, 1], R^{n×n}) and B = B^T ∈ C^k([0, 1], R^{n×n}) ≻ 0, we can assume that the eigenvalues are distinct for all t ∈ [0, 1]: λ_1(t) > λ_2(t) > ... > λ_n(t).
To compute Λ = diag(λ_1, λ_2, ..., λ_n) and V, we used a continuation procedure of predictor-corrector type, similar to the one developed in [4] to obtain a sampling of the smooth ordered Schur decomposition for symmetric 1-d functions. For completeness, we briefly describe here the step of the new procedure from a point t_j to the next point t_{j+1}, further remarking on the differences between the present procedure and the one in [4], to which we refer for a discussion of some algorithmic choices.
Given an ordered decomposition at t_j, A(t_j)V(t_j) = B(t_j)V(t_j)Λ(t_j), and a stepsize h, we want the decomposition at t_{j+1} = t_j + h: A(t_{j+1})V(t_{j+1}) = B(t_{j+1})V(t_{j+1})Λ(t_{j+1}), where the factors V(t_{j+1}) and Λ(t_{j+1}) lie along the smooth path from t_j to t_{j+1}. Obtaining Λ(t_{j+1}) is easy with canned software, like eig in Matlab: since the eigenvalues are distinct, we simply keep them ordered. Further, a B-orthogonal matrix V_{j+1} such that A(t_{j+1})V_{j+1} = B(t_{j+1})V_{j+1}Λ(t_{j+1}) can also be obtained by standard linear algebra software, like the eig Matlab command, plus re-ordering. Then, recalling Corollary 2.12, we know that V(t_{j+1}) = V_{j+1}S, where S is a sign matrix, S = diag(s_1, ..., s_n), s_i = ±1, i = 1, ..., n; that is, V(t_{j+1}) can be recovered by correcting the signs of the columns of V_{j+1}. Specifically, enforcing minimum variation with respect to a suitably predicted factor V_pred, we set S equal to the sign matrix which minimizes ‖S V_{j+1}^T B(t_{j+1}) V_pred − I‖_F. Despite the overall simplicity of the basic step we just described, if the stepsize h is too large with respect to the variation of the factors, predicting the correct signs of the eigenvectors to follow the correct path may be a hard task. This difficulty is typically encountered when there is a pair of close eigenvalues, as happens in the presence of a veering phenomenon. In this case, smoothness could be maintained only by using very small stepsizes, since the variation of the eigenvectors is inversely proportional to the difference between the eigenvalues (see the differential equations (20)). Therefore, we proceed in two different ways, depending on the distance between consecutive eigenvalues. We say that a pair of eigenvalues (λ_i, λ_{i+1}) is close to veering at t_j + h if their distance falls below a tolerance toldist; otherwise, the eigenvalues are considered well separated. At the starting point t_j, the eigenvalues are assumed to be well separated.
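The sign-correction step admits a closed form: writing M = V_{j+1}^T B(t_{j+1}) V_pred, one has ‖SM − I‖_F² = const − 2 Σ_i s_i M_ii, so the minimizing choice is s_i = sign(M_ii). A minimal sketch (Python/NumPy, with an artificial V_pred for illustration):

```python
import numpy as np

def sign_correction(Vnew, B, Vpred):
    """Choose S = diag(s_1, ..., s_n), s_i = +-1, minimizing ||S M - I||_F
    with M = Vnew^T B Vpred: the objective equals const - 2 * sum_i s_i * M_ii,
    so the minimizer is s_i = sign(M_ii)."""
    s = np.sign(np.diag(Vnew.T @ B @ Vpred))
    s[s == 0] = 1.0                 # guard the (nongeneric) degenerate case
    return Vnew * s                 # V(t_{j+1}) = Vnew S: columns flipped

# tiny illustration: the predictor has the second column flipped
B = np.eye(3)
Vnew = np.eye(3)
Vpred = np.diag([1.0, -1.0, 1.0])
Vcorr = sign_correction(Vnew, B, Vpred)
```

Because the objective decouples over the columns, the correction costs a single matrix product per step.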
Case 1. Some pair of eigenvalues is close to veering at t_j + h.
In practice, during a veering, close eigenvalues may become numerically indistinguishable, and the corresponding B-orthogonal eigenvectors change very rapidly within a very small interval, outside of which the eigenvalues are again well separated. To overcome this critical veering zone, we proceed by computing a smooth block-diagonal eigendecomposition (see Theorem 2.8), see (27), where close eigenvalues are grouped into one block, so that the eigenvalues of each Λ_i are well separated from the others. We do not expect, nor consider, the nongeneric case of three or more close eigenvalues; hence each Λ_i(t) is either an eigenvalue or a 2 × 2 block. Using the Cholesky factorization B = LL^T, we first rewrite (27) accordingly. Then, to compute the smooth orthogonal transformation Q_B and block-diagonal Λ_B, we use a procedure for the continuation of invariant subspaces, which is based on Riccati transformations (see [3] and [4] for details of this technique). Starting at t_j, we continue with this standard block eigendecomposition until all eigenvalues are again well separated; this happens at some value t_f, and we set t_{j+1} = t_f. A key issue is how to recover the complete smooth eigendecomposition at t_{j+1}. Indeed, Theorem 2.8 guarantees the existence of the decomposition (27) but not its uniqueness, as can be easily verified by rotating the columns of V_B (or Q_B) corresponding to a 2 × 2 diagonal block. In [4], to which we refer for the details, we show how these subspaces can be rotated to obtain an accurate predicted factor V_pred, which allows us to correct the signs of the columns of V_{j+1} and continue the complete smooth eigendecomposition at t_{j+1} + h. (In our experiments we have used toldist = 10^6 · eps ≈ 10^{−10}.)
Case 2. All eigenvalues are well separated.
In this case, through our predictor-corrector strategy, the stepsize is adapted based on the variations of both eigenvalues and eigenvectors. The following variation parameters are used both to update the stepsize and to accept or reject a step (see Steps 4 and 5 in Algorithm 4.1 below); accurate predictors are hence mandatory for the efficiency of the overall procedure. We obtain them by taking an Euler step in the differential equations (20), where the derivatives Λ̇(t_j) ≈ Λ̇_j and V̇(t_j) ≈ V̇_j are approximated by replacing Ȧ(t_j) with (A(t_{j+1}) − A(t_j))/h and Ḃ(t_j) with (B(t_{j+1}) − B(t_j))/h, with P = (I − B V)/2 and H the skew-symmetric matrix determined by (20). We remark that ρ_V, Λ_pred and V_pred reduce to the corresponding quantities we used in [4] for the standard symmetric eigenvalue problem, where B = I and V is orthogonal. Further, after we have reached t_{j+1} with a successful step, we compute predicted eigenvalues of secant type. Step 7 in Algorithm 4.1 uses these secant predictions, and its rationale is that if λ_i^sec < λ_{i+1}^sec for some i (recall that we must have λ_i > λ_{i+1}), then the new step from t_{j+1} with stepsize h is likely to fail, and therefore h is safely reduced as in (30).
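The rationale of the secant test can be sketched as follows (a hedged illustration of the idea only; the exact formulas of Algorithm 4.1 and of (30) are not reproduced here):

```python
import numpy as np

def secant_predict(lam_prev, lam_curr, h_prev, h_next):
    """Linear (secant) extrapolation of the ordered eigenvalues from
    t_j and t_{j+1} to t_{j+1} + h_next."""
    return lam_curr + h_next * (lam_curr - lam_prev) / h_prev

def step_looks_safe(lam_prev, lam_curr, h_prev, h_next):
    """Suggest reducing h when the extrapolated eigenvalues violate
    the required ordering lam_1 > lam_2 > ... > lam_n."""
    lam_sec = secant_predict(lam_prev, lam_curr, h_prev, h_next)
    return bool(np.all(np.diff(lam_sec) < 0.0))

lam_j = np.array([3.0, 1.0])      # ordered eigenvalues at t_j
lam_j1 = np.array([2.5, 1.4])     # at t_{j+1}: the gap is shrinking
```

With a small next stepsize the extrapolated ordering is preserved, while a large one predicts a crossing and flags the step as likely to fail.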
Remark 4.1. Observe that the smooth factors Λ and V could also be obtained via a smooth Schur decomposition Ã(t) = Q(t)Λ(t)Q^T(t) of the symmetric matrix Ã = L^{−1}AL^{−T}, with B = LL^T, by using the procedure developed in [4] and setting V = L^{−T}Q. However, Algorithm 4.1, which is tailored to the original generalized eigenproblem, turns out to be much more efficient for general A and B; e.g., in our experiments in Matlab the main cost is clearly in Step 2 of the algorithm, which we resolve with a call to eig, and using eig(A, B) costs less than half of the execution time needed to form Ã and use eig(Ã).
Above, N(0, 1) is the normal distribution with zero mean and variance 1, while Γ(a_i, 1) is the gamma distribution with shape a_i and rate 1. Then, for all (x, y) in R², we define the matrix functions A(x, y) and B(x, y). We point out that: a) all matrices L_{A,k} and L_{B,k} are strictly lower triangular and have bandwidth b, while D_A and D_B are diagonal; therefore, both A(x, y) and B(x, y) have bandwidth b; b) A(x, y) = A(x, y)^T and B(x, y) = B(x, y)^T are positive definite for all (x, y); c) the nontrivial entries of all matrices L_{A,k}, L_{B,k}, D_A and D_B are independent random variables; more precisely, A(x, y) ∈ SG^+ for all (x, y), and the same is true for B(x, y); d) the probability density functions of the diagonal entries of D_A and D_B depend on their position along the diagonal.
In our experiments, we have fixed five values of the dispersion parameter, δ = 0.05, 0.25, 0.45, 0.65, 0.85, and considered four possible bandwidths, b = 3, 4, 5, full. For each combination of δ and b, and for dimensions n = 50, 60, ..., 120, we have constructed 10 realizations of matrix pencils A(x, y) − λB(x, y) and performed a search for conical intersections of the pencil over the domain Ω = [0, π] × [0, 2π]. The detection strategy consisted of subdividing the domain Ω into 64 × 128 square boxes and computing a smooth generalized eigendecomposition of the pencil around the perimeter of each box. The presence of conical intersections inside each box is betrayed by sign changes of the columns of the smooth B-orthogonal matrix that diagonalizes the pencil; see Theorem 3.13 and the subsequent remarks.
Our purpose was to fit the data with the power law (31), # of CIs = c · dimension^p, averaging the number of conical intersections over the 10 realizations. The outcome of the experiments is illustrated in Figure 3 and Table 1. Figure 3 shows the superposition of the 20 linear regression lines obtained by computing a least squares best fit over the logarithm of the data, so that p and c in (31) represent, respectively, the slope and intercept of the lines. The figure reveals 4 groups of 5 lines each, where lines in the same group share the value of the bandwidth. It is evident that the dispersion parameter δ has a negligible effect on the exponent p, and mostly also on the factor c (with the exception of the full bandwidth case). In contrast, the bandwidth has a significant effect on the exponent p, which increases from p ≈ 2 in the full bandwidth case to p ≈ 2.6 in the heptadiagonal case. A similar study was conducted in [4] for the GOE (Gaussian Orthogonal Ensemble) model. Finally, in Figure 4 we give an account of the computational time required by our experiments. The barplot indicates, for all values of bandwidth and dimension we have considered, the average elapsed time of each computation (averaged over all realizations and all values of δ). The data are normalized with respect to the computation that required the longest time, which corresponds to the heptadiagonal case and largest dimension n = 120 and took about 12.5 hours. By contrast, the fastest computation corresponds to the full bandwidth case and smallest dimension n = 50 and took about 15 minutes. The figure clearly indicates that the computational effort is directly proportional to the number of conical intersections. This comes as no surprise since, in the vicinity of each conical intersection, the eigenvectors exhibit rapid variations that require severe restrictions on the stepsize of our continuation algorithm.
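The fit (31) is ordinary least squares in log-log coordinates. A minimal sketch (Python/NumPy, on synthetic counts generated from a known power law, not our experimental data):

```python
import numpy as np

# synthetic CI counts following a known power law (illustrative data only,
# not the experimental values of Table 1)
dims = np.array([50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 110.0, 120.0])
cis = 0.01 * dims**2.3

# least squares fit of  #CIs = c * dimension^p  in log-log coordinates:
# p is the slope and log(c) is the intercept of the regression line
p, logc = np.polyfit(np.log(dims), np.log(cis), 1)
c = np.exp(logc)
```

On exact power-law data the regression recovers the generating exponent and factor; on the experimental counts it yields the values reported in Table 1.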
We were not able to perform experiments for the tridiagonal and pentadiagonal cases. This is due to the fact that, as the bandwidth gets critically small and the dimension sufficiently large, a significant number of sharp variations of the eigenvectors occurs within intervals of size smaller than machine precision, ruling out our (and, actually, any) numerical continuation solver. These difficulties were already encountered (and explained) in [4] (see Remark 3.1 therein). See also the considerations at the end of Section 3 of this work.
All computations have been performed on the "Partnership for an Advanced Computing Environment" (PACE), the high performance computing infrastructure at the Georgia Institute of Technology, Atlanta, Georgia, USA.
Table 1. The table shows (left to right): bandwidth, dimension of the problem, number of conical intersections detected (average over 10 realizations from the SG^+ ensemble), and the outcome of the log-log linear least squares regression (including the root mean square deviation). The data refer to the case δ = 0.45.

Conclusions
In this work we have considered symmetric positive definite pencils, A(x) − λB(x), where A and B are symmetric matrix valued functions in R^{n×n}, smoothly depending on parameters x, and B is positive definite. We gave general smoothness results for the one-parameter and two-parameter cases, and gave results characterizing the (generic) case of coalescing eigenvalues in the two-parameter case, the so-called conical intersections. We further presented, justified, and implemented new algorithms to locate parameter values where there are conical intersections. These algorithms were used to perform a statistical study of the number of conical intersections for pencils of several bandwidths. Several issues still require a more ad-hoc study. For example, the case of both A and B tridiagonal (e.g., see [15]) is still not resolved in a satisfactory way for the generalized eigenvalue problem; perhaps the technique of [11] can be adapted to the parameter-dependent case examined here. Other problems also remain to be examined, especially from the algorithmic point of view, such as the case of a large number of equal eigenvalues seen in some structural engineering works (e.g., see [12]).

Figure 2. Figure for the proof of Theorem 3.12.

Figure 3. For each value of b = 3, 4, 5, full and δ = 0.05, 0.25, 0.45, 0.65, 0.85, we have performed a log-log linear least squares regression. The figure shows the best fit lines for all combinations of b and δ, and also the average exponent (slope after the log-log transformation) p for each value of b.

Figure 4. Normalized average elapsed times, for all values of bandwidth and dimension considered.

With the same notation as in Lemma 2.4, the unique positive definite square root B^{1/2} of the positive definite function B ∈ C^k(Ω, R^{n×n}) is also a C^k function. Proof. Let S = B^{1/2} and use that S² = B. We know that S(x) is continuous and that B(x) is smooth. Then, the first partial derivatives of S are defined by formally differentiating the relation S² = B.

Table 1 shows, as an example, a synopsis of the outcome of our experiments for the case δ = 0.45. The table reports on the number of conical intersections that have been detected and on the results of the least squares best fits. Table 4 in [4] shows, for the GOE model, a faster growth of the exponent p as the bandwidth is decreased, compared to what we have observed here for the SG^+ model (for the SG^+ model we average over all values of δ, since the variations for different values of δ are negligible).