Wilkinson’s Bus: Weak Condition Numbers, with an Application to Singular Polynomial Eigenproblems

We propose a new approach to the theory of conditioning for numerical analysis problems for which both classical and stochastic perturbation theories fail to predict the observed accuracy of computed solutions. To motivate our ideas, we present examples of problems that are discontinuous at a given input and even have infinite stochastic condition number, but where the solution is still computed to machine precision without relying on structured algorithms. Stimulated by the failure of classical and stochastic perturbation theory in capturing such phenomena, we define and analyse a weak worst-case and a weak stochastic condition number. This new theory is a more powerful predictor of the accuracy of computations than existing tools, especially when the worst-case and the expected sensitivity of a problem to perturbations of the input are not finite. We apply our analysis to the computation of simple eigenvalues of matrix polynomials, including the more difficult case of singular matrix polynomials. In addition, we show how the weak condition numbers can be estimated in practice.


Introduction
The condition number of a computational problem measures the sensitivity of an output with respect to perturbations in the input. If the input-output relationship can be described by a differentiable function f near the input, then the condition number is the norm of the derivative of f. In the case of solving systems of linear equations, the idea of conditioning dates back at least to the work of von Neumann and Goldstine [47] and Turing [45], who coined the term. For an algorithm computing f in finite precision arithmetic, the importance of the condition number κ stems from the "rule of thumb" popularized by N. J. Higham [29, §1.6],

forward error ≲ κ · (backward error).
The backward error is small if the algorithm computes the exact value of f at a nearby input, and a small condition number would certify that this is enough to get a small overall error. Higham's rule of thumb comes from a first order expansion, and in practice it often holds as an approximate equality and is valuable for practitioners who wish to predict the accuracy of numerical computations. Suppose that a solution is computed with, say, a backward error equal to 10⁻¹⁶. If κ = 10², then one would trust the computed value to have (at least) 14 meaningful decimal digits. The condition number can formally still be defined when f is not differentiable, though it may not be finite. If f is not locally Lipschitz continuous at an input, then the condition number is +∞; a situation clearly beyond the applicability of Higham's rule. Inputs at which the function f is not continuous are usually referred to as ill-posed. Based on the worst-case sensitivity one would usually only expect a handful of correct digits when evaluating a function at such an input, and quite possibly none.¹
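The rule of thumb is easy to observe experimentally. The sketch below (not from the paper; my own illustration, assuming NumPy is available) solves an ill-conditioned linear system and compares the forward error with κ times the Rigal-Gaches normwise backward error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# An ill-conditioned matrix with prescribed singular values 1, ..., 1e-6.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -6, n)) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true

x = np.linalg.solve(A, b)

kappa = np.linalg.cond(A)   # roughly 1e6 by construction
forward = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
# Normwise backward error of the computed solution (Rigal-Gaches formula).
backward = np.linalg.norm(b - A @ x) / (
    np.linalg.norm(A, 2) * np.linalg.norm(x) + np.linalg.norm(b))

# Rule of thumb: the forward error is at most about kappa * backward error.
print(kappa, backward, forward)
assert forward <= 10 * kappa * backward
```

With a backward error near the unit roundoff and κ ≈ 10⁶, roughly ten of the sixteen digits survive, exactly as the rule predicts.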
On the other hand, a small condition number is not a necessary condition for a small forward-backward error ratio: it is not inconceivable that certain ill-conditioned or even ill-posed problems can be solved accurately. Consider, for example, the problem of computing an eigenvalue of the 4 × 4 matrix pencil (linear matrix polynomial) L(x) in (1): this is a singular matrix pencil (its determinant is identically zero) whose only finite eigenvalue is simple and equal to 1 (see Section 3 for the definition of an eigenvalue of a singular matrix polynomial and other relevant terminology). The input is L(x) and the solution is 1. If the QZ algorithm [36], which is the standard eigensolver for pencils, is called via MATLAB's command eig², the output is:

>> eig(L0, -L1)

ans =

  -138.1824366539536
    -0.674131242894470
     1.000000000000000
     0.444114486065683

All computed eigenvalues except the third are complete rubbish. This is not surprising: singular pencils form a proper Zariski-closed set in the space of matrix pencils of a fixed format, and it is unreasonable to expect that an unstructured algorithm would detect that the input is singular and return only one eigenvalue. Instead, being backward stable, QZ computes the eigenvalues of some nearby matrix pencil, and almost all nearby pencils have 4 eigenvalues. On the other hand, the accuracy of the approximation of the genuine eigenvalue 1 is quite remarkable. Indeed, the condition number of the problem that maps L(x) to the exact eigenvalue 1 is infinite, because the map from matrix pencils to their eigenvalues is discontinuous at any matrix pencil whose determinant is identically zero. To make matters worse, there exist plenty of matrix pencils arbitrarily close to L(x) whose eigenvalues are all nowhere near 1. For example, for any ε > 0 one can perturb L(x) to a nearby pencil whose characteristic polynomial is ε²(γ₃ − x)(x³ + γ₂x² + γ₁x + γ₀) and which therefore, by an arbitrary choice of the parameters γᵢ, can have eigenvalues literally anywhere. Yet, unaware of this worrying caveat, the QZ algorithm computes an excellent approximation of the exact eigenvalue: 16 correct digits! This example has not been carefully cherry-picked: readers are encouraged to experiment with any singular input in order to convince themselves that QZ often computes³ accurately the (simple) eigenvalues of singular pencils, or singular matrix polynomials, in spite of this being a discontinuous problem. See also [32] for more examples and a discussion of applications. Although the worst-case sensitivity to perturbations is indeed infinite, the raison d'être of the condition number, which is to predict the accuracy of computations on a computer, is not fulfilled. Why does the QZ algorithm accurately compute the eigenvalue, when the map f describing this computational problem is not even continuous? Two natural attempts at explaining this phenomenon would be to look at structured condition numbers and/or average-case (stochastic) perturbation theory.

¹ The number of accurate digits that can be expected when the problem is continuous but not locally Lipschitz continuous requires a careful discussion. It depends on the unit roundoff u, on the exact nature of the pathology of f, and on D. For example, computing the eigenvalues of a matrix similar to an n×n Jordan block for n > 1 is Hölder continuous with exponent 1/n but not Lipschitz continuous. Usually this translates into expecting only about u^(1/n) accuracy, up to constants, when working in finite precision arithmetic. For a more complete discussion, see [28], where pathological examples of derogatory matrices are constructed whose eigenvalues are not sensitive to finite precision computations (for fixed u), or also [33, §3.3]. For discontinuous f, however, these subtleties alone cannot justify any accurately computed decimal digits.

² MATLAB R2016a on Ubuntu 16.04.
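The pencil from (1) is not reproduced in this excerpt, but the experiment is easy to replay on a stand-in. The sketch below (my own construction, not the authors'; assuming NumPy and SciPy) builds a dense 3 × 3 singular pencil of normal rank 2 whose only finite eigenvalue is 1, and runs QZ on it via scipy.linalg.eig:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1234)

# A 3x3 singular pencil in "canonical" form: det(D0 + x*D1) is identically
# zero (normal rank 2), and the only finite eigenvalue is x = 1, where the
# rank drops from 2 to 1.
D0 = np.diag([-1.0, 1.0, 0.0])
D1 = np.diag([1.0, 0.0, 0.0])

# Hide the structure behind a random orthogonal equivalence Q * D(x) * Z,
# which preserves the eigenvalues but makes the pencil dense.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Z, _ = np.linalg.qr(rng.standard_normal((3, 3)))
L0, L1 = Q @ D0 @ Z, Q @ D1 @ Z

# MATLAB's eig(L0, -L1) finds x with (L0 + x*L1) v = 0; same call in SciPy.
evals = eig(L0, -L1, right=False)
finite = evals[np.isfinite(evals)]
print(finite)

# The exact problem is discontinuous at this singular input, yet QZ
# typically recovers the simple eigenvalue 1 to near machine precision.
assert np.min(np.abs(finite - 1.0)) < 1e-6
```

Among the computed values one is excellent and the rest are meaningless, mirroring the 4 × 4 example above.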
1. An algorithm is structured if it computes the exact solution to a perturbed input, where the perturbations respect some special features of the input: for example singular, of rank 3, triangular, or with precisely one eigenvalue. The vanilla implementation of QZ used here is unstructured in the sense that it does not preserve any of the structures that would explain the strange case of the algorithm that computes an apparently uncomputable eigenvalue.⁴ It does, however, preserve the real structure. In other words, if the input is real, QZ computes the eigenvalues of a nearby real pencil. Yet, by taking real γᵢ in the example above, it is clear that there are real pencils arbitrarily close to L(x) whose eigenvalues are all arbitrarily far away from 1.

2. The classical condition number is based on the worst-case perturbation of an input; as discussed in [29, §2.8], this approach tends to be overly pessimistic in practice. Numerical analysis pioneer James Wilkinson, in order to illustrate that Gaussian elimination is unstable in theory but that in practice its instability is only observed by mathematicians looking for it, is reported to have said [43]: "Anyone that unlucky has already been run over by a bus." In other words: in Wilkinson's experience, the likelihood of seeing the admittedly terrifying worst case appeared to be very small, and therefore Wilkinson believed that being afraid of the potential catastrophic instability of Gaussian elimination is an irrational attitude. Based on this experience, Weiss et al.
[48] and Stewart [41] proposed to study the effect of perturbations on average, as opposed to worst-case; see [29, §2.8] for more references on work addressing the stochastic analysis of roundoff errors. This idea was later formalized and developed further by Armentano [4]. This approach gives some hope of explaining the example above, because it is known that the set of perturbations responsible for the discontinuity of f has measure zero [18]. However, this does not imply that on average perturbations are not harmful. In fact, as we will see, the stochastic condition number for the example above (or for similar problems) is still infinite! Average-case perturbation analysis, at least in the form in which it has been used so far, is still unable to solve the puzzle.
While neither structured nor average-case perturbation theory can explain the phenomenon observed above, Wilkinson's colourful quote does contain a hint on how to proceed: shift attention from average-case analysis of perturbations to bounding rare events. We will get back to the matrix pencil (1) in Example 5.3, where we show that our new theory does explain why this problem is solved to high accuracy using standard backward stable algorithms.
In summary, the main contributions of this paper are:

1. a new species of 'weak' condition numbers, which we call the weak worst-case condition number and the weak stochastic condition number, that give a more accurate description of the perturbation behaviour of a computational map (Section 2);

2. a precise probabilistic analysis of the sensitivity of the problem of computing simple eigenvalues of singular matrix polynomials (Sections 4 and 5);

3. an illustration of the advantages of the new concept by demonstrating that, unlike both classical and stochastic condition numbers, the weak condition numbers are able to explain why the apparently uncomputable eigenvalues of singular matrix polynomials, such as the eigenvalue 1 in the example above, can be computed with remarkable accuracy (Example 5.3);

4. a concrete method for bounding the weak condition numbers for the eigenvalues of singular matrix polynomials (Section 6).

Related work
Rounding errors, and hence the perturbations considered, are not random [29, §1.17]. Nevertheless, the observation that the computed bounds on rounding errors are overly pessimistic has led to the study of statistical and probabilistic models for rounding errors. An early example of such a statistical analysis is Goldstine and von Neumann [27]; see [29, §2.8] and the references therein for more background. Recently, Higham and Mary [31] obtained probabilistic rounding error bounds for a wide variety of algorithms in linear algebra. In particular, they give a rigorous foundation to Wilkinson's rule of thumb, which states that constants in rounding error bounds can be safely replaced by their square roots. The idea of using an average, rather than a supremum, in the definition of conditioning was introduced by Weiss et al. [48] in the context of the (matrix) condition number of solving systems of linear equations, and a more comprehensive stochastic perturbation theory was developed by G. W. Stewart [41]. In [4], Armentano introduced the concept of a smooth condition number and showed that it can be related to the worst-case condition. His work uses a geometric theory of conditioning and does not extend to singular problems.
The line of work on random perturbations is not to be confused with the probabilistic analysis of condition numbers, where a condition number is a given function and the distribution of this function is studied over the space of inputs (see [13] and the references therein). Nevertheless, our work is inspired by the idea of weak average-case analysis [3] that was developed in this framework. Weak average-case analysis is based on the observation, which has origins in the work of Smale [40] and Kostlan [34], that discarding a small set from the input space can dramatically improve the expected value of a condition number, shifting the focus away from the average case and towards bounding the probability of rare events. Our contribution is to apply this line of thought to the study of random perturbations instead of random inputs. However, we stress that we do not seek to model the distribution of perturbations. The aim is to formally quantify statements such as "the set of bad perturbations is small compared to the set of good perturbations". In other words, the (non-random) accumulation of rounding errors in a procedure would need a very good reason to give rise to a badly perturbed problem.
The conditioning of regular polynomial eigenvalue problems has been studied in detail by Tisseur [42] and by Dedieu and Tisseur in a homogeneous setting [15]. A probabilistic analysis of condition numbers (for random inputs) for such problems was given by Armentano and Beltrán [5] over the complex numbers and by Beltrán and Kozhasov [7] over the real numbers. Their work studies the distribution of the condition number on the whole space of inputs, and such an analysis only considers the condition number of regular matrix polynomials. A perturbation theory for singular polynomial eigenvalue problems was developed by De Terán and Dopico [14], and our work makes extensive use of their results. A method to solve singular generalized eigenvalue problems with plain QZ, based on applying a certain perturbation to them, is proposed in [32] (see also the references therein); note that our work goes beyond this, by showing how to estimate the weak condition number that can guarantee, often with overwhelming probability, that QZ will do fine even without any preliminary perturbation step.

Organization of the paper
The paper is organized as follows. In Section 2 we review the rigorous definitions of the worst-case (von Neumann-Turing) condition number and the stochastic framework (Weiss et al., Stewart, Armentano), and comment on their advantages and limitations. We then define the weak condition numbers as quantiles and argue that, even when Wilkinson's metaphorical bus hits von Neumann-Turing's and Armentano-Stewart's theories of conditioning, ours comes well endowed with powerful dodging skills. In Section 3 we introduce the perturbation theory of singular matrix polynomials, along with the definitions of simple eigenvalues and eigenvectors. We define the input-output map underlying our case study and introduce the directional sensitivity of such problems. In Section 4, which forms the core of this paper, we carry out a detailed analysis of the probability distribution of the directional sensitivity of the problems introduced in Section 3. In Section 5, we translate the probabilistic results from Section 4 into the language of weak condition numbers and prove the main results, Theorem 5.1 and Theorem 5.2. In Section 6 we sketch how our new condition numbers can be estimated in practice. Along the way we derive a simple concentration bound on the directional sensitivity of regular polynomial eigenvalue problems. Finally, in Section 7, we give some concluding remarks and discuss potential further applications.

Theories of conditioning
For our purposes, a computational problem is a map between normed vector spaces⁵, and we will denote the (possibly different) norms on each of these spaces by ‖·‖. Following the remark on [30, p. 56], for simplicity of exposition in this paper we focus on absolute, as opposed to relative, condition numbers. The condition numbers considered depend on the map f and an input D ∈ V.

As we are only concerned with the condition of a fixed computational problem at a fixed input D, in what follows we omit reference to f and D in the notation.

Definition 2.1 (Worst-case condition number) The worst-case condition number of f at D is

κ := lim_{ε→0⁺} sup_{0<‖E‖≤ε} ‖f(D + E) − f(D)‖ / ‖E‖.
If f is Fréchet differentiable at D, then this definition is equivalent to the operator norm of the Fréchet derivative of f. However, Definition 2.1 also applies (and can even be finite) when f is not differentiable. In complexity theory [8, 13], an elegant geometric definition of condition number is often used, which is essentially equivalent to Definition 2.1 under certain assumptions (which include smoothness).
The following definition is loosely derived from the work of Stewart [41] and Armentano [4], based on earlier work by Weiss et al. [48]. In what follows, we use the terminology X ∼ D for a random variable with distribution D, and E_{X∼D}[·] for the expectation with respect to this distribution.

Definition 2.2 (Stochastic condition number) Let E be a V-valued random variable with distribution D and assume that E_{E∼D}[E] = 0 and E_{E∼D}[‖E‖²] = 1. Assume that the function f is measurable. Then the stochastic condition number is

κ_s := lim_{ε→0⁺} E_{E∼D} [ ‖f(D + εE) − f(D)‖ / ε ].

Remark 2.3 We note in passing that Definition 2.2 depends on the choice of a measure D. This measure is a parameter that the interested mathematician should choose as convenient; this is of course not particularly different from the freedom one is given in picking a norm. In fact, it is often convenient to combine these two choices, using a distribution that is invariant with respect to a given norm. Typical choices that emphasize invariance are the uniform (on a sphere) or Gaussian distributions, and the Bombieri-Weyl inner product when dealing with homogeneous multivariate polynomials [13, §16.1]. Technically speaking, the distribution is on the space of perturbations, rather than the space of inputs.
If f is differentiable at D and V is finite dimensional, then it was observed by Armentano [4] that the stochastic condition number can be related to the worst-case one. We illustrate this relation in a simple but instructive special case. Consider the setting⁶ where f is differentiable at D and the perturbation E is uniformly distributed on the unit sphere; a short computation, in which (a) uses the relation between the norm of the derivative and the worst-case condition number, and (b) uses the orthogonal invariance of the uniform distribution on the sphere, yields the bound (2) relating the stochastic condition number to the worst-case one. As we will see in the case of singular polynomial eigenvalue problems with complex perturbations, the bound (2) does not hold in general, as the condition number can be infinite while the stochastic condition number is bounded. However, sometimes it can happen that the stochastic condition number is also infinite, because the "directional sensitivity" (see Definition 2.4) is not an integrable function. For example, for the problem of computing the eigenvalue of the singular pencil L(x) in the introduction, in spite of the fact that all but a proper Zariski-closed set of perturbations behave analytically [18], when restricting to real perturbations we get κ_s = ∞. Despite this, QZ computes the eigenvalue 1 with 16 digits of accuracy.
To remedy the shortcomings of the stochastic condition number as defined in Definition 2.2, we propose a change in focus from the expected value to tail bounds and quantiles, and the key concept for that purpose is the directional sensitivity. Just as the classical worst-case condition corresponds to the norm of the derivative, the directional sensitivity corresponds to a directional derivative. And, just as a function can have some, or all, directional derivatives while still not being continuous, a computational problem can have well-defined directional sensitivities but have infinite condition number.

Definition 2.4 (Directional sensitivity)
The directional sensitivity of the computational problem f at the input D with respect to the perturbation E is

σ(E) := lim_{ε→0⁺} ‖f(D + εE) − f(D)‖ / (ε‖E‖).

The directional sensitivity takes values in [0, ∞]. In numerical analytic language, the directional sensitivity is the limit, for a particular direction of the backward error, of the ratio of forward and backward errors of the computational problem f; this limit is taken letting the backward error tend to zero (again having fixed its direction), which could also be thought of as letting the unit round-off tend to zero. See, e.g., [29, §1.5] for more details on this terminology.
The directional sensitivity is, if it is finite, ‖E‖⁻¹ times the norm of the Gâteaux derivative df(D; E) of f at D in direction E. If f is Fréchet differentiable, then the Gâteaux derivative agrees with the Fréchet derivative, and we get

κ = sup_{‖E‖=1} σ(E).

If E is a V-valued random variable satisfying the conditions of Definition 2.2 and if f is Gâteaux differentiable in almost all directions, then by the Fatou-Lebesgue Theorem we get

κ_s ≥ E_{E∼D} [ ‖df(D; E)‖ ] = E_{E∼D} [ σ(E) ‖E‖ ].

When integrating, null sets can be safely ignored; however, depending on the exact nature of the divergence (or lack thereof) of the integrand when approaching those null sets, the value of the integral need not be finite. To overcome this problem and still give probabilistically meaningful statements, we propose to use instead the concept of numerical null sets, i.e., sets of finite but small (in a sense that can be made precise depending on, for example, the unit roundoff of the number system of choice, the confidence level required by the user, etc.) measure. This is analogous to the idea that the "numerical zero" is the unit roundoff. We next define our main characters, two classes of weak condition numbers which generalize, respectively, the classical worst-case and stochastic condition numbers.
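The directional sensitivity is easy to probe numerically. The sketch below (my own illustration, assuming NumPy; for simplicity it uses the standard matrix eigenproblem, where the Gâteaux derivative of a simple eigenvalue in direction E is the classical first-order term uᵀEv/(uᵀv)) compares a finite-difference quotient with that formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))

# A simple eigenvalue lam with right eigenvector v; the left eigenvector u
# is a right eigenvector of A^T for the same eigenvalue.
w, V = np.linalg.eig(A)
k = np.argmax(w.real)
lam, v = w[k], V[:, k]
wl, W = np.linalg.eig(A.T)
u = W[:, np.argmin(np.abs(wl - lam))]

# A unit-norm perturbation direction E.
E = rng.standard_normal((n, n))
E /= np.linalg.norm(E)

# Finite-difference approximation of the directional sensitivity
# sigma(E) = lim_{eps -> 0+} |lam(A + eps*E) - lam(A)| / (eps * ||E||).
eps = 1e-7
wp = np.linalg.eig(A + eps * E)[0]
lam_eps = wp[np.argmin(np.abs(wp - lam))]
sigma_fd = abs(lam_eps - lam) / eps

# First-order perturbation formula for a simple eigenvalue of the
# standard eigenproblem: the directional derivative is u^T E v / (u^T v).
sigma_formula = abs(u @ E @ v) / abs(u @ v)

print(sigma_fd, sigma_formula)
assert abs(sigma_fd - sigma_formula) <= 1e-3 * max(1.0, sigma_formula)
```

The two numbers agree to several digits, which is all the first-order theory promises.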
In the following, we fix a probability space (Ω, Σ, P) and a random variable E : Ω → V, where we consider V endowed with the Borel σ-algebra. We further assume that E satisfies the normalization conditions of Definition 2.2. The following definitions assume that σ_E is P-measurable. This is the case, for example, if f is measurable and the directional (Gâteaux) derivative df(D; E(ω)) exists P-a.e.

Definition 2.5 (Weak worst-case and weak stochastic condition number) Let 0 ≤ δ < 1 and assume that σ_E is P-measurable. The δ-weak worst-case condition number and the δ-weak stochastic condition number are defined as

κ_w(δ) := inf { t > 0 : P(σ_E > t) ≤ δ },    κ_ws(δ) := E [ σ_E 1{σ_E ≤ κ_w(δ)} ].

Remark 2.6 We note that one can give a definition of the weak worst-case and weak stochastic condition number that does not require σ_E to be a random variable, by setting

κ_w(δ) = inf_{|S|≤δ} sup_{ω∈Ω\S} σ_{E(ω)},    κ_ws(δ) = inf_{|S|≤δ} E [ σ_E 1_{Ω\S} ],

where we used the notation |S| = P(S) for the measure of a set if there is no ambiguity. This form is reminiscent of the definition of weak average-case analysis in [3], and when σ_E is a random variable it can be shown to be equivalent to Definition 2.5. Moreover, this slightly more general definition better illustrates the essence of the weak condition numbers: these are the (worst-case and average-case) condition numbers that ensue when one is allowed to discard a "numerically invisible" subset from the set of perturbations.
The directional sensitivity has an interpretation as (the limit of) a ratio of forward and backward errors, and hence the new approach provides a potentially useful general framework for giving probabilistic bounds on the forward accuracy of outputs of numerically stable algorithms. Moreover, as we will discuss in Section 6, upper bounds on the weak condition numbers can be computed in practice for a natural distribution. One can therefore see δ as a parameter representing the confidence level that a user wants for the output, and any computable upper bound on κ_w becomes a practical reliability measure on the output, valid with probability 1 − δ. Although of course round-off errors are not really random variables, we hope that modelling them as such can become, with this "weak theory", a useful tool for numerical analysis problems whose traditional condition number is infinite.
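In the spirit of the quantile definition, a δ-weak worst-case condition number can be estimated as an empirical quantile of sampled directional sensitivities. The sketch below (my own illustration, assuming NumPy, and again using the standard eigenproblem, where the directional derivative of a simple eigenvalue is uᵀEv/(uᵀv)) contrasts the empirical mean with the empirical (1 − δ)-quantile:

```python
import numpy as np

rng = np.random.default_rng(7)
n, num_samples, delta = 6, 20000, 0.01
A = rng.standard_normal((n, n))

w, V = np.linalg.eig(A)
k = np.argmax(w.real)
lam, v = w[k], V[:, k]
wl, W = np.linalg.eig(A.T)
u = W[:, np.argmin(np.abs(wl - lam))]

# Sample the directional sensitivity |u^T E v| / |u^T v| over unit-norm
# perturbation directions E drawn uniformly from the sphere.
sigmas = np.empty(num_samples)
for i in range(num_samples):
    E = rng.standard_normal((n, n))
    E /= np.linalg.norm(E)
    sigmas[i] = abs(u @ E @ v) / abs(u @ v)

# The empirical (1 - delta)-quantile estimates the delta-weak worst-case
# condition number; the empirical mean estimates the stochastic one.
kappa_w_est = np.quantile(sigmas, 1 - delta)
print(sigmas.mean(), kappa_w_est, sigmas.max())
assert sigmas.mean() <= kappa_w_est <= sigmas.max()
```

For a regular problem all three numbers are finite; the interesting cases in this paper are those where the mean diverges while the quantile stays moderate.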

Eigenvalues of matrix polynomials and their directional sensitivity
Algebraically, the spectral theory of matrix polynomials is most naturally described over an algebraically closed field; however, the theory of conditioning is analytic in nature, and it is sometimes of interest to restrict the coefficients, and their perturbations, to be real. In this section we give a unified treatment of both real and complex matrix polynomials. For conciseness we keep this overview very brief; interested readers can find further details in [14, 18, 19, 26, 37] and the references therein. A matrix polynomial is a matrix P(x) ∈ F[x]^{n×n}, where F ∈ {C, R} is a field. Alternatively, we can think of it as an expression

P(x) = P₀ + P₁x + ··· + P_d x^d

with P_i ∈ F^{n×n}. If we require P_d ≠ 0, then the integer d in such an expression is called the degree of the matrix polynomial⁷. We denote the vector space of matrix polynomials over F of degree at most d by F^{n×n}_d[x]. A matrix polynomial is called singular if det P(x) ≡ 0 and otherwise regular. An element λ ∈ C is said to be a finite eigenvalue of P(x) if rank_C(P(λ)) < rank_{F(x)}(P(x)) =: r, where F(x) is the field of fractions of F[x], that is, the field of rational functions with coefficients in F. We assume throughout rank r ≥ 1 (which implies n ≥ 1) and degree d ≥ 1. The geometric multiplicity of the eigenvalue λ is the amount by which the rank decreases in the above definition, g_λ := r − rank_C(P(λ)). There exist matrices U, V ∈ F[x]^{n×n} with det(U) ∈ F\{0}, det(V) ∈ F\{0}, that transform P(x) into its Smith canonical form (3), S(x) = diag(h₁(x), . . ., h_r(x), 0, . . ., 0), where the invariant factors h_i(x) ∈ F[x] are non-zero monic polynomials such that h_i(x) | h_{i+1}(x) for i ∈ {1, . . ., r − 1}. If one has the factorizations h_i = (x − λ)^{k_i} ĥ_i(x) for some ĥ_i(x) ∈ C[x], with 0 ≤ k_i ≤ k_{i+1} for i ∈ {1, . . ., r − 1} and (x − λ) not dividing any of the ĥ_i(x), then the k_i are called the partial multiplicities of the eigenvalue λ. The algebraic multiplicity a_λ is the sum of the partial multiplicities. Note that an immediate consequence of this definition is a_λ ≥ g_λ. If a_λ = g_λ (i.e., all non-zero k_i equal to 1) then the eigenvalue λ is said to be semisimple, otherwise it is defective. If a_λ = 1 (i.e., k_r = 1 and k_i = 0 otherwise), then we say that λ is simple, otherwise it is multiple.
A square matrix polynomial is regular if r = n, i.e., if det P(x) is not identically zero; otherwise it is said to be singular. A finite eigenvalue of a regular matrix polynomial is simply a root of the characteristic equation det P(x) = 0, and its algebraic multiplicity is equal to the multiplicity of the corresponding root. More generally, a finite eigenvalue of a matrix polynomial (resp. its algebraic multiplicity) is a root (resp. the multiplicity as a root) of the equation γ_r(x) = 0, where γ_r(x) is the monic greatest common divisor of all the minors of P(x) of order r (note that γ_n(x) = det P(x)).
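This characterization is easy to check symbolically on a toy example. The sketch below (my own example, assuming SymPy is available) takes a singular 3 × 3 matrix polynomial of normal rank r = 2 and computes γ_r(x) as the gcd of its order-2 minors:

```python
import functools
import sympy as sp

x = sp.symbols('x')
# A singular 3x3 matrix polynomial: det P(x) is identically zero,
# and the rank over the rational functions F(x) (the normal rank) is r = 2.
P = sp.Matrix([[x - 1, x, 0],
               [0,     1, 0],
               [0,     0, 0]])
assert P.det() == 0
r = P.rank()        # symbolic rank, i.e. rank over F(x)
assert r == 2

# gamma_r(x): monic gcd of all r x r minors of P(x); its roots are the
# finite eigenvalues, counted with algebraic multiplicity.
idx = [[0, 1], [0, 2], [1, 2]]
minors = [P[rows, cols].det() for rows in idx for cols in idx]
gamma_r = functools.reduce(sp.gcd, [m for m in minors if m != 0])
print(sp.factor(gamma_r))   # the only finite eigenvalue is 1
```

Here the gcd of the nonzero 2 × 2 minors is x − 1, so λ = 1 is a simple finite eigenvalue even though det P(x) ≡ 0.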
Remark 3.1 The concept of an eigenvalue, and the other definitions recalled here, are valid also in the more general setting of rectangular matrix polynomials. However, in that scenario a generic matrix polynomial has no eigenvalues [18]; as a consequence, a perturbation of a matrix polynomial with an eigenvalue would almost surely remove it. This is a fairly different setting than the square case, and a deeper probabilistic analysis of the rectangular case is beyond the scope of the present paper.
We mention in passing that there are possible ways to extend the analysis to the rectangular case, such as embedding the rectangular matrix polynomials in larger square ones or (at least in the case of pencils, or linear matrix polynomials) considering structured perturbations that do preserve the eigenvalues.
Note that ker_λ P(x) ⊆ ker P(λ) and ker_λ P(x)* ⊆ ker P(λ)* for λ ∈ C, and that the difference in dimension is the geometric multiplicity,

dim ker P(λ) − dim ker_λ P(x) = dim ker P(λ)* − dim ker_λ P(x)* = g_λ.

A right eigenvector corresponding to an eigenvalue λ ∈ C is defined [19, Sec. 2.3] to be a nonzero element of the quotient space ker P(λ)/ker_λ P(x). A left eigenvector is similarly defined as an element of ker P(λ)*/ker_λ P(x)*. In terms of the Smith canonical form (3), the last n − r columns of U, evaluated at λ*, represent a basis of ker_λ P(x)*, while the last n − r columns of V, evaluated at λ, represent a basis of ker_λ P(x).
In the analysis we will be concerned with a quantity of the form |u* P′(λ) v|, where u, v are representatives of eigenvectors. It is known [19, Lemma 2.9] that b ∈ ker_λ P(x) is equivalent to the existence of a polynomial vector b(x) such that b(λ) = b and P(x)b(x) = 0. Differentiating this identity at x = λ then implies that for any representative of a left eigenvector u ∈ ker P(λ)* we get u* P′(λ) b(λ) = 0. It follows that for an eigenvector representative v, u* P′(λ) v depends only on the component of v orthogonal to ker_λ P(x), and an analogous argument also shows that this expression only depends on the component of u orthogonal to ker_λ P(x)*. In practice, we will therefore choose representatives u and v for the left and right eigenvectors that are orthogonal to ker_λ P(x)* and ker_λ P(x), respectively, and have unit norm.
If P(x) ∈ F^{n×n}_d[x] is a matrix polynomial with a simple eigenvalue λ, then there is a unique (up to sign) way of choosing such representatives u and v. Let P(x) ∈ F^{n×n}_d[x], where F ∈ {R, C}, be a matrix polynomial of rank r ≤ n, and let λ be a simple eigenvalue. Let X = [U u] ∈ C^{n×(n−r+1)} be a matrix whose columns form a basis of ker P(λ)*, and such that the columns of U ∈ C^{n×(n−r)} form a basis of ker_λ P(x)*. Likewise, let Y = [V v] be a matrix whose columns form a basis of ker P(λ), and such that the columns of V ∈ C^{n×(n−r)} form a basis of ker_λ P(x). In particular, v and u are representatives of, respectively, right and left eigenvectors of P(x). The following explicit characterization of a simple eigenvalue is due to De Terán and Dopico [14, Theorem 2 and Eqn. (20)]. To avoid making a case distinction for the regular case r = n, we agree that det(U* E(λ) V) = 1 if U and V are empty.

Theorem 3.2 Let E(x) ∈ F^{n×n}_d[x] be such that X* E(λ) Y is non-singular. Then for small enough ε > 0, the perturbed matrix polynomial P(x) + εE(x) has exactly one eigenvalue λ(ε) of the form

λ(ε) = λ − ε det(X* E(λ) Y) / ( det(U* E(λ) V) · u* P′(λ) v ) + O(ε²).

Note that in the special case r = n we recover the expression for regular matrix polynomials from [42, Theorem 5] and [14, Corollary 1], where u, v are left and right eigenvectors corresponding to the eigenvalue λ.

The directional sensitivity of a singular polynomial eigenproblem
We can now describe the input-output map that underlies our analysis. By the local nature of our problem, we consider a fixed matrix polynomial P(x) ∈ F^{n×n}_d[x] of rank r with a simple eigenvalue λ, and define the input-output function f : F^{n×n}_d[x] → C that maps P(x) to λ, maps P(x) + εE(x) to λ(ε) for any E(x) and ε > 0 satisfying the conditions of Theorem 3.2, and maps any other matrix polynomial to an arbitrary number other than λ.
An immediate consequence of Theorem 3.2 and our definition of the input-output map is an explicit expression for the directional sensitivity of the problem. Here we write ‖E‖ for the Euclidean norm of the vector of coefficients of E(x) as a vector in F^{n²(d+1)}. From now on, when talking about the "directional sensitivity of an eigenvalue in direction E", we implicitly refer to the input-output map f defined above.
In the special case r = n, we have

σ(E) = |u* E(λ) v| / ( |u* P′(λ) v| ‖E‖ ).

For the goals of this paper, these results suffice. However, we note that it is possible to obtain equivalent formulae for the expansion that, unlike the one by De Terán and Dopico, do not involve the eigenvectors of singular polynomials.
Finally, we introduce a parameter that will enter all of our results, and which coincides with the inverse of the worst-case condition number in the regular case r = n. Choose representatives u, v of the eigenvectors that satisfy ‖u‖ = ‖v‖ = 1 and (if r < n) U*u = V*v = 0. For such a choice of eigenvectors, define

γ_P := |u* P′(λ) v| / (1 + |λ|² + ··· + |λ|^{2d})^{1/2}.

We conclude with the following variation of [42, Theorem 5]; for a proof, see [1, Lemma 2.1] or [2] for a discussion in a wider context.

Remark 3.5 In practice, an algorithm such as QZ applied to P(x) will typically compute all the eigenvalues of a nearby matrix polynomial. Therefore, any results on the conditioning of our specific input-output map f will explain why the correct eigenvalue is found among the computed eigenvalues, but not tell us how to choose the right one in practice. For selecting the right eigenvalue one could use heuristics, such as computing the eigenvalues of an artificially perturbed problem. For more details on these practical considerations, we refer to [32].
4 Probabilistic analysis of the directional sensitivity

In this section we study the probability distribution of the directional sensitivity of a singular polynomial eigenvalue problem. To deal with real and complex perturbations simultaneously as far as possible, we follow the convention from random matrix theory [22] and parametrize our results with a parameter β ∈ {1, 2}. We rearrange the coefficients of a perturbation E(x) into a matrix, which we identify with E, and denote by ‖E‖ the Euclidean norm of E considered as a vector in F^N, where N := n²(d+1) (equivalently, the Frobenius norm of the matrix E). When we say that E is uniformly distributed on the sphere, written E ∼ U(βN), with β = 1 for real perturbations and β = 2 if E is complex, we mean that the image of E under an identification F^{n×n(d+1)} ≅ R^{βN} is uniformly distributed on the corresponding unit sphere S^{βN−1}. To avoid trivial special cases, we assume that r ≥ 1 and d ≥ 1, so that, in particular, N ≥ 2.
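To make the sampling convention concrete, here is a minimal Python sketch (our illustration, not from the paper; the function name `sample_uniform_sphere` is ours). It draws E ∼ U(βN) by normalizing a standard Gaussian vector in R^{βN}, which yields the uniform distribution on S^{βN−1}:

```python
import math
import random

def sample_uniform_sphere(n_sq, d, beta, rng=None):
    """Sample E ~ U(beta*N) with N = n^2 * (d+1): a standard Gaussian
    vector in R^(beta*N), normalized to unit Euclidean length."""
    rng = rng or random.Random(0)
    N = n_sq * (d + 1)                      # n_sq stands for n**2
    g = [rng.gauss(0.0, 1.0) for _ in range(beta * N)]
    norm = math.sqrt(math.fsum(x * x for x in g))
    return [x / norm for x in g]

# real (beta = 1) perturbation of a 4x4 pencil (n = 4, d = 1): N = 32
E = sample_uniform_sphere(16, 1, beta=1)
print(len(E), round(math.fsum(x * x for x in E), 12))  # 32 1.0
```

For complex perturbations (β = 2) the same routine is used with the real dimension doubled, matching the identification F^{n×n(d+1)} ≅ R^{βN} above.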
The following theorem characterizes the distribution of the directional sensitivity under uniform perturbations.
The proof is given later in this section, after introducing some preliminary concepts and results. If r = n, then the directional sensitivity is distributed like the square root of a beta random variable, and in particular it is bounded. Using the density of the beta distribution, we can derive the moments and tail bounds for the distribution of the directional sensitivity explicitly.
If t ≥ γ_P^{−1}, then for r < n we have the tail bounds (6). If r = n, then σ_E ≤ γ_P^{−1}.
Proof. For the expectation, the claim follows from Theorem 4.1 in the case r < n.

Remark 4.3 In the context of random inputs, it is common to study the logarithm of a condition number instead of the condition number itself [13, 21]. Thus, even when the expected condition is not finite, the expected logarithm may still be small. Using a standard argument (see, e.g., [13, Proposition 2.26]) we can deduce a bound on the expected logarithm of the directional sensitivity. The logarithm of the sensitivity is relevant as a measure for the loss of precision.
As the derivation of the bounds (6) using Lemma A.1 shows, the cumulative distribution functions in question can be expressed exactly in terms of integrals of hypergeometric functions. This way, the tail probabilities can be computed to high accuracy for any given t; see also Remark 4.6. Moreover, as the derivation of the tail bounds in Appendix A also shows, the bounds given in Corollary 4.2 are sharp for fixed t and n − r → ∞, as well as for fixed n − r and t → ∞. Figure 1 illustrates these bounds for a choice of small parameters (n = 4, d = 2, r = 2, γ_P = 1). The bounds (6) have the added benefit of being easily interpretable: they say that for large n and/or d, it is highly unlikely that the directional sensitivity will exceed γ_P^{−1} (which by Proposition 3.4 is the worst-case condition bound in the smooth case r = n).

Example 4.4 Consider again the matrix pencil L(x) from (1). This pencil has rank 3, and the cokernel and kernel are spanned by vector polynomials p(x) and q(x), respectively. The matrix polynomial has the simple eigenvalue λ = 1, and the matrix L(1) has rank 2. The cokernel ker L(1)^T and the kernel ker L(1) are spanned by the columns of matrices X and Y, respectively. Let u be the second column of X and let v be the second column of Y. The vectors u and v are orthogonal to ker_λ L(x)^T = span{p(1)} and ker_λ L(x) = span{q(1)}, respectively, and have unit norm. We therefore obtain γ_L^{−1} = 12.16. Figure 2 shows the result of comparing the distribution of σ_E, found empirically, with the bounds obtained in Theorem 4.1. The relative error in the plot is of order 10^{−5}.

Fig. 2 The exact distribution tail of σ_E for the matrix pencil L(x) from (1) under real perturbations, and the theoretically computed tail bound (6).
The plan for the rest of this section is as follows. In Section 4.1 we recall some facts from probability theory and random matrix theory. In Section 4.2 we discuss the QR decomposition of a random matrix, and in Section 4.3 we use this decomposition to prove Theorem 4.1.

4.1 Probabilistic preliminaries
We write g ∼ N_1(μ, Σ) for a normally distributed (Gaussian) random vector g with mean μ and covariance matrix Σ, and g ∼ N_2(μ, Σ) for a complex Gaussian vector; this is a C^n-valued random vector with expected value μ whose real and imaginary parts are independent real Gaussian random vectors with covariance matrix Σ/2 (real and complex scalar random variables N_β(μ, σ²) are a special case). We denote the uniform distribution on the sphere S^{n−1} by U(n). Every Gaussian vector g ∼ N_1(0, I_n) can be written as a product g = rq with r and q independent, where r ∼ χ(n) is χ-distributed with n degrees of freedom and q ∼ U(n).

4.1.1 Projections of random vectors
The squared projected lengths of Gaussian and uniformly distributed random vectors can be described using the χ² and the beta distribution, respectively. A random variable X is χ²-distributed with k degrees of freedom, X ∼ χ²(k), if it has the corresponding cumulative distribution function (cdf). The special case χ²(2) is the exponential distribution with parameter 1/2, written exp(1/2). The beta distribution B(a, b) is defined for a, b > 0 and has cdf supported on [0, 1], where B(a, b) = Γ(a)Γ(b)/Γ(a + b) is the beta function. For a vector x ∈ F^n, denote by π_k(x) the projection onto the first k coordinates and by ‖π_k(x)‖² its squared length. The following facts are known: if g ∼ N_1(0, I_n), then ‖π_k(g)‖² ∼ χ²(k), and if x ∼ U(n), then ‖π_k(x)‖² ∼ B(k/2, (n − k)/2). The first claim is a standard fact about the normal distribution and can be derived directly from it; see for example [9]. The statement for the uniform distribution can be derived from the Gaussian one, but also follows by a change of variables from expressions for the volume of tubular neighbourhoods of subspheres of a sphere; see for example [13, Section 20.2]. Since all the distributions considered are orthogonally (in the real case) or unitarily (in the complex case) invariant, these observations hold for the projection of a random vector onto any k-dimensional subspace, not just the first k coordinates.
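The second fact can be checked by simulation. The sketch below (our illustration, not code from the paper) samples x ∼ U(n) via a normalized Gaussian and estimates the mean of ‖π_k(x)‖², which for a B(k/2, (n − k)/2) random variable equals k/n:

```python
import math
import random

def proj_sq_length(n, k, rng):
    # uniform point on S^{n-1} via a normalized Gaussian, then ||pi_k(x)||^2
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    total = math.fsum(x * x for x in g)
    return math.fsum(x * x for x in g[:k]) / total

rng = random.Random(42)
n, k, trials = 20, 5, 20000
mean = math.fsum(proj_sq_length(n, k, rng) for _ in range(trials)) / trials
# B(k/2, (n-k)/2) has mean (k/2)/(n/2) = k/n = 0.25 here
print(round(mean, 2))  # ~0.25
```

The same experiment with a projection onto any other 5-dimensional coordinate subspace gives the same answer, reflecting the orthogonal invariance noted above.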

4.1.2 Random matrix ensembles
If P(x) is a singular matrix polynomial with a simple eigenvalue λ, then the set of perturbation directions for which the directional sensitivity is not finite is a proper Zariski closed subset; see Theorem 3.2. It is therefore natural and convenient to consider probability measures on the space of perturbations that assign measure zero to proper Zariski closed subsets. This is the case, for example, if the measure is absolutely continuous with respect to the Lebesgue measure. In this paper we work with real and complex Gaussian and uniform distributions. For a detailed discussion of the random matrix ensembles used here we refer to [24].
For a random matrix we write G ∼ G^β_n(μ, σ²) if each entry of G is an independent N_β(μ, σ²) random variable, and call this a Gaussian random matrix. In the case β = 2 this is called the Ginibre ensemble [25]. Centered (μ = 0) Gaussian random matrices are orthogonally (if β = 1) or unitarily (if β = 2) invariant ([35, Lemma 1]), and the joint density of their entries takes into account the fact that the real and imaginary parts of the entries of a complex Gaussian have variance 1/2. In addition, we consider the circular real ensemble CRE(n) for real orthogonal matrices in O(n), and the circular unitary ensemble CUE(n) [20] for unitary matrices in U(n); both distributions correspond to the unique Haar probability measure on the corresponding groups.

4.2 The probabilistic QR decomposition
Any nonsingular matrix A ∈ F^{n×n} has a unique QR-decomposition A = QR, where Q is orthogonal (if F = R) or unitary (if F = C) and R is upper triangular with positive diagonal entries. For a Gaussian random matrix, the distributions of the factor Q and of the entries of R are known explicitly; moreover, all these random variables are independent.
An easy and conceptual derivation of the distribution of Q can be found in [35], while the distribution of R can be deduced from the known expression for the Jacobian of the QR-decomposition [22, §3.3].
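These distributional facts are easy to probe numerically. The following sketch (ours, using a plain modified Gram–Schmidt QR rather than a library routine) factors real Gaussian matrices with the sign convention r_ii > 0; the proof of Theorem 4.1 below uses that βr_ℓℓ² ∼ χ²(β), so for β = 1 the squared last diagonal entry of R should have mean 1:

```python
import math
import random

def qr_gram_schmidt(A):
    """Thin QR of a square matrix via modified Gram-Schmidt,
    with the sign convention r_ii > 0 (unique for nonsingular A)."""
    n = len(A)
    cols = [list(c) for c in zip(*A)]        # columns of A
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j]
        for i in range(j):
            rij = math.fsum(Q[i][t] * v[t] for t in range(n))
            R[i][j] = rij
            v = [v[t] - rij * Q[i][t] for t in range(n)]
        R[j][j] = math.sqrt(math.fsum(x * x for x in v))
        Q.append([x / R[j][j] for x in v])   # orthonormal column j
    return Q, R

rng = random.Random(1)
n, trials = 4, 5000
mean_last = 0.0
for _ in range(trials):
    A = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    Q, R = qr_gram_schmidt(A)
    mean_last += R[n - 1][n - 1] ** 2
mean_last /= trials
print(round(mean_last, 1))  # r_nn^2 ~ chi^2(1), so the sample mean is near 1
```

The diagonal-sign convention is what makes the factorization, and hence the distributional statement, unambiguous.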

4.3 Proof of Theorem 4.1
In this section we present the proofs of Theorem 4.1 and the corollaries that follow from it. To simplify notation, we set ℓ = n − r + 1. Recall from Corollary 3.3 the expression for σ_E, where the columns of X = [U u], Y = [V v] ∈ F^{n×ℓ} are orthonormal bases of ker P(λ)* and ker P(λ), the columns of U, V represent bases of ker_λ P(x)* and ker_λ P(x), respectively, and γ_P is defined in (5).
Proof of Theorem 4.1. We first assume r < n. By the scale invariance of the directional sensitivity σ_E, we may consider Gaussian perturbations E ∼ N_β(0, σ²I_{βN}) (recall that we interpret E as a vector in F^N), where σ² = (∑_{j=0}^d |λ|^{2j})^{−1}. The directional sensitivity is then governed by |h_ℓℓ|^{−1}, where h_ℓℓ is the lower-right entry of the inverse of an ℓ × ℓ Gaussian matrix G. To study the distribution of |h_ℓℓ|^{−1}, we resort to the probabilistic QR-decomposition discussed in Section 4.2. If G = QR is the unique QR-decomposition of G with positive diagonal entries in R, then the inverse is given by H = R^{−1}Q*, and a direct inspection reveals that the lower-right entry h_ℓℓ of H is h_ℓℓ = q̄_ℓℓ/r_ℓℓ. From Section 4.2 it follows that Q ∼ CRE(ℓ) or CUE(ℓ), and βr_ℓℓ² ∼ χ²(β). Moreover, each column of Q is uniformly distributed on the sphere S^{βℓ−1}, so that |q_ℓℓ|² ∼ B(β/2, β(ℓ − 1)/2) (by Section 4.1.1), and {r_ℓℓ², |q_ℓℓ|²} are independent. Setting γ_P = |u* P′(λ) v| · (∑_{j=0}^d |λ|^{2j})^{−1/2} (see (5)), we can rearrange the coefficients of E(x) into a matrix F ∈ F^{n²×(d+1)}, completed to an orthogonal/unitary matrix with p_0 as first column. If we denote by G^c the vector consisting of those entries of E(λ) that are not in G, then the factor r_ℓℓ², itself the square of a (real or complex) Gaussian, is a summand in a sum of squares of N = n²(d + 1) Gaussians, and the quotient is equal to the squared length of the projection of a uniform random vector in S^{βN−1} onto the first β coordinates. By Section 4.1.1, this is B(β/2, β(N − 1)/2)-distributed. Denoting this random variable by Z_N and |q_ℓℓ|² by Z_ℓ, we obtain the claimed distribution. This establishes the claim in the case r < n. If r = n, we use the expression (4), where u and v are eigenvectors. By orthogonal/unitary invariance, σ²_E has the same distribution as the squared norm of a Gaussian. By the same argument as above, we can bound ‖E‖ in terms of ‖E(λ)‖, and the quotient with ‖E(λ)‖² is then the squared projected length of the first β coordinates of a uniformly distributed vector in S^{βN−1}, which is B(β/2, β(N − 1)/2)-distributed.
Remark 4.6 If N is large, then for a (real or complex) Gaussian perturbation with entry-wise variance 1/N, Gaussian concentration (see [11, Theorem 5.6]) implies that ‖E‖ is close to 1 with high probability. This means that the distribution of ‖E‖σ_E for a Gaussian perturbation will be close to that of σ_E for a uniform perturbation. Even for moderate sizes of d and n, the two can be numerically almost indistinguishable.
In fact, when G is Gaussian, the distribution can be expressed explicitly in terms of the confluent hypergeometric function ₁F₁(a, b; z) (this follows by mimicking the proof of Theorem 4.1, expressing the distribution in terms of a quotient of a χ² and a beta random variable, and writing out the resulting integrals). Similarly, using the same computations as in the proof of Lemma A.1, we obtain an exact expression in terms of the hypergeometric function ₂F₁(a, b, c; z). The case distinction corresponds to different branches of the solution of the hypergeometric differential equation. See [38, 39] for more on computing with hypergeometric functions.

5 Weak condition numbers of simple eigenvalues of singular matrix polynomials
The tail bounds on the directional sensitivity can easily be translated into statements about condition numbers; we now state the resulting bounds and discuss some consequences and interpretations.

Theorem 5.1 Let P(x) ∈ F^{n×n}_d[x] be a matrix polynomial of rank r, and let λ be a simple eigenvalue of P(x). Then the worst-case condition number is given by (7); the stochastic condition number, with respect to uniformly distributed perturbations, is given by (8); and, if r < n and δ ∈ (0, 1), the δ-weak worst-case condition number, with respect to uniformly distributed perturbations, is bounded as in (9).

The expression for the stochastic condition number involves a quotient of gamma functions, which can be simplified using well-known bounds on the ratio Γ(x + 1/2)/Γ(x) that hold for x > 0 [49]. Using these bounds on the numerator and denominator of (8), we get a more interpretable estimate. The bound on the weak condition number (9) shows that κ_w(1/2), which is the median of the same random variable of which κ_s is the expected value, is bounded by 1/γ_P, which is the expression of the worst-case condition number in the regular case r = n.
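The behaviour of the gamma-function quotient is easy to verify numerically. The sketch below (ours, not from the paper) evaluates Γ(x + 1/2)/Γ(x) via `math.lgamma` and shows that it approaches √x from below as x grows, which is the content of the simplification used above:

```python
import math

def gamma_ratio(x):
    # Gamma(x + 1/2) / Gamma(x), computed via lgamma for numerical stability
    return math.exp(math.lgamma(x + 0.5) - math.lgamma(x))

# the normalized ratio increases towards 1, so Gamma(x+1/2)/Gamma(x) ~ sqrt(x)
for x in (2.0, 10.0, 100.0):
    print(x, round(gamma_ratio(x) / math.sqrt(x), 3))
```

Working with `lgamma` rather than `gamma` avoids overflow for the large arguments (x of order N) that appear in the condition-number expressions.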
The situation changes dramatically when considering real matrix polynomials with real perturbations: in this case even the stochastic condition number becomes infinite if the matrix polynomial is singular. In the statement we denote the resulting condition numbers with respect to real perturbations by the superscript R.
Theorem 5.2 Let P(x) ∈ R^{n×n}_d[x] be a real matrix polynomial of rank r, and let λ ∈ C be a simple eigenvalue of P(x). Then the worst-case condition number is given by (11); the stochastic condition number, with respect to uniformly distributed real perturbations, is given by (12); and, if r < n and δ ∈ (0, 1), the δ-weak worst-case condition number, with respect to uniformly distributed real perturbations, is bounded as in (13).

It is instructive to compare the weak condition numbers in the singular case to the worst-case and stochastic condition numbers in the regular case. In the regular case (n = r), when replacing the worst-case with the stochastic condition number we get an improvement by a factor of ≈ N^{−1/2}, which is consistent with previous work [4] (see also Section 2) relating the worst-case to the stochastic condition. We will see in Section 6.1 that the expected value in the case n = r captures the typical perturbation behaviour of the problem more accurately than the worst-case bound. Among the many possible interpretations of the weak worst-case condition, we highlight the following:

- Since the bounds are monotonically decreasing as the rank r increases, we can get bounds independent of r. Specifically, we can replace the quotient (n − r)/N with 1/(n(d + 1)). This is useful since, in applications, the rank is not always known.
- While the stochastic condition number (12), which measures the expected sensitivity of the problem of computing a singular eigenvalue, is infinite, for 4(n − r) < N the median sensitivity is bounded. The median is a more robust and arguably better summary parameter than the expectation.
- Choosing δ = e^{−N} in (13), we get a weak stochastic condition bound on κ^R_ws(e^{−N}) showing that the condition number improves from being unbounded to sublinear in N, by just removing a set of inputs of exponentially small measure. For small enough δ we get a similar (not optimized) bound.

It is easy to translate Corollary 4.2 into the main results, Theorem 5.1 and Theorem 5.2. For the weak stochastic condition number, we need the following observation, which is a
variation of [3, Lemma 2.2].

Lemma 5.4 Let Z be a random variable such that P{Z ≥ t} ≤ Ca/t for t > a. Then for any t_0 > a, the conditional expectation of Z given Z ≤ t_0 admits a corresponding bound.

Proof of Theorem 5.1 and Theorem 5.2. The statements about the worst-case condition numbers, (7) and (11), and about the stochastic condition numbers, (8) and (12), follow immediately from Theorem 4.1 and Corollary 4.2.
For the weak condition number in the complex case, if δ ≤ (n − r)/N, then setting t := (n − r)/(Nδγ_P) we get γ_P t ≥ 1, and therefore, using the complex tail bound from Corollary 4.2, P{σ_E ≥ t} ≤ δ. This yields κ_w(δ) ≤ t. If δ > (n − r)/N, then we use the fact that the weak condition number is monotonically decreasing in δ (intuitively, the larger the set we are allowed to exclude, the smaller the condition number) to conclude that κ_w(δ) ≤ κ_w(δ_0) ≤ 1/γ_P, where δ_0 := (n − r)/N. For the real case, if r < n, we use the bound that follows from (10). If δ < (n − r)/N, we set t accordingly and obtain P{σ_E ≥ t} ≤ δ, where for the last inequality we used the fact that N ≥ 2. We conclude that κ_w(δ) ≤ t. If δ > (n − r)/N, then we use the monotonicity of the weak condition number just as in the complex case. Finally, for the weak stochastic condition number in the real case, we use Lemma 5.4 with a = γ_P^{−1}, C = (n − r)/N and t_0 = C(δγ_P)^{−1} in the conditional expectation. We just saw that κ^R_w(δ) ≤ t_0; the claim then follows, using Lemma 5.4 in the second inequality.
6 Bounding the weak stochastic condition number

In this section we illustrate how the weak condition number of the problem of computing a simple eigenvalue of a singular matrix polynomial can be estimated in practice. More precisely, we show that the weak condition number of a singular problem can be estimated in terms of the stochastic condition numbers of nearby regular problems. Before deriving the relevant estimates, given in Theorem 6.3, we discuss the stochastic condition number of regular matrix polynomials.
6.1 Measure concentration for the directional sensitivity of regular matrix polynomials

For the directional sensitivity in the regular case, r = n, the worst-case condition number is γ_P^{−1}, as was shown in Proposition 3.4. In addition, the expression for the stochastic condition number involves a ratio of gamma functions (see Corollary 4.2 or the case r = n in Theorem 5.1 and Theorem 5.2). From (10) we get the approximation Γ(k + 1/2)/Γ(k) ≈ √k, so that the stochastic condition number for regular polynomial eigenvalue problems satisfies κ_s ≈ κ/√N. This is compatible with previously known results about the stochastic condition number in the smooth setting (see the discussion in Section 2). A natural question is whether the directional sensitivity is likely to be closer to this expected value, or closer to the upper bound κ. Theorem 4.1 describes the distribution of σ_E as that of the (scaled) square root of a beta random variable. Using the interpretation of beta random variables as squared lengths of projections of uniformly distributed vectors on the sphere (see Section 4.1.1), tail bounds for the distribution of σ_E therefore translate into the problem of bounding the relative volume of certain subsets of the unit sphere. A standard argument from the realm of measure concentration on spheres, Lemma 6.1, then implies that with high probability σ_E will stay close to its mean.

Lemma 6.1 Let x ∼ U(βN) be a uniformly distributed vector on the (real or complex) unit sphere. Then for t ∈ (0, 1), P{‖π_β(x)‖ ≥ t} ≤ e^{−β(N−1)t²/2}.

Proof. For complex perturbations, we get the straightforward bound directly. In the real case, a classic result (see [6, Lemma 2.2] for a short and elegant proof) states that the probability in question is bounded by e^{−Nt²/2}. The claimed bound follows by replacing N with N − 1 for the sake of a uniform presentation.

The next corollary follows from the description of the distribution of σ_E in Theorem 4.1 and the characterization of beta random variables as squared projected lengths of uniform vectors from Section
4.1.1.

Corollary 6.2 Let P(x) ∈ F^{n×n}_d[x] be a regular matrix polynomial and let λ be a simple eigenvalue of P(x). If E ∼ U(βN), where β = 1 if F = R and β = 2 if F = C, then for t ≤ γ_P^{−1} we have P{σ_E ≥ t} ≤ e^{−β(N−1)γ_P²t²/2}.
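The concentration statement behind Corollary 6.2 can be illustrated empirically. The following sketch (ours; β = 1 and γ_P normalized to 1) estimates the tail probability of the projection of a uniform vector on S^{N−1} onto its first coordinate and compares it with the bound e^{−(N−1)t²/2}:

```python
import math
import random

def first_coord_abs(N, rng):
    # |first coordinate| of a uniform point on the sphere S^{N-1}
    g = [rng.gauss(0.0, 1.0) for _ in range(N)]
    return abs(g[0]) / math.sqrt(math.fsum(x * x for x in g))

rng = random.Random(7)
N, t, trials = 50, 0.3, 20000
tail = sum(first_coord_abs(N, rng) >= t for _ in range(trials)) / trials
bound = math.exp(-(N - 1) * t * t / 2)   # beta = 1 bound from Lemma 6.1
print(tail <= bound, round(bound, 3))
```

Already at N = 50 the empirical tail sits well below the analytic bound, consistent with the claim that σ_E typically stays near its mean rather than near the worst case.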

6.2 The weak condition number in terms of nearby stochastic condition numbers

It is common wisdom that computing the condition number is as hard as solving the problem at hand, so at the very least we would like to avoid making the computation of the condition estimate more expensive than the computation of the eigenvalue itself. We will therefore aim to estimate the condition number of the problem in terms of the output of a backward stable algorithm for computing the eigenvalue and a pair of associated eigenvectors.
Note that these parameters depend implicitly on a perturbation direction E(x), even though the notation does not reflect this. The parameters κ and κ_s are the limits of the worst-case and stochastic condition numbers, κ(P(x) + εE(x)) and κ_s(P(x) + εE(x)), as ε → 0. Since almost sure convergence implies convergence in probability, the corresponding convergence in probability holds whenever the left-hand side of this expression is finite. A backward stable algorithm, such as vanilla QZ, computes an eigenvalue λ̃ and associated unit-norm eigenvectors ũ and ṽ of a nearby problem P(x) + εE(x). If ε is small, then λ̃ ≈ λ, ũ ≈ u and ṽ ≈ v, so that we can approximate the values (15) using the output of such an algorithm. Unfortunately, this does not yet give us a good estimate of γ_P, as the definition of γ_P makes use of very special representatives of the eigenvectors (recall from Section 3.1 that for a singular matrix polynomial, eigenvectors are only defined as equivalence classes). The following theorem shows that we can still get bounds on the weak condition numbers in terms of κ_s.
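For a regular problem, the quantity γ_P can be evaluated directly from a computed eigentriple via the formula in (5)/(15). The sketch below (our illustration; the helper `gamma_P` and the toy diagonal pencil are not from the paper) uses a pencil whose eigenvalue and eigenvectors are known exactly, so the worst-case condition κ = γ_P^{−1} can be checked against its closed form; in practice λ̃, ũ, ṽ would come from a backward stable solver such as QZ:

```python
import math

def gamma_P(uPv_abs, lam, d):
    """gamma_P = |u* P'(lambda) v| * (sum_{j=0}^d |lambda|^(2j))^(-1/2),
    the quantity from (5)/(15); its inverse is the worst-case condition
    number of a simple eigenvalue of a regular polynomial (Prop. 3.4)."""
    scale = math.fsum(abs(lam) ** (2 * j) for j in range(d + 1))
    return uPv_abs / math.sqrt(scale)

# toy regular pencil P(x) = diag(x - 2, x + 1): lambda = 2, u = v = e_1,
# P'(lambda) = I, so |u* P'(lambda) v| = 1 and d = 1
g = gamma_P(1.0, 2.0, 1)
print(round(1.0 / g, 4))   # worst-case condition kappa = sqrt(5) ~ 2.2361
```

For singular problems this direct evaluation is exactly what fails, since the computed ũ, ṽ need not be the special representatives in (5); this is the gap Theorem 6.3 closes.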
If δ ≤ (n − r)/N, then for any η > 0 we have the tail bounds of Theorem 6.3.

For the proof of Theorem 6.3 we recall the setting of Section 3. Let X = [U u] and Y = [V v] be matrices whose columns are orthonormal bases of ker P(λ)* and ker P(λ), respectively, and such that U and V are bases of ker_λ P(x)* and ker_λ P(x), respectively. If the eigenvector representatives appearing in (15) equal u and v, then (15) recovers γ_P exactly. In general, however, we only get a bound. To see this, recall from Section 3.1 that u*P′(λ)v depends only on the component of u that is orthogonal to ker_λ P(x)* and the component of v that is orthogonal to ker_λ P(x). In particular, X*P′(λ)Y has rank one (recall ℓ = n − r + 1). The key to Theorem 6.3 lies in a result analogous to Theorem 3.2 for the eigenvectors, due to De Terán and Dopico [14, Theorem 4].
Theorem 6.4 Let ζ be the eigenvalue of the nonsingular matrix pencil (17), and let a and b be the corresponding left and right eigenvectors. Then for small enough ε > 0, the perturbed matrix polynomial P(x) + εE(x) has exactly one eigenvalue λ(ε) as described in Theorem 3.2, and the corresponding left and right eigenvectors are approximated, to first order, by Xa and Y b.

Given a matrix polynomial P(x) and a perturbation direction E(x), we can therefore assume that the eigenvectors of a sufficiently small perturbation in direction E(x) are approximated by u = Xa and v = Y b, where a, b are the eigenvectors of the matrix pencil (17). We would next like to characterize these eigenvectors for random perturbations E(x). As in the rest of this paper, the following result is parametrized by a parameter β ∈ {1, 2} which specifies whether we work with real or complex perturbations.

Proposition 6.5 Let P(x) ∈ F^{n×n}_d[x] be a matrix polynomial of rank r < n with simple eigenvalue λ ∈ C, and let E(x) ∼ U(βN) be a random perturbation. Let a, b be left and right eigenvectors of the linear pencil (17), let u = Xa and v = Y b, and define γ_P as in (15). Then the stated expectation and tail bounds hold.

Proof. By scale invariance of (17), we may take E(x) to be Gaussian, E(x) ∼ N_β(0, σ²I_{βN}) with σ² = (∑_{j=0}^d |λ|^{2j})^{−1} (so that E(λ) ∼ G^β_n(0, 1)). Set G := X*E(λ)Y, so that G ∼ G^β_ℓ(0, 1). Using (16), the eigenvectors associated to (17) can be written explicitly in terms of G. Clearly, each of the vectors a and b individually is uniformly distributed. They are, however, not independent. To simplify notation, set H = G^{−1}. For the condition estimate we get, using (16), an expression involving |e_ℓ^T H e_ℓ|/‖H e_ℓ‖. By orthogonal/unitary invariance of the Gaussian distribution, the random vector q := H e_ℓ/‖H e_ℓ‖ is uniformly distributed on S^{βℓ−1}. It follows that |e_ℓ^T H e_ℓ|/‖H e_ℓ‖ is distributed like the absolute value of the projection of a uniform vector onto the first coordinate. For the expected value, the bound follows by observing that the expected value of such a projection is bounded by (ℓ − 1)^{−1/2}. For the tail bound, we use (14) (with N replaced by ℓ). For the tail bounds in the complex case, note that in the
complex case we have the corresponding estimate, where we used Proposition 6.5 for the inequality. The real case follows in the same way.

7 Conclusions and outlook
The classical theory of conditioning in numerical analysis aims to quantify the susceptibility of a computational problem to perturbations of the input. While the theory serves its purpose well in distinguishing well-posed problems from problems that approach ill-posedness, it fails to explain why certain problems with high condition number can still be solved satisfactorily to high precision by algorithms that are oblivious to the special structure of an input. By introducing the notions of weak and weak stochastic conditioning, we developed a tool to better quantify the perturbation behaviour of numerical computation problems for which the classical condition number fails to do so. Our methods are based on an analysis of directional perturbations and probabilistic tools. The use of probability theory in our context is auxiliary: the purpose is to quantify the observation that the set of adversarial perturbations is small. In practice, any reasonable numerical algorithm will find the eigenvalues of a nearby regular matrix polynomial, and the perturbation will be deterministic rather than random. However, as the algorithm knows nothing about the particular input matrix polynomial, it is reasonable to assume that if the set of adversarial perturbations is sufficiently small, then the actual perturbation will not lie in it. Put more directly, to say that the probability that a perturbed problem has large directional sensitivity is very small is to say that a perturbation, although non-random, would need a good reason to cause damage.
The results presented continue the line of work of [3], where it is argued that, just as sufficiently small numbers are considered numerically indistinguishable from zero, sets of sufficiently small measure should be considered numerically indistinguishable from null sets. One interesting direction in which the results presented here could be strengthened is to allow wider classes of probability distributions, including discrete ones, and derive equivalent (possibly slightly weaker) results. One important side effect of our analysis is a shift of focus away from the expected value and towards robust measures such as the median and other quantiles.
Our results hence have a couple of important implications, or "take-home messages", that we would like to highlight:

1. The results presented call for a critical re-evaluation of the notion of ill-posedness. It has become common practice to simply identify ill-posedness with having infinite condition, to the extent that condition numbers are often defined in terms of the inverse distance to a set of ill-posed inputs, an approach that has been popularized by J. Demmel [16, 17]. The question of whether the elements of such a set are actually badly behaved in a practical sense is often left unexamined. Our theory suggests that the set of inputs that are genuinely ill-behaved from a practical point of view can be smaller than previously thought.
2. Average-case analysis (and its refinement, smoothed analysis [12]), while well-intentioned, is still susceptible to the caprices of specific probability distributions. More meaningful results are obtained when, instead of analysing the behaviour of perturbations on average, one shifts the focus towards showing that the set of adversarial perturbations is small; ideally so small that hitting a misbehaving perturbation would suggest the existence of a specific explanation rather than just bad luck.
3. In terms of summary parameters, our approach suggests using, in line with common practice in statistics, more robust parameters such as the median instead of the mean.
A natural question that arises from the first point is: if some problems that were previously thought of as ill-posed are not (in the sense that the set of discontinuous perturbation directions is negligible), then which problems are genuinely ill-posed? In the case of polynomial eigenvalue problems, we conjecture that problems with semisimple eigenvalues are not ill-conditioned in our framework; in fact, it appears that much of the analysis performed in this paper can be extended to this setting. It is not completely obvious which problems should be considered ill-posed based on this new theory. That some inputs still should be can be seen, for example, by considering Jordan blocks with zeros on the diagonal: the computed eigenvalues of perturbations of the order of machine precision will not recover the correct eigenvalue in this situation. Our analysis in the semisimple case would be based on the fact that the directional derivative of the function to be computed exists in sufficiently many directions.
Another consequence is that many of the probabilistic analyses of condition numbers based on the distance to ill-posedness, while still correct, can possibly be refined by using a smaller set of ill-posed inputs. In particular, it is likely that condition bounds resulting from average-case and smoothed analysis can be sharpened. Finally, an interesting direction would be to examine problems with high or infinite condition number that are not ill-posed in a practical sense in other contexts, such as polynomial system solving or problems arising from the discretization of continuous inverse problems.

3.2 Perturbations of singular matrix polynomials: the De Terán–Dopico formula
Assume that P(x) ∈ F^{n×n}_d[x]

Theorem 3.2
Let P(x) ∈ F^{n×n}_d[x] be a matrix polynomial of rank r with simple eigenvalue λ and X, Y as above. Let E(x) ∈ F^{n×n}_d[x]

Corollary 3.3
Let λ be a simple eigenvalue of P(x) and let E(x) ∈ F^{n×n}_d[x] be a regular matrix polynomial. Then the directional sensitivity of the eigenvalue λ in direction E(x) admits an explicit expression.

Proposition 3.4
Let P(x) ∈ F^{n×n}_d[x] be a regular matrix polynomial and λ ∈ C a simple eigenvalue. Then the worst-case condition number of the problem of computing λ is κ = γ_P^{−1}.

Theorem 4.1
Let P(x) ∈ F^{n×n}_d[x] be a matrix polynomial of rank r and let λ be a simple eigenvalue of P(x).

Corollary 4.2
Let P(x) ∈ F^{n×n}_d[x] be a matrix polynomial of rank r and let λ be a simple eigenvalue of P(x). Here X_k denotes a B(β/2, β(k−1)/2)-distributed random variable. The claimed tail bounds and expected values for r < n follow by applying Lemma A.1 with k = 2, a = c = β/2, b = β(N − 1)/2, and d = β(n − r)/2. If r = n, the expected value follows along the same lines, and the deterministic bound follows trivially from the boundedness of the beta distribution.
and R ∈ F^{n×n} is upper triangular with r_ii > 0 [44, Part II]. The following proposition describes the distribution of the factors Q and R in the QR-decomposition of a (real or complex) Gaussian random matrix.

Proposition 4.5 Let G ∼ G^β_n(0, 1) be a Gaussian random matrix, β ∈ {1, 2}. Then G can be factored uniquely as G = QR, where R = (r_jk)_{1≤j≤k≤n} is upper triangular.
Let P(x) ∈ F^{n×n}_d[x] be a matrix polynomial of rank r < n with a simple eigenvalue λ ∈ C, and let E(x) ∈ F^{n×n}_d[x] be a regular perturbation. Denote by λ(ε) the eigenvalue of P(x) + εE(x) that converges to λ (see Theorem 3.2), and let u(ε) and v(ε) be the corresponding left and right eigenvectors of the perturbed problem. As shown in [14, Theorem 4] (see Theorem 6.4 below), for all E(x) outside a proper Zariski closed set, the limits u = lim_{ε→0} u(ε), v = lim_{ε→0} v(ε) exist and are representatives of left and right eigenvectors of P(x) associated to λ. Whenever these limits exist and represent eigenvectors of P(x), define the quantities in (15). The following is a variation of Theorem 3.2 for the eigenvectors, by De Terán and Dopico [14, Theorem 4].

Theorem 6.4 Let P(x) ∈ F^{n×n}_d[x] be a matrix polynomial of rank r with simple eigenvalue λ and X, Y as above. Let E(x) ∈ F^{n×n}_d[x] be such that X*E(λ)Y is non-singular, and let ζ be the eigenvalue of the non-singular matrix pencil (17). The eigenvectors are then characterized as the solutions of a*(G + ζ·γ_P e_ℓ e_ℓ*) = 0 and (G + ζ·γ_P e_ℓ e_ℓ*)b = 0. It follows that G*a and Gb are proportional to e_ℓ, and hence a = G^{−*}e_ℓ/‖G^{−*}e_ℓ‖ and b = G^{−1}e_ℓ/‖G^{−1}e_ℓ‖.