Condition Numbers and Backward Error of a Matrix Polynomial Equation Arising in Stochastic Models

We consider a matrix polynomial equation (MPE) $A_nX^n+A_{n-1}X^{n-1}+\cdots+A_0=0$, where $A_n, A_{n-1}, \ldots, A_0 \in \mathbb{R}^{m\times m}$ are the coefficient matrices and $X \in \mathbb{R}^{m\times m}$ is the unknown matrix. A sufficient condition for the existence of the minimal nonnegative solution is derived, where minimal means that any other nonnegative solution is componentwise no less than the minimal one. Explicit expressions for the normwise, mixed and componentwise condition numbers of the matrix polynomial equation are obtained. A backward error of the approximate minimal nonnegative solution is defined and evaluated. Some numerical examples are given to show the sharpness of the three kinds of condition numbers.


Introduction
We consider a matrix polynomial equation (MPE) of the form
$$A_nX^n + A_{n-1}X^{n-1} + \cdots + A_0 = 0, \qquad (1.1)$$
where $A_n, A_{n-1}, \ldots, A_0 \in \mathbb{R}^{m\times m}$ are the coefficient matrices and $X \in \mathbb{R}^{m\times m}$ is the unknown matrix. Matrix polynomial equations often arise in queueing problems, differential equations, system theory, stochastic theory and many other areas [2,3,12,18,21,27]. Various techniques have been studied for finding the minimal nonnegative solution. For the case $n = 2$, the MPE (1.1) is the well-known quadratic matrix equation (QME). In [10,11,20,28], a structured QME, called the unilateral quadratic matrix equation (UQME), was studied; the authors showed that an algebraic Riccati equation $XCX - AX - XD + B = 0$ can be transformed into a UQME. Bini et al. [11] proposed an algorithm for solving the UQME that complements this transformation with the shrink-and-shift technique of Ramaswami. Larin [29] generalized the Schur and doubling methods to the UQME. For the unstructured QME, which has wide application in the quasi-birth-death process [6,30], the minimal nonnegative solution is of particular importance. Davis [14,15] considered Newton's method for solving the unstructured QME. Higham and Kim [23,24] studied the dominant and minimal solvents of the unstructured QME and improved the global convergence properties of Newton's method by incorporating exact line searches. The logarithmic reduction method, with quadratic convergence, is introduced in [31].
For the case $n = \infty$, the MPE (1.1) is called a power series matrix equation and often arises in Markov chains. For a given M/G/1-type matrix $S$, the computation of the invariant probability vector associated with $S$ is strongly related to the minimal nonnegative solution of the MPE (1.1) with $n = \infty$. Latouche [6,30] proved that Newton's method can be applied to solve the power series matrix equation and that the matrix sequence generated by Newton's method converges to the minimal nonnegative solution. Bini et al. [5] solved matrix polynomial equations by devising new iterative techniques with quadratic convergence.
For the general case ($n \ge 2$), the cyclic reduction method [7-9], the invariant subspace algorithm [1] and the doubling technique [33] have been proposed for finding the minimal nonnegative solution of the MPE (1.1). Kratz and Stickel [26] proved that Newton's method can also be applied in this general case. Seo and Kim [38] studied a relaxed Newton's method for finding the minimal nonnegative solution of the MPE (1.1) and proved that it can work more efficiently than the standard Newton's method.
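To make the Newton approach for the general MPE concrete, the following is a minimal Python sketch (the standard iteration, not the relaxed variant of [38]): each step solves the vec'd Fréchet-derivative system for a correction, starting from $X_0 = 0$. The function names and the Kronecker-product assembly are our own; convergence to the minimal nonnegative solution holds only under the monotonicity assumptions discussed in the cited works.

```python
import numpy as np

def mpe_residual(A, X):
    """Evaluate F(X) = sum_k A[k] @ X^k for coefficients A[0..n]."""
    m = X.shape[0]
    R = np.zeros((m, m))
    Xk = np.eye(m)
    for Ak in A:
        R += Ak @ Xk
        Xk = Xk @ X
    return R

def newton_mpe(A, tol=1e-12, maxit=100):
    """Newton iteration for the MPE, started from X0 = 0.

    Each step solves the vec'd Frechet-derivative system
    P vec(H) = -vec(F(X)),
    P = sum_{k>=1} sum_{j<k} (X^{k-1-j})^T kron (A_k X^j),
    using the column-stacking vec convention.
    """
    m = A[0].shape[0]
    n = len(A) - 1
    X = np.zeros((m, m))
    for _ in range(maxit):
        R = mpe_residual(A, X)
        if np.linalg.norm(R, 'fro') <= tol:
            break
        # powers of X up to X^(n-1)
        pows = [np.eye(m)]
        for _ in range(n - 1):
            pows.append(pows[-1] @ X)
        P = np.zeros((m * m, m * m))
        for k in range(1, n + 1):
            for j in range(k):
                P += np.kron(pows[k - 1 - j].T, A[k] @ pows[j])
        H = np.linalg.solve(P, -R.flatten('F')).reshape((m, m), order='F')
        X = X + H
    return X
```

For a scalar sanity check, $0.75x^2 - x + 0.25 = 0$ has roots $1$ and $1/3$; the iteration from $0$ converges monotonically to the minimal root $1/3$.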
Since the minimal nonnegative solution of the MPE (1.1) is of practical importance and there is little work on perturbation analysis for the MPE (1.1), this paper is devoted to the condition numbers of the MPE (1.1), which play an important role in perturbation analysis. We investigate three kinds of normwise condition numbers for Eq. (1.1). Note that the normwise condition number ignores the structure of both the input and output data, so when the data are badly scaled or sparse, using norms to measure the relative size of the perturbation on small or zero entries does not suffice to determine how well conditioned the problem is numerically. In this case, componentwise analysis is an alternative approach by which much tighter and more revealing bounds can be obtained. Two such alternative condition numbers, called the mixed and componentwise condition numbers, were developed by Gohberg and Koltracht [17]; we refer to [16,22,34,35,39-43] for more details on these two kinds of condition numbers.
We also apply the theory of mixed and componentwise condition numbers to the MPE (1.1) and present local linear perturbation bounds for its minimal nonnegative solution. This paper is organized as follows. In Sect. 2, we give a sufficient condition for the existence of the minimal nonnegative solution. In Sect. 3, we investigate three kinds of normwise condition numbers and derive explicit expressions for them. In Sect. 4, we obtain explicit expressions and upper bounds for the mixed and componentwise condition numbers. In Sect. 5, we define a backward error of the approximate minimal nonnegative solution and derive upper and lower bounds for it. In Sect. 6, we give some numerical examples to show the sharpness of these three kinds of condition numbers.

We close this section with the notation used throughout the paper. $\mathbb{R}^{m\times m}$ stands for the set of $m \times m$ matrices with entries in $\mathbb{R}$. $\|\cdot\|_2$ and $\|\cdot\|_F$ are the spectral norm and the Frobenius norm, respectively. For $X = (x_{ij}) \in \mathbb{R}^{m\times m}$, $\|X\|_{\max}$ is the max norm given by $\|X\|_{\max} = \max_{i,j} |x_{ij}|$, and $|X|$ is the matrix whose entries are $|x_{ij}|$. For a vector $v = (v_1, v_2, \ldots, v_m)^T \in \mathbb{R}^m$, $\mathrm{diag}(v)$ is the diagonal matrix whose diagonal is given by $v$, and $|v| = (|v_1|, |v_2|, \ldots, |v_m|)^T$. For a matrix $A = (a_{ij}) \in \mathbb{R}^{m\times m}$ with $a_i$ as its $i$-th column, $\mathrm{vec}(A) = (a_1^T, \ldots, a_m^T)^T$, and for a matrix $B$, $A \otimes B = (a_{ij}B)$ is the Kronecker product. For matrices $X$ and $Y$, we write $X \ge 0$ ($X > 0$) and say that $X$ is nonnegative (positive) if $x_{ij} \ge 0$ ($x_{ij} > 0$) holds for all $i, j$; $X \ge Y$ ($X > Y$) means $X - Y \ge 0$ ($X - Y > 0$).
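The column-stacking vec operator and the Kronecker product defined above interact through the identity $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$, which underlies the linearizations used later in the paper. A quick numerical check of this identity (the helper name `vec` is our own):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
A, X, B = (rng.standard_normal((m, m)) for _ in range(3))

def vec(M):
    # column-stacking vec, matching the paper's convention
    return M.flatten('F')

# vec(A X B) == (B^T kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```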

Existence of the Minimal Nonnegative Solution
In this section, we give a sufficient condition for the existence of the minimal nonnegative solution of the MPE (1.1). Some basic definitions are stated as follows. (2.1)

Proof We define a matrix function, where the $A_k$'s are the coefficients of the MPE (1.1) and $X \in \mathbb{R}^{m\times m}$.
with $X_0 = 0$. By Theorems A.16 and A.19 in [4], there exists a vector $v > 0$ such that the stated inequalities hold for all $i = 0, 1, \ldots$. Clearly, (2.3) holds for $i = 0$. Suppose that (2.3) holds for $i = l$; then, on the other hand, it follows from (2.2) that (2.3) holds for $i = l + 1$. Then, the MPE (1.1) has a minimal positive solution.
Proof Note that if $A_0$ and $A_1$ are irreducible matrices, or if $A_0$ is a positive matrix, we get

Normwise Condition Number
In this section, we investigate three kinds of normwise condition numbers of the MPE (1.1).
The perturbed equation of the MPE (1.1) is (3.1). To simplify notation, we introduce the recursion function $\mathbb{N} \times \mathbb{N} \times \mathbb{R}^{m\times m} \times \mathbb{R}^{m\times m} \to \mathbb{R}^{m\times m}$ defined in [38], where $\mathbb{N}$ is the set of natural numbers and $\mathbb{N}^+ = \mathbb{N} \setminus \{0\}$. Using this function, we can write the MPE (1.1) as (3.2). By Lemma 3.1, Eq. (3.1) can be rewritten as (3.3). Dropping the higher-order terms in (3.3) yields (3.4), and applying the vec operation to (3.4) gives (3.5). Under certain conditions, usually satisfied in applications, the matrix $P$ is nonsingular, as shown in [38]; we suppose that $P$ is nonsingular in the remainder of this paper. We define the mapping $\varphi$, where $X$ is the minimal nonnegative solution of the MPE (1.1). Three kinds of normwise condition numbers are then defined. The nonzero parameters $\delta_k$ in the first two condition numbers provide some freedom in how the perturbations are measured. Generally, $\delta_k$ is chosen as a function of $\|A_k\|_F$, and $\delta_k = \|A_k\|_F$ is taken most often for $k = 0, 1, \ldots, n$.
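To make the linearization concrete, the following hedged sketch assembles a matrix $P$ as the Kronecker form of the Fréchet derivative of $F(Y) = \sum_{k} A_kY^k$ at $Y = X$, which is our reading of the matrix $P$ in (3.6), and verifies it against a central finite-difference directional derivative; `build_P` is our own helper name.

```python
import numpy as np

def build_P(A, X):
    """Kronecker form of the Frechet derivative of F(Y) = sum_k A_k Y^k at Y = X:
    P vec(H) = vec( sum_{k>=1} A_k sum_{j<k} X^j H X^{k-1-j} ),
    with the column-stacking vec convention."""
    m = X.shape[0]
    n = len(A) - 1
    pows = [np.eye(m)]
    for _ in range(max(n - 1, 0)):
        pows.append(pows[-1] @ X)
    P = np.zeros((m * m, m * m))
    for k in range(1, n + 1):
        for j in range(k):
            P += np.kron(pows[k - 1 - j].T, A[k] @ pows[j])
    return P

# finite-difference check of P against F itself (random data, our own test setup)
rng = np.random.default_rng(1)
m, n = 3, 3
A = [rng.standard_normal((m, m)) for _ in range(n + 1)]
X = rng.standard_normal((m, m))
H = rng.standard_normal((m, m))

def F(Y):
    R = np.zeros((m, m))
    Yk = np.eye(m)
    for Ak in A:
        R += Ak @ Yk
        Yk = Yk @ Y
    return R

t = 1e-6
fd = (F(X + t * H) - F(X - t * H)) / (2 * t)       # directional derivative, FD
Pv = (build_P(A, X) @ H.flatten('F')).reshape((m, m), order='F')
assert np.allclose(fd, Pv, atol=1e-6)
```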

Proof It follows from (3.5) that (3.13) holds, which yields (3.14). Note that $\|r_1\|_2 = 1$, and it follows from (3.8) (with $i = 1$) and inequality (3.14) that (3.10) holds. According to (3.5), it follows from (3.13) that (3.16) holds. On the other hand, (3.13) can be rewritten in a form from which it is easy to get (3.17), where $\mu = \sum_{k=0}^{n} \delta_k \big\| P^{-1}\big((X^k)^T \otimes I_m\big) \big\|_2$. Then, (3.11) is obtained from inequalities (3.16) and (3.17). Now we study another sensitivity analysis for the MPE (1.1). Consider the parameter perturbation of $A_p$: $A_p(\tau) = A_p + \tau E_p$ and the equation $\sum_{p=0}^{n} A_p(\tau)X^p = 0$, (3.18), where $E_p \in \mathbb{R}^{m\times m}$ and $\tau$ is a real parameter. Let $Q(X, \tau) = \sum_{p=0}^{n} A_p(\tau)X^p$ and let $X^+$ be any solution of the MPE (1.1) such that $P$ is nonsingular. Then (i) $Q(X^+, 0) = 0$; (ii) $Q(X, \tau)$ is differentiable arbitrarily many times in a neighborhood of $(X^+, 0)$. Note that $\partial Q/\partial X\,|_{(X^+,0)}$ is exactly $P$ in (3.6) and is nonsingular under our assumption. By the implicit function theorem [36], there exists $\delta > 0$ such that for $\tau \in (-\delta, \delta)$ there is a unique $X(\tau)$ satisfying: (i) $Q(X(\tau), \tau) = 0$ with $X(0) = X^+$; (ii) $X(\tau)$ is differentiable arbitrarily many times with respect to $\tau$. According to [37], we can derive the Rice condition number of $X^+$:

Mixed and Componentwise Condition Number
In this section, we investigate the mixed and componentwise condition numbers of the MPE (1.1), and explicit expressions for these two kinds of condition numbers are derived. We first introduce some well-known results. To define mixed and componentwise condition numbers, the following distance function is useful. For any $a, b \in \mathbb{R}^m$, define $a \oslash b = [c_1, c_2, \ldots, c_m]^T$ by $c_i = a_i/b_i$ if $b_i \ne 0$, $c_i = 0$ if $a_i = b_i = 0$, and $c_i = \infty$ otherwise.

Then we define $d(a, b) = \|(a - b) \oslash b\|_\infty$. Consequently, for matrices $A, B \in \mathbb{R}^{m\times m}$, we define $d(A, B) = d(\mathrm{vec}(A), \mathrm{vec}(B))$.
In the sequel, we assume $d(a, b) < \infty$ for any pair $(a, b)$. For $\epsilon > 0$, we set $B_0(a, \epsilon) = \{x \mid d(x, a) \le \epsilon\}$. For a vector-valued function $F : \mathbb{R}^p \to \mathbb{R}^q$, $\mathrm{Dom}(F)$ denotes the domain of $F$.
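A direct implementation of this distance function, under our reading of the three-branch componentwise quotient, might look as follows; the function names are our own:

```python
import numpy as np

def oslash(a, b):
    """Componentwise quotient: c_i = a_i/b_i if b_i != 0,
    0 if a_i = b_i = 0, and +inf otherwise."""
    c = np.empty(a.shape, dtype=float)
    for i in range(a.size):
        if b[i] != 0:
            c[i] = a[i] / b[i]
        elif a[i] == 0:
            c[i] = 0.0
        else:
            c[i] = np.inf
    return c

def d(a, b):
    """Distance d(a, b) = || (a - b) oslash b ||_inf."""
    return np.max(np.abs(oslash(a - b, b)))
```

For example, `d` of $(1.1, 0, 2)$ and $(1, 0, 4)$ is $0.5$ (the third component dominates), while perturbing a zero entry to a nonzero one gives an infinite distance, which is why the assumption $d(a,b) < \infty$ above matters.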
The mixed and componentwise condition numbers introduced by Gohberg and Koltracht [17] are listed as follows:

Definition 4.1 [17] Let $F : \mathbb{R}^p \to \mathbb{R}^q$ be a continuous mapping defined on an open set $\mathrm{Dom}(F) \subset \mathbb{R}^p$ such that $0 \notin \mathrm{Dom}(F)$ and $F(a) \ne 0$ for a given $a \in \mathrm{Dom}(F)$.
(1) The mixed condition number of $F$ at $a$ is defined by
$$m(F, a) = \lim_{\epsilon \to 0}\ \sup_{0 < d(\hat a, a) \le \epsilon} \frac{\|F(\hat a) - F(a)\|_\infty}{\|F(a)\|_\infty} \cdot \frac{1}{d(\hat a, a)}.$$
(2) The componentwise condition number of $F$ at $a$ is defined by
$$c(F, a) = \lim_{\epsilon \to 0}\ \sup_{0 < d(\hat a, a) \le \epsilon} \frac{d(F(\hat a), F(a))}{d(\hat a, a)}.$$
The explicit expressions of the mixed and componentwise condition numbers of $F$ at $a$ are given by the following lemma [13,17].

Furthermore, we have two simple upper bounds for $m(\varphi)$ and $c(\varphi)$ as follows:
Proof It follows from (3.5) that $\mathrm{vec}(\Delta X) \approx P^{-1}Lr$, which implies that the Fréchet derivative of $\varphi$ is $P^{-1}L$. Therefore, from (2) of Lemma 4.2 we obtain the expression for $m(\varphi)$, and similarly for $c(\varphi)$.
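Under the reading that $\varphi'(a) = -P^{-1}\big[(X^0)^T \otimes I_m,\ \ldots,\ (X^n)^T \otimes I_m\big]$ acting on the stacked vector $a$ of $\mathrm{vec}(A_k)$'s, the Lemma 4.2-style expressions for $m(\varphi)$ and $c(\varphi)$ can be evaluated numerically as in the following sketch. The function name, the exact form of $L$, and the convention of skipping zero entries of $\mathrm{vec}(X)$ in the componentwise quotient are our assumptions:

```python
import numpy as np

def mixed_cw_condition(A, X, P):
    """Sketch of m(phi) and c(phi) via Lemma 4.2-style formulas:
    phi'(a) = -P^{-1} [ (X^0)^T kron I | ... | (X^n)^T kron I ],
    m = || |phi'(a)| |a| ||_max / ||X||_max,
    c = || |phi'(a)| |a| ./ |vec X| ||_max  (entries with vec(X)_i = 0 skipped).
    """
    m_dim = X.shape[0]
    n = len(A) - 1
    I = np.eye(m_dim)
    blocks, Xk = [], np.eye(m_dim)
    for _ in range(n + 1):
        blocks.append(np.kron(Xk.T, I))
        Xk = Xk @ X
    L = -np.hstack(blocks)
    J = np.linalg.solve(P, L)                       # phi'(a)
    a = np.concatenate([Ak.flatten('F') for Ak in A])
    s = np.abs(J) @ np.abs(a)                       # |phi'(a)| |a|
    x = X.flatten('F')
    m_cond = np.max(s) / np.max(np.abs(x))
    nz = x != 0
    c_cond = np.max(s[nz] / np.abs(x[nz]))
    return m_cond, c_cond
```

As a scalar sanity check for $0.75x^2 - x + 0.25 = 0$ at the minimal root $x = 1/3$ (where $P = F'(1/3) = -0.5$), both quantities evaluate to $4$.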

Backward Error
In this section, we investigate the backward error of an approximate solution $Y$ to the MPE (1.1). The backward error is defined by (5.1); we can then write the equation in (5.1) as (5.2). Applying the vec operator to (5.2) yields (5.3). For convenience, we write (5.3) as (5.4). We assume that $H$ has full rank, which guarantees that (5.3) has a solution and that the backward error is finite. From (5.4), an upper bound for $\theta(Y)$ is obtained, where $H^+$ is the pseudoinverse of $H$ and $\sigma_{\min}(H)$ is the smallest singular value of $H$, which is nonzero under the assumption that $H$ has full rank.
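The upper bound involving $H^+$ can be evaluated with a least-squares solve, since the minimal-norm solution of the underdetermined linear system realizes the pseudoinverse. The following sketch assumes $H = [\delta_0 (Y^0)^T \otimes I_m,\ \ldots,\ \delta_n (Y^n)^T \otimes I_m]$ and residual $R = \sum_k A_kY^k$, which is our reading of (5.2)-(5.4); the function and default weights are our own:

```python
import numpy as np

def backward_error_bound(A, Y, deltas=None):
    """Sketch of the upper bound ||H^+ vec(R)||_2 on the backward error theta(Y)
    of an approximate solution Y, where R = sum_k A_k Y^k and
    H = [delta_0 (Y^0)^T kron I, ..., delta_n (Y^n)^T kron I]."""
    m = Y.shape[0]
    if deltas is None:
        # default weights delta_k = ||A_k||_F (fall back to 1 for a zero block)
        deltas = [np.linalg.norm(Ak, 'fro') or 1.0 for Ak in A]
    I = np.eye(m)
    R = np.zeros((m, m))
    Yk = np.eye(m)
    blocks = []
    for k, Ak in enumerate(A):
        R += Ak @ Yk
        blocks.append(deltas[k] * np.kron(Yk.T, I))
        Yk = Yk @ Y
    H = np.hstack(blocks)
    # minimal-norm solution of H z = -vec(R) gives ||H^+ vec(R)||_2
    z, *_ = np.linalg.lstsq(H, -R.flatten('F'), rcond=None)
    return np.linalg.norm(z)
```

If $Y$ solves the MPE exactly, the residual vanishes and the bound is (numerically) zero; an inexact $Y$ gives a strictly positive bound.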

Numerical Examples
In this section, we give three numerical examples to show the sharpness of the normwise, mixed and componentwise condition numbers. All computations were performed in Matlab 7.10.0, with unit roundoff $u \approx 2.2 \times 10^{-16}$.
Example 6.1 We consider the matrix polynomial equation $\sum_{k=0}^{9} A_kX^k = X$ with $A_k = D^{-1}\bar A_k$ for $k = 0, 1, \ldots, 9$, where $\bar A_k = \mathrm{rand}(10)$ with rand the random function in Matlab. The matrix $D$ is a diagonal matrix whose entries are the row sums of $\sum_{k=0}^{9} \bar A_k$, so that $\big(\sum_{k=0}^{9} A_k\big)\mathbf{1}_m = \mathbf{1}_m$. We rewrite the matrix polynomial as (6.1). Note that $I_m - A_1$ is a nonsingular M-matrix and $I_m - \sum_{k=0}^{9} A_k$ is a singular irreducible M-matrix. From Theorem 2.3, we know that the minimal nonnegative solution $S$ of Eq. (6.1) exists.
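The construction of this test problem can be reproduced as follows (our reading of the example; the seed and variable names are our own). The assertion checks the row-stochastic property $\big(\sum_k A_k\big)\mathbf{1}_m = \mathbf{1}_m$:

```python
import numpy as np

# Abar_k = rand(10); D = diag of the row sums of sum_k Abar_k; A_k = D^{-1} Abar_k,
# so that sum_k A_k is row-stochastic.
rng = np.random.default_rng(2024)   # seed is our own choice
m, n = 10, 9
Abar = [rng.random((m, m)) for _ in range(n + 1)]
row_sums = sum(Abar) @ np.ones(m)
A = [Ak / row_sums[:, None] for Ak in Abar]

ones = np.ones(m)
assert np.allclose(sum(A) @ ones, ones)
```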
Suppose that the perturbations in the coefficient matrices are $\tilde A_k = A_k + \mathrm{rand}(10) \circ A_k \times 10^{-s}$, where $s$ is a positive integer and $\circ$ is the Hadamard product. Note that $I_m - \tilde A_1$ and $I_m - \sum_{k=0}^{9} \tilde A_k$ are also nonsingular M-matrices. Hence the corresponding perturbed equation has a unique minimal nonnegative solution $\tilde S$.
We use Newton's method proposed in [38] to compute the minimal nonnegative solutions $S$ and $\tilde S$. Choosing $\delta_k = \|A_k\|_F$, from Theorem 3.2 we get three kinds of local normwise perturbation bounds, involving $\sqrt{n}\,k_1(\varphi)$ and $k_2^M(\varphi) = \mu/\|S\|_F$, and we compare these approximate perturbation bounds with the exact relative error $\|\tilde S - S\|_F/\|S\|_F$ (Table 1: comparison of the exact relative error with the local normwise perturbation bounds). Table 1 shows that our estimates of the three normwise perturbation bounds are close to the exact relative error $\|\tilde S - S\|_F/\|S\|_F$; it also shows that the perturbation bound given by $k_3(\varphi)$ is sharper than the other two bounds.
Example 6.2 This example is taken from [32]. Consider the matrix polynomial equation representing a queueing system in a random environment, where periods of severe overflow alternate with periods of low arrivals. Note that in this example both $A_0$ and $A_2$ are nonpositive, while $A_1$ itself is a nonsingular M-matrix. Consider Eq. (6.2). Then Eq. (6.2) has the same solutions as the equation $A_0 + A_1X + A_2X^2 = 0$, and the coefficient matrices in (6.2) satisfy the conditions in Corollary 2.5, so Eq. (6.2) has a minimal positive solution $X$. For $k = 0, 1, 2$, let $\Delta A_k = \mathrm{rand}(m) \circ A_k \times 10^{-s}$, where $s$ is a positive integer; then $\tilde A_k = A_k + \Delta A_k$ is the perturbed coefficient matrix of the corresponding perturbed equation. Similarly, the minimal positive solution $\tilde X$ of the perturbed matrix polynomial equation exists and can be obtained using Newton's method from [38]. Let $\epsilon_0 = \min\{\epsilon : |\Delta A_k| \le \epsilon|A_k|,\ k = 0, 1, \ldots, n\}$. Table 2 shows that the mixed and componentwise analyses give tighter and more revealing bounds than the normwise perturbation bounds.

Example 6.3 We consider a matrix differential equation; such equations may occur in connection with vibrating systems. The coefficient matrices of the characteristic polynomial $P_3(X) = 0$ satisfy the condition in Corollary 2.5, so there is a minimal positive solution $X^*$ such that $P_3(X^*) = 0$.
Let $s$ be a positive integer and suppose the coefficient matrices are perturbed by $\Delta A_i$ ($i = 0, 1, 2$). Using the notation of Examples 6.1 and 6.2, the perturbation bounds obtained from the normwise, mixed and componentwise condition numbers are listed in Table 3. Table 3 shows that our estimated perturbation bounds are sharp. Moreover, we observe that the simple upper bounds $m_U(\varphi)$ and $c_U(\varphi)$ of the mixed and componentwise condition numbers $m(\varphi)$ and $c(\varphi)$, obtained in Theorem 4.3, are also tight.

Conclusion
In this paper, a sufficient condition for the existence of the minimal nonnegative solution of a matrix polynomial equation is given. Three kinds of normwise condition numbers of the matrix polynomial equation are investigated, and explicit expressions and upper bounds for the mixed and componentwise condition numbers are derived. A backward error is defined and evaluated.