Relaxed fixed point iterations for matrix equations arising in Markov chain modeling

We present some accelerated variants of fixed point iterations for computing the minimal non-negative solution of the unilateral matrix equation associated with an M/G/1-type Markov chain. These variants derive from certain staircase regular splittings of the block Hessenberg M-matrix associated with the Markov chain. By exploiting the staircase profile, we introduce a two-step fixed point iteration. The iteration can be further accelerated by computing a weighted average between the approximations obtained at two consecutive steps. The convergence of the basic two-step fixed point iteration and of its relaxed modification is proved. Our theoretical analysis, along with several numerical experiments, shows that the proposed variants generally outperform the classical iterations.


Introduction
The transition probability matrix of an M/G/1-type Markov chain is a block Hessenberg matrix P of the form (1), with A_i, B_i ∈ R^{n×n}, A_i, B_i ≥ 0, and with ∑_{i=0}^∞ B_i and ∑_{i=-1}^∞ A_i stochastic matrices. In the sequel, given a real matrix A = (a_{ij}) ∈ R^{m×n}, we write A ≥ 0 (A > 0) if a_{ij} ≥ 0 (a_{ij} > 0) for all i, j. A stochastic matrix is a matrix A ≥ 0 such that Ae = e, where e is the column vector with all entries equal to 1.
In the positive recurrent case, the computation of the steady state vector π of P, such that π^T P = π^T, π^T e = 1, π ≥ 0, is related to the solution of the unilateral power series matrix equation

X = ∑_{i=-1}^∞ A_i X^{i+1}.    (3)

Under some mild assumptions this equation has a componentwise minimal non-negative solution G which determines, by means of Ramaswami's formula [16], the vector π.
Among the easy-to-use, but still effective, tools for numerically solving (3) are fixed point iterations (see [3] and the references given therein for a general review of these methods). The intrinsic simplicity of such schemes makes them attractive in domains where high performance computing is crucial. But they come at a price: the convergence can become very slow, especially for problems which are close to singularity. The design of acceleration methods (also known as extrapolation methods) for fixed point iterations is a classical topic in numerical analysis [5]. Relaxation techniques are commonly used for the acceleration of classical stationary iterative solvers for large linear systems. In this paper we introduce some new coupled fixed point iterations for solving (3) which can be combined with relaxation techniques to speed up their convergence.
More specifically, we first observe that computing the solution of the matrix equation (3) is formally equivalent to solving a semi-infinite block Toeplitz block Hessenberg linear system. Customary block iterative algorithms applied to this system yield classical fixed point iterations. In particular, the traditional and the U-based fixed point iterations [3] originate from the block Jacobi and the block Gauss-Seidel method, respectively. Recently, in [7] the authors showed that some iterative solvers based on a block staircase partitioning outperform the block Jacobi and the block Gauss-Seidel method for M-matrix linear systems in block Hessenberg form. The application of the staircase splitting to the block Toeplitz block Hessenberg linear system associated with (3) yields the new coupled fixed point iteration (4), starting from an initial approximation X_0. The contribution of this paper is aimed at highlighting the properties of (4). We show that, if X_0 = 0, the sequence {X_k}_k defined in (4) converges to G faster than the traditional fixed point iteration. In the case where the starting matrix X_0 of (4) is any column stochastic matrix and G is also column stochastic, we prove that the sequence {X_k}_k still converges to G. Moreover, by comparing the mean asymptotic rates of convergence, we conclude that (4) is asymptotically faster than the traditional fixed point iteration.
At each iteration the scheme (4) determines two approximations which can be combined by using the relaxation technique, that is, the approximation computed at the k-th step takes the form of a weighted average between Y_k and X_{k+1}. The resulting relaxed variant is defined by the sequence (5), where ω_{k+1} is the relaxation parameter. The convergence results for (4) easily extend to the modified scheme in the case of under-relaxation, that is, when the parameter ω_k satisfies 0 ≤ ω_k ≤ 1. Heuristically, it is argued that over-relaxation values (ω_k > 1) can improve the convergence. If X_0 = 0, under some assumptions a theoretical estimate of the asymptotic convergence rate of (5) is given which confirms this heuristic. Moreover, an adaptive strategy is devised which makes it possible to perform over-relaxed iterations of (5) while still ensuring the convergence of the overall iterative process. The results of extensive numerical experiments confirm the effectiveness of the proposed variants, which generally outperform the U-based fixed point iteration for nearly singular problems. In particular, the over-relaxed scheme (5) with X_0 = 0, combined with the adaptive strategy for parameter estimation, is capable of significantly accelerating the convergence without increasing the computational cost. The paper is organized as follows. In Section 2 we set up the theoretical framework, briefly recalling some preliminary properties and assumptions. In Section 3 we revisit classical fixed point iterations for solving (3) by establishing the link with the iterative solution of an associated block Toeplitz block Hessenberg linear system. In Section 4 we introduce the new fixed point iteration (4) and prove some convergence results. The relaxed variant (5), as well as the generalizations of the convergence results to this variant, are described in Section 5.
Section 6 deals with a formal analysis of the asymptotic convergence rate of both (4) and (5). Adaptive strategies for the choice of the relaxation parameter are discussed in Section 7, together with their cost analysis under some simplified assumptions. Finally, the results of extensive numerical experiments are presented in Section 8, whereas conclusions and future work are the subject of Section 9.

Preliminaries and assumptions
Throughout the paper we assume that A_i, i ≥ −1, are n × n nonnegative matrices such that their sum A = ∑_{i=-1}^∞ A_i is irreducible and row stochastic, that is, Ae = e, e = [1, . . ., 1]^T. According to the results of [3, Chapter 4], this assumption implies that (3) has a unique componentwise minimal nonnegative solution G; moreover, I_n − A_0 is an invertible M-matrix and, hence, (I_n − A_0)^{-1} ≥ 0. Furthermore, in view of the Perron-Frobenius theorem, there exists a unique positive vector w such that A^T w = w and w^T e = 1. In the study of M/G/1-type Markov chains, the drift is the scalar η = v^T w [14]. The sign of the drift determines the positive recurrence of the M/G/1-type Markov chain [3].
When explicitly stated, we will assume the following condition:

A1. The series ∑_{i=-1}^∞ i A_i is convergent and η < 0.
Nonlinear matrix equations and structured linear systems

In this section we reinterpret classical fixed point iterations for solving the matrix equation (3) in terms of iterative methods for solving a structured linear system. Formally, the power series matrix equation (3) can be rewritten as a semi-infinite block Toeplitz block Hessenberg linear system.
The above linear system can be expressed in compact form, where the block unknown X collects the powers X, X², . . . of the solution. For instance, from the partitioning H = M − N, where M = I and N = P, we find that the block vector X is a solution of the fixed point problem (7). From this equation we may generate a sequence of block vectors, and we may easily verify that this sequence {X_k}_k coincides with the sequence generated by the so-called natural fixed point iteration. Similarly, the Jacobi partitioning, where M = I ⊗ (I_n − A_0) and N = M − H, corresponds to the traditional fixed point iteration (8). The anti-Gauss-Seidel partitioning, where M is the block upper triangular part of H and N = M − H, determines the U-based fixed point iteration (9). The convergence properties of these three fixed point iterations are analyzed in [3]. Among the three, iteration (9) is the fastest but also the most expensive, since it requires the solution of a new linear system (with multiple right-hand sides) at each iteration. Moreover, it turns out that fixed point iterations exhibit arbitrarily slow convergence for problems which are close to singularity. In particular, for positive recurrent Markov chains having a drift close to zero the convergence slows down and the number of iterations becomes arbitrarily large. In the next sections we present some new fixed point iterations which offer several advantages in terms of computational efficiency and convergence properties when compared with (9).
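To make the link concrete, here is a minimal numerical sketch (in Python with NumPy) of the natural and the traditional iterations started from X_0 = 0, for a QBD (A_i = 0 for i ≥ 2). The specific blocks below are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Illustrative QBD blocks (an assumption): A = A_{-1} + A_0 + A_1 is row
# stochastic and the drift is -0.1 < 0, so the minimal solution G is stochastic.
S = np.array([[0.5, 0.5],
              [0.2, 0.8]])
Am1, A0, A1 = 0.4 * S, 0.3 * S, 0.3 * S
I = np.eye(2)

def natural(X):
    # natural iteration: X <- A_{-1} + A_0 X + A_1 X^2
    return Am1 + A0 @ X + A1 @ (X @ X)

def traditional(X):
    # traditional iteration: X <- (I - A_0)^{-1} (A_{-1} + A_1 X^2)
    return np.linalg.solve(I - A0, Am1 + A1 @ (X @ X))

def residual(X):
    # residual of the QBD matrix equation at X
    return np.max(np.abs(Am1 + A0 @ X + A1 @ (X @ X) - X))

def solve(step, tol=1e-12, maxit=100_000):
    X, k = np.zeros((2, 2)), 0
    while residual(X) > tol and k < maxit:
        X, k = step(X), k + 1
    return X, k

G_nat, it_nat = solve(natural)
G_trad, it_trad = solve(traditional)
```

Both runs converge to the same minimal solution G; with a negative drift G is stochastic, and the traditional iteration typically needs fewer sweeps than the natural one.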

A new fixed point iteration
Recently in [7] a comparative analysis has been performed of the asymptotic convergence rates of some regular splittings of a non-singular block upper Hessenberg M-matrix of finite size. The conclusion is that the staircase splitting is faster than the anti-Gauss-Seidel splitting, which in turn is faster than the Jacobi splitting. The second result is classical, while the first one is somewhat surprising, since the matrix M in the staircase splitting is much sparser than the corresponding matrix in the anti-Gauss-Seidel partitioning and the splittings are not comparable. Inspired by these convergence properties, we introduce a new fixed point iteration for solving (3), based on the staircase partitioning (10) of H. This splitting has attracted interest for applications in parallel computing environments [12, 10]. In principle, the alternating structure of the matrix M in (10) suggests several different iterative schemes.
On the one hand, the odd block entries of the system M Z_{k+1} = N X_k + E A_{-1} yield the traditional fixed point iteration. On the other hand, the even block entries lead to an implicit scheme. Instead, by looking at the structure of the matrix M as a whole, we introduce the composite two-stage iteration (11) or, equivalently, (12), starting from an initial approximation X_0. At each step k, this scheme consists of a traditional fixed point iteration that computes Y_k from X_k, followed by a cheap correction step for computing the new approximation X_{k+1}.
As for classical fixed point iterations, the convergence is guaranteed when X 0 = 0.
Proposition 1. Assume that X 0 = 0. Then the sequence {X k } k∈N generated by (12) converges monotonically to G.
Proof. We show by induction on k that 0 ≤ X_k ≤ X_{k+1} ≤ G. By multiplying both sides by the inverse of I_n − A_0, we prove similarly that G ≥ X_{k+1}. It follows that {X_k}_{k∈N} is convergent; by continuity the limit solves (3) and, hence, it coincides with the matrix G, since G is the minimal nonnegative solution.
A similar result also holds in the case where X_0 is a stochastic matrix, assuming that [A1] holds, so that G is also stochastic.

Proposition 2. Assume that condition [A1] is fulfilled and that X_0 is a stochastic matrix. Then the sequence {X_k}_{k∈N} generated by (12) converges to G.
Proof. From (11) we obtain that X_k ≥ 0 and Y_k ≥ 0 for any k ∈ N, since X_0 ≥ 0. Assuming that X_0 e = e, we may easily show by induction that Y_k e = X_k e = e for any k ≥ 0. Therefore, all the matrices X_k and Y_k, k ∈ N, are stochastic. Let {X̂_k}_{k∈N} be the sequence generated by (12) with X̂_0 = 0. We can easily show by induction that X_k ≥ X̂_k for any k ∈ N. Since lim_{k→∞} X̂_k = G, any convergent subsequence of {X_k}_{k∈N} converges to a stochastic matrix S such that S ≥ G. Since G is also stochastic, it follows that S = G and therefore, by compactness, we conclude that the sequence {X_k}_{k∈N} converges to G.
Propositions 1 and 2 are global convergence results. An estimate of the rate of convergence of (12) will be provided in Section 6, together with a comparison with other existing methods.

A relaxed variant
At each iteration, the scheme (12) determines two approximations which can be combined by using a relaxation technique, that is, the approximation computed at the k-th step takes the form of a weighted average between Y_k and X_{k+1}. In matrix terms, the resulting relaxed variant of (12) can be written as (13). If ω_k = 0 for k ≥ 1, the relaxed scheme reduces to the traditional fixed point iteration (8). If ω_k = 1 for k ≥ 1, the relaxed scheme coincides with (12). Values of ω_k greater than 1 can speed up the convergence of the iterative scheme. Concerning convergence, the proof of Proposition 1 immediately generalizes to show that the sequence {X_k}_k defined by (13), with X_0 = 0, converges for any ω_k = ω, k ≥ 1, such that 0 ≤ ω ≤ 1. Moreover, let {X_k}_{k∈N} and {X̃_k}_{k∈N} be the sequences generated by (13) for ω_k = ω and ω_k = ω̃, respectively, with 0 ≤ ω ≤ ω̃ ≤ 1. It can easily be shown that G ≥ X̃_k ≥ X_k for any k and, hence, that the iterative scheme (12) converges faster than (13) whenever ω_k = ω < 1. The convergence analysis of the modified scheme (13) for ω_k > 1 is much more involved, since the choice of a relaxation parameter ω_k > 1 can destroy the monotonicity and the nonnegativity of the approximation sequence, which are at the core of the proofs of Propositions 1 and 2. In order to maintain the convergence properties of the modified scheme, we introduce the following definition.

Definition 3. The sequence {ω_k}_{k≥1} is eligible for the scheme (13) if ω_k ≥ 0, k ≥ 1, and the two conditions (14) and (15) are satisfied.

It is worth noting that condition (14) is implicit, since the construction of X_{k+1} also depends on the value of ω_{k+1}. By replacing X_{k+1} in (14) with the expression in the right-hand side of (13), we obtain a quadratic inequality with matrix coefficients in the variable ω_{k+1}. The following generalization of Proposition 1 holds.
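The weighted-average mechanism can be sketched as follows. Since the displayed staircase correction step is not reproduced in this excerpt, the sketch uses a traditional sweep for both stages as a stand-in, and the QBD blocks are illustrative assumptions; it demonstrates the blending X_{k+1} = (1 − ω)Y_k + ωZ_k, not the paper's exact scheme:

```python
import numpy as np

# Illustrative QBD blocks with negative drift (assumed, not from the paper).
S = np.array([[0.5, 0.5],
              [0.2, 0.8]])
Am1, A0, A1 = 0.4 * S, 0.3 * S, 0.3 * S
I = np.eye(2)

def F(X):
    # one traditional sweep, used here as a stand-in for each stage;
    # the paper's cheap staircase correction step is not reproduced
    return np.linalg.solve(I - A0, Am1 + A1 @ (X @ X))

def residual(X):
    return np.max(np.abs(Am1 + A0 @ X + A1 @ (X @ X) - X))

def relaxed(omega, tol=1e-10, maxit=100_000):
    # X_{k+1} <- (1 - omega) * Y_k + omega * Z_k : a weighted average of the
    # two approximations produced within one composite step
    X, k = np.zeros((2, 2)), 0
    while residual(X) > tol and k < maxit:
        Y = F(X)          # first-stage approximation Y_k
        Z = F(Y)          # second-stage approximation
        X = (1.0 - omega) * Y + omega * Z
        k += 1
    return X, k

G1, it1 = relaxed(1.0)    # omega = 1: plain two-stage scheme
G2, it2 = relaxed(1.2)    # mild over-relaxation
```

At a fixed point of the blended map the two stages agree, so any limit is a solution of the matrix equation; with this toy chain, mild over-relaxation can reduce the iteration count.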
Proposition 4. Set X 0 = 0 and let condition [A1] be satisfied.If {ω k } k≥1 is eligible then the sequence {X k } k∈N generated by (13) converges monotonically to G.
Proof. We show by induction that 0 ≤ X_k ≤ X_{k+1} for all k. From (15) it follows that X_k e ≤ e for all k ≥ 0; therefore the sequence of approximations is monotone and upper bounded, and it has a finite limit H. By continuity we find that H solves the matrix equation (3) and He ≤ e. Since G is the unique stochastic solution, H = G.
Remark 5. As previously mentioned, condition (14) is implicit, since X_{k+1} also depends on ω_{k+1}. An explicit condition can be derived, and it follows that (14) is fulfilled whenever (16) holds, which can be reduced to a linear inequality in ω_{k+1} over a fixed search interval. We can then choose ω_{k+1} ∈ [1, ω̄] so that (17) holds. From a computational viewpoint, the strategy based on (16) and (17) for the choice of the value of ω_{k+1} can be too expensive, and some weakened criterion should be considered (compare with Section 7 below).
In the following section we perform a convergence analysis to estimate the convergence rate of (13) in the stationary case ω k = ω, k ≥ 1, as a function of the relaxation parameter.

Estimate of the convergence rate
Relaxation techniques are usually aimed at accelerating the convergence of frustratingly slow iterative solvers. Such inefficient behavior is typically exhibited when the solver is applied to a nearly singular problem. Incorporating a relaxation parameter into the iterative scheme can greatly improve its convergence rate. Preliminary insights on the effectiveness of relaxation techniques applied to the solution of the fixed point problem (7) come from the classical analysis of stationary iterative solvers and are developed in Section 6.1. A precise convergence analysis is presented in Section 6.2.

Finite dimensional convergence analysis
Suppose that H in (7) is block tridiagonal of finite size m = nℓ, ℓ even. We are interested in comparing the iterative algorithm based on the splitting (10) with other classical iterative solvers for the solution of a linear system with coefficient matrix H. As usual, we can write H = D − P_1 − P_2, where D is block diagonal, while P_1 and P_2 are staircase matrices with zero block diagonal. Let λ_i denote the eigenvalues of the Jacobi iteration matrix. Consider a relaxed scheme where the matrix M is obtained from (10) by multiplying the off-diagonal blocks by ω, and let μ_i denote the eigenvalues of the iteration matrix associated with the relaxed staircase regular splitting. By using a suitable similarity transformation, it follows that the eigenvalues of the Jacobi and relaxed staircase regular splittings are related by

μ_i² − ω λ_i² μ_i + λ_i² (ω − 1) = 0.

For ω = 0 the staircase splitting reduces to the Jacobi partitioning. For ω = 1 we find that μ_i = λ_i², which yields the classical relation between the spectral radii of the Jacobi and Gauss-Seidel methods. It is indeed well known that the asymptotic convergence rates of the Gauss-Seidel and staircase iterations coincide when applied to a block tridiagonal matrix [1]. For ω > 1 the spectral radius of the relaxed staircase scheme can be significantly smaller than the spectral radius of the same scheme for ω = 1. In Figure 1 we plot the modulus of the largest root μ as a function of ω for fixed λ = 0.999; near the best choice of ω the spectral radius is substantially reduced.
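The quadratic relation above is easy to explore numerically. The following self-contained sketch solves μ² − ωλ²μ + λ²(ω − 1) = 0 for the nearly singular value λ = 0.999 used in Figure 1 and tracks the modulus of the largest root (the specific ω values are illustrative):

```python
import cmath

lam = 0.999  # a Jacobi eigenvalue close to 1 (nearly singular problem)

def staircase_radius(omega, lam=lam):
    """Largest-modulus root mu of mu^2 - omega*lam^2*mu + lam^2*(omega - 1) = 0."""
    a, b, c = 1.0, -omega * lam**2, lam**2 * (omega - 1.0)
    disc = cmath.sqrt(b * b - 4 * a * c)   # complex sqrt handles omega past the optimum
    roots = ((-b + disc) / 2, (-b - disc) / 2)
    return max(abs(r) for r in roots)

r_jacobi = staircase_radius(0.0)   # omega = 0: |mu| = lam (Jacobi-like)
r_gs     = staircase_radius(1.0)   # omega = 1: |mu| = lam^2 (Gauss-Seidel-like)
r_over   = staircase_radius(1.9)   # over-relaxation: noticeably smaller radius
```

For λ = 0.999 the radius drops from λ² ≈ 0.998 at ω = 1 to roughly 0.97 at ω = 1.9, illustrating the gain available from over-relaxation.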

Asymptotic Convergence Rate
A formal analysis of the asymptotic convergence rate of the relaxed variants (13) can be carried out by using the tools described in [11].In this section we relate the approximation error at two subsequent steps and we provide an estimate of the asymptotic rate of convergence, expressed as the spectral radius of a suitable matrix depending on ω.
Hereafter it is assumed that assumption [A1] is verified.
Let E_k = G − X_k denote the approximation error; we analyze the convergence of the vector ε_k = E_k e, k ≥ 0. For the first and the second equation of (13) we obtain the error relations (18) and (19). Denote by R_k the matrix on the right-hand side of (18). Since Ge = e, equation (19), together with the monotonicity, yields R_k e ≤ W E_k e for a suitable nonnegative matrix W. The matrix P(ω) can be written as P(ω) = M^{-1} N(ω). Let us assume that the following condition holds:

C1. The relaxation parameter ω satisfies ω ∈ [0, ω̄], with ω̄ ≥ 1 such that N(ω̄) ≥ 0.

Assumption [C1] ensures that N(ω) ≥ 0 and, therefore, P(ω) ≥ 0 and A(ω) = M − N(ω) is a regular splitting of A(ω). If [C1] is satisfied at each iteration of (13), then from (21) we obtain that the asymptotic rate of convergence, measured in any vector norm ∥·∥, is bounded by the spectral radius ρ_ω of P(ω). The above properties can be summarized in the following result, which gives a convergence rate estimate for iteration (13).
When ω = 0, according to Theorem 4.14 in [3], I − V is a nonsingular M-matrix and A(0) = M − N(0) is a regular splitting; hence, the spectral radius ρ_0 of P(0) is less than 1. More generally, under Assumption [C1], from [15] we find that A(ω) is a nonsingular M-matrix and A(ω) = M − N(ω) is a regular splitting. Hence, we deduce that ρ_ω < 1. The following result gives an estimate of ρ_ω, showing its dependence on the relaxation parameter.
Proof. In view of the classical Collatz-Wielandt formula (see [13, Chapter 8]), if P(ω)v = w with v > 0 and w ≥ 0, then the spectral radius of P(ω) can be bounded componentwise, which leads to (23), since u ≥ 0.

Observe that, in the QBD case, where A_i = 0 for i ≥ 2, we have A_1(I_n + G) = W and, from the proof above, u = ρ_0 v. Therefore σ_min = σ_max = ρ_0 and, hence, ρ_ω = ρ_0 (1 − ω(1 − ρ_0)). In particular, ρ_ω decreases linearly with ω, and ρ_1 = ρ_0². In the general case, inequality (23) shows that the upper bound on ρ_ω decreases linearly as a function of ω. Therefore the choice ω = ω̄ gives the fastest convergence rate.

Remark 8. For the sake of illustration we consider a quadratic equation associated with a block tridiagonal Markov chain taken from [9]. We set A_{-1} = W + δI, A_0 = A_1 = W, where 0 < δ < 1 and W ∈ R^{n×n} has zero diagonal entries and all off-diagonal entries equal to a given value α, determined so that A_{-1} + A_0 + A_1 is stochastic. We find that N(ω) ≥ 0 for ω_k = ω ∈ [0, 6]. In Figure 2 we plot the spectral radius of P = P(ω); the linear plot is in accordance with Theorem 7.
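The QBD-case estimate ρ_ω = ρ_0(1 − ω(1 − ρ_0)) can be checked with a line of arithmetic; the sketch below evaluates it for an illustrative value of ρ_0 (an assumption, chosen close to 1 to mimic a nearly singular problem) and confirms ρ_1 = ρ_0²:

```python
def rho(omega, rho0):
    # estimated asymptotic rate of the relaxed scheme in the QBD case
    return rho0 * (1.0 - omega * (1.0 - rho0))

rho0 = 0.99                       # illustrative spectral radius, close to 1
rates = [rho(w, rho0) for w in (0.0, 1.0, 2.0)]
# rho(0) = rho0, rho(1) = rho0**2, and the estimate decreases linearly in omega
```

With ρ_0 = 0.99 the estimate drops from 0.99 at ω = 0 to 0.9801 at ω = 1 and 0.9702 at ω = 2, matching the linear behavior observed in Figure 2.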

The Case X 0 stochastic
In this section we analyze the convergence of the iterative method (13) starting with a stochastic matrix X_0, that is, X_0 ≥ 0 and X_0 e = e. Eligible sequences {ω_k}_k are such that X_k ≥ 0 for any k ≥ 0; this happens for 0 ≤ ω_k ≤ 1, k ≥ 1. Suppose that:

S0. The sequence {ω_k}_k in (13) is determined so that ω_k ≥ 0 and X_k ≥ 0 for any k ≥ 1.
Observe that the property X_k e = e, k ≥ 0, is automatically satisfied. Hence, under assumption [S0], all the approximations generated by the iterative scheme (13) are stochastic matrices and, therefore, Proposition 2 can be extended to prove that the sequence converges to G. The analysis of the speed of convergence follows from relation (19). Let us denote by vec(A) ∈ R^{n²} the vector obtained by stacking the columns of the matrix A ∈ R^{n×n} on top of one another. Recall that vec(ABC) = (C^T ⊗ A) vec(B) for any A, B, C ∈ R^{n×n}. By using this property we can rewrite (19) in the form (24), for k ≥ 0. The convergence of {vec(E_k)} depends on the choice of ω_{k+1}, k ≥ 0. Suppose that ω_k = ω for any k ≥ 0 and that [S0] holds. Then (24) can be rewritten in the compact form vec(E_{k+1}) = H_ω vec(E_k). It can be shown that the asymptotic rate of convergence σ satisfies σ ≤ ρ(H_ω). In the sequel we compare the cases ω_k = 0, which corresponds to the traditional fixed point iteration (8), and ω_k = 1, which reduces to the staircase fixed point iteration (12).
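The vec/Kronecker identity recalled above is easy to verify numerically; note that vec is column-major stacking, hence `order='F'` in NumPy (the matrices below are arbitrary illustrative values):

```python
import numpy as np

def vec(M):
    # column-stacking vec: stack the columns of M on top of one another
    return M.reshape(-1, order='F')

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.5, -1.0], [2.0, 0.0]])
C = np.array([[1.0, 1.0], [0.0, 2.0]])

lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)   # vec(ABC) = (C^T kron A) vec(B)
```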
For ω_{k+1} = 0, k ≥ 0, we find that H_0 is similar to the matrix on the right-hand side of the corresponding relation. It follows that the eigenvalues of H_0 belong to a set determined by the eigenvalues of G^T, and we conclude that ρ(H_0) ≤ ρ(P(0)) in view of the Wielandt theorem [13].
A similar analysis can be performed in the case ω_k = 1, k ≥ 0. By the same arguments as above we find that the eigenvalues of H_1 belong to an analogous set, with λ an eigenvalue of G^T, and we conclude that ρ(H_1) ≤ ρ(P(1)).
Therefore, in the application of (12) we expect faster convergence when X_0 is a stochastic matrix rather than X_0 = 0. Indeed, the numerical results shown in Section 8 exhibit a very rapid convergence profile when X_0 is stochastic, even better than the one predicted by ρ(H_1). This might be explained by the dependence of the asymptotic convergence rate on the second eigenvalue of the corresponding iteration matrices, as reported in [11].

Adaptive Strategies and Efficiency Analysis
The efficiency of fixed point iterations depends on both the speed of convergence and the computational cost. Markov chains are generally defined in terms of sparse matrices. To take this feature into account, we assume that γn², γ = γ(n), multiplications/divisions are sufficient to perform each of the following tasks: 1. to compute a matrix product of the form A_i W, where A_i, W ∈ R^{n×n}; 2. to solve a linear system of the form (I − A_0)Z = W, where A_0, W ∈ R^{n×n}.
We also suppose that the transition matrix P in (1) is banded, hence A_i = 0 for i > q. This is always the case in numerical computations, where the matrix power series has to be approximated by some finite partial sum ∑_{i=-1}^q A_i X_k^{i+1}. Under these assumptions we obtain the following cost estimates per step: 1. the traditional fixed point iteration (8) requires qn³ + 2γn² + O(n²) multiplicative operations; 2. the U-based fixed point iteration (9) requires (q + 4/3)n³ + γn² + O(n²) multiplicative operations; 3. the staircase-based (S-based) fixed point iteration (12) requires (q + 1)n³ + 4γn² + O(n²) multiplicative operations.
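These per-step counts can be compared directly; a small sketch with illustrative values of n, q and γ (the values, and the choice γ = n/10 for a sparse chain, are assumptions):

```python
def cost_traditional(n, q, gamma):
    # q n^3 + 2 gamma n^2 multiplicative operations per step
    return q * n**3 + 2 * gamma * n**2

def cost_U(n, q, gamma):
    # (q + 4/3) n^3 + gamma n^2 multiplicative operations per step
    return (q + 4.0 / 3.0) * n**3 + gamma * n**2

def cost_S(n, q, gamma):
    # (q + 1) n^3 + 4 gamma n^2 multiplicative operations per step
    return (q + 1) * n**3 + 4 * gamma * n**2

n, q = 100, 50
gamma = n / 10.0                  # sparse blocks: gamma(n) grows slowly with n
costs = {name: f(n, q, gamma) for name, f in
         [("traditional", cost_traditional), ("U", cost_U), ("S", cost_S)]}
# the three counts differ only in lower-order terms, so one U- or S-based
# step costs roughly the same as one traditional step
```

For these values the three counts agree to within a few percent, which is why the comparison in the text hinges on the number of iterations rather than on the cost per step.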
Observe that the cost of the S-based fixed point iteration is comparable with the cost of the U-based iteration, which is the fastest among the classical iterations [3]. Therefore, in the cases where the U/S-based fixed point iterative schemes require significantly fewer iterations to converge, these algorithms are more efficient than the traditional fixed point iteration.
Concerning the relaxed version (13) of the S-based fixed point iteration, for a fixed choice of ω_k = ω we get the same complexity as the unmodified scheme (12), which is obtained with ω_k = ω = 1. The adaptive selection of ω_{k+1} exploited in Proposition 4 and Remark 5 with X_0 = 0 requires more care.
The strategy (16) is computationally unfeasible, since it needs the additional computation of ∑_{i=2}^q A_i Y_k^{i+1}. This quantity can be approximated by introducing a scalar θ_{k+1}, so that condition (16) can be replaced with the weakened condition (25). The iterative scheme (13), complemented with the strategy based on (25) for the selection of the parameter ω_{k+1}, requires no more than (q + 3)n³ + 4γn² + O(n²) multiplicative operations. The efficiency of this scheme will be investigated experimentally in the next section.

Numerical Results
In this section we present the results of some numerical experiments which confirm the effectiveness of the proposed schemes. All the algorithms have been implemented in Matlab and tested on a PC with an i9-9900K CPU at 3.60GHz×8. Our test suite includes: 1. Synthetic Examples: (a) The block tridiagonal Markov chain of Remark 8. Observe that the drift of the Markov chain is exactly equal to −δ. (b) A numerical example given in [2] and considered in [8] for testing a suitable modification, named Algorithm 1, of the U-based fixed point iteration.
The Markov chain of M/G/1 type is defined as in [2]. It is positive recurrent, null recurrent or transient according as 0 < p < 0.5, p = 0.5 or p > 0.5, respectively. In our computations we have chosen different values of p in the range 0 < p < 0.6, and the matrices A_i are treated as zero for i ≥ 51.
(c) Synthetic examples of M/G/1-type Markov chains described in [4]. These examples are constructed in such a way that the drift of the associated Markov chain is close to a given nonnegative value. We do not describe the construction in detail, as it would take some space, but refer the reader to [4, Section 7.1].

Application Examples:
(a) Some examples of PH/PH/1 queues collected in [4, Section 7.1] for testing purposes. The construction depends on a parameter ρ with 0 ≤ ρ ≤ 1; in this case the drift is μ = 1 − ρ.
(b) The Markov chain of M/G/1 type associated with the infinitesimal generator matrix Q from the queuing model described in [6]. This is a complex queuing model: a BMAP/PHF/1/N retrial system with a finite buffer of capacity N and non-persistent customers. For the construction of the matrix Q we refer the reader to [6, Sections 4.3 and 4.5].

Synthetic Examples
The first test concerns the validation of the analysis performed above regarding the convergence of fixed point iterations. In Table 1 we show the number of iterations required by different iterative schemes on Example 1.a with n = 100. Specifically, we compare the traditional fixed point iteration, the U-based fixed point iteration, the S-based fixed point iteration (12) and the relaxed fixed point iterations (13). For the latter we consider the S_ω-based iteration, where ω_{k+1} = ω is fixed a priori, and the S_{ω(k)}-based iteration, where the value of ω_{k+1} is dynamically adjusted at each step according to the strategy (25) complemented with condition (15). The relaxed stationary iteration is applied for ω = 1.8, 1.9, 2; the relaxed adaptive iteration is applied with ω̄ = 10. The iterations are stopped when the residual error falls below a fixed tolerance. The results in Table 1 show the benefits of the proposed schemes: the U-based and the S-based iterations are about twice as fast as the traditional iteration, and the relaxed stationary variants greatly improve the convergence speed. An additional remarkable improvement is obtained by adjusting the value of the relaxation parameter dynamically. Also notice that, unlike the stationary S_ω-based iteration, the S_{ω(k)}-based iteration is guaranteed to converge.
The superiority of the adaptive implementation over the other fixed point iterations is confirmed by the numerical results on Example 1.b. In Table 2, for different values of p, we show the number of iterations required by the different iterative schemes, including Algorithm 1 implemented in [8]. For comparison with the results in [8], here we set tol = 10^{-8} in the stopping criterion.
Finally, we compare the convergence speed of the traditional, U-based, S-based and S_{ω(k)}-based fixed point iterations applied to the synthetic examples of M/G/1-type Markov chains described in [4]. In Figure 3 we report the semilogarithmic plot of the residual error in the infinity norm generated by the four fixed point iterations for two different values of the drift. Observe that the adaptive relaxed iteration is about twice as fast as the traditional fixed point iteration. This observation is confirmed in Table 3, where we indicate the speed-up in terms of CPU time with respect to the traditional fixed point iteration for different values of the drift μ. In Figure 4 we repeat the set of experiments of Figure 3 with a stochastic starting matrix X_0 = ee^T/n. Here the adaptive strategy is basically the same as before: we select ω_{k+1} in the interval [0, ω_k] as the maximum value which maintains the nonnegativity of X_{k+1}. In this case the adaptive strategy seems to be quite effective in reducing the number of iterations. Other results are not so favorable, and we believe that in the stochastic case the design of a general efficient strategy for the choice of the relaxation parameter ω_k is still an open problem.

Application Examples
The first set 2.a of examples from applications includes several cases of PH/PH/1 queues collected in [4]. The construction of the Markov chain depends on a parameter ρ, with 0 ≤ ρ ≤ 1, and two integers (i, j) which specify the PH distributions of the model. The Markov chain generated in this way is denoted as Example (i, j); its drift is μ = 1 − ρ. In Tables 4, 5 and 6 we compare the number of iterations for different values of ρ. Here and hereafter the relaxed stationary S_ω-based iteration is applied with ω = 2. For comparison, in Table 7 we show the number of iterations on Example (8, 3) of Table 5 starting with a stochastic matrix X_0; we compare the traditional, U-based and S-based iterations. We observe a rapid convergence profile and that the number of iterations is independent of the drift value. For a more challenging example from applications, in 2.b we consider the generator matrix Q from the queuing model described in [6]. In Table 8 we indicate the number of iterations for different values of the capacity N.

Conclusion and Future Work
In this paper we have introduced a novel fixed point iteration for solving M/G/1-type Markov chains. It is shown that this iteration, complemented with suitable adaptive relaxation techniques, is generally more efficient than other classical iterations. Incorporating relaxation techniques into other inner-outer iterative schemes, such as the ones introduced in [4], is ongoing research.

Tab. 2: Number of iterations on Example 1.b for different values of p.
Tab. 4: Number of iterations for different values of ρ on Example (2, 6).
Tab. 5: Number of iterations for different values of ρ on Example (8, 3).
Tab. 6: Number of iterations for different values of ρ on Example (8, 9).
Tab. 7: Number of iterations for different values of ρ on Example (8, 3) with X_0 a stochastic matrix.
Tab. 8: Number of iterations for different values of N on the example described in [6].