Perturbed Markov Chains with Damping Component

The paper is devoted to studies of regularly and singularly perturbed Markov chains with damping component. In such models, a matrix of transition probabilities is regularised by adding a special damping matrix multiplied by a small damping (perturbation) parameter ε. We perform a detailed perturbation analysis of such Markov chains; in particular, we give effective upper bounds for the rate of approximation of stationary distributions of unperturbed Markov chains by stationary distributions of perturbed Markov chains with regularised matrices of transition probabilities, asymptotic expansions for approximating stationary distributions with respect to the damping parameter, explicit coupling-type upper bounds for the rate of convergence in ergodic theorems for n-step transition probabilities, as well as ergodic theorems in triangular array mode.


Introduction
Perturbed Markov chains are among the most popular and important objects of study in the theory of Markov processes and their applications to stochastic networks, queueing and reliability models, bio-stochastic systems, and many other stochastic models.
We are especially interested in models of Markov chains commonly used for the description of information networks. In such models, an information network is represented by the Markov chain associated with the corresponding node-link graph. Stationary distributions and other related characteristics of information Markov chains usually serve as basic tools for ranking the nodes in information networks. The ranking problem may be complicated by singularity of the corresponding information Markov chain, where its phase space is split into several weakly or completely non-communicating groups of states. In such models, the matrix of transition probabilities P_0 of the information Markov chain is usually regularised and approximated by the matrix P_ε = (1 − ε)P_0 + εD, where D is a so-called damping stochastic matrix with identical rows and all positive elements, while ε ∈ [0, 1] is a damping (perturbation) parameter.
The power method is often used to approximate the corresponding stationary distribution π̄_ε by the rows of the matrix P_ε^n. The damping parameter ε ∈ (0, 1] should be chosen neither too small nor too large. In the first case, where ε takes too small values, the damping effect will not work against absorbing and pseudo-absorbing effects, since the second eigenvalue of such matrices (which determines the rate of convergence in the above-mentioned ergodic approximation) takes values approaching 1. In the second case, the ranking information (accumulated by the matrix P_0 via the corresponding stationary distribution) may be partly lost, due to the deviation of the matrix P_ε from the matrix P_0. This actualises the problem of constructing asymptotic expansions for the perturbed stationary distribution π̄_ε with respect to the damping parameter ε, as well as studies of the asymptotic behaviour of the matrices P_ε^n in triangular array mode, where ε → 0 and n → ∞ simultaneously.
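The trade-off described above can be illustrated numerically. The following sketch (the 3-state matrix P0 with an absorbing state and the uniform damping matrix D are invented for illustration, not taken from the paper) approximates the stationary distribution of P_ε by the power method and computes the second eigenvalue modulus of P_ε, which for this P0 equals 1 − ε and approaches 1 as ε → 0:

```python
# A minimal sketch (matrices invented for illustration, not from the paper):
# approximate the stationary distribution of P_eps = (1 - eps) P0 + eps D
# by the power method, and inspect the second eigenvalue that governs
# its rate of convergence.
import numpy as np

P0 = np.array([[0.5, 0.5, 0.0],
               [0.5, 0.5, 0.0],
               [0.0, 0.0, 1.0]])        # state 3 is absorbing: P0 is not ergodic
D = np.tile([1/3, 1/3, 1/3], (3, 1))    # damping matrix: identical rows

def stationary_by_power(P, n_iter=2000):
    """Approximate the stationary distribution by iterating p <- p P."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iter):
        p = p @ P
    return p

for eps in (0.5, 0.15, 0.01):
    P_eps = (1 - eps) * P0 + eps * D
    lam2 = sorted(abs(np.linalg.eigvals(P_eps)))[-2]
    print(eps, lam2)                    # lam2 = 1 - eps here: tends to 1 as eps -> 0
```

For ε = 0.01 the second eigenvalue modulus is 0.99, so each extra digit of accuracy costs roughly 230 further power iterations; for ε = 0.5 it costs only a few.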
In this paper, we perform a detailed perturbation analysis of Markov chains with damping component, using the procedure of artificial regeneration, renewal techniques for the approximating Markov chains, and the coupling method in ergodic theorems for perturbed Markov chains with damping component. We obtain effective explicit series representations for the corresponding stationary distributions π̄_ε, upper bounds for the deviation of π̄_ε from π̄_0, asymptotic expansions for π̄_ε with respect to the perturbation parameter ε, effective explicit upper bounds for the rate of convergence in ergodic theorems, as well as ergodic theorems for Markov chains with damping component in triangular array mode.
Real-world systems consist of interacting units or components. These components constitute what are termed information networks. With recent advancements in technology, filtering information has become a challenge in such systems. Moreover, their significance is visible as they find applications in Internet search engines, biological, financial, transport and queueing networks, and many others (Andersson and Silvestrov 2008; Avrachenkov et al. 2013; Biganda et al. 2017; Brin and Page 1998; Engström 2016; Engström and Silvestrov 2014, 2016a, b, 2017; Gleich 2015; Meyer 2004, 2011; Sun and Han 2013).
PageRank is a link-based criterion that captures the importance of Web pages and provides rankings of the pages in the search engine Google (Avrachenkov et al. 2013; Biganda et al. 2017; Brin and Page 1998; Engström 2016; Engström and Silvestrov 2014, 2016a, b, 2017; Meyer 2004, 2011). The transition matrix (also called the Google matrix G) of the Markov chain in the PageRank problem is defined in Andersson and Silvestrov (2008), Haveliwala and Kamvar (2003) and Meyer (2004, 2011) as G = cP + (1 − c)E, where P is an n × n row-stochastic matrix (also called the hyperlink matrix), E (the damping matrix) is an n × n rank-one stochastic matrix, and c ∈ (0, 1) is the damping parameter.
The fundamental concept of PageRank is to use the stationary distribution of the Markov chain on the network to rank Web pages. However, other algorithms similar to PageRank exist in the literature, for instance, the EigenTrust algorithm and the DebtRank algorithm (Battiston et al. 2012). In addition, variants of PageRank in relation to some specific networks have been studied, e.g. in Biganda et al. (2018a, b); updating of PageRank due to changes in a network has also been considered in the literature, for instance, in Abola et al. (2018a, b).
The parameter c is very important in the PageRank definition, because it regulates the level of the uniform noise introduced into the system (Avrachenkov et al. 2013; Langville and Meyer 2011). If c = 1, there are several absorbing states for the random walk defined by P. However, if 0 < c < 1, the Markov chain induced by the matrix G is ergodic (Avrachenkov et al. 2013). The authors of Langville and Meyer (2011) argued that the parameter c controls the asymptotic rate of convergence of the power method algorithm. Similar arguments were given in Andersson and Silvestrov (2008), where it is pointed out that the choice of c is a delicate matter. It may result in convergence and stability problems if not carefully chosen.
The damping factor c may be denoted and interpreted differently depending on the model being studied. For instance, a model of a Markov chain with restart is considered in Avrachenkov et al. (2018), where the parameter p is the probability to restart the move and 1 − p is the probability to follow a link according to the corresponding transition probability of the above Markov chain. Hence, one can argue that the parameter p has the same interpretation as the damping factor in Google's PageRank problem.
Our representation of perturbed Markov chains is traditional for perturbed stochastic processes. In fact, PageRank is the stationary distribution of a singularly perturbed Markov chain with perturbation parameter ε = 1 − c. Also, the notation D is usually used for the damping matrix instead of the notation E mentioned above. Hence, we wish to point out here that the representation of an information network model by a Markov chain with matrix of transition probabilities P_ε = (1 − ε)P_0 + εD should not create any confusion for readers. We perform asymptotic analysis of such Markov chains, in particular, under the assumption that ε → 0.
The paper includes five sections. In Section 2, we derive a special series representation for stationary distributions of Markov chains with damping component. In Section 3, we describe continuity properties of stationary distributions with respect to the perturbation (damping) parameter, give explicit upper bounds for the rates of convergence in approximations of the stationary distributions for Markov chains with damping component, and present asymptotic expansions for the stationary distributions of Markov chains with damping component. In Section 4, we obtain explicit coupling-type upper bounds for the rate of convergence in ergodic theorems for Markov chains with damping component. In Section 5, we present ergodic theorems for Markov chains with damping component in triangular array mode, where time tends to infinity and the perturbation parameter tends to zero, simultaneously.

Stationary Distributions for Markov Chains with Damping Component
In this section, we introduce the model of Markov chains with damping component. We describe a procedure of embedding such Markov chains in the model of discrete time regenerative processes with special damping regeneration times and derive a special series representation for stationary distributions. This representation plays the key role in the subsequent perturbation analysis. We also give formulas for the stationary distributions of the corresponding limiting (unperturbed) Markov chains.
Here and henceforth, the symbols P_p̄ and E_p̄ are used for probabilities and expectations related to a Markov chain with an initial distribution p̄. In the case where the initial distribution is concentrated in a state i, the above symbols take the forms P_i and E_i.
The phase space X of the Markov chain X_{ε,n} is one aperiodic class of communicating states, for every ε ∈ (0, 1].
The following theorem gives a series representation for the stationary distribution of the Markov chain X_{ε,n}. This representation plays the key role in the asymptotic perturbation analysis for Markov chains with damping component.
This follows from the independence of the transition probabilities p_{ε,ir,jk} given by relation (4). Let us also denote by PQ_m the class of all initial distributions p̄q̄ = ⟨p_i q_r, (i, r) ∈ Y⟩. If the initial distribution p̄q̄ = d̄q̄_• = ⟨d_i q_{•,r}, (i, r) ∈ Y⟩, where q̄_• = ⟨q_{•,0} = 0, q_{•,1} = 1⟩, then Y_{ε,n} is a standard regenerative process. If p̄q̄ ≠ d̄q̄_•, then Y_{ε,n} is a regenerative process with the transition period [0, T_{ε,1}).
Let us introduce the sets Z_j = {(j, 0), (j, 1)}, j ∈ X. Obviously, p_{ε,p̄,j}(n) = P_{p̄q̄}{Y_{ε,n} ∈ Z_j}, n ≥ 0. That is why the probabilities p_{ε,p̄,j}(n), n ≥ 0 are, for every j ∈ X, the unique bounded solution of the discrete time renewal type equation (6). Equation (6) is the standard renewal equation for the probabilities p_{ε,d̄,j}(n) in the case where the initial distribution p̄q̄ = d̄q̄_•. In the standard regeneration case, the geometric distribution of the regeneration time T_{ε,1} = S_{ε,1} is aperiodic and has expectation ε^{−1}. This makes it possible to apply the discrete time renewal theorem (see, for example, Feller 1968) to the renewal equation (6). This yields the ergodic relation (7), for j ∈ X. Let us define p_{ε,d̄,j}(n − l) = 0, for l > n. Relation (7) implies that p_{ε,d̄,j}(n − l) → π_{ε,j} as n → ∞, for l ≥ 0 and j ∈ X. Using relation (7) and the Lebesgue dominated convergence theorem, we get the desired relation, for p̄ ∈ P_m, j ∈ X. The proof is complete.
The phase space X is one aperiodic class of communicating states for the Markov chain X_{ε,n}, for every ε ∈ (0, 1]. In this case, its stationary distribution π̄_ε = ⟨π_{ε,j}, j ∈ X⟩ is the unique positive solution of the system of linear equations Σ_{i∈X} π_{ε,i} p_{ε,ij} = π_{ε,j}, j ∈ X, Σ_{j∈X} π_{ε,j} = 1. Also, the stationary probabilities π_{ε,j} can be represented in the form π_{ε,j} = e_{ε,j}^{−1}, j ∈ X, via the expected return times e_{ε,j}, using the regeneration property of the Markov chain X_{ε,n} at the moments of return to state j. The series representation (1) for the stationary distribution π_{ε,j} is based on the use of alternative, "shorter" damping regeneration times. This representation is, in our opinion, a more effective tool for performing asymptotic perturbation analysis for Markov chains with damping component.
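A representation of this type can be checked numerically. In the PageRank literature, the stationary distribution of P_ε = (1 − ε)P_0 + εD, with D having identical rows d̄, admits the series form π̄_ε = ε Σ_{n≥0} (1 − ε)^n d̄ P_0^n; the sketch below (the 3-state matrix and damping distribution are invented for illustration, not taken from the paper) verifies a truncated series against the stationarity equation:

```python
# Numerical check of a damping-regeneration series representation:
#   pi_eps = eps * sum_{n >= 0} (1 - eps)^n * (d P0^n),
# for P_eps = (1 - eps) P0 + eps D with D having identical rows d.
# The matrix P0 and the distribution d below are illustrative examples.
import numpy as np

P0 = np.array([[0.2, 0.8, 0.0],
               [0.6, 0.4, 0.0],
               [0.1, 0.1, 0.8]])
d = np.array([0.5, 0.3, 0.2])           # damping distribution
D = np.tile(d, (3, 1))
eps = 0.1
P_eps = (1 - eps) * P0 + eps * D

pi_series = np.zeros(3)
term = d.copy()
for n in range(500):                    # truncate; (1 - eps)^500 is negligible
    pi_series += eps * (1 - eps) ** n * term
    term = term @ P0

# the truncated series satisfies the stationarity equation pi P_eps = pi
assert np.allclose(pi_series @ P_eps, pi_series, atol=1e-8)
```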
It should be noted that the series representation (1) can also be obtained in alternative ways, for example, by expanding in a series the matrix representation for the stationary distribution of the Markov chain X_{ε,n} given in Langville and Meyer (2004), or just by directly checking that the limits π_{ε,j} given by relation (1) satisfy the system of linear equations (9). We, however, prefer to obtain it using the embedding of the Markov chain X_{ε,n} into the model of regenerative processes with special damping regeneration moments and the corresponding renewal equation (6). This approach yields not only the series representation (1) for the stationary distribution but also the corresponding ergodic relation. Also, the renewal equation (6) is essentially used in the proofs of Theorems 6, 7 and 8 and allows us to obtain more general convergence relations and, also, explicit upper bounds for the rate of convergence in these relations, pointed out in Remarks 8 and 9.

The Stationary Distribution of the Markov Chain X_{0,n}
Let us describe the ergodic properties of the Markov chain X_{0,n}. The matrix P_ε = (1 − ε)P_0 + εD converges to the matrix P_0 as ε → 0. This relation lets one consider the Markov chain X_{ε,n}, for ε ∈ (0, 1], as a perturbed version of the Markov chain X_{0,n} and interpret the damping parameter ε as a perturbation parameter.
In the present paper, we assume that the following condition holds, for some h ≥ 1: A_{h,1}: X = X^{(1)} ∪ . . . ∪ X^{(h)}, where: (a) X^{(g)}, g = 1, . . . , h are non-intersecting subsets of X, (b) X^{(g)}, g = 1, . . . , h are non-empty, closed aperiodic classes of communicating states for the Markov chain X_{0,n}. The Markov chain X_{ε,n} has the same finite phase space X = {1, . . . , m}, for every ε ∈ [0, 1]. As was pointed out in Section 2.1, this phase space is one aperiodic class of communicating states for the Markov chain X_{ε,n}, for every ε ∈ (0, 1]. However, the situation can be different for the limiting Markov chain X_{0,n}, i.e., for the case where ε = 0. Condition A_{h,1} describes the possible alternatives, related either to the model with regular perturbation (h = 1) or to the model with singular perturbation (h > 1). In the first case (h = 1), the phase space X is one aperiodic class of communicating states for the Markov chain X_{0,n}. In the second case (h > 1), the phase space X splits into h non-intersecting closed aperiodic classes of communicating states for the Markov chain X_{0,n}.
If the initial distribution of the Markov chain X_{0,n} is concentrated on the set X^{(g)}, for some g = 1, . . . , h, then X_{0,n} = X^{(g)}_{0,n}, n = 0, 1, . . . can be considered as a Markov chain with the reduced phase space X^{(g)} and the matrix of transition probabilities P_{0,g} = ‖p^{(g)}_{0,rk}‖_{r,k∈X^{(g)}}. Let, also, P^n_{0,g} = ‖p^{(g)}_{0,rk}(n)‖_{r,k∈X^{(g)}}, n = 0, 1, . . . be the matrices of n-step transition probabilities for the Markov chain X^{(g)}_{0,n}, for g = 1, . . . , h. According to condition A_{h,1}, the Markov chain X^{(g)}_{0,n} is, for every g = 1, . . . , h, ergodic, i.e., the corresponding ergodic relation holds for any r, k ∈ X^{(g)}, g = 1, . . . , h. The stationary distribution π̄^{(g)}_0 is, for every g = 1, . . . , h, the unique positive solution of the corresponding system of linear equations. Let us introduce the probabilities f^{(g)}_{p̄} = Σ_{i∈X^{(g)}} p_i, for p̄ ∈ P_m, g = 1, . . . , h. Note that some of these probabilities can equal 0; however, Σ_{g=1}^{h} f^{(g)}_{p̄} = 1. The following simple lemma, whose proof can be found, for example, in Avrachenkov and Litvak (2004) or Avrachenkov et al. (2008), presents a variant of the ergodic theorem for the Markov chain X_{0,n}.
Lemma 1 Let condition A_{h,1} hold, for some h ≥ 1. Then the ergodic relation (13) takes place, for any p̄ ∈ P_m, k ∈ X^{(g)}, g = 1, . . . , h. Remark 1 Ergodic relation (13) shows that in the singular case, where condition A_{h,1} holds for some h > 1, the stationary probabilities π_{0,p̄,k} defined by the asymptotic relation (13) may depend on the initial distribution. The stationary distributions π̄_{0,p̄} = ⟨π_{0,p̄,k}, k ∈ X⟩ and π̄_{0,d̄} = ⟨π_{0,d̄,k}, k ∈ X⟩ coincide if the probabilities f^{(g)}_{p̄} = f^{(g)}_{d̄}, for g = 1, . . . , h. These relations obviously hold for any initial distribution p̄ ∈ P_m in the regular case, where h = 1.
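An ergodic relation of the type in Lemma 1 can be illustrated numerically. In the hypothetical 4-state example below (invented numbers, not from the paper), the phase space splits into two closed classes X^(1) = {0, 1} and X^(2) = {2, 3}, and the limit of p̄ P_0^n weights the class stationary distributions by the masses f^(g)_p̄ of the initial distribution:

```python
# Illustration of a Lemma-1-type limit for a hypothetical 4-state chain
# with two closed classes: lim p P0^n concatenates f1 * pi^(1) and f2 * pi^(2).
import numpy as np

P0 = np.array([[0.3, 0.7, 0.0, 0.0],
               [0.4, 0.6, 0.0, 0.0],
               [0.0, 0.0, 0.5, 0.5],
               [0.0, 0.0, 0.2, 0.8]])
p = np.array([0.25, 0.25, 0.1, 0.4])      # initial distribution
f1, f2 = p[:2].sum(), p[2:].sum()         # masses on the two closed classes

limit = p @ np.linalg.matrix_power(P0, 200)

def stat(P):
    """Stationary distribution of an ergodic block, via the eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(abs(vals - 1))])
    return v / v.sum()

pi1, pi2 = stat(P0[:2, :2]), stat(P0[2:, 2:])
expected = np.concatenate([f1 * pi1, f2 * pi2])
assert np.allclose(limit, expected, atol=1e-10)   # limit depends on p via f1, f2
```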

A Perturbation Analysis for Stationary Distributions of Markov Chains with Damping Component
In this section, we present results concerning the continuity of the stationary distributions π̄_ε with respect to the damping (perturbation) parameter, as ε → 0. We also give explicit upper bounds for the rate of convergence in the above continuity relations for the stationary probabilities, and the corresponding asymptotic expansions with respect to the perturbation parameter ε.

Continuity Property for Stationary Probabilities
In what follows, relation ε → 0 is a reduced version of relation 0 < ε → 0.
Theorem 2 Let condition A_{h,1} hold. Then the asymptotic relation π_{ε,k} → π_{0,d̄,k} as ε → 0 holds, for k ∈ X. Proof Let S_ε be a random variable geometrically distributed with parameter ε, i.e., P{S_ε = n} = ε(1 − ε)^{n−1}, n = 1, 2, . . . . In the case where condition A_{h,1} holds, the above asymptotic relation and the ergodic relation (13) imply that relation (15) holds, for k ∈ X. It is readily seen that the representation (16) takes place for the stationary probabilities π_{ε,k}, k ∈ X. Since the sequence p_{0,d̄,k}(n), n = 0, 1, . . . is bounded, relations (15) and (16) and the corresponding variant of the Lebesgue dominated convergence theorem imply that the required asymptotic relation holds, for k ∈ X. The proof is complete.
Remark 2 Theorem 2 implies that, in the case where condition A_{h,1} holds, the continuity property for the stationary distributions π̄_ε, as ε → 0, takes place under the additional assumption that f^{(g)}_{p̄} = f^{(g)}_{d̄}, for g = 1, . . . , h. In particular, this is so in the regular case, where h = 1.

Rate of Convergence for Stationary Distributions
Let us assume that condition A_{h,1} holds. In this case, the reduced Markov chain X^{(g)}_{0,n} with the phase space X^{(g)} and the matrix of transition probabilities P_{0,g} is, for every g = 1, . . . , h, exponentially ergodic, and the inequalities (18), max_{r,k∈X^{(g)}} |p^{(g)}_{0,rk}(n) − π^{(g)}_{0,k}| ≤ C_g λ_g^n, take place for g = 1, . . . , h and n ≥ 1, with some constants C_g = C_g(P_{0,g}) ∈ [0, ∞), λ_g = λ_g(P_{0,g}) ∈ [0, 1), g = 1, . . . , h, and distributions ⟨π^{(g)}_{0,k}, k ∈ X^{(g)}⟩, g = 1, . . . , h, with all positive components. According to the Perron-Frobenius theorem, the role of λ_g can be played, for every g = 1, . . . , h, by the absolute value of the second (by absolute value) eigenvalue of the matrix P_{0,g}, while C_g is a constant which can be computed using the algorithm described, for example, in the book Feller (1968).
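A bound of the type (18) can be illustrated for a hypothetical 2-state ergodic block (numbers invented for illustration): the deviation of the n-step transition probabilities from the stationary distribution is dominated by C λ^n, with λ the second-largest eigenvalue modulus (here λ = 0.7, and C = 1.5 suffices):

```python
# Geometric ergodicity of a single ergodic block: |p(n) - pi| <= C * lam^n.
# P0g, C = 1.5 are illustrative choices, not constants from the paper.
import numpy as np

P0g = np.array([[0.9, 0.1],
                [0.2, 0.8]])
lam = sorted(abs(np.linalg.eigvals(P0g)))[-2]   # second eigenvalue modulus = 0.7
pi = np.array([2/3, 1/3])                       # stationary: pi @ P0g = pi
for n in (5, 10, 20):
    dev = abs(np.linalg.matrix_power(P0g, n) - pi).max()
    assert dev <= 1.5 * lam ** n                # C = 1.5 works for this chain
```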
Condition A_{h,1} is, in fact, equivalent to the following condition: A_{h,2}: X = X^{(1)} ∪ . . . ∪ X^{(h)}, where: (a) X^{(g)}, g = 1, . . . , h are non-intersecting subsets of X, (b) X^{(g)}, g = 1, . . . , h are non-empty, closed classes of states for the Markov chain X_{0,n} such that inequalities (18) hold.
Indeed, inequalities (18) imply that p^{(g)}_{0,rk}(n) > 0, for r, k ∈ X^{(g)} and all n large enough. This implies that X^{(g)}, g = 1, . . . , h are closed, aperiodic classes of communicating states.

Remark 3
The quantities |d_k − π_{0,d̄,k}| appearing in inequality (19) are, in some sense, determined by prior information about the stationary probabilities π_{0,d̄,k}. They take smaller values if one can choose the damping distribution d̄ with smaller deviation from the stationary distribution π̄_{0,d̄}. It is also possible to replace these quantities in inequality (19) by quantities independent of the corresponding stationary probabilities π_{0,d̄,k}.
Remark 4 Theorem 2 remains valid even if we weaken condition A_{h,2} by omitting in relation (18) the assumption of positivity for the distributions π̄^{(g)}_0. In this case, condition A_{h,2} implies that the subset X^{(g)}_0 = {i ∈ X^{(g)} : π^{(g)}_{0,i} > 0} is a non-empty, closed, aperiodic class of communicating states, while the states in X^{(g)} \ X^{(g)}_0 are transient. Remark 5 We would also like to refer to the paper Mitrophanov (2005), where one can find alternative upper bounds for the rate of convergence of stationary distributions for perturbed Markov chains, as well as further related references.

Asymptotic Expansions for Stationary Distributions
Let condition A_{h,1} hold, and assume that the class X^{(g)} includes m_g ≥ 1 states, for g = 1, . . . , h.
Condition A_{h,1} implies that condition A_{h,3} holds. This follows from the Perron-Frobenius theorem.
The following theorem takes place.
It is worth noting that some of the eigenvalues ρ_{g,l} and coefficients π^{(g)}_{0,rk}[l] can be complex numbers. Despite this, the coefficients in the expansions (25) are real numbers. Indeed, π_{ε,k}, ε ∈ (0, 1] and π_{0,d̄,k} are real numbers, and hence the first coefficient in relation (25) is a real number. In a similar way, this proposition can be proved for all coefficients in the expansions (25). This implies that the remainders O(ε^{n+1}) of these expansions are also real-valued functions of ε.
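A first-order expansion of this type can be probed numerically. The sketch below (the 3-state model, the name c1, and the tolerances are illustrative assumptions, not quantities from the paper) checks that finite-difference slopes of π̄_ε at two small values of ε agree, which is consistent with an expansion π̄_ε = π̄_{0,d̄} + ε c̄_1 + O(ε²):

```python
# Hedged finite-difference check of a first-order expansion in eps:
#   pi_eps ≈ pi_{0,d} + eps * c1 for small eps.
# The matrix P0 and distribution d are invented for illustration.
import numpy as np

P0 = np.array([[0.2, 0.8, 0.0],
               [0.6, 0.4, 0.0],
               [0.1, 0.1, 0.8]])
d = np.array([1/3, 1/3, 1/3])
D = np.tile(d, (3, 1))

def stationary(P):
    """Stationary distribution via the left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(abs(vals - 1))])
    return v / v.sum()

pi_d = d @ np.linalg.matrix_power(P0, 400)      # pi_{0,d} = lim_{eps -> 0} pi_eps
e1, e2 = 1e-4, 2e-4
c1 = (stationary((1 - e1) * P0 + e1 * D) - pi_d) / e1
c1b = (stationary((1 - e2) * P0 + e2 * D) - pi_d) / e2
assert np.allclose(c1, c1b, atol=1e-2)          # slopes agree up to O(eps) terms
```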

Coupling and Ergodic Theorems for Perturbed Markov Chains with Damping Component
In this section, we present coupling algorithms and obtain effective upper bounds for the rate of convergence in ergodic theorems for regularly and singularly perturbed Markov chains with damping component. We refer to the books Lindvall (2002) and Thorisson (2000), where one can find historical and methodological remarks and extended bibliographies related to the coupling method.

Maximal Coupling for Discrete Distributions
Let p̄′ = ⟨p′_i, i ∈ X⟩ and p̄″ = ⟨p″_i, i ∈ X⟩ be two discrete probability distributions. Let us denote by L[p̄′, p̄″] the class of two-dimensional probability distributions P̄ = ⟨P_{ij}, (i, j) ∈ X × X⟩ which satisfy the following conditions: (a) Σ_{j∈X} P_{ij} = p′_i, i ∈ X; (b) Σ_{i∈X} P_{ij} = p″_j, j ∈ X.
Lemma 2 There exists a two-dimensional distribution P̄* = ⟨P*_{ij}, i, j ∈ X⟩ ∈ L[p̄′, p̄″] such that: (i) the coupling probability Q_{P̄*} = Σ_{i∈X} P*_{ii} attains its maximal value Q* = Σ_{i∈X} p′_i ∧ p″_i; (ii) Q* = 1 if and only if p′_k = p″_k, k ∈ X; and (iii) Q* = 0 if and only if p′_k p″_k = 0, k ∈ X. The distribution P̄* is given by relations (31)-(33). Proof It can be found in the above-mentioned works. In order to improve the self-readability of the present paper, we just give a short sketch of the proof. Obviously, the probability P_{ii} ≤ p′_i ∧ p″_i, i ∈ X, for any two-dimensional distribution P̄ = ⟨P_{ij}, (i, j) ∈ X × X⟩ ∈ L[p̄′, p̄″]. This relation implies that Q_{P̄} ≤ Q* = Σ_{i∈X} p′_i ∧ p″_i. It is easy to check that each of the relations (31), (32), or (33) defines a two-dimensional distribution P̄* from the class L[p̄′, p̄″]. Moreover, the corresponding quantity Q_{P̄*} = Q*. This is obvious for the two cases presented in propositions (ii) and (iii). In the case presented in proposition (i), this follows from the relation (p′_i − min(p′_i, p″_i))(p″_i − min(p′_i, p″_i)) = 0, i ∈ X.
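A construction of the kind sketched in the proof of Lemma 2 takes only a few lines. In the sketch below (the two distributions are arbitrary illustrative examples, and the residual-product form is stated under the assumption Q* < 1), the diagonal carries the overlap min(p′_i, p″_i), and the residual masses are coupled independently:

```python
# Sketch of a maximal coupling of two discrete distributions on {0,...,m-1}:
# diagonal mass min(p1_i, p2_i), residual mass coupled independently.
import numpy as np

def maximal_coupling(p1, p2):
    m = np.minimum(p1, p2)
    Q = m.sum()                       # coupling probability Q* = sum_i min(p1_i, p2_i)
    P = np.diag(m)
    if Q < 1.0:
        P += np.outer(p1 - m, p2 - m) / (1.0 - Q)
    return P, Q

p1 = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.2, 0.2, 0.6])
P, Q = maximal_coupling(p1, p2)
# marginals are p1 and p2, and the diagonal mass attains Q*
assert np.allclose(P.sum(axis=1), p1) and np.allclose(P.sum(axis=0), p2)
assert np.isclose(np.trace(P), Q) and np.isclose(Q, 1 - 0.5 * abs(p1 - p2).sum())
```

The last assertion records the standard identity Q* = 1 − d_TV(p̄′, p̄″), linking the maximal coupling probability to the total variation distance.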
Proof Variants of the proof can be found in the above-mentioned works. In order to improve the self-readability of the present paper, we just give a short sketch of the proof. Let us consider the case where the transition probabilities P^{(N)}_{ε,ij,rk} are given by relation (35). According to Lemma 2, the following relation takes place, for i, j, r ∈ X: Σ_{k∈X} P^{(N)}_{ε,ij,rk} = min(p_{ε,ir}(N), p_{ε,jr}(N)) + (p_{ε,ir}(N) − min(p_{ε,ir}(N), p_{ε,jr}(N))) = p_{ε,ir}(N).
The proof of relation (38) is trivial in the cases where the transition probabilities P^{(N)}_{ε,ij,rk} are given by relation (36) or (37).
The following useful proposition takes place.

Lemma 5
Inequality (41) takes place, for N ≥ 1 and ε ∈ (0, 1]. Proof The relation AB = B holds for any m × m stochastic matrix A = ‖a_{ij}‖ and any m × m stochastic damping-type matrix B = ‖b_{ij}‖ with elements b_{ij} = b_j ≥ 0, i, j = 1, . . . , m. Also, the matrix C = BA, which has elements c_{ij} = c_j = Σ_{k=1}^{m} b_k a_{kj} ≥ 0, i, j = 1, . . . , m, is a stochastic damping-type matrix, i.e., all its rows are the same. Using these remarks, we get relation (42), for N ≥ 1. Using relation (42) and Lemma 4, we get a further relation, which is equivalent to inequality (41).
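The two matrix identities used in the proof of Lemma 5 are easy to check numerically (the 2 × 2 matrices below are arbitrary illustrative examples):

```python
# For any stochastic A and damping-type B (all rows equal): A @ B = B,
# and B @ A is again damping-type.  A and B are illustrative examples.
import numpy as np

A = np.array([[0.1, 0.9],
              [0.7, 0.3]])            # stochastic matrix
B = np.tile([0.4, 0.6], (2, 1))      # damping-type matrix: identical rows
assert np.allclose(A @ B, B)         # AB = B
C = B @ A
assert np.allclose(C[0], C[1])       # BA has identical rows
```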
Let us introduce, for N ≥ 1, the coefficient of ergodicity Δ_N(P_0), originating from the measure of ergodicity introduced by Markov (1906) and the related functionals used in Dobrushin (1956), Doeblin (1940), Loève (1978) and many other works. Theorems 5 and 6 given below present effective coupling-type upper bounds for the rate of convergence in the individual ergodic theorem for Markov chains with damping component. These theorems are based on the corresponding general coupling results for Markov chains given in Griffeath (1975), Lindvall (2002) and Pitman (1979), and specify and detail the corresponding coupling upper bounds for the rate of convergence in ergodic theorems for Markov chains with damping component. Note that condition A_{h,1} is not required in Theorem 5 formulated below. Also, we set Δ_N(P_0)^0 = 1, if Δ_N(P_0) = 0.
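As a hedged sketch (the paper's exact definition of Δ_N(P_0) is not reproduced here), the Dobrushin-type functional δ(P) = max_{i,j} (1/2) Σ_k |p_{ik} − p_{jk}| is one standard coefficient of ergodicity in this family; it is submultiplicative, which is what drives geometric convergence rates:

```python
# Dobrushin-type coefficient of ergodicity and its submultiplicativity:
# delta(P^2) <= delta(P)^2.  The matrix P0 is an illustrative example.
import numpy as np

def dobrushin(P):
    n = P.shape[0]
    return max(0.5 * abs(P[i] - P[j]).sum()
               for i in range(n) for j in range(n))

P0 = np.array([[0.9, 0.1, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.3, 0.7]])
d1 = dobrushin(P0)                                # = 0.9 for this chain
d2 = dobrushin(np.linalg.matrix_power(P0, 2))
assert d2 <= d1 ** 2 + 1e-12                      # submultiplicativity
```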

Remark 6
The upper bounds given in relation (45) become better if the quantities 1 − Q(p̄, π̄_ε), Δ_N(P_0) and 1 − ε take smaller values. The factor 1 − Q(p̄, π̄_ε) is determined by prior information about the stationary probabilities. It takes smaller values if one can choose an initial distribution p̄ with smaller deviation from the stationary distribution π̄_ε. Relation (45) gives effective upper bounds for the rate of convergence in the corresponding individual ergodic theorem for the Markov chain X_{ε,n} even in the case where the factor Δ_N(P_0) = 1.
Remark 7 It is worth noting that the weaker upper bound (1 − ε)^n on the right-hand side of inequality (45) has been given for Markov chains with a general phase space and damping component in the recent paper Avrachenkov et al. (2018).
In the case where condition A_{1,1} holds (i.e., the phase space X is one aperiodic class of communicating states for the Markov chain X_{0,n}), 1 − Q(P_0^N) → 0 as N → ∞, and thus the following condition B_{N,1} holds for N large enough. Also, condition B_{N,1} is, for every N ≥ 1, sufficient for the weakened variant of condition A_{1,1} mentioned in Remark 4 to hold.
If the stationary distribution π̄_0 = ⟨π_{0,j}, j ∈ X⟩ is positive, then the set of transient states is empty. In this case, condition B_{N,1} is sufficient for condition A_{1,1} to hold.
Relation (54) implies that the Markov chain X_{0,n} is ergodic, with an exponential rate of convergence in the corresponding ergodic theorem, if condition B_{N,1} holds for some N ≥ 1.
As was shown in Pitman (1979), the corresponding formula takes place. It is readily seen that ⟨π_{ε,k}, k ∈ X^{(g)}⟩ is, for every g = 1, . . . , h, the stationary distribution of the Markov chain X^{(g)}_{ε,n} with the phase space X^{(g)} and the matrix of transition probabilities P_{ε,g} = (1 − ε)P_{0,g} + εD_g, where P_{0,g} = ‖p_{0,rk}‖_{r,k∈X^{(g)}} is, according to condition B_{N,h}, a stochastic matrix, while D_g = ‖d^{(g)}_{rk}‖_{r,k∈X^{(g)}} is the stochastic damping matrix with elements d^{(g)}_{rk} = d^{(g)}_k, r, k ∈ X^{(g)}. Note that relations (58) and (59) imply a corresponding relation for every g = 1, . . . , h. Condition B_{N,h} implies that the Markov chain X^{(g)}_{0,n} is exponentially ergodic, for every g = 1, . . . , h. Let ⟨π_{0,k}, k ∈ X^{(g)}⟩ be the corresponding stationary distribution. According to the above remarks and Theorem 2, the convergence π_{ε,k} → π_{0,k} as ε → 0 takes place, for k ∈ X^{(g)}, g = 1, . . . , h. Below, we set Δ_N(P_{0,g})^0 = 1, if Δ_N(P_{0,g}) = 0.
The coincidence of the repeated limits for the transition probabilities p_{ε,p̄,k}(n) lets one expect that, in this case, the transition probabilities p_{ε,p̄,k}(n_ε) should converge to the stationary probabilities π_{0,d̄,k} as ε → 0, for arbitrary n_ε → ∞ as ε → 0.
Thus, the repeated limits for the transition probabilities p_{ε,p̄,k}(n) coincide under the additional assumption that the initial distribution p̄ = d̄. But they may not coincide if p̄ ≠ d̄.
The transition probabilities p_{ε,p̄,k}(n) satisfy the renewal type relation (6). This relation shows that the initial state X_{ε,0} influences the behaviour of the Markov chain X_{ε,n} only up to the first damping regeneration time T_{ε,1}.
The random variable T_{ε,1} has the geometric distribution with parameter ε. Its rate of growth is ε^{−1} as ε → 0. Moreover, the random variables εT_{ε,1} converge in distribution to T_1 as ε → 0, where T_1 is a random variable with the exponential distribution with parameter 1.
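This convergence can be checked directly from the geometric tail: P{εT_{ε,1} > t} = (1 − ε)^{⌊t/ε⌋} → e^{−t} as ε → 0. A quick numerical check:

```python
# The scaled geometric damping time converges to Exp(1):
# P{eps * T > t} = (1 - eps)^floor(t / eps) -> exp(-t) as eps -> 0.
import math

t = 1.5
for eps in (0.1, 0.01, 0.001):
    tail = (1 - eps) ** math.floor(t / eps)
    print(eps, tail, math.exp(-t))    # tail approaches exp(-t)
```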
This suggests that it is natural to study the asymptotic behaviour of the transition probabilities p_{ε,p̄,k}(n_ε), for n_ε → ∞ as ε → 0 such that εn_ε → t ∈ [0, ∞] as ε → 0.
Moreover, it can be expected that in the two extremal cases, where t = ∞ or t = 0, the transition probabilities p_{ε,p̄,k}(n_ε) converge, as ε → 0, to the corresponding repeated limits, respectively π_{0,d̄,k} or π_{0,p̄,k}.
The question arises about the asymptotic behaviour of the transition probabilities p_{ε,p̄,k}(n_ε) in the intermediate case, where the above limit t ∈ (0, ∞).

Ergodic Theorems for Regularly Perturbed Markov Chains with Damping Component in the Triangular Array Mode
The following theorem takes place.
Proof Using the renewal type relation (6) and inequality (54), we get that relation (70) holds, for k ∈ X and any n_ε → ∞ as ε → 0. The proof is complete.
Remark 8 Relation (70) gives, in fact, explicit upper bounds for the rate of convergence in the ergodic relation given in Theorem 7.

Ergodic Theorems for Singularly Perturbed Markov Chains with Damping Component in the Triangular Array Mode
As follows from remarks made in Section 5.1, one should expect that ergodic relations take in this case much more complex forms.
Remark 9 Inequality (78) gives, in fact, explicit upper bounds for the rate of convergence in the ergodic relation given in Theorem 8. Of course, it is also possible to get simple explicit upper bounds for R_ε(t) in terms of the quantities εn_ε and t.
It should be noted that analogous asymptotic relations for Cesàro limits of transition probabilities have been obtained in Filar et al. (2002). The asymptotic relations given in Theorem 8 relate to the usual limits for transition probabilities (which are stronger than Cesàro limits). Moreover, the proof of Theorem 8 yields the explicit upper bounds for the rate of convergence in this theorem, which are indicated in Remark 9.

Concluding Comments
One of the main reasons for approximating the Markov chain X_{0,n} (with the matrix of transition probabilities P_0) by perturbed (regularised) Markov chains with damping component X_{ε,n} (with the matrix of transition probabilities P_ε = (1 − ε)P_0 + εD) is to use the latter for approximation of the stationary distribution π̄_0 = ⟨π_{0,j}, j ∈ X⟩ of the Markov chain X_{0,n} by the stationary distribution π̄_ε = ⟨π_{ε,j}, j ∈ X⟩ of the Markov chain X_{ε,n}.
Since the corresponding phase space X = {1, . . . , m} can be large, the power method can be used for approximate computation of the stationary distribution π̄_ε. In this case, its components π_{ε,j} are approximated by the probabilities p_{ε,p̄,j}(n) = Σ_{i∈X} p_i p_{ε,ij}(n), where p_{ε,ij}(n) are the elements of the matrix P_ε^n and p̄ = ⟨p_j, j ∈ X⟩ is some initial distribution. The results given in Theorems 1-8 show that the situation differs significantly between two models, where: (a) the phase space X is one class of communicating states for the Markov chain X_{0,n} (condition A_{1,1} holds), i.e., this Markov chain is ergodic, and (b) the phase space X splits into several closed classes of communicating states for the Markov chain X_{0,n} (condition A_{h,1} holds, for some h > 1), and, thus, this Markov chain is not ergodic.
Rates of convergence of the stationary probabilities π_{ε,j} to π_{0,j} as ε → 0, and of the probabilities p_{ε,p̄,j}(n) to the stationary probabilities π_{ε,j} as n → ∞, play the key role in the above method.
We give explicit upper bounds for the rates of convergence of the stationary probabilities π_{ε,j} to π_{0,j}, and asymptotic expansions for π_{ε,j} with respect to the damping parameter ε, in Theorems 3 and 4. We also give explicit upper bounds for the rates of convergence of the probabilities p_{ε,p̄,j}(n) to the stationary probabilities π_{ε,j}, in Theorems 5 and 6.
These results allow one to construct an effective two-stage algorithm for approximating the stationary distribution π̄_0 in the case where the phase space X is one class of communicating states for the Markov chain X_{0,n} (condition A_{1,1} holds) and, thus, this Markov chain is ergodic.
At the first stage, one approximates the stationary probabilities π_{ε,j} by the probabilities p_{ε,p̄,j}(n). The rate of this approximation has order O(Δ_N(P_0)^n (1 − ε)^n). The effectiveness of this approximation declines for small values of the damping parameter ε and for values of the ergodicity coefficient Δ_N(P_0) close to 1, which is typical for models with the phase space X nearly decomposed into several closed classes of states.
At the second stage, one approximates the stationary probabilities π_{0,j} by the stationary probabilities π_{ε,j}. The rate of this approximation has order O(ε), which can also be improved by using approximations based on the corresponding asymptotic expansions.
Here, a certain dual effect takes place. At the second stage, it would be better to use small values of the regularisation parameter ε, while at the first stage the use of small values of ε is not desirable.
Theorem 7 also lets one use a one-stage variant of the algorithm described above and approximate the stationary probabilities π_{0,j} by the probabilities p_{ε,p̄,j}(n_ε), with arbitrary positive integers n_ε → ∞ as ε → 0. Moreover, the explicit upper bounds for |p_{ε,p̄,j}(n_ε) − π_{0,j}| pointed out in Remark 8 let one balance the choice of ε and n_ε.
The case where the phase space X splits into several closed classes of communicating states for the Markov chain X_{0,n} (condition A_{h,1} holds, for some h > 1) and, thus, this Markov chain is not ergodic, is more complex.
As a matter of fact, in this case: (c) the stationary probabilities π_{0,p̄,k} for the Markov chain X_{0,n} depend on the initial distribution p̄, and (d) the stationary probabilities π_{ε,k} for the Markov chain X_{ε,n} converge to the stationary probabilities π_{0,d̄,k} as ε → 0.
If the initial distribution p̄ = d̄, then the two-stage algorithm, as well as its one-stage variant described above, can be applied.
However, if p̄ ≠ d̄, one should be more careful, since in this case it may happen that π_{0,d̄,k} ≠ π_{0,p̄,k} and, thus, the two-stage algorithm described above does not work.
In this case, Theorem 8 answers the question about the applicability of the probabilities p_{ε,p̄,j}(n_ε) as approximations for the stationary probabilities of the Markov chain X_{0,n}. In fact, these probabilities converge to a mixture of the stationary probabilities π_{0,p̄,k} and π_{0,d̄,k}, namely π_{0,p̄,k}(t) = π_{0,p̄,k} e^{−t} + π_{0,d̄,k}(1 − e^{−t}), as n_ε → ∞ in such a way that εn_ε → t ∈ [0, ∞] as ε → 0. Moreover, the explicit upper bounds for |p_{ε,p̄,k}(n_ε) − π_{0,p̄,k}(t)| pointed out in Remark 9 let one balance the choice of ε and n_ε and, in some sense, predict the value of the limit π_{0,p̄,k}(t) depending on the value of the quantity εn_ε.
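The mixture limit can be illustrated numerically. In the hypothetical 4-state example below (two closed classes and a uniform damping distribution, with invented numbers, the same kind of model as in the paper), p̄ P_ε^{n_ε} with εn_ε = t = 1 is close to π̄_{0,p̄} e^{−t} + π̄_{0,d̄}(1 − e^{−t}):

```python
# Triangular-array limit: for eps * n ~ t,
#   p P_eps^n ≈ pi_{0,p} * exp(-t) + pi_{0,d} * (1 - exp(-t)).
# The 4-state model (two closed classes) is invented for illustration.
import numpy as np

P0 = np.array([[0.3, 0.7, 0.0, 0.0],
               [0.4, 0.6, 0.0, 0.0],
               [0.0, 0.0, 0.5, 0.5],
               [0.0, 0.0, 0.2, 0.8]])
d = np.array([0.25, 0.25, 0.25, 0.25])    # damping distribution
D = np.tile(d, (4, 1))
p = np.array([1.0, 0.0, 0.0, 0.0])        # start inside the first class

pi_0p = p @ np.linalg.matrix_power(P0, 200)   # limit depending on p
pi_0d = d @ np.linalg.matrix_power(P0, 200)   # damping-averaged limit

t, eps = 1.0, 1e-4
n = round(t / eps)
P_eps = (1 - eps) * P0 + eps * D
approx = p @ np.linalg.matrix_power(P_eps, n)
mixture = pi_0p * np.exp(-t) + pi_0d * (1 - np.exp(-t))
assert np.allclose(approx, mixture, atol=1e-3)
```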
In conclusion, we would like to note that Theorems 1-8 present results of the perturbation analysis for Markov chains with damping component for the basic cases, where the phase space of the unperturbed Markov chain X_{0,n} either is one aperiodic class of communicating states or splits into several closed aperiodic classes of communicating states. Despite some technical complications, analogous results can also be obtained for the cases where the phase space of the unperturbed Markov chain X_{0,n} either is one periodic class of communicating states or splits into several closed periodic classes of communicating states, plus possibly a class of transient states. We are going to present such results, as well as results of experimental studies supporting and illustrating the results of the present paper, in future publications.