Asymptotic Expansions for Stationary Distributions of Nonlinearly Perturbed Semi-Markov Processes. 1

New algorithms for construction of asymptotic expansions for stationary distributions of nonlinearly perturbed semi-Markov processes with finite phase spaces are presented. These algorithms are based on a special technique of sequential phase space reduction, which can be applied to processes with an arbitrary asymptotic communicative structure of phase spaces. Asymptotic expansions are given in two forms, without and with explicit upper bounds for remainders.


Introduction
In this paper, we present new algorithms for construction of asymptotic expansions for stationary distributions of nonlinearly perturbed semi-Markov processes with a finite phase space.
We consider models where the phase space is one class of communicative states for the embedded Markov chains of pre-limiting perturbed semi-Markov processes, while it can possess an arbitrary communicative structure for the limiting embedded Markov chain. The results obtained in the paper are a good illustration of this statement. In particular, they automatically yield analogous asymptotic results for nonlinearly perturbed discrete and continuous time Markov chains.
Part 1 of the paper includes six sections.
In Section 2, we present explicit formulas for computing parameters, coefficients and remainders for expansions obtained as results of multiplication by a constant, summation, multiplication and division operations on asymptotic Laurent expansions (Lemmas 1–4).
In Section 3, we introduce a model of perturbed semi-Markov processes and formulate basic perturbation conditions given in the form of asymptotic expansions for transition probabilities of embedded Markov chains and expectations of transition times. We also describe a special time-space procedure of one-state reduction of the phase space for semi-Markov processes, get explicit formulas for transition characteristics of reduced semi-Markov processes in the form of rational functions of the corresponding transition characteristics for initial semi-Markov processes, and prove invariance of hitting times with respect to the above time-space procedure (Theorem 1).
In Section 4, we prove that the above reduced semi-Markov processes satisfy the same type of perturbation conditions as the initial ones and describe algorithms for re-computing parameters, coefficients and remainders in these conditions for reduced semi-Markov processes in terms of the corresponding parameters, coefficients and remainders appearing in perturbation conditions for the initial semi-Markov processes (Theorems 2 and 3). These algorithms are based on application of the operational rules for Laurent asymptotic expansions presented in Section 2 to the above rational functions representing transition characteristics of reduced semi-Markov processes.
In Section 5, we describe a recurrent multi-step time-space screening procedure of phase space reduction, which is based on recurrent repetition of the above one-step time-space screening procedure up to the exclusion of all states r ≠ i, for some preliminarily chosen state i. We describe the corresponding recurrent algorithms for computing asymptotic expansions for transition characteristics of reduced semi-Markov processes obtained as the result of sequential application of the above one-step time-space screening procedure. The resulting semi-Markov process has the one-state phase space {i}. The above invariance property of hitting times holds for every one-state reduction step. This implies that the return time to state i is the same for the initial semi-Markov process and for the final reduced one-state semi-Markov process. In the latter case, the return time to state i coincides with the time of the first jump, which plays the role of the transition time. Thus, the Laurent asymptotic expansion for the expectation of the return time to state i obtained for the reduced one-state semi-Markov process also yields the Laurent asymptotic expansion for the expectation of the return time to state i for the initial semi-Markov process (Theorem 4). We also prove in this theorem that the resulting asymptotic expansion is invariant with respect to the order in which the states r ≠ i are excluded from the initial phase space.
In Section 6, we get the asymptotic expansions for the stationary probabilities of perturbed semi-Markov processes using the well known representation of stationary probabilities π_i(ε) as quotients of expectations of sojourn times and expectations of return times. Laurent asymptotic expansions for expectations of sojourn times can easily be obtained by applying the summation rule to the Laurent asymptotic expansions for expectations of transition times appearing in the initial perturbation conditions, while Laurent asymptotic expansions for expectations of return times are given by Theorem 4. The application of the division rule to the quotient of these expansions finally yields the asymptotic expansions for the stationary probabilities of nonlinearly perturbed semi-Markov processes (Theorem 5), which are the main object of interest in the present paper.
As was mentioned above, we present analogs of the above results in the form of asymptotic expansions with explicit upper bounds for remainders in Part 2 of the paper. Also, examples, which illustrate theoretical results obtained in the paper, are presented in Part 2.
We would like to conclude the introduction with the remark that the present paper is a shortened version of the report Silvestrov and Silvestrov (2016b), where one can find some additional details of proofs, comments and references.

Laurent Asymptotic Expansions
In this section, we present so-called operational rules for Laurent asymptotic expansions. The corresponding proofs and comments are given in Appendix A, in Part 2 of the paper.
Let A(ε) be a real-valued function defined on an interval (0, ε_0], for some 0 < ε_0 ≤ 1, and given on this interval by a Laurent asymptotic expansion,

A(ε) = a_{h_A} ε^{h_A} + · · · + a_{k_A} ε^{k_A} + o_A(ε^{k_A}),    (1)

where −∞ < h_A ≤ k_A < ∞ are integers, the coefficients a_{h_A}, . . . , a_{k_A} are real numbers, and the remainder satisfies o_A(ε^{k_A})/ε^{k_A} → 0 as ε → 0. We refer to such a Laurent asymptotic expansion as a (h_A, k_A)-expansion. We say that a (h_A, k_A)-expansion A(ε) is pivotal if it is known that a_{h_A} ≠ 0.

Let us explain why we can restrict consideration to the case where parameter ε takes only positive values. As a matter of fact, if function A(ε) is also defined on some interval [−ε_0, 0) and is given on this interval by a Laurent asymptotic expansion A(ε) = A'(ε) = a'_{h_{A'}} ε^{h_{A'}} + · · · + a'_{k_{A'}} ε^{k_{A'}} + o_{A'}(ε^{k_{A'}}), analogous to the one given in relation (1), then A'(ε), ε ∈ [−ε_0, 0) can always be rewritten as a function of the positive parameter δ = −ε ∈ (0, ε_0] using the formula A'(−δ) = (−1)^{h_{A'}} a'_{h_{A'}} δ^{h_{A'}} + · · · + (−1)^{k_{A'}} a'_{k_{A'}} δ^{k_{A'}} + o'_{A'}(δ^{k_{A'}}). Thus, the operational analysis of a function A(ε), in particular the computing of coefficients and the estimation of the remainder for the corresponding asymptotic expansion defined in a two-sided neighborhood of 0, can be reduced to the analysis of two functions defined in positive one-sided neighborhoods of 0.

In this case, the two-sided asymptotic expansion A(ε) is pivotal if and only if the leading coefficients of both one-sided expansions are non-zero.
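For orientation, the representation used throughout this section can be sketched in code as follows. This is a hypothetical data structure of our own, not part of the paper: a (h_A, k_A)-expansion is stored as its two parameters and its coefficient list, with the remainder left implicit.

```python
# A minimal sketch (names ours, not the paper's) of a (h_A, k_A)-expansion:
# the parameters h <= k and the coefficients a_h, ..., a_k; the remainder
# o(eps^k) is left implicit.
from dataclasses import dataclass
from typing import List

@dataclass
class LaurentExpansion:
    h: int                # lowest power of eps in the expansion
    k: int                # truncation power; the remainder is o(eps^k)
    coeffs: List[float]   # coefficients a_h, ..., a_k

    def __post_init__(self):
        assert self.h <= self.k and len(self.coeffs) == self.k - self.h + 1

    @property
    def pivotal(self) -> bool:
        # pivotal: the leading coefficient a_h is known to be non-zero
        return self.coeffs[0] != 0.0

def constant(a: float, k: int = 0) -> LaurentExpansion:
    # a constant a, viewed as the (0, k)-expansion a + 0*eps + ... + 0*eps^k + o(eps^k)
    return LaurentExpansion(0, k, [a] + [0.0] * k)
```

For example, `LaurentExpansion(-1, 1, [2.0, 0.0, 3.0])` encodes the pivotal expansion 2ε^{−1} + 3ε + o(ε).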
It is also useful to mention that a constant a can be interpreted as the function A(ε) ≡ a. Thus, 0 can be represented, for any integers −∞ < h ≤ k < ∞, as the (h, k)-expansion 0 = 0ε^h + . . . + 0ε^k + o(ε^k), with remainder o(ε^k) ≡ 0. Also, 1 can be represented, for any integer 0 ≤ k < ∞, as the (0, k)-expansion 1 = 1 + 0ε + . . . + 0ε^k + o(ε^k), with remainder o(ε^k) ≡ 0. Let us consider two Laurent asymptotic expansions, A(ε) = a_{h_A} ε^{h_A} + · · · + a_{k_A} ε^{k_A} + o_A(ε^{k_A}) and B(ε) = b_{h_B} ε^{h_B} + · · · + b_{k_B} ε^{k_B} + o_B(ε^{k_B}), defined on an interval (0, ε_0]. The following lemma presents operational rules for Laurent asymptotic expansions.

Lemma 2
The following operational rules take place for Laurent asymptotic expansions:

(the constant-multiplication rule) C(ε) = c A(ε) is a (h_A, k_A)-expansion with coefficients c_l = c a_l, h_A ≤ l ≤ k_A. This expansion is pivotal if and only if c a_{h_A} ≠ 0.

(the summation rule) C(ε) = A(ε) + B(ε) is a (h_C, k_C)-expansion with h_C = min(h_A, h_B), k_C = min(k_A, k_B) and coefficients c_l = a_l + b_l, h_C ≤ l ≤ k_C, where a_l = 0 for l < h_A and b_l = 0 for l < h_B. This expansion is pivotal if and only if c_{h_C} ≠ 0.

(the multiplication rule) C(ε) = A(ε) · B(ε) is a (h_C, k_C)-expansion with h_C = h_A + h_B, k_C = min(k_A + h_B, k_B + h_A) and coefficients c_l = Σ_{i+j=l} a_i b_j, h_C ≤ l ≤ k_C. This expansion is pivotal if and only if a_{h_A} b_{h_B} ≠ 0.

(the division rule) If B(ε) is pivotal, then C(ε) = A(ε)/B(ε) is a (h_C, k_C)-expansion with h_C = h_A − h_B, k_C = min(k_A − h_B, k_B + h_A − 2h_B), whose coefficients are determined by the recurrence b_{h_B} c_l = a_{l+h_B} − Σ_{h_C ≤ m < l} c_m b_{l+h_B−m}, l = h_C, . . . , k_C. This expansion is pivotal if and only if a_{h_A} ≠ 0.
Remark 1 The Laurent asymptotic expansion for the quotient D(ε) = A(ε)/B(ε), given by the division rule of Lemma 2, coincides with the expansion obtained by applying the multiplication rule to the product A(ε) · (1/B(ε)); in this case, 1 should be interpreted as the (0, k_B)-expansion 1 = 1 + 0ε + · · · + 0ε^{k_B} + o(ε^{k_B}).

The following multiple summation and multiplication operational rules for Laurent asymptotic expansions are direct corollaries of the corresponding rules given in Lemma 2.
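The operational rules above can be sketched in code for one concrete representation. This is our own sketch, not the paper's: a truncated expansion a_h ε^h + · · · + a_k ε^k + o(ε^k) is stored as the pair (h, [a_h, ..., a_k]), and the parameter rules for h_C and k_C follow the pattern stated in Lemma 2.

```python
# Sketch (helper names ours) of the summation, multiplication and division
# rules for truncated Laurent expansions stored as pairs (h, [a_h, ..., a_k]).

def coef(E, l):
    # coefficient of eps^l, with zeros outside the stored range
    h, c = E
    return c[l - h] if h <= l <= h + len(c) - 1 else 0.0

def add(A, B):
    # summation rule: h_C = min(h_A, h_B), k_C = min(k_A, k_B)
    h = min(A[0], B[0])
    k = min(A[0] + len(A[1]) - 1, B[0] + len(B[1]) - 1)
    return (h, [coef(A, l) + coef(B, l) for l in range(h, k + 1)])

def mul(A, B):
    # multiplication rule: h_C = h_A + h_B, k_C = min(k_A + h_B, k_B + h_A)
    (hA, cA), (hB, cB) = A, B
    kA, kB = hA + len(cA) - 1, hB + len(cB) - 1
    h, k = hA + hB, min(kA + hB, kB + hA)
    return (h, [sum(coef(A, i) * coef(B, l - i) for i in range(hA, kA + 1))
                for l in range(h, k + 1)])

def div(A, B):
    # division rule: B must be pivotal; h_C = h_A - h_B,
    # k_C = min(k_A - h_B, k_B + h_A - 2 h_B); coefficients found recursively
    (hA, cA), (hB, cB) = A, B
    assert cB[0] != 0.0, "B must be pivotal"
    kA, kB = hA + len(cA) - 1, hB + len(cB) - 1
    h, k = hA - hB, min(kA - hB, kB + hA - 2 * hB)
    c = []
    for l in range(h, k + 1):
        s = coef(A, l + hB) - sum(c[m - h] * coef(B, l + hB - m)
                                  for m in range(h, l))
        c.append(s / cB[0])
    return (h, c)
```

For example, `div((0, [1.0, 1.0]), (0, [1.0, -1.0]))` returns `(0, [1.0, 2.0])`, matching (1 + ε)/(1 − ε) = 1 + 2ε + o(ε), and `div((0, [1.0]), (1, [1.0]))` returns `(-1, [1.0])`, i.e., 1/ε.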

Lemma 3 Let A_n(ε), n = 1, . . . , N be, respectively, (h_{A_n}, k_{A_n})-expansions defined on an interval (0, ε_0]. Then the sum A_1(ε) + · · · + A_N(ε) and the product A_1(ε) · · · A_N(ε) can be represented as Laurent asymptotic expansions obtained by sequential application of, respectively, the summation rule (the multiple summation rule) and the multiplication rule (the multiple multiplication rule) of Lemma 2. The resulting expansions for the sums and products of functions A_n(ε), n = 1, . . . , N are invariant with respect to any permutation, respectively, of the summation and multiplication order in the above formulas.
The following lemma summarizes some basic algebraic properties of Laurent asymptotic expansions. It is a corollary of Lemmas 1 and 2.

Lemma 4 The summation and multiplication operations for Laurent asymptotic expansions defined in Lemma 2 possess the following algebraic properties, which should be understood as identities for the corresponding Laurent asymptotic expansions (i.e., identities for the corresponding parameters h, k, coefficients and remainders) of functions represented in two alternative forms in the functional identities given below:
(i) The summation and multiplication operations for Laurent asymptotic expansions satisfy the "elimination" identities that are implied by the corresponding functional identities, A(ε) + (−1) · A(ε) ≡ 0 and A(ε) · A(ε)^{−1} ≡ 1.

(ii) The summation operation for Laurent asymptotic expansions is commutative and associative, which is implied by the corresponding functional identities, A(ε) + B(ε) ≡ B(ε) + A(ε) and (A(ε) + B(ε)) + C(ε) ≡ A(ε) + (B(ε) + C(ε)).

(iii) The multiplication operation for Laurent asymptotic expansions is commutative and associative, which is implied by the corresponding functional identities, A(ε) · B(ε) ≡ B(ε) · A(ε) and (A(ε) · B(ε)) · C(ε) ≡ A(ε) · (B(ε) · C(ε)).

(iv) The summation and multiplication operations for Laurent asymptotic expansions possess the distributive property, which is implied by the corresponding functional identity, (A(ε) + B(ε)) · C(ε) ≡ A(ε) · C(ε) + B(ε) · C(ε).

Remark 2 In proposition (i) of Lemma 4, 0 should be interpreted as the (h_A, k_A)-expansion 0 = 0ε^{h_A} + · · · + 0ε^{k_A} + o(ε^{k_A}), with remainder o(ε^{k_A}) ≡ 0.
Remark 3 The Laurent asymptotic expansion A(ε) is assumed to be pivotal in the elimination identity implied by the functional identity A(ε) · A(ε)^{−1} ≡ 1, which is assumed to hold for ε ∈ (0, ε_0].

The proofs of Lemmas 1–4 are given in Appendix A of Part 2 of the paper.
Perturbed Semi-Markov Processes
In this and the following sections, η^(ε)(t), t ≥ 0 denotes, for every ε ∈ (0, ε_0], a semi-Markov process with the finite phase space X = {1, . . . , N}, transition probabilities Q^(ε)_ij(t), t ≥ 0, i, j ∈ X, and embedded Markov chain η^(ε)_n with transition probabilities p_ij(ε), i, j ∈ X; the sets Y_i = {j ∈ X : p_ij(ε) > 0, ε ∈ (0, ε_0]} collect the states accessible from i in one step.
We refer to the sets Y_i, i ∈ X as transition sets. Condition A implies that Y_i ≠ ∅ for all i ∈ X.
Condition A also implies that the phase space X of the Markov chain η^(ε)_n is one class of communicative states, for every ε ∈ (0, ε_0].
We also assume that the following condition, excluding instant transitions, holds. Here, τ^(ε)_n, n = 0, 1, . . . are the sequential moments of jumps of the semi-Markov process η^(ε)(t), and η^(ε)_n = η^(ε)(τ^(ε)_n), n = 0, 1, . . . is the embedded discrete time homogeneous Markov chain. If the transition times are identically equal to 1, then η^(ε)([t]), t ≥ 0 is a discrete time homogeneous Markov chain embedded in continuous time; if the transition times are exponentially distributed with rates λ_i(ε) depending only on the current state, then η^(ε)(t), t ≥ 0 is a continuous time homogeneous Markov chain. Let us also introduce the expectations of sojourn times, e_ij(ε) = E_i τ^(ε)_1 1(η^(ε)_1 = j), j ∈ Y_i, i ∈ X, and e_i(ε) = Σ_{j∈Y_i} e_ij(ε) = E_i τ^(ε)_1, i ∈ X. Here and henceforth, notations P_i and E_i are used for conditional probabilities and expectations under the condition η^(ε)(0) = i.
We also assume that the following condition, guaranteeing finiteness of the expectations of transition times, holds. In the case of a discrete time Markov chain, e_ij(ε) = p_ij(ε), i, j ∈ X.
Let us assume that the following perturbation condition D, based on Taylor asymptotic expansions for the transition probabilities p_ij(ε), j ∈ Y_i, i ∈ X, holds. We also assume that the following perturbation condition E, based on Laurent asymptotic expansions for the expectations e_ij(ε), j ∈ Y_i, i ∈ X, holds. The above perturbation conditions can be interpreted as linear if the asymptotic expansions appearing in them are of the first order; otherwise, these perturbation conditions can be interpreted as nonlinear.
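As a toy numerical illustration of condition D (our own hypothetical example, not taken from the paper), the transition probabilities of a two-state perturbed chain can be recorded as coefficient tables of their expansions in ε; the perturbation below is nonlinear, since second-order terms are present.

```python
# Hypothetical illustration of perturbation condition D: each transition
# probability is given as a coefficient table {power: coefficient} of its
# Taylor expansion in eps; the remainder terms are left implicit.
p_expansions = {
    (0, 0): {1: 0.5, 2: -0.25},          # p_00(eps) = 0.5 eps - 0.25 eps^2 + o(eps^2)
    (0, 1): {0: 1.0, 1: -0.5, 2: 0.25},  # p_01(eps) = 1 - 0.5 eps + 0.25 eps^2 + o(eps^2)
    (1, 0): {0: 1.0},                    # p_10(eps) = 1
    (1, 1): {},                          # p_11(eps) = 0
}

def evaluate(expansion, eps):
    # evaluate a truncated expansion at a concrete eps, ignoring the remainder
    return sum(a * eps ** l for l, a in expansion.items())
```

Note that the coefficients in each row sum to 1 at order zero and to 0 at every higher order, consistent with stochasticity of the matrix of the p_ij(ε).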
Let us, for the moment, exclude sub-condition (a) from condition A. Conditions D and E then imply that there exists ε̄_0 ∈ (0, ε_0] such that the inequalities appearing in sub-condition (a) hold for ε ∈ (0, ε̄_0]; in this case, one can just decrease parameter ε_0 and take the new ε_0 = ε̄_0, so that condition A (a) holds for this new value of ε_0. An actual value of parameter ε_0 ∈ (0, 1] is not important in propositions concerning asymptotic expansions with remainders given in the form of o(·). We do, however, prefer to include sub-condition (a) in condition A, in order to have a clear description of the communicative structure of the phase space X in one condition. In this case, the above inequalities hold for ε̄_0 = ε_0. Conditions D and E are consistent with condition A (a), according to the above remarks.

The matrix ‖p_ij(ε)‖ is stochastic, for every ε ∈ (0, ε_0]. This model stochasticity assumption holds by default, and condition D should also be consistent with it. Condition D and the multiple summation rule of Lemma 3 imply that the sum Σ_{j∈Y} p_ij(ε) can, for every subset Y ⊆ Y_i and i ∈ X, be represented in the form of a Laurent asymptotic expansion (6), whose parameters and coefficients are obtained by termwise summation of the expansions for p_ij(ε), j ∈ Y given in condition D. Let us introduce the following condition F, which presents additional links between the asymptotic expansions appearing in condition D, caused by the above model stochasticity assumption: for every i ∈ X, the zero-order coefficients of the expansions for p_ij(ε), j ∈ Y_i must sum to 1, while the coefficients of every higher ε-power, up to the common truncation order, must sum to 0.

Lemma 5 Let conditions A (a), (b) and D hold. In this case, condition F is equivalent to the model stochasticity assumption that the matrix ‖p_ij(ε)‖ is stochastic, for every ε ∈ (0, ε_0].
Proof The model stochasticity assumption for the matrices ‖p_ij(ε)‖, ε ∈ (0, ε_0] takes, under conditions A (a), (b), the form of the following identity (7), which should hold for every i ∈ X: Σ_{j∈Y_i} p_ij(ε) = 1, ε ∈ (0, ε_0]. Condition D and Lemma 3 let us write down the asymptotic expansion (6) for the case Y = Y_i. The constant 1 can also be interpreted as the asymptotic expansion 1 = 1 + 0ε + · · · + 0ε^k + o(ε^k), for k = l_{i,Y_i} and o(ε^k) ≡ 0. Then, identity (7) lets one apply Lemma 1 to the two asymptotic expansions described above and get the relations appearing in condition F.
Conditions A (a), (b) imply that p_ij(ε) ≥ 0 for i, j ∈ X. In this case, conditions D and F obviously imply that Σ_{j∈X} p_ij(ε) = 1, for i ∈ X. Thus, the matrix ‖p_ij(ε)‖ is stochastic.
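The coefficient constraints behind condition F can be sketched numerically as follows (a hypothetical helper of our own, with each expansion written out from order zero and padded with zeros): summing the coefficient lists over j ∈ Y_i must reproduce the constant 1.

```python
# Sketch of the coefficient constraints behind condition F: for a given
# state i, the expansions of p_ij(eps), j in Y_i, written from order zero,
# must sum to 1 at order zero and to 0 at every higher order.
def check_stochastic_expansions(rows, tol=1e-12):
    # rows: one coefficient list [b_0, b_1, ..., b_k] per state j in Y_i
    k = min(len(c) for c in rows) - 1          # common truncation order
    total = [sum(c[l] for c in rows) for l in range(k + 1)]
    return abs(total[0] - 1.0) < tol and all(abs(t) < tol for t in total[1:])
```

For example, the pair of expansions 0.5 + 0.2ε − 0.1ε² and 0.5 − 0.2ε + 0.1ε² passes the check, while any pair whose zero-order coefficients do not sum to 1 fails it.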
It is also worth noting that, under the assumption that condition A (a) holds, the perturbation conditions D and E are independent.
To see this, let us take arbitrary positive functions p_ij(ε), j ∈ Y_i, i ∈ X and e_ij(ε), j ∈ Y_i, i ∈ X satisfying, respectively, conditions D and E and, also, the corresponding stochasticity identities (7). Then, there exist semi-Markov transition probabilities Q^(ε)_ij(t) with exactly these transition probabilities and expectations. It is readily seen that, for example, the semi-Markov transition probabilities Q^(ε)_ij(t) = I(t ≥ e_ij(ε)/p_ij(ε)) p_ij(ε), t ≥ 0, j ∈ Y_i, i ∈ X satisfy the above relations.
Conditions A–C imply that, for every ε ∈ (0, ε_0], the semi-Markov process η^(ε)(t) is ergodic, and its stationary distribution π̄(ε) = ⟨π_1(ε), . . . , π_N(ε)⟩ is given by the corresponding ergodic relation. This ergodic relation holds for any initial distribution p̄(ε), and the stationary distribution π̄(ε) does not depend on the initial distribution. Also, π_i(ε) > 0, i ∈ X and Σ_{i∈X} π_i(ε) = 1, for every ε ∈ (0, ε_0].

Let us define hitting times, which are random variables given by the following relation, for j ∈ X: τ^(ε)_j = Σ_{n=1}^{ν^(ε)_j} κ^(ε)_n, where ν^(ε)_j = min(n ≥ 1 : η^(ε)_n = j) and κ^(ε)_n = τ^(ε)_n − τ^(ε)_{n−1} are the inter-jump times. Let us also denote E_ij(ε) = E_i τ^(ε)_j, i, j ∈ X. As is known, conditions A–C imply that the expectations of hitting times satisfy 0 < E_ij(ε) < ∞, i, j ∈ X, for every ε ∈ (0, ε_0].

The following well known relation for stationary probabilities (which holds for every ε ∈ (0, ε_0]) plays an important role in what follows: π_i(ε) = e_i(ε)/E_ii(ε), i ∈ X, where e_i(ε) = Σ_{j∈Y_i} e_ij(ε) is the expectation of the sojourn time at state i.

Condition D implies that there exists lim_{ε→0} p_ij(ε) = p_ij(0), which equals the coefficient of ε^0 in the corresponding asymptotic expansion (and equals 0 if this expansion begins at a positive power of ε). The matrix ‖p_ij(ε)‖ is stochastic, for every ε ∈ (0, ε_0], and, thus, the matrix ‖p_ij(0)‖ is also stochastic. Let η^(0)_n be a Markov chain with the phase space X and the matrix of transition probabilities ‖p_ij(0)‖. It is possible that the matrix ‖p_ij(0)‖ has more zero elements than the matrices ‖p_ij(ε)‖ and, thus, X can consist of one or several closed classes of communicative states plus, possibly, a class of transient states, for the Markov chain η^(0)_n.

Our goal is to design an effective algorithm for construction of asymptotic expansions for the stationary probabilities π_i(ε), i ∈ X, under the assumption that conditions A–E hold. As we shall see, the proposed algorithm can be applied to models with an arbitrary asymptotic communicative structure of phase spaces.
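The quotient representation π_i(ε) = e_i(ε)/E_ii(ε) can be sketched numerically for one fixed value of ε. In this sketch (helper names and the small Gaussian solver are ours, not the paper's), the expectations of hitting times E_ij are found from the standard first-step linear system E_ij = e_i + Σ_{k≠j} p_ik E_kj.

```python
# Sketch, for one fixed eps, of the quotient representation of stationary
# probabilities: pi_i = e_i / E_ii, where e_i is the expected sojourn time
# at i and E_ii the expected return time to i.

def hitting_times(p, e, j):
    # unknowns x_i = E_ij solve: x_i - sum_{k != j} p_ik x_k = e_i
    n = len(p)
    A = [[(1.0 if i == k else 0.0) - (p[i][k] if k != j else 0.0)
          for k in range(n)] for i in range(n)]
    b = list(e)
    for col in range(n):                      # Gaussian elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def stationary(p, e):
    # e[i] is the expectation of the sojourn time at state i
    return [e[i] / hitting_times(p, e, i)[i] for i in range(len(p))]
```

For a two-state process that alternates between its states with expected sojourn times 2 and 3, this gives π = (0.4, 0.6), and the stationary probabilities sum to 1 as they must.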
The models of nonlinearly perturbed discrete and continuous time Markov chains are particular cases of the above model of nonlinearly perturbed semi-Markov processes.
If η (ε) (t) is a discrete time Markov chain, condition D implies condition E, since, in this case, expectations e ij (ε) = p ij (ε), j ∈ Y i , i ∈ X.
If η^(ε)(t) is a continuous time Markov chain, condition E can be replaced by an analogous condition, which assumes that the expectations e_i(ε) = λ_i(ε)^{−1}, i ∈ X can be represented in the form of pivotal Laurent asymptotic expansions. This condition and condition D would imply condition E, with the corresponding Laurent asymptotic expansions obtained by applying the multiplication rule of Lemma 2 to the products e_ij(ε) = e_i(ε) p_ij(ε), j ∈ Y_i, i ∈ X.

Semi-Markov Processes with Reduced Phase Spaces
Let us choose some state r ∈ X and consider the reduced phase space r X = X \ {r}, with the state r excluded from the phase space X.
The transition probabilities r Q^(ε)_ij(t) are expressed via the transition probabilities Q^(ε)_ij(t) by the following formula, for t ≥ 0, i, j ∈ rX:

r Q^(ε)_ij(t) = Q^(ε)_ij(t) + Σ_{n=0}^∞ Q^(ε)_ir * Q^(ε)*n_rr * Q^(ε)_rj(t).    (18)

Here, symbol * is used to denote the convolution of distribution functions (possibly improper), and Q^(ε)*n_rr(t) is the n times convolution of the distribution function Q^(ε)_rr(t). Relation (18) directly implies the following formula for the transition probabilities of the reduced embedded Markov chain r η^(ε)_n, for i, j ∈ rX:

r p_ij(ε) = p_ij(ε) + p_ir(ε) p_rj(ε) / (1 − p_rr(ε)).    (19)

Note that condition A implies that probabilities p_rr(ε) ∈ [0, 1), r ∈ X, ε ∈ (0, ε_0], so the denominator in relation (19) is positive. Let us introduce the sets

r Y_i = (Y_i \ {r}) ∪ (Y_r \ {r}) if r ∈ Y_i, and r Y_i = Y_i if r ∉ Y_i, for i ∈ rX.    (20)

We omit the proof of the following simple lemma.

Lemma 6 Condition A, assumed to hold for the Markov chains η^(ε)_n, also holds for the Markov chains r η^(ε)_n, with the same parameter ε_0 and transition sets r Y_i defined by relation (20).

Let us introduce the expectations of sojourn times r e_ij(ε), j ∈ r Y_i, i ∈ rX for the reduced semi-Markov processes r η^(ε)(t). Relation (18) directly implies the following formula for these expectations, for i, j ∈ rX, obtained by summing the corresponding geometric series over the number n = 0, 1, . . . of loops of the process at state r:

r e_ij(ε) = e_ij(ε) + (e_ir(ε) p_rj(ε) + p_ir(ε) e_rj(ε)) / (1 − p_rr(ε)) + e_rr(ε) p_ir(ε) p_rj(ε) / (1 − p_rr(ε))².    (22)

The following simple lemma is a direct corollary of relation (22).
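For one fixed value of ε, the one-state reduction of transition characteristics can be sketched numerically as follows (the helper name and the matrix encoding are ours): excluding state r redirects each path i → r → · · · → r → j through the geometric number of loops at r.

```python
# Sketch of the one-state reduction for fixed eps, with numeric matrices
# p = (p_ij) and e = (e_ij), where e_ij is the expectation of the sojourn
# time weighted by the indicator of jumping to j.
def reduce_state(p, e, r):
    keep = [s for s in range(len(p)) if s != r]
    q = 1.0 - p[r][r]                       # probability of escaping the loop at r
    rp = [[p[i][j] + p[i][r] * p[r][j] / q for j in keep] for i in keep]
    re = [[e[i][j]
           + (e[i][r] * p[r][j] + p[i][r] * e[r][j]) / q
           + e[r][r] * p[i][r] * p[r][j] / (q * q)
           for j in keep] for i in keep]
    return rp, re
```

In the test below, a three-state process with expected sojourn times (2, 1, 4) is reduced by excluding state 2: the reduced transition matrix stays stochastic, and the reduced expectation for the transition 0 → 1 equals 4, since from state 0 the reduced process reaches 1 either directly (time 2) or through state 2 (time 6), each with probability 1/2.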

Lemma 7 Conditions B and C, assumed to hold for the semi-Markov processes η (ε) (t),
also hold for the semi-Markov processes r η (ε) (t).
The following theorem presents a result similar to those given in the recent papers by Silvestrov and Manca (2015) and Silvestrov and Silvestrov (2016a, b). It plays an important role in what follows.

Theorem 1 Let conditions A -C hold for semi-Markov processes η (ε) (t).
Then, for any state j ∈ rX, the first hitting times τ^(ε)_j and r τ^(ε)_j to the state j, respectively, for the semi-Markov processes η^(ε)(t) and r η^(ε)(t), coincide, and, thus, the expectations of hitting times coincide as well, E_ij(ε) = r E_ij(ε), for any i, j ∈ rX and ε ∈ (0, ε_0]. Proof The first hitting times to a state j ∈ rX are connected for the Markov chains η^(ε)_n and r η^(ε)_n by relation (23), where r ν^(ε)_j = min(n ≥ 1 : r η^(ε)_n = j). Relation (23) implies that relation (24) holds for the first hitting times to a state j ∈ rX, for the semi-Markov processes η^(ε)(t) and r η^(ε)(t). The equality of expectations is an obvious corollary of relation (24).
We would like to preface the lemmas and theorems presenting algorithms for construction of asymptotic expansions for nonlinearly perturbed semi-Markov processes with comments clarifying the slightly unusual references to descriptions of algorithms in the proofs.
All lemmas and theorems formulated below contain proofs of propositions stating that the corresponding functionals for perturbed reduced semi-Markov processes can be represented in the form of asymptotic expansions. These proofs are based on recurrent application of the operational formulas for Laurent asymptotic expansions presented in Section 2 to the reduced semi-Markov processes constructed with the use of the corresponding recurrent time-space screening procedures of phase space reduction. In fact, one should describe precisely to which functions, in which order, and which operational rules should be applied for getting the corresponding expansions (their parameters, coefficients and remainders), as well as indicate the particular cases where the corresponding computational steps should be modified. This is exactly what is done in all proofs of the corresponding lemmas and theorems.
Writing down explicitly the corresponding operational formulas representing the above recurrent algorithms (which could be given, say, as corollaries of these lemmas and theorems) would, in fact, replicate the above proofs in formal form, require the introduction of a huge number of intermediate notations, take too much space, etc., but would not add any essentially new information about the corresponding algorithms. That is why we decided simply to state in each theorem that the description of the corresponding algorithm is given in its proof. This makes the formulations slightly unusual but, as we think, is the most compact way of presenting the corresponding asymptotic results and algorithms.
Let us now describe an algorithm for construction of asymptotic expansions for expectations r e ij (ε) given by relation (22).

Theorem 3 Conditions A–E, assumed to hold for the semi-Markov processes η^(ε)(t), also hold for the reduced semi-Markov processes r η^(ε)(t). Parameter ε_0, in conditions A, D and E, is the same for processes η^(ε)(t) and r η^(ε)(t). The transition sets r Y_i, i ∈ rX are given for processes r η^(ε)(t) by relation (20). The pivotal (r m⁻_ij, r m⁺_ij)-expansions appearing in condition E are given for expectations r e_ij(ε), j ∈ r Y_i, i ∈ rX by the algorithm described below, in the proof of the theorem.
Proof Lemma 6 and Theorem 2 imply that conditions A and D hold for the semi-Markov processes r η (ε) (t), with the same parameter ε 0 as for the semi-Markov processes η (ε) (t), and the transition sets r Y i , i ∈ r X given by relation (20). Also, conditions B and C hold for the semi-Markov processes r η (ε) (t), by Lemma 7.

Sequential Reduction of the Phase Space
In what follows, let r̄_{i,N} = ⟨r_{i,1}, . . . , r_{i,N}⟩ = ⟨r_{i,1}, . . . , r_{i,N−1}, i⟩ be a permutation of the sequence ⟨1, . . . , N⟩ such that r_{i,N} = i, and let r̄_{i,n} = ⟨r_{i,1}, . . . , r_{i,n}⟩, n = 1, . . . , N be the corresponding chain of growing sequences of states from the space X.

Theorem 4 Let conditions A -E hold for semi-Markov processes η (ε) (t).
Then, for every i ∈ X, the pivotal (M⁻_ii, M⁺_ii)-expansion for the expectation of hitting time E_ii(ε) is given by the algorithm based on the sequential exclusion of states r_{i,1}, . . . , r_{i,N−1} from the phase space X of the processes η^(ε)(t). This algorithm is described below, in the proof of the theorem. The above (M⁻_ii, M⁺_ii)-expansion is invariant with respect to any permutation r̄_{i,N} = ⟨r_{i,1}, . . . , r_{i,N−1}, i⟩ of the sequence ⟨1, . . . , N⟩.
The process r̄_{i,n}η^(ε)(t) has the phase space r̄_{i,n}X = X \ {r_{i,1}, r_{i,2}, . . . , r_{i,n}}. The transition probabilities of the embedded Markov chain r̄_{i,n}p_{i′j′}(ε), i′, j′ ∈ r̄_{i,n}X, and the expectations of sojourn times r̄_{i,n}e_{i′j′}(ε), i′, j′ ∈ r̄_{i,n}X are determined for the semi-Markov process r̄_{i,n}η^(ε)(t) by the transition probabilities and the expectations of sojourn times for the process r̄_{i,n−1}η^(ε)(t), respectively, via relations (19) and (22).
The transition sets r̄_{i,n}Y_{i′}, i′ ∈ r̄_{i,n}X are determined by the transition sets r̄_{i,n−1}Y_{i′}, i′ ∈ r̄_{i,n−1}X, via relation (20) given in Lemma 6. Therefore, the pivotal (r̄_{i,n}l⁻_{i′j′}, r̄_{i,n}l⁺_{i′j′})-expansions for the transition probabilities r̄_{i,n}p_{i′j′}(ε) and the pivotal (r̄_{i,n}m⁻_{i′j′}, r̄_{i,n}m⁺_{i′j′})-expansions for the expectations r̄_{i,n}e_{i′j′}(ε), ε ∈ (0, ε_0], j′ ∈ r̄_{i,n}Y_{i′}, i′ ∈ r̄_{i,n}X, can be constructed by applying the algorithms given in Theorems 2 and 3, respectively, to the (r̄_{i,n−1}l⁻_{i′j′}, r̄_{i,n−1}l⁺_{i′j′})-expansions for the transition probabilities r̄_{i,n−1}p_{i′j′}(ε), j′ ∈ r̄_{i,n−1}Y_{i′}, i′ ∈ r̄_{i,n−1}X and to the (r̄_{i,n−1}m⁻_{i′j′}, r̄_{i,n−1}m⁺_{i′j′})-expansions for the expectations r̄_{i,n−1}e_{i′j′}(ε), j′ ∈ r̄_{i,n−1}Y_{i′}, i′ ∈ r̄_{i,n−1}X.
For every j′ ∈ r̄_{i,n}Y_{i′}, i′ ∈ r̄_{i,n}X, n = 1, . . . , N − 1, the asymptotic expansions for the transition probability r̄_{i,n}p_{i′j′}(ε) and the expectation r̄_{i,n}e_{i′j′}(ε), resulting from the recurrent algorithm of sequential phase space reduction described above, are invariant with respect to any permutation r̄′_{i,n} = ⟨r′_{i,1}, . . . , r′_{i,n}⟩ of the sequence r̄_{i,n} = ⟨r_{i,1}, . . . , r_{i,n}⟩.
Indeed, for every permutation r̄′_{i,n} of the sequence r̄_{i,n}, the corresponding reduced semi-Markov process r̄′_{i,n}η^(ε)(t) is constructed as the sequence of states of the initial semi-Markov process η^(ε)(t) at the sequential moments of its hitting into the same reduced phase space r̄′_{i,n}X = X \ {r′_{i,1}, . . . , r′_{i,n}} = r̄_{i,n}X = X \ {r_{i,1}, . . . , r_{i,n}}. The times between sequential jumps of the reduced semi-Markov process r̄′_{i,n}η^(ε)(t) are the times between sequential hittings of the above reduced phase space by the initial semi-Markov process η^(ε)(t).
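The whole recurrent procedure of this section can be sketched, for one fixed value of ε, as follows (hypothetical helpers of our own; states are dictionary keys, so no re-indexing is needed when a state is excluded). The permutation invariance stated above can then be checked numerically: any exclusion order of the states r ≠ i yields the same expected return time E_ii.

```python
# Sketch of the sequential phase space reduction for fixed eps: states are
# excluded one by one via the one-state reduction formulas; for the final
# one-state process, the expected time of the first jump equals the
# expected return time E_ii.
def reduce_state(p, e, r):
    keep = [s for s in p if s != r]
    q = 1.0 - p[r][r]           # probability of escaping the loop at r
    rp = {i: {j: p[i][j] + p[i][r] * p[r][j] / q for j in keep} for i in keep}
    re = {i: {j: e[i][j]
                 + (e[i][r] * p[r][j] + p[i][r] * e[r][j]) / q
                 + e[r][r] * p[i][r] * p[r][j] / (q * q)
              for j in keep} for i in keep}
    return rp, re

def expected_return_time(p, e, i, order):
    # order: a permutation of the states other than i, excluded sequentially
    for r in order:
        p, e = reduce_state(p, e, r)
    return e[i][i]              # one-state process: first-jump time = return time
```

In the toy three-state example of the test below, both exclusion orders ⟨1, 2⟩ and ⟨2, 1⟩ give the same expected return time E_00 = 5, illustrating the invariance property.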