Resolvent Decomposition Theorems and Their Application in Denumerable Markov Processes with Instantaneous States

The basic aim of this paper is to provide a fundamental tool, the resolvent decomposition theorem, in the construction theory of denumerable Markov processes. We present a detailed analytic proof of this extremely useful tool and explain its clear probabilistic interpretation. We then apply this tool to investigate the basic problems of existence and uniqueness criteria for denumerable Markov processes with instantaneous states, for which few results have been obtained even until now. Although complete answers regarding these existence and uniqueness criteria will be given in a subsequent paper, we shall, in this paper, present partial solutions of these very important problems, which are closely linked with the subtle Williams S and N conditions.


Introduction
The basic aim of this paper is to provide a fundamental tool, the resolvent decomposition theorem, in the construction theory of continuous time Markov chains (CTMC). This extremely useful theorem has a very clear probabilistic interpretation: it is just the Laplace transform version of the first-entrance-last-exit decomposition law.
Let {X_t; t ≥ 0} be a homogeneous continuous time Markov chain defined on the countable state space E = {e_1, e_2, e_3, . . .} and let P(t) = {p_ij(t); i, j ∈ E, t ≥ 0} be its transition function. Then, in matrix form, this family of real matrix functions P(t) satisfies

P(t) ≥ 0 (t ≥ 0), (1.1)
P(t)1 ≤ 1 (t ≥ 0), (1.2)
P(t + s) = P(t)P(s) (t ≥ 0, s ≥ 0), (1.3)

and

lim_{t→0+} P(t) = P(0) = I, (1.4)

where I denotes the identity matrix on E × E and 1 denotes the column vector on E whose components are all 1. In the following, 1 will always denote the column vector with the appropriate dimension (either finite or infinite) whose components are all 1.
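For a finite state space, properties (1.1)-(1.4) can be checked numerically: with a conservative stable q-matrix Q, the semigroup P(t) = exp(tQ) satisfies all four. A minimal sketch (the 3-state matrix below is a hypothetical example, not taken from the paper):

```python
# Numerical sanity check of (1.1)-(1.4) in the finite stable case.
import numpy as np
from scipy.linalg import expm

# hypothetical conservative q-matrix: off-diagonal rates >= 0, rows sum to 0
Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -0.5, 0.0],
              [1.0, 2.0, -3.0]])

def P(t):
    """Transition function P(t) = exp(tQ) of the finite chain."""
    return expm(t * Q)

t, s = 0.3, 0.7
assert np.all(P(t) >= -1e-12)                      # (1.1): P(t) >= 0
assert np.all(P(t) @ np.ones(3) <= 1 + 1e-12)      # (1.2): row sums <= 1 (here = 1, honest)
assert np.allclose(P(t + s), P(t) @ P(s))          # (1.3): semigroup property
assert np.allclose(P(1e-8), np.eye(3), atol=1e-6)  # (1.4): continuity at t = 0
```

Here the chain is honest, so equality holds in (1.2); a defective (substochastic) example would give strict inequality.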
If the equality holds in (1.2) for all t ≥ 0, then the transition function P(t) is called honest.
One of the fundamental results in the theory of CTMC is that if P(t) satisfies (1.1)-(1.4), then the limit

q_ij = lim_{t→0+} (p_ij(t) − δ_ij)/t (i, j ∈ E) (1.5)

exists, where the matrix Q = {q_ij; i, j ∈ E} satisfies

0 ≤ q_ij < +∞ (i ≠ j), (1.6)
−∞ ≤ q_ii ≤ 0 (∀ i ∈ E), (1.7)

and

∑_{j≠i} q_ij ≤ −q_ii (∀ i ∈ E). (1.8)

In the following, we shall always denote q_i = −q_ii (i ∈ E). Note that in (1.7), q_ii may not be finite. If q_i = −q_ii < +∞, then the state i ∈ E is called stable, while if q_i = +∞ then the state i ∈ E is called instantaneous or unstable. If all states are stable, then the matrix Q is called stable; otherwise Q is called unstable. The limiting matrix Q given in (1.5) is usually called the intensity matrix of the transition function P(t). However, from now on we shall follow Anderson's [1] convenient usage of another term. In particular, the limiting matrix Q in (1.5) will be called a q-matrix, and a transition function P(t) will be called a q-function. We shall also follow Anderson's convention by calling a transition function P(t) a Q-function if the q-matrix Q is specified. Furthermore, if a matrix Q on E × E satisfies (1.6)-(1.8), then this Q will be called a pre-q-matrix. Hence the above basic result could be simply stated as "a q-matrix must be a pre-q-matrix." However, the converse is not always true. In fact, the basic aim of the construction theory of CTMC is to investigate the reverse problems. More specifically, the basic questions in the construction theory of CTMC are as follows:

Question 1 (Existence) Under what conditions will a given pre-q-matrix become a q-matrix, i.e., under what conditions does there exist a Q-function for a given pre-q-matrix?

Question 2 (Uniqueness) If the given Q is a q-matrix, then under what conditions will there exist only one corresponding Q-function?

Question 3 (Construction) How do we construct all the Q-functions for a given q-matrix Q?
Question 4 (Property) How do we study properties of the Q-function P(t) and the corresponding CTMC {X t ; t ≥ 0} in terms of the given q-matrix Q?
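Before pursuing these questions, the defining limit (1.5) can be illustrated numerically in the finite, stable case: differencing P(t) = exp(tQ) at a small t recovers Q. A sketch with a hypothetical 2-state q-matrix:

```python
# Finite-difference illustration of (1.5): Q = lim_{t->0+} (P(t) - I)/t.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])            # hypothetical stable q-matrix
h = 1e-6
Q_est = (expm(h * Q) - np.eye(2)) / h  # finite-difference version of (1.5)
assert np.allclose(Q_est, Q, atol=1e-4)
```

In the instantaneous case (q_i = +∞) no such finite-state illustration is possible, which is precisely the difficulty the paper addresses.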
No doubt, answering the above questions is of great significance. This is particularly important because in most theoretical and practical problems only the pre-q-matrix Q is known.
In order to answer these important questions, in many cases it will be more convenient to study the so-called resolvents rather than transition functions. For this purpose, define

r_ij(λ) = ∫_0^∞ e^{−λt} p_ij(t) dt (i, j ∈ E, λ > 0)

and denote R(λ) = {r_ij(λ); i, j ∈ E, λ > 0}; this R(λ) is usually called a resolvent function, or simply a resolvent.
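In the finite stable case the resolvent has the closed form R(λ) = (λI − Q)^{-1}, and one can check numerically that this agrees with the Laplace transform of P(t) = exp(tQ). A sketch with a hypothetical 2-state q-matrix:

```python
# Check that (lambda*I - Q)^{-1} equals the Laplace transform of exp(tQ).
import numpy as np

Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])            # hypothetical stable, conservative q-matrix
lam = 1.5
R = np.linalg.inv(lam * np.eye(2) - Q)  # closed-form resolvent

# numerical Laplace transform of P(t) = exp(tQ), via eigendecomposition of Q
w, V = np.linalg.eig(Q)
Vinv = np.linalg.inv(V)
ts = np.linspace(0.0, 30.0, 30001)
Pvals = np.einsum('ik,tk,kj->tij', V, np.exp(np.outer(ts, w)), Vinv).real
integrand = np.exp(-lam * ts)[:, None, None] * Pvals
dt = ts[1] - ts[0]
# trapezoid rule; the truncation tail beyond t = 30 is negligible here
R_num = dt * (integrand.sum(axis=0) - 0.5 * (integrand[0] + integrand[-1]))

assert np.allclose(R, R_num, atol=1e-3)
```

This closed form is special to the finite case; for infinite E, and in particular for instantaneous states, no such formula is available, which is why the resolvent must be studied directly.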
The following conclusion shows that it is important to introduce the concept of resolvent in the construction theory of CTMC. As with transition functions, we shall use Q-resolvent and q-resolvent, respectively, to denote the resolvent function according to whether or not the q-matrix is specified.
From the viewpoint of the construction theory of CTMC, the resolvent is more convenient than the transition function, and therefore in the following we shall concentrate on resolvent functions. In particular, the above-mentioned four basic questions apply equally to Q-resolvents for a given q-matrix Q.
If the given q-matrix Q is stable, then the above questions were first systematically studied by J.L. Doob and W. Feller in the 1940s and then continuously investigated by many world-leading probabilists including D.G. Kendall, G.E.H. Reuter, D. Williams, J.F.C. Kingman, Samuel Karlin and K.L. Chung. In particular, Feller [14] showed that a stable pre-q-matrix must be a q-matrix and constructed, for any given stable q-matrix Q, a Q-resolvent (Q-function) which has a minimal property and bears his name today. The uniqueness problem for stable chains was first considered by Doob [13] and then resolved by Reuter [27-29,31] and Hou [16], respectively. Since then, the construction theory of stable CTMC has flourished to a very high level. By the publication of Chung's foundational book [12], the theory of totally stable q-processes was viewed, by and large, as complete, though many other important topics such as reversibility, ergodicity, quasi-stationary distributions, monotonicity, duality, coupling, large deviations, and spectral theory have emerged and flourished since then. For more details, see Chen [11].
However, it may be surprising that few results have been obtained even until now if the pre-q-matrix is not stable, although great effort has been made, traceable back about 70 years. In fact, Kolmogorov [22] first posed an example with a single instantaneous state and asked whether a corresponding process exists. This example was first answered by Kendall and Reuter [20] and then deeply analyzed and generalized by Williams [35]. Later, Reuter [30] slightly generalized Kolmogorov's example.
On the other hand, the case when all states are instantaneous has also received attention for a long time. For example, Blackwell [2] considered a very special example in which all off-diagonal elements of the pre-q-matrix Q are zero. See also Kendall [19]. Later, Williams [35] obtained an excellent result by giving a surprisingly simple criterion for the existence of totally instantaneous chains. Freedman's book [15] seems to be the only book concentrating on unstable chains. See also Rogers and Williams [32,33].
It may be hard to believe that the above results are essentially the only results obtained for unstable chains. In particular, it is really very hard to believe, though definitely true, that even until now the topic of mixed pre-q-matrices with more than one instantaneous state remains totally blank. This reflects the fact that we still lack suitable tools and methods to tackle the unstable case.
The basic aim of this paper is to fill this gap by providing an extremely useful tool, the so-called resolvent decomposition theorem. As far as we are aware, the idea of using such a decomposition in the construction theory of continuous time Markov chains can be traced back at least to the paper of Lamb [23], published in 1971. Also, a special case of this theorem was already provided in Chen and Renshaw [3,4], who used it to successfully tackle the unstable chain with a single instantaneous state. See also Chen and Renshaw [6,8] and Chen et al. [10]. In some sense, the main aim of this paper is to generalize their result to the multi-state case and then use it to tackle Markov chains with finitely many instantaneous states.
It should be noticed that tackling unstable chains is not only of theoretical significance, but also important in the applications of continuous time Markov chains. In fact, Markov chains with instantaneous states have proven helpful in modeling many natural phenomena, even in social science. For example, in modern finance, it is usually the case that when a financial crisis happens, some kind of "outside force" may intervene strongly in order to "rescue" the economy from "depression." Such "intervention" may be well modeled by instantaneous chains, since the action will be taken immediately when the "crisis" reaches some fixed "dangerous" level.

This paper concentrates on a basic tool, the resolvent decomposition theorem, referred to as RDT in the following. The structure of this paper is as follows. In Sect. 2, the resolvent decomposition theorems will be stated and explained, but the detailed proofs are postponed to the final section of the paper. In Sect. 3, we use this fundamental tool to discuss the interesting conditions regarding CTMC with instantaneous states posed by David Williams. The more important application, i.e., investigating the fundamental problems of existence and uniqueness of denumerable Markov processes together with some important examples, will be postponed to a subsequent paper.
Before proceeding, it is necessary to mention two points of terminology used in this paper. The first is that we shall use the synonyms "continuous time Markov chains" and "denumerable Markov processes" to denote our model. Secondly, we shall use the term "Q-process" to refer to either a Q-function P(t), a Q-resolvent R(λ) or the true process (i.e., the family of random variables) {X_t(ω); t ≥ 0}. Such usage is, of course, commonly accepted and won't cause any confusion. In case we need to distinguish them, we shall use the notations P(t), R(λ) or {X_t; t ≥ 0} to emphasize the difference.

Resolvent Decomposition Theorems
Suppose P(t) is a transition function on the countable state space E with q-matrix Q and resolvent R(λ). Let F ⊂ E be a finite subset of E and denote G = E\F. In most cases (but not always), we shall assume that the states in the set F are instantaneous and also that, without loss of generality, we have re-arranged E = {e_1, e_2, . . . , e_n, e_{n+1}, . . .} such that F = {e_1, e_2, . . . , e_n} contains the first n elements.
Then we have the following conclusion whose proof can be found in Sect. 7 of Chen and Renshaw [3].
where notations such as ⟨η(μ), ξ(λ)⟩ in the above expressions denote the inner product of the row vector η(μ) and the column vector ξ(λ) on E. Moreover, suppose for each i ∈ F we have a row vector η^(i)(λ) on G such that η^(i)(λ) ∈ H (i ∈ F). We pack these row vectors on G together as a finite-dimensional column vector whose components are row vectors on G, and still denote it by η(λ). Hence η(λ) = {η^(i)_j(λ); i ∈ F, j ∈ G} is, in fact, an F × G matrix. For convenience, we also express it as η(λ) = {η_ij(λ); i ∈ F, j ∈ G}. All the above information can now be simply packed into a matrix equation. Similarly, if for each i ∈ F we have a column vector ξ^(i)(λ) on G such that ξ^(i)(λ) ∈ K, then we obtain a finite-dimensional row vector, denoted by ξ(λ), whose components are column vectors on G. We can now restate Lemma 2.2 as the following useful and convenient conclusion.
Then (i) both η(λ) and ξ(λ) are increasing matrix functions of λ > 0, and thus the corresponding limits exist.

Proof It follows directly from Lemma 2.2.
Note that (2.1) and (2.4) are very similar, but their meanings are different. Indeed, both sides of (2.1) are scalars, while both sides of (2.4) are F × F matrices. For example, if F = {a, b} with a ≠ b, then the simple form (2.4) is shorthand for the corresponding 2 × 2 matrix identity. We are now ready to state our main conclusions in this paper. Since we are mainly interested in honest transition functions, we shall assume that P(t) or, equivalently, R(λ) is honest. Then we have the following basic theorem.
Theorem 2.1 Suppose R(λ) is an honest Q-resolvent. Let F be a finite subset of E and denote G = E\F. Then R(λ) can be uniquely decomposed as in (2.5)-(2.14). (Here 1 is a column vector whose elements are all 1 and whose dimension depends on the context; for example, the first 1 in (2.10) is a finite-dimensional vector on F, while the other two in (2.10) are infinite-dimensional vectors on G.) In particular, the right-hand side of (2.14) is invertible. Moreover, (2.17)-(2.22) hold. (Recall the monotone property of ξ(λ) stated in (iv) above, so that the limit ξ in (2.22) does exist.)

Remark 2.1
If the transition function P(t) or resolvent R(λ) is not necessarily honest, then Theorem 2.1 still holds. The only differences are that the equality in (2.17) becomes (2.24) and that the second equality in (2.10) becomes an inequality. However, it is important to note that, unlike the expression (2.17), which is independent of λ > 0, the right-hand side of (2.24) does depend on λ > 0, and thus a limit there is necessary.

This extremely useful theorem has a very clear probabilistic meaning. It is just the Laplace transform version of the first-entrance and last-exit decomposition theorem. Indeed, ξ(λ) and η(λ) are simply the Laplace transforms of the first-entrance law to, and last-exit law from, the subset F for the corresponding Markov chain, and Φ(λ) is just the taboo resolvent. See Chung [12] for the celebrated idea of taboo probability. This idea has been extensively developed by Syski [34], though the latter book concentrates on the Feller minimal chains and thus the q-matrix concerned is stable. It should also be emphasized that the A(λ) in (2.5) and (2.6) is just the Laplace transform of the "transition function" of a quasi-Markov chain, a theory brilliantly developed by Kingman [21], in which the "Markov characterization problem" was tackled and solved.
Surely, the idea of the decomposition Theorem 2.1, though not in the complete and subtle form stated here, has a long history which can be traced back at least to Neveu [24-26]. Based on Neveu's and Chung's works, Williams systematically studied it and raised it to a considerably high level; see Rogers and Williams [33].
However, it seems that little attention has been paid to the converse of Theorem 2.1, which, in our opinion, is more important and has many more applications, particularly in the study of unstable chains. That is, we have the following conclusion.
The important point of Theorem 2.2 is that it not only gives existence conditions but also yields uniqueness criteria. It also provides a method to construct the q-resolvents, by which the properties of the corresponding q-processes can be analyzed. This makes Theorems 2.1 and 2.2 very useful even for stable q-processes. In particular, if the underlying Q_GG-resolvent Φ(λ) is known, then the properties of the Q-process may be easily derived. This idea has stimulated some new research; see, for example, Chen and Renshaw [7], in which the underlying structure is an M/M/1 queue, and Chen and Renshaw [5,9], where the underlying structure is a simple Markov branching process.
In the case that F is a single-element subset of E, Theorems 2.1 and 2.2 have been fully proved in Chen and Renshaw [3]. Their proof could be extended to our current general case by induction; see Hou et al. [18]. However, we prefer to provide a simpler and more direct proof here; see the last section of this paper.
Note that in both Theorems 2.1 and 2.2, we do not impose any condition on the pre-q-matrix Q. That is, the pre-q-matrix Q is arbitrary: it may be stable, totally instantaneous or of mixed type (i.e., with both stable and instantaneous states).
In the following, we shall mainly be concerned with the case that all states in G are stable, though the states in the finite set F are arbitrary. Now, suppose that Q_GG is stable (that is, q_i < ∞ for all i ∈ G). Following Yang [36] (see also Hou and Guo [17] or Anderson [1]), we may then speak of B-type and F-type Q_GG-functions. Note that the resolvent forms of (2.27) and (2.28) are sometimes more convenient.
That is, a Q_GG-resolvent Φ(λ) is called a B-type or an F-type Q_GG-resolvent if (2.29) or (2.30) holds, respectively. It is well known that if Q_GG is stable, then there always exists a Q_GG-function (Q_GG-resolvent) that satisfies both (2.27) and (2.28) (or, equivalently, (2.29) and (2.30)) and also possesses the minimal property; it is usually called the Feller minimal one. In case we need to emphasize this Feller minimal one, we shall usually use F(t) or Φ(λ) to denote the Feller minimal transition function or resolvent, respectively.
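In the finite stable case these resolvent-form conditions can be checked directly: the (there unique) resolvent Φ(λ) = (λI − Q_GG)^{-1} satisfies both the backward-type identity λΦ(λ) − I = Q_GG Φ(λ) and the forward-type identity λΦ(λ) − I = Φ(λ) Q_GG, which is our reading of the B-type and F-type conditions (2.29) and (2.30). A sketch with a hypothetical 2 × 2 matrix:

```python
# Backward (B-type) and forward (F-type) resolvent equations, finite case.
import numpy as np

Q_GG = np.array([[-3.0, 1.0],
                 [2.0, -4.0]])  # hypothetical stable submatrix; row sums <= 0
lam = 2.0
I = np.eye(2)
Phi = np.linalg.inv(lam * I - Q_GG)  # the (Feller minimal) resolvent here

assert np.allclose(lam * Phi - I, Q_GG @ Phi)  # backward (B-type) identity
assert np.allclose(lam * Phi - I, Phi @ Q_GG)  # forward (F-type) identity
```

Both identities follow from λ(λI − Q)^{-1} − I = Q(λI − Q)^{-1}; the interesting situations in the paper are the infinite, unstable ones, where B-type and F-type genuinely differ.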
If Q is not stable on E but Q GG is stable, then although we are able to define neither the B-type nor the F-type transition function, we may define the so-called almost B-type (F-type) as follows.
Without loss of generality, let us assume that all the states in the finite set F are instantaneous, i.e., F = {i ∈ E; q_i = +∞}, and all the states in G = E\F are stable. We then call a Q-function on E almost B-type if (2.31) holds. Furthermore, a Q-function is called almost B ∩ F type if both (2.31) and (2.32) hold. Again, it is more convenient to use the resolvent form; the corresponding definitions for Q-resolvents are analogous.

Note that Theorems 2.1 and 2.2 hold true even if Q is stable. Although in this case they do not provide further information regarding the existence of a Q-function (which is trivial since we always have the Feller minimal one), Theorems 2.1 and 2.2 still give very useful information regarding the structure of the Q-functions (Q-resolvents). In particular, if the stable Q is regular, then this structure takes a very simple form, which makes it very useful in deriving the properties of the Q-process. For this reason, we state it here for reference. Its proof is omitted, since it is merely a direct corollary of Theorems 2.1 and 2.2.
Let Q be a regular (and thus conservative and stable) q-matrix defined on a countable state space E. Let F be a finite subset of E and G = E\F. We write Q in block form as

Q = ( Q_FF Q_FG ; Q_GF Q_GG ),

where Q_FF is the restriction of Q to F × F, Q_FG is the restriction of Q to F × G, etc. Now, let R(λ) = {r_ij(λ); λ > 0, i, j ∈ E} be the minimal Q-resolvent (the resolvent of the minimal Q-process), and observe that R(λ) can be partitioned similarly. Since Q is regular, R(λ) is honest in that λR(λ)1 = 1, where 1 is the column vector on E whose elements are all 1.

Theorem 2.4
Suppose Q is a regular q-matrix defined on E = F ∪ G. Then, the (honest) minimal Q-resolvent R(λ) can be written as
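For a finite (hence regular) Q, this decomposition reduces to the block-inverse (Schur complement) identity for R(λ) = (λI − Q)^{-1}: writing Φ(λ) = (λI − Q_GG)^{-1}, ξ(λ) = Φ(λ)Q_GF, η(λ) = Q_FG Φ(λ) and A(λ) = (λI − Q_FF − Q_FG Φ(λ) Q_GF)^{-1}, one has R_FF = A, R_FG = Aη, R_GF = ξA and R_GG = Φ + ξAη. The following numerical sketch (with a hypothetical 4-state conservative q-matrix, F = {0, 1}, G = {2, 3}) checks these identities, under the assumption that this Schur-complement form is the intended reading of the theorem's display:

```python
# Block decomposition of the minimal resolvent, finite regular case.
import numpy as np

# hypothetical conservative 4-state q-matrix (rows sum to 0)
Q = np.array([[-3.0, 1.0, 1.0, 1.0],
              [2.0, -5.0, 2.0, 1.0],
              [1.0, 0.0, -2.0, 1.0],
              [0.0, 1.0, 1.0, -2.0]])
F, G = [0, 1], [2, 3]
lam = 1.0
R = np.linalg.inv(lam * np.eye(4) - Q)       # minimal (here unique) Q-resolvent

Q_FF, Q_FG = Q[np.ix_(F, F)], Q[np.ix_(F, G)]
Q_GF, Q_GG = Q[np.ix_(G, F)], Q[np.ix_(G, G)]

Phi = np.linalg.inv(lam * np.eye(2) - Q_GG)  # taboo resolvent on G
xi = Phi @ Q_GF                              # Laplace transform of first-entrance law
eta = Q_FG @ Phi                             # Laplace transform of last-exit law
A = np.linalg.inv(lam * np.eye(2) - Q_FF - Q_FG @ Phi @ Q_GF)

# block decomposition: R_FF = A, R_FG = A eta, R_GF = xi A, R_GG = Phi + xi A eta
assert np.allclose(R[np.ix_(F, F)], A)
assert np.allclose(R[np.ix_(F, G)], A @ eta)
assert np.allclose(R[np.ix_(G, F)], xi @ A)
assert np.allclose(R[np.ix_(G, G)], Phi + xi @ A @ eta)
```

Here A(λ) plays the role of Kingman's quasi-Markov "transition function" on F, and Φ(λ) is the taboo resolvent on G.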

Williams' Conditions for q-Matrices
In his study of totally instantaneous Markov chains, D. Williams obtained the following famous result.

Proposition 3.1 Suppose Q is a totally instantaneous q-matrix, i.e., q_i = +∞ for all i ∈ E. Then there exists a Q-function P(t) if and only if the following two conditions hold
Conditions (i) and (ii) in the above are usually referred to as (N ) and (S) conditions, respectively.
Note that both the (N) and (S) conditions also hold trivially for stable q-matrices. Naturally, we are interested in knowing whether they remain true for q-matrices with finitely many instantaneous states. Of course, we only ask whether they are necessary conditions, since it is obvious that they cannot be sufficient. Note that Williams' proof was probabilistic. We are thus also interested in answering the following questions. Can we give a simple analytic proof of these conditions? Are these conditions always necessary for the existence of Q-functions? Are there any relationships between the (N) condition and the (S) condition?
The following conclusion shows that the (N)-condition is in fact always necessary.
But by (2.10) we have (3.5). Now, letting λ → 0 in this inequality and using (3.5), we obtain (3.6). Applying Fatou's lemma in (3.6) and noting (2.11) and (2.12), together with the fact that F is a finite set, immediately yields the (N)-condition (3.1).
In fact, a much stronger form of the (N)-condition can be obtained by simply applying Theorem 3.1.

Theorem 3.2 Suppose Q is a q-matrix on E × E. Then (3.7) holds and, in particular, so does (3.8).

Proof We only need to prove (3.7), since (3.8) is an easy corollary of (3.7). In order to show (3.7), we first note that by Theorem 3.1 we have (3.9), and thus, as a direct consequence of (3.9), we obtain (3.10); it is easy to see that the limit on the left-hand side of (3.10) does exist. Now, since E_1 is a finite subset of E, we get (3.11) from (3.10). On the other hand, using (3.12), and noting that the vector 1 − ξ^(b) in (3.12) can be summed over the finite set E_2, we obtain (3.13). Combining (3.13) and (3.11) immediately yields (3.14). Applying Fatou's lemma in (3.14) immediately yields the required (3.7).
As a direct consequence of Theorem 3.2, we have the following corollary.

Corollary 3.1 Suppose Q is a q-matrix on E × E.
Let E_1 be a finite subset of E. Then for any a ∈ E\E_1, we have (3.15). Note that Williams' (N)-condition is a direct consequence of (3.15). By the above conclusions, we now have a much deeper understanding of the (N)-condition. We now turn to the (S)-condition.
If the q-matrix Q is stable, then the (S)-condition holds trivially. On the other hand, if the q-matrix Q has only one instantaneous state, then the (S)-condition may not hold. A counterexample is the famous Kolmogorov q-matrix; see Reuter [30] or Chen and Renshaw [4]. Hence in the following we shall assume that the given pre-q-matrix Q has at least two instantaneous states.
We first show that for any given q-matrix Q with at least two instantaneous states, a weaker version of the (S)-condition does hold.

Theorem 3.3 Suppose that Q is a q-matrix on E. Further suppose that Q possesses at least two instantaneous states. Then for any i ∈ E, there exists an infinite subset K_i of E such that ∑_{j∈K_i\{i}} q_ij < +∞.
Compared with the original Williams condition, we see that the difference is that in our Theorem 3.3 the infinite subset K_i may depend on i. We shall prove Theorem 3.3 by using two lemmas which deal with the two different cases. We now claim that K_a is an infinite set. Indeed, if it were a finite subset of E, then of course we would have ∑_{j∈K_a} q_bj < +∞. This together with (3.19) yields ∑_{j∈E\{a,b}} q_bj < +∞, a contradiction to our assumption ∑_{j∈E_b\{a,b}} q_bj = +∞. Similarly, we can prove that K_b is also an infinite subset of E. We now derive a contradiction. Indeed, if (3.23) were true, then the vector α := {q_aj; j ∈ E\{a}} would have at most finitely many 0's. Without loss of generality, we may assume that q_aj > 0 for all j ∈ E\{a}. Now, by Theorem 3.1, we have (3.24). Using (3.23), we see that (3.24) implies lim_{j→+∞} ξ^(b)_j = 0. Hence there exist a finite subset E_1 (which is independent of λ > 0) and a constant δ > 0 such that (3.25) holds. Since λ⟨η^(b)(λ), 1 − ξ^(b)⟩ is an increasing function of λ as λ tends to +∞, and lim_{λ→+∞} λ⟨η^(b)(λ), 1 − ξ^(b)⟩ < +∞, it follows that there exists a constant c > 0 bounding these quantities. It follows, by also using (3.25), that (3.26) holds. Noting that E_1 is a finite subset of E which does not depend on λ > 0, (3.26) implies (3.27), which contradicts q_a = +∞; see (2.18).
By Lemmas 3.1 and 3.2, Theorem 3.3 is proven. In other words, if Q is a q-matrix with at least two instantaneous states, then it must satisfy the weaker form of the (S)-condition stated in Theorem 3.3. We are now interested in knowing whether the original Williams condition is true or not. Interestingly, this question can be answered completely. In fact, if the q-matrix Q has infinitely many instantaneous states, then the Williams condition always holds. However, if the q-matrix Q has only finitely many instantaneous states, then the (S)-condition of Williams may fail. In a subsequent paper we shall give an example showing that for any positive integer n one can construct a q-matrix with exactly n instantaneous states that does not satisfy the original (S)-condition. In spite of this, we can, interestingly, prove that the (S)-condition is "nearly" true. The exact meaning is the following conclusion.
Theorem 3.4 Suppose Q = {q_ij; i, j ∈ E} is a q-matrix on E × E with n ≥ 2 instantaneous states. Then for any finite subset E_1 of E, there exists an infinite subset K of E such that ∑_{j∈K\{i}} q_ij < +∞ for every i ∈ E_1.

Proof Without loss of generality, we may assume that every i ∈ E_1 is instantaneous, since otherwise we may consider a smaller E_1. Now, by Corollary 3.1 we have

Now, if ∑_{j≠a} q_aj = +∞, then the conclusion follows from Lemma 3.1, while if ∑_{j≠a} q_aj < +∞, then the conclusion follows from Lemma 3.2.
Finally, we consider the case that the given q-matrix Q has infinitely many instantaneous states. In this case, the (S)-condition (3.2) always holds, as the following conclusion shows. This confirms Williams' theorem regarding totally instantaneous chains.

Proofs of the Resolvent Decomposition Theorems
Our main aim in this section is to prove the resolvent decomposition theorems stated in Sect. 2.

Proof of Theorem 2.1
For notational convenience, we rearrange the states in E so that the states in the finite set F come first. Since F ∪ G = E and F ∩ G = ∅, we may write the transition function P(t) in block form, where the meaning of the four blocks is self-explanatory. Now, define Chung's taboo probability _F p_ij(t) for i, j ∈ G and let _F P(t) = {_F p_ij(t); i, j ∈ G}. Then P(t) can be written accordingly, where P_GG(t) − _F P(t) ≥ 0 (pointwise) for obvious reasons.
By the general last-exit (from the set F) decomposition, we have the corresponding identity, where the term p_ik(s)g_kj(t − s)ds represents, intuitively, the probability that X(t) = j and that the last visit to k ∈ F before time t occurred between s and s + ds, given that X(0) = i; see Chung [12]. Now, for notational convenience, we simply denote the Laplace transform matrices accordingly. Similarly, we use the general first-entrance (into the set F) decomposition,
where the term f_ik(s)p_kj(t − s)ds represents, intuitively, the probability that the Markov chain {X_t} first hits k between times s and s + ds given that X(0) = i. Hence, paralleling (4.4), we obtain

Applying (2.4) in the above then yields the fact that
Fourthly, we show that R(λ) in (4.7) satisfies (1.11) if and only if the following four relations hold, where in these four expressions I and 0 are the identity matrix and the zero matrix, respectively. This verification is straightforward and thus omitted. Now, by (4.8a), (4.12b), (4.9d) and (4.17d) we see that Φ(λ) is a resolvent function. We further claim that it is a Q_GG-resolvent, where Q_GG is the restriction of Q to G × G. To this end, let us express the given q-matrix in block form; the claim then follows immediately from (4.18d) and (4.19). Therefore, Parts (i) and (ii) of Theorem 2.1 have been proven. Moreover, (4.18b) and (4.17a) immediately yield (2.11) and, similarly, (4.18c) and (4.17a) yield (2.12); hence Parts (iii), (iv), and (v) of Theorem 2.1 are also proven. In order to prove the other parts of Theorem 2.1, we return to the proven (4.16), which yields (4.21), and this is just (2.14).
On the other hand, by using (4.18a) and (4.17a), it is fairly easy to show that

lim_{λ→+∞} (λI − A^{-1}(λ)) = Q_FF, (4.22)

and thus by using (4.21) we obtain (4.23) and (4.24). Note that the second term on the right-hand side of (4.24) is independent of λ > 0, which is guaranteed by Lemma 2.3. Note also the proven facts that c_ij ≤ 0 for i ≠ j and that ∑_{j∈F} ξ_kj ≤ 1 for all k ∈ F; by (4.24) we then see that c_ii ≥ 0 (∀ i ∈ F), and (4.24) just reads as (2.17). Hence part (vii) of Theorem 2.1 is also proven.
Note also the important fact (2.13). Finally, we prove that the decomposition (2.5) is unique. But this is easy. Indeed, suppose that, in addition to (2.5), we have another decomposition of R(λ) of the same form; comparing the corresponding blocks then forces the two decompositions to coincide.
If one carefully checks the proof of Theorem 2.1, one finds that Theorem 2.2 has essentially been proved as well. Indeed, in the proof of Theorem 2.1 we have repeatedly emphasized that each crucial step is both necessary and sufficient. In other words, based on the proof of Theorem 2.1, Theorem 2.2 follows by simply adding the detailed checking, which we omit here.
Proof of Theorem 2.3 We only need to prove (i), since the proof of (ii) is similar, while (iii) is a direct consequence of (i) and (ii). For convenience, we use the block matrix form; that is, we express R(λ) in blocks, which shows that Φ(λ) is a B-type Q_GG-resolvent. To prove the converse, we need to show that (4.34) implies both (4.30) and (4.31). To this end, we turn to Theorem 2.1; by (2.5) we know (4.35), which is just the statement that (λI − Q_GG)ξ(λ) is a constant matrix, i.e., independent of λ > 0. In order to identify this constant matrix, we consider lim_{λ→∞} (λI − Q_GG)ξ(λ). Note that ξ(λ) ↓ 0 as λ ↑ ∞ and that lim_{λ→∞} λξ(λ) = Q_GF; see (2.8). Hence this constant matrix is just Q_GF, and hence

(λI − Q_GG)ξ(λ) = Q_GF. (4.36)

Right-multiplying (4.36) by A(λ) immediately yields (4.32), and then (4.33) follows from the just-proven (4.32) and (4.34). Hence both (4.32) and (4.33) hold true, and thus both (4.30) and (4.31) hold, which ends the proof.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.