Branching Collision Processes with Immigration

We consider the regularity and ergodic properties of the Branching Collision Process with Immigration (BCIP). We establish an easily checked sufficient condition under which the Feller minimal BCIP is honest. We then provide sufficient conditions under which the Feller minimal BCIP is positive recurrent, and establish an analytic form of the generating function of the stationary distribution. The closely associated expected hitting times are also considered. Examples and numerical calculations are provided to illustrate our results.


Introduction
Some years ago, Chen et al. (2012) considered an interesting and challenging model, called the Interacting Branching Collision Process (IBCP), which has two strongly interacting components: the branching component and the collision component. Their paper focuses on the extinction behaviour of the process, since the model possesses an absorbing state, the state zero. In particular, the extinction probabilities for various situations are analysed and resolved. See also the more recent Chen et al. (2014).
The primary aim of this paper is to consider a new model based on the above IBCP, obtained by adding a third component, the so-called immigration. Due to this immigration effect, the state zero is no longer absorbing. Hence our main interest turns to the ergodic properties, particularly the most important problem of stationary distributions.
The model we shall discuss in this paper is a continuous-time Markov chain defined on the state space of nonnegative integers Z_+ = {0, 1, 2, · · · } that represents the complex evolution of some interacting particles. More formally, we define the model by specifying its infinitesimal characteristic, i.e., the so-called q-matrix, as follows.
Note that in Eq. 1.2 we have added some conditions such as c_0 > 0, Σ_{k≥3} c_k > 0, b_0 > 0 and Σ_{k≥2} b_k > 0, etc. This is solely because we want to exclude some trivial cases from the discussion. In particular, we have assumed that −a_0 > 0, since otherwise we would be back to the IBCP, which has already been analysed by Chen et al. (2012, 2014) as mentioned above. Definition 1.2 A branching-collision with immigration process (henceforth referred to as a BCIP) is a continuous-time Markov chain on the state space Z_+ whose transition function P(t) = (p_ij(t); i, j ∈ Z_+) is a Q-function, where Q is a BCI q-matrix as given in Eqs. 1.1-1.2.
It should be noted that the Markov process considered in the current paper belongs to an important sub-class of the so-called interacting branching systems, which are generalizations of the ordinary Markov branching processes (MBPs). There is currently increasing and extensive interest in generalizing MBPs to more general interacting branching models. Such interest is mainly due to the fact that the basic property governing the evolution of an MBP (i.e. that different particles act independently) is not appropriate in many realistic situations. Indeed, in realistic situations, particularly in the biological sciences, individuals (particles) usually interact with each other.
There is a huge and extensive literature on MBPs. Standard references, among many others, are Harris (1963), Athreya and Ney (1983), Asmussen and Hering (1983), and Athreya and Jagers (1972). Unfortunately, compared with the huge number of publications on MBPs, there are far fewer papers in the literature discussing interacting branching systems, possibly because their analysis is much more difficult. However, although progress has been limited even until now, interacting branching systems have attracted much attention, since many challenging but important and interesting questions arise from the interaction effect. Interest in such systems can be traced back at least to the early sixties of the last century; see Sevastyanov and Kalinkin (1982) and the references therein. For more recent work, see Kalinkin (2002, 2003), Chen et al. (2004, 2010) and Lange (2007).
It seems that, within the topic of interacting branching systems, the immigration effect has not been considered so far. However, on the one hand, from the point of view of applications, particularly to biological or ecological processes, the immigration factor is clearly of great importance. On the other hand, from the mathematical point of view, adding an immigration factor sheds light on interesting new laws. In the case of independent Markov branching processes (MBPs), the immigration effect has attracted much research interest. Consideration of immigration for MBPs can be traced back at least to Sevastyanov (1957) in the 1950s. His work was then followed by many researchers including, for example, Zubkov (1972), Yang (1972), Vatutin (1977), Pakes (1975a, b) and Li and Chen (2006). For a more recent contribution, see Li et al. (2012). In fact, various modifications of the basic Markov branching process with either state-independent or state-dependent immigration have been examined extensively. For a summary and unified development of this topic, see the important reference Rahimov (1995).
Clearly, introducing an immigration factor into interacting branching systems is of even greater significance. Indeed, in practical biological populations, for example, different species usually interact with each other and then tend to reach a state of balance with their environment. However, without immigration, the collision-branching processes will, as revealed by Chen et al. (2004, 2010), either tend to extinction or to explosion, which is clearly contrary to the practical behaviour of biological populations. This is the main reason that, in the current paper, the immigration component is added to the original collision-branching processes. As will be revealed in this paper, adding an immigration component does result in a balanced state, i.e. an equilibrium distribution, under some conditions. Therefore, considering interacting branching systems with immigration, which is the main purpose of this paper, is not only crucial for the theoretical development, but also of great importance in practical applications.
The structure of this paper is as follows. Some preliminary results are obtained in Section 2. Uniqueness and regularity criteria are then obtained in Section 3. We show that, roughly speaking, the BCIP is honest, i.e. the infinitesimal q-matrix Q is regular, if and only if the mean birth rate is less than or equal to the mean death rate for the collision component alone. We also show that, for any given q-matrix Q, there always exists only one BCIP, the Feller minimal Q-process. The important question of the extinction probability of the BCIP stopped at state zero is discussed extensively in Section 4. Discussing this stopped BCIP is not only important in analysing the ergodic properties of the BCIP, but also of interest in its own right. Section 5 concentrates on the ergodic properties of our BCIP, which are our main interest in this paper. In particular, we provide some easily checked conditions under which the BCIP is ergodic, and then the generating function of the all-important stationary distribution is presented. In Section 6, an example is provided to illustrate the results obtained in the previous sections. In the final Section 7, numerical computations for two concrete examples are provided in order to show that our calculation procedure is effective.

Preliminaries
In order to analyse our model more effectively, we first introduce the generating functions of the three known sequences {c_k; k ≥ 0}, {b_k; k ≥ 0} and {a_k; k ≥ 0} as in (2.1). As power series, these three generating functions have convergence radii determined by r_a^{-1} = limsup_{n→∞} |a_n|^{1/n}, r_b^{-1} = limsup_{n→∞} |b_n|^{1/n} and r_c^{-1} = limsup_{n→∞} |c_n|^{1/n}, respectively, where 1 ≤ r_a ≤ +∞, 1 ≤ r_b ≤ +∞ and 1 ≤ r_c ≤ +∞. We usually view these generating functions as complex functions; in many cases, however, we shall view them as real-valued functions. In the following, as a convention, if we want to emphasize that they are viewed as real-valued functions only, we shall denote them by A(x), B(x), C(x), etc. The same convention applies to the functions g_k(z) defined below. In this paper we shall freely interchange the usage of "z" and "x", which, so long as this is carefully noted, will not cause any confusion.

These three functions play an extremely important role in our later analysis. It is clear that A(z), B(z) and C(z) are well defined at least on the closed unit disk {z; |z| ≤ 1}. The following simple yet important properties of these functions will be used constantly in this paper, and we state them here as a remark for convenience. Note that the proofs of (i) and (ii) below can be found in Chen et al. (2004), and the proof of (iii) is obvious. (For simplicity, throughout this paper, we shall use "↑" or "↑↑" to denote "increasing" or "strictly increasing".) By Remark 2.1, we see that both C(x) = 0 and B(x) = 0 possess a smallest positive root, denoted by ρ_c and ρ_b, respectively. Moreover, if C'(1) ≤ 0, then ρ_c = 1, while if 0 < C'(1) ≤ +∞, then 0 < ρ_c < 1. Similarly, if B'(1) ≤ 0, then ρ_b = 1, while if 0 < B'(1) ≤ +∞, then 0 < ρ_b < 1.
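To make Remark 2.1 concrete, the smallest positive roots such as ρ_b can be located numerically. The sketch below is illustrative only and is not part of the paper's development: it assumes the usual conservativeness convention b_1 = −(b_0 + Σ_{k≥2} b_k), so that B(1) = 0, and uses entirely hypothetical rates. When B'(1) > 0, Remark 2.1 places ρ_b in (0, 1), and a simple bisection finds it.

```python
# Numerical illustration (assumed example rates, not from the paper):
# locate the smallest positive root rho_b of B(x) = 0, where
# B(x) = sum_k b_k x^k and, by the assumed conservativeness convention,
# b_1 = -(b_0 + sum_{k>=2} b_k) so that B(1) = 0.

def gf(coeffs, x):
    """Evaluate a generating function sum_k coeffs[k] * x**k."""
    return sum(c * x**k for k, c in enumerate(coeffs))

def smallest_root(coeffs, lo=0.0, hi=1.0 - 1e-12, iters=200):
    """Bisection for the first sign change on (lo, hi); valid here because
    B(0) = b_0 > 0 while B'(1) > 0 forces B(x) < 0 just to the left of 1."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gf(coeffs, mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical branching rates b_0 = 1, b_2 = 3, hence b_1 = -4, so that
# B(x) = 1 - 4x + 3x^2 = (1 - x)(1 - 3x) and B'(1) = 2 > 0.
b = [1.0, -4.0, 3.0]
rho_b = smallest_root(b)
print(rho_b)          # ~ 1/3, the smallest positive root of B
```

The same routine applies verbatim to C(x) = 0 for ρ_c, under the analogous convention on c_2.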
In addition to the three basic functions A(z), B(z) and C(z), we also need to know some properties of the following family of functions which incorporate these three basic functions together. This family of functions will play a key role in our later analysis.
For each non-negative integer k ≥ 0, define g_k(z) as in (2.2). In particular, g_0(z) = z^2 A(z) and g_1(z) = zB(z) + g_0(z). All g_k(z) have the same convergence radius r_g = min(r_a, r_b, r_c), except possibly for k = 1 or k = 0. In particular, they are well defined at least on the closed unit disk {z; |z| ≤ 1} and are analytic within their radius of convergence. Also, except for g_1(z) and g_0(z), which involve only B(z) and A(z), all g_k(z) (k ≥ 2) are combinations of the three functions A(z), B(z) and C(z).
Note that we are less interested in the properties of these functions for x > 1, even though they may be well defined there; rather, we are mainly interested in their properties as x ↑ 1, particularly for large k. Hence, in the following, we shall concentrate on the properties of g_k(x) for 0 ≤ x < 1.
We then have the following two lemmas regarding the properties of g_k(x) (k ≥ 2), which are very helpful in our later analysis. Their proofs are lengthy but elementary and are thus omitted here. As for g_0(x) and g_1(x), their properties are much simpler and are also omitted.
Lemma 2.1 For k ≥ 2, the function g_k(x) possesses the following properties.

(i) If g_k'(1) ≤ 0, then g_k'(x) has one (and only one) zero on the interval [0, 1); that is, there exists ξ_k ∈ (0, 1) such that g_k'(ξ_k) = 0, g_k'(x) > 0 for all 0 ≤ x < ξ_k, and g_k'(x) < 0 for all ξ_k < x < 1. Hence g_k(x) is strictly increasing on [0, ξ_k), beginning from the value g_k(0) > 0, and strictly decreasing on (ξ_k, 1) until g_k(1) = 0, so that g_k(ξ_k) is the maximal value of g_k(x) on the interval [0, 1]. In particular, g_k(x) > 0 for all 0 ≤ x < 1, and thus g_k(x) has no zero on the interval [0, 1); therefore 1 is the only zero of g_k(x) on the interval [0, 1].

(ii) If 0 < g_k'(1) ≤ +∞, then g_k'(x) has exactly two zeros on the interval (0, 1). It follows that g_k(x) has a unique zero ρ_g^(k) on the interval [0, 1), and thus g_k(x) has exactly two zeros, ρ_g^(k) and 1, on the interval [0, 1].

Remark 2.2 It follows from Lemma 2.1 that for any k ≥ 2, g_k(x) has a smallest positive zero on [0, 1], denoted by ρ_g^(k) here and hereafter. Also, if g_k'(1) ≤ 0, then ρ_g^(k) = 1.

Lemma 2.2 (i) If either C'(1) < 0, B'(1) < ∞ and A'(1) < ∞, or C'(1) = 0, B'(1) < 0 and A'(1) < ∞, then there exists a positive integer m ≥ 2 such that for all k ≥ m and all x ∈ [0, 1) we have g_{m+k}(x) ↑↑ (k ↑) and g_m(x) > 0. In other words, in both cases we can find m ≥ 2 such that ρ_g^(k) ≡ 1 for all k ≥ m and such that (2.3) holds for all x ∈ (0, 1). (ii) There exist a positive integer m ≥ 2 and a positive value ρ_g ∈ (0, 1) such that for all x ∈ (ρ_g, 1) and any k ≥ 1, the corresponding inequality holds.
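The sign condition driving the dichotomy above can be made explicit. Since A(1) = B(1) = C(1) = 0, each g_k vanishes at 1, and its derivative there (in the form used later in the proof of Theorem 4.4) is:

```latex
% g_k(1) = 0 because A(1) = B(1) = C(1) = 0, while
g_k'(1) \;=\; \frac{k(k-1)}{2}\,C'(1) \;+\; k\,B'(1) \;+\; A'(1).
% Consequently, if C'(1) < 0, or if C'(1) = 0 and B'(1) < 0, then
% g_k'(1) \le 0 for all sufficiently large k, i.e. the case (i) regime.
```

Thus the quadratic (collision) term decides the regime for all large k, which is the domination phenomenon discussed again in Remark 4.2.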

(iii) For each fixed x ∈ (0, 1), there exists a positive integer m ≥ 2 (which depends on the value of x and thus should be denoted by m(x)) such that (2.5) holds.

Regularity
In this section, we consider regularity and uniqueness for BCIPs. We first provide the following useful conclusion.
Lemma 3.1 Suppose that Q is a BCI q-matrix as defined in Eqs. 1.1-1.2, and let P(t) = (p_ij(t); i, j ≥ 0) and Φ(λ) = (φ_ij(λ); i, j ≥ 0) be the Feller minimal Q-function and its Q-resolvent, respectively. Then for any i ≥ 0, t ≥ 0, λ > 0 and |x| < 1, we have (3.1) or, equivalently, (3.2). Proof The Kolmogorov forward equation holds for any i, j ≥ 0. Multiplying both sides by x^j and summing over Z_+, we immediately obtain (3.1). Taking the Laplace transform on both sides of (3.1) then yields (3.2).
Note that, using g_k(x) defined in (2.2), Eq. 3.1 can be rewritten for 0 < x < 1 as (3.3). The form (3.3) is not only simple but also informative; in fact, it reveals some deep properties of the corresponding process. For more details, see below.
Similarly, Eq. 3.2 can be rewritten for 0 < x < 1 as (3.4). Proof Since C'(1) ≤ 0 and B'(1) ≤ 0, by Remark 2.1 we know that C(x) > 0 and B(x) > 0 for all 0 ≤ x < 1, and the conclusion follows. That is, the Feller minimal Q-function is honest and thus Q is regular.
Proof We consider three different cases separately since the methods used to prove the conclusions are different.
First assume that C'(1) < 0, 0 < B'(1) < ∞ and A'(1) < ∞. Then, by (i) of Lemma 2.2, there exists a positive integer m such that expression (3.4) can be rewritten accordingly for 0 < x < 1. Letting x ↑ 1 in the resulting expression immediately yields λ Σ_{j=0}^∞ φ_ij(λ) = 1, which shows that Q is regular. Secondly, in the case C'(1) < 0, 0 < B'(1) < ∞ and A'(1) = +∞, Q is still regular. Indeed, first note that (3.2), proven in Lemma 3.1, can be rewritten in terms of the functions W_j(x). Since C'(1) < 0 and 0 < B'(1) < +∞, we can certainly find an m ≥ 2 such that W_j'(1) ≤ 0 for all j ≥ m. Moreover, it is easily seen that, for fixed j, the function W_j(x) shares properties similar to those of g_k(x) revealed in Lemma 2.2. In particular, we obtain (3.8) for all j ≥ m. Now letting x ↑ 1 in Eq. 3.8 and noting that, for each 1 ≤ j ≤ m, W_j(x) → 0 as x → 1, we get λ Σ_{j=0}^∞ φ_ij(λ) = 1, and hence Q is still regular.
Finally, we consider the case where C'(1) = 0 and 0 < B'(1) < ∞. For this case, first note that Q is regular if and only if, for some λ > 0 (and hence for all λ > 0), we have λ Σ_{j=0}^∞ φ_ij(λ) = 1. In the following proof we fix λ = λ_0 > 0. Now suppose Q is not regular; then for this fixed λ_0 we have (3.9). In the following, let δ(λ_0) = 1 − λ_0 Σ_{j=0}^∞ φ_ij(λ_0) > 0. Recall that C'(1) = 0, so by Remark 2.1 we know that C(x) > 0 for all x ∈ [0, 1), and Eq. 3.2 still holds and can be rewritten accordingly. Since 0 < B'(1) < +∞, by Remark 2.1 there exists ρ_b ∈ (0, 1) such that B(x) < 0 for all x ∈ (ρ_b, 1). By Eq. 3.9 and the property of A(x) revealed in Remark 2.1, we can find a suitable x_0 ∈ (0, 1); without loss of generality, we may assume that ρ_b < x_0 < 1. Then, using Eq. 3.11 and noting that −B(x) > 0, we obtain a differential inequality valid for all x ∈ (x_0, 1). Integrating this inequality over (x_0, 1) yields (3.12). It follows from Eq. 3.12, noting that δ(λ_0) is independent of x and that B(1) = 0 with 0 < B'(1) < ∞, that (3.14) holds. Hence Σ_{j=1}^∞ φ_ij(λ_0) = +∞, which is a contradiction. This ends the proof.
Note that the method used in proving Part (iii) can also be applied to Parts (i) and (ii); however, we prefer the simpler proofs given there.
Theorems 3.1 and 3.2 show that if C'(1) ≤ 0, then Q is regular provided that B'(1) < +∞ (while A'(1) may be either finite or infinite). What happens if C'(1) > 0? We shall show that if C'(1) > 0, then the BCI q-matrix Q is not regular. In order to discuss regularity and uniqueness in this case, we need the following result.

Lemma 3.2 Suppose that
Then Q* is also a conservative q-matrix. Moreover, if Q is regular then so is Q*.
Proof We only need to prove the last conclusion. Suppose that Q* is not regular. Then, by Theorem 2.2.7 of Anderson (1991), we know that the equation Q*Y ≥ λY has a nontrivial, nonnegative and bounded solution for some λ > 0, denoted by Y = (y_i; i ≥ 0). It is easily seen that y_i = 0 for i ≤ k. We now claim that Y = (y_i; i ≥ 0) is also a solution of QY ≥ λY. Indeed, for i ≤ k, we have y_i = 0 and thus (QY)_i ≥ 0 = λy_i; the remaining rows can be checked directly. Therefore, Q is not regular. The proof is finished.
Proof Suppose that C'(1) > 0. By an argument similar to that in Chen et al. (2004), we can find two constants a* and b* such that the required inequalities hold. Now choose ε ∈ (0, b* − a*), let i_0 = ⌊2b_0/ε⌋ + 1, and define a q-matrix Q̃ = (q̃_ij; i, j ≥ 0), where we still use the conventions that b_{−1} = a_{−2} = a_{−1} = 0.
By Lemma 3.2, we only need to prove that Q̃ is not regular. For this purpose, we define a q-matrix Q* = (q*_ij; i, j ∈ Z_+) as follows. Clearly, Q* is a conservative birth-death q-matrix.
Since the relevant series is finite, it is easy to see that Q* is not regular. Hence the corresponding equation has a nontrivial (nonnegative) bounded solution, denoted by U* = (u_i; i ≥ 0); here we have suppressed the constant λ > 0 (we may let λ = 1 if necessary). Clearly u_i > 0 for all i > i_0. It is also easy to see that u_0 = · · · = u_{i_0} = 0 and that u_i is strictly increasing in i. From Eq. 3.18 it is easily seen that, for all k ≥ 1 and i > i_0, (3.19) and (3.20) hold, where I_d, I_b, J_d, J_b and R_b are self-explanatory from the above. Now, by Eqs. 3.18 and 3.19, we get (3.21). Similarly, by Eq. 3.20 we have (3.22). Indeed, Eq. 3.22 is obviously true for i ≤ i_0; as for i > i_0, using Eqs. 3.20-3.21 and 3.17, we can easily obtain it. Thus Q̃ is not regular, and hence, by Lemma 3.2, Q is not regular. The proof is complete.
We now turn to consider the uniqueness problem of Q-functions which satisfy the Kolmogorov forward equations.

Theorem 3.4 There always exists exactly one Q-function that satisfies the Kolmogorov forward equation. That is, for any given Q there always exists only one BCIP, namely the Feller minimal process.
Proof By Theorems 3.1 and 3.2, we only need to consider the case C'(1) > 0. By Theorem 2.2.8 of Anderson (1991), it suffices to show that the equation (3.23) has no nontrivial solution for some (and then for all) λ > 0, where Y · 1 denotes the inner product of Y with the column vector 1 whose components are all 1.
Suppose that Y = (y_k; k ≥ 0) is a nontrivial solution of Eq. 3.23 with λ = 1. Then y_0 > 0 and, letting Y(x) = Σ_{k=0}^∞ y_k x^k, Eq. 3.23 together with some easy algebra yields (3.24). Recall that we have assumed C'(1) > 0. If we further assume that B'(1) > 0, then for all x ∈ (ρ_c ∨ ρ_b, 1) we have both C(x) < 0 and B(x) < 0. Recall that we always have A(x) < 0 for all x ∈ (0, 1). It follows that for all x ∈ (ρ_c ∨ ρ_b, 1), the right-hand side of Eq. 3.24 is negative. However, the left-hand side is obviously positive for all x ∈ (0, 1), a contradiction. If, on the other hand, we further assume that B'(1) ≤ 0, then, again noting that A(x) < 0 for all x ∈ (0, 1) and C(x) < 0 for all x ∈ (ρ_c, 1), Eq. 3.24 yields a conclusion that contradicts 0 < Y(1) < ∞. This ends the proof.

Branching Collision Process with Immigration Stopped at State Zero
In order to consider the ergodic properties of the BCIP, which will be fully discussed in the next section, we first consider a closely linked process, the BCIP stopped at state zero or, more briefly, the absorbing branching collision process with immigration, henceforth denoted ABCI. Revealing the properties of this ABCI process will be essential in analysing the ergodic properties of the BCIP; see the next section. On the other hand, the properties of the ABCI process are of interest in their own right. To this end, for each BCI q-matrix Q = (q_ij; i, j ∈ Z_+) defined in Eqs. 1.1 and 1.2, we define an associated matrix Q^(0) = (q^(0)_ij; i, j ∈ Z_+), called an absorbing branching-collision with immigration q-matrix (henceforth referred to as an ABCI q-matrix), as in (4.1). In other words, the BCI q-matrix Q and the ABCI q-matrix Q^(0) are identical except for the first row. Hence, unlike the original BCI q-matrix Q, where the state zero is not absorbing, Q^(0) possesses an absorbing state zero. We shall see that the ergodic properties of the BCI Q-process have a very close link with the extinction properties of the associated Q^(0)-process; exploiting this close link is, in fact, one of the main methods of this paper. Moreover, we define an absorbing branching-collision with immigration process (henceforth referred to as an ABCIP) as a continuous-time Markov chain on the state space Z_+ whose transition function P(t) = (p_ij(t); i, j ∈ Z_+) is a Q^(0)-function, where Q^(0) is the ABCI q-matrix given in Eq. 4.1 in association with Eqs. 1.1-1.2. For the given ABCI q-matrix Q^(0) defined in Eq. 4.1, denote the Feller minimal Q^(0)-function and Q^(0)-resolvent by F^(0)(t) = (f^(0)_ij(t); i, j ∈ Z_+) and Φ^(0)(λ) = (φ^(0)_ij(λ); i, j ∈ Z_+), respectively. Then, by using the Kolmogorov forward equation, we can immediately obtain the following conclusion.

Lemma 4.1 Suppose Q^(0) is defined as in Eq. 4.1. Then, for the Feller minimal Q^(0)-function and Q^(0)-resolvent, we have, for any i ≥ 1, t ≥ 0, λ > 0 and |x| < 1, the identities (4.3)-(4.6), together with the fact that f^(0)_00(t) = 1.

Proof Using the Kolmogorov forward equations, we can immediately prove all the conclusions stated here.

Lemma 4.2 (i) Each positive state of the Feller minimal Q^(0)-process is transient; the limits lim_{t→∞} f^(0)_i0(t) = v_i (i ≥ 1) exist, and 0 ≤ v_i ≤ 1. (ii) Moreover, (4.7) holds.

Proof First note that each positive state is transient. Indeed, since state zero is absorbing and all positive states form an irreducible class which leads to state zero with positive probability, the positive states form a transient class, and (i) immediately follows. This simple fact can also be proved analytically, but we shall not do so here. The last facts, namely that the limits lim_{t→∞} f^(0)_i0(t) = v_i (i ≥ 1) exist and that 0 ≤ v_i ≤ 1, are obvious. We now prove Eq. 4.7. Integrating Eq. 4.5 with respect to t ∈ [0, ∞), using the facts just proven in (i) above, and introducing the notation (4.8), we see that the left-hand side of (4.8) is finite. Now, firstly, if C'(1) < 0, B'(1) < ∞ and A'(1) < ∞, or if C'(1) = 0 and B'(1) < 0, then, by using condition (2.3) in (i) of Lemma 2.2 together with Eq. 4.8, we get (4.9) for 0 < x < 1. The left-hand side of (4.9) is obviously finite for any x ∈ (0, 1), and thus so is the right-hand side. Since g_m(x) > 0, the conclusion (4.7) follows. In the other case, in which g_m(x) < 0, we can still obtain 0 < Σ_{k=m}^∞ g_k(x) x^{k−2} < +∞, which again shows that Eq. 4.7 is true. Finally, for all other cases, we can use Eq. 2.5 in (iii) of Lemma 2.2 to reach the same conclusion as in Eq. 4.9, and thus Eq. 4.7 follows. The proof is complete.
It is worth noting, by the above two lemmas, that if we denote the Feller minimal Q^(0)-function as above, then the quantities v_i (i ≥ 1) are nothing but the extinction probabilities of the Q^(0)-process starting from state i ≥ 1, together with the obvious fact that v_0 = 1. In fact, the quantities v_i (i ≥ 1) will be our main interest in this section. Furthermore, we denote, as above, (4.10). Our first conclusion regarding the Q^(0)-process is the following satisfactory uniqueness criterion.

Theorem 4.1 For any given ABCI q-matrix Q^(0) as in Eq. 4.1, the ABCI process is always unique, and it is just the Feller minimal Q^(0)-process. Moreover, this Feller minimal Q^(0)-process is honest, i.e. Q^(0) is regular, if and only if C'(1) ≤ 0, provided that B'(1) < ∞ and A'(1) < ∞.
Proof The proof is similar to those of Theorems 3.1-3.2 for the BCI processes.
From now on until the end of this section, we shall assume C'(1) ≤ 0, B'(1) < ∞ and A'(1) < ∞, so that, by Theorem 4.1, the Feller minimal Q^(0)-process is honest.
Let {X(t); t ≥ 0} be the honest absorbing branching collision process with the given ABCI q-matrix Q^(0) defined in Eq. 4.1. Let τ_0 = inf{t ≥ 0 : X(t) = 0} and v_i = P(τ_0 < ∞ | X(0) = i), i ≥ 1, be the extinction time and extinction probability, respectively. We see that the v_i (i ≥ 1) are the same as given in (i) of Lemma 4.2. Denote G_i(x) = Σ_{k=1}^∞ β_ik x^k, where the β_ik are as in (4.8). Then, by Eq. 4.10, we know that G_i(x) is well defined for all |x| < 1. We are mostly interested in finding conditions under which all v_i (i ≥ 1) equal 1. Before obtaining these conditions, we first provide the following simple yet important conclusion.
Theorem 4.2 Let v_i = P(τ_0 < ∞ | X(0) = i) be the extinction probabilities, starting from state i ≥ 1, of the Feller minimal Q^(0)-process. Then for any |x| < 1, we have (4.12) or, equivalently, using the notation introduced in Eqs. 4.8 and 2.2, (4.13). Proof Integrating (4.3) with respect to t ∈ [0, ∞) and using the facts stated in Lemma 4.2 immediately yields (4.12). It is also obvious that Eqs. 4.12 and 4.13 are equivalent.
We are now ready to consider the extinction probability of the ABCI process. The following conclusion is one of the key results of this paper. Proof For the first case, by (i) of Lemma 2.2, we know that under the conditions of the theorem there exists a positive integer m ≥ 2 such that g_k(x) > 0 for all k > m and all 0 < x < 1. It then follows from Eq. 4.13 that (4.14) holds. Now, letting x ↑ 1 in (4.14) and noting that 0 < β_ik < ∞ for each fixed 1 ≤ k ≤ m, we immediately obtain v_i ≥ 1. Since v_i ≤ 1 always holds, v_i = 1 for all i ≥ 1. For the second case, the conclusion follows immediately from (i) of Lemma 2.2 and the argument just given.
By Theorem 4.3, we know that either the conditions C'(1) < 0, B'(1) < ∞ and A'(1) < ∞, or the conditions C'(1) = 0, B'(1) < 0 and A'(1) < ∞, are sufficient for v_i = 1 for all i ≥ 1. Of course, these conditions may not be necessary. However, we shall not consider these more subtle cases since, interestingly, the conditions given in Theorem 4.3 also guarantee that the mean extinction time is finite. As this question is more essential, we now turn to it. To this end, let E_i(τ_0) be the mean extinction time starting from state i ≥ 1.

Theorem 4.4 If either C'(1) < 0, B'(1) < ∞ and A'(1) < ∞, or C'(1) = 0, B'(1) < 0 and A'(1) < ∞, then E_i(τ_0) < ∞ for all i ≥ 1.
Proof By Theorem 4.3, we see that under the given conditions we have v_i = 1 for all i ≥ 1. Hence Eq. 4.13 reads as (4.15). By Theorem 3.1, the right-hand side of (4.15) can be written as Σ_{k=1}^{m−1} β_ik g_k(x) x^{k−2} + Σ_{k=m}^∞ β_ik g_k(x) x^{k−2}, where g_k(x) ≥ g_m(x) > 0 for all k ≥ m and all x ∈ (0, 1). Therefore, Eq. 4.15 yields the corresponding inequality. But by Remark 2.1 we know that the condition C'(1) < 0 implies C(x) > 0 for all 0 ≤ x < 1, and therefore from this inequality we can get (4.16). Now, letting x ↑ 1 in (4.16), the left-hand side of (4.16) tends to (−i)/C'(1), which is a finite positive value since C'(1) < 0. The first term on the right-hand side tends to Σ_{k=1}^{m−1} β_ik g_k'(1)/C'(1), which is finite. Indeed, for each 1 ≤ k ≤ m − 1 we have g_k'(1) = (k(k−1)/2) C'(1) + kB'(1) + A'(1), which is finite, and thus so is the finite sum Σ_{k=1}^{m−1} β_ik g_k'(1)/C'(1). Therefore, as x ↑ 1, the first term on the right-hand side of Eq. 4.16 is finite, and thus so is the second term. But lim_{x↑1} g_m(x)/C(x) = g_m'(1)/C'(1) is a finite positive value, and hence Σ_{k=m}^∞ β_ik < +∞, which yields ∫_0^∞ (1 − f^(0)_i0(t)) dt < ∞ by the honesty of the Feller minimal Q^(0)-function. But ∫_0^∞ (1 − f^(0)_i0(t)) dt is nothing but E_i(τ_0), and hence the conclusion follows. Now, if C'(1) = 0, B'(1) < 0 and A'(1) < ∞, then by (i) of Lemma 2.2, Eq. 4.13 can still be written in the same form, and hence we still have (4.17). If we let x ↑ 1 in Eq. 4.17, then the left-hand side of (4.17) tends to (−i)/B'(1), which is a finite positive value since B'(1) < 0. Then, by Eq. 4.17, the right-hand side of (4.17) also tends to a finite positive value as x ↑ 1. However, it is easy to see that the first term on the right-hand side of (4.17) tends to Σ_{k=1}^{m−1} β_ik g_k'(1)/B'(1), which is finite. Therefore, the second term on the right-hand side of Eq. 4.17 also tends to a finite positive value. But for all 0 < x < 1 we have g_m(x) > 0 and B(x) > 0, and lim_{x↑1} g_m(x)/B(x) = g_m'(1)/B'(1), which is also a finite positive value. Hence we must have Σ_{k=m}^∞ β_ik < +∞, which implies E_i(τ_0) < ∞. This ends the proof.
Corollary 4.5 If C'(1) < 0, B'(1) < 0 and A'(1) < ∞, then both Σ_{k=1}^∞ k ∫_0^∞ f^(0)_ik(t) dt < ∞ and Σ_{k=1}^∞ k^2 ∫_0^∞ f^(0)_ik(t) dt < ∞ (of course, the latter implies the former).
Proof Note that, by Eq. 4.12, we have for 0 ≤ x < 1 the identity (4.18). Now, by the proof and conclusion of Theorem 4.4, if C'(1) < 0, B'(1) < ∞ and A'(1) < ∞, then lim_{x↑1} G_i(x) = G_i(1) < ∞, which means that the last term on the right-hand side of Eq. 4.18 tends to the finite (but negative) value (A'(1)/C'(1)) G_i(1). But the left-hand side of Eq. 4.18 tends to the positive finite value (−i)/C'(1). It follows that the sum of the first two terms on the right-hand side of Eq. 4.18 must tend to a finite positive value as x ↑ 1. Since we have further assumed that B'(1) < 0, the corresponding limit as x ↑ 1 is a positive finite value, and thus both G_i'(1) and G_i''(1) must be finite. However, it is easy to see that G_i'(1) = Σ_{k=1}^∞ k ∫_0^∞ f^(0)_ik(t) dt, and thus the conclusion follows.
Note that the intuitive meaning of Corollary 4.5 is that if C'(1) < 0, B'(1) < 0 and A'(1) < ∞, then the mean and variance of the time spent in the positive states by the Feller minimal Q^(0)-process are both finite. That is, if both the collision and branching components tend to extinction (C'(1) < 0 and B'(1) < 0), then the process will tend to extinction "strongly" and "quickly" unless the immigration effect is extraordinarily strong (i.e. A'(1) = ∞).
Remark 4.2 Intuitively speaking, the conclusions stated in Theorems 4.3 and 4.4 are clear: the collision component dominates the branching and immigration components. This is no surprise, since the collision component is of quadratic form, which is, stochastically speaking, "stronger" or more effective than the branching component (which is of linear form) and the immigration component (which is of "constant" form). In particular, Corollary 4.5 tells us that if both the collision and branching components tend to extinction, then the immigration component, which tends to rescue the process from extinction, has little effect unless the immigration is extremely strong (i.e. A'(1) = ∞).
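The domination effect described in Remark 4.2 can be checked by simulation. The sketch below is illustrative only: since the explicit q-matrix of Eqs. 1.1-1.2 is model data, the code assumes a standard interacting-branching form (a collision i → i + k − 2 at rate (i choose 2) c_k, a branching event i → i + k − 1 at rate i b_k, immigration i → i + k at rate a_k, with state 0 absorbing as in the ABCI process), and the rates are hypothetical, chosen so that C'(1) < 0 and B'(1) < 0; by Theorem 4.3, extinction should then be certain.

```python
# Monte Carlo check (assumed rate structure and hypothetical rates, see above).
import random
from math import comb

c = {0: 2.0, 1: 1.0, 3: 0.5}   # collision offspring rates; c_2 is the diagonal
b = {0: 2.0, 2: 0.5}           # branching rates; b_1 is the diagonal
a = {1: 1.0}                   # immigration rates; a_0 is the diagonal

def simulate_extinction(i, horizon=500.0, rng=random):
    """Simulate the stopped chain from state i; return (absorbed?, time)."""
    t, state = 0.0, i
    while state > 0 and t < horizon:
        jumps = []
        for k, rate in c.items():          # collision: i -> i + k - 2
            r = comb(state, 2) * rate
            if r > 0:
                jumps.append((state + k - 2, r))
        for k, rate in b.items():          # branching: i -> i + k - 1
            jumps.append((state + k - 1, state * rate))
        for k, rate in a.items():          # immigration: i -> i + k
            jumps.append((state + k, rate))
        total = sum(r for _, r in jumps)
        t += rng.expovariate(total)        # exponential holding time
        u, acc = rng.random() * total, 0.0
        for nxt, r in jumps:               # choose the jump proportionally
            acc += r
            if u <= acc:
                state = nxt
                break
    return state == 0, t

random.seed(1)
runs = [simulate_extinction(3) for _ in range(2000)]
v3 = sum(absorbed for absorbed, _ in runs) / len(runs)
print(v3)   # estimated extinction probability v_3 (should be close to 1 here)
```

With these rates, C'(1) = 1 − 7 + 1.5 = −4.5 < 0 and B'(1) = −2.5 + 1 = −1.5 < 0, so nearly every simulated path is absorbed quickly, in line with Theorems 4.3 and 4.4.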

Remark 4.3
Note that in proving Theorem 4.4 we used Eq. 4.13. However, Eqs. 4.13 and 4.12 are equivalent, and therefore we could use Eq. 4.12 to obtain the same result. In particular, if C'(1) < 0 and B'(1) < 0, then both G_i(1) and G_i'(1) are finite. Hence, letting x ↑ 1 in Eq. 4.12, we may get (4.19). Although Eq. 4.19 does not give the value of E_i(τ_0), it does provide some information about it.

Ergodicity and Equilibrium Distribution
We now turn back to the branching collision processes with immigration. From the results obtained in the previous section, we immediately obtain the following important conclusion. Proof This is clear. In fact, it is well known that there is a close relationship between the BCIP and the Feller minimal Q^(0)-process discussed in the previous section. Indeed, since the BCI q-matrix is irreducible, the BCIP is recurrent if and only if the extinction probability of the Q^(0)-process is 1 for all i ≥ 1. Furthermore, the BCIP is positive recurrent if and only if the mean extinction times E_i(τ_0) (i ≥ 1) are all finite (which, by irreducibility, is equivalent to E_{i_0}(τ_0) < ∞ for some particular i_0 ≥ 1) and, in addition, the mean return time from state zero to the positive states is finite. This latter condition is guaranteed by A'(1) < ∞, which we have assumed. Therefore, Theorem 5.1 follows from Theorems 4.3 and 4.4. This completes the proof.
Theorem 5.1 guarantees that, under the given conditions, there exists a unique equilibrium distribution {π_i; i ≥ 0}. We are now interested in finding this equilibrium distribution. To this end, let Π(x) = Σ_{j=0}^∞ π_j x^j (|x| ≤ 1). (5.1) Theorem 5.1 guarantees that Π(x) is well defined and that π_j > 0 for all j ≥ 0, with Π(1) = Σ_{j=0}^∞ π_j = 1.
By Eq. 5.2, if we let y denote the unknown function and p(x) the corresponding coefficient function, we arrive at the ordinary differential equation (5.15). It should be pointed out that (5.15) is a second-order linear differential equation, for which a huge number of results are available; see the above-mentioned book (Hsieh and Sibuya 1999) and the references therein. By the general theory of ordinary differential equations, equation (5.15) has two linearly independent solutions. However, under the conditions provided in our Theorem 5.1, there is one (and only one) positive and summable solution. After choosing and normalizing this solution, we obtain the generating function of the stationary distribution as required.
By choosing suitable u_0 > 0 and u_1 > 0, we can certainly obtain a positive and summable sequence {u_j; j ≥ 0}, from which we get the required equilibrium distribution {π_j; j ≥ 0}. Also, without loss of generality, we may let u_0 = 1 in the above computations.
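The recursive computation of {u_j} just described can be sketched in code. The concrete ODE (5.15) and its coefficient polynomials are model-specific, so the polynomials p2, p1, p0 below are hypothetical placeholders; only the recursion itself, obtained by matching powers of x in p2·y'' + p1·y' + p0·y = 0, is generic. Whether the resulting sequence is positive and summable must still be checked for the actual coefficients.

```python
def series_solution(p2, p1, p0, n, u0=1.0, u1=1.0):
    """First n coefficients u_j of a power-series solution of
    p2(x) y'' + p1(x) y' + p0(x) y = 0, given u_0 and u_1.
    p2, p1, p0 are polynomial coefficient lists (index = power of x);
    the recursion assumes p2[0] != 0."""
    u = [u0, u1]
    for t in range(n - 2):                  # zero the coefficient of x^t
        s = 0.0
        for m, coef in enumerate(p2):       # contribution of p2 * y''
            k = t - m + 2
            if 0 <= k < len(u):
                s += coef * k * (k - 1) * u[k]
        for m, coef in enumerate(p1):       # contribution of p1 * y'
            k = t - m + 1
            if 0 <= k < len(u):
                s += coef * k * u[k]
        for m, coef in enumerate(p0):       # contribution of p0 * y
            k = t - m
            if 0 <= k < len(u):
                s += coef * u[k]
        # the only remaining unknown term is p2[0]*(t+2)*(t+1)*u_{t+2}
        u.append(-s / (p2[0] * (t + 2) * (t + 1)))
    return u

# Hypothetical placeholder ODE: (1 - x) y'' + 0.5 y' - 0.5 y = 0, u_0 = u_1 = 1.
u = series_solution(p2=[1.0, -1.0], p1=[0.5], p0=[-0.5], n=60)
total = sum(u)
pi = [x / total for x in u]                 # normalized: pi_j = u_j / sum(u)
```

With the actual coefficients of (5.15), the same routine produces {u_j}, and normalization yields the equilibrium distribution {π_j}.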

An Example
We now use an example to illustrate the results obtained in the previous sections. We fix the three sequences {c_j}, {b_j} and {a_j} as follows.
We now use two concrete examples to illustrate our procedure. To save time and resources, we carry out 50 iteration steps, which is usually enough in practical situations. Obviously, the conditions stated in Eq. 7.4 are satisfied. Then, by performing the direct calculations stated in (6.13), (6.14) and (6.15), we obtain the following results. For simplicity, only the first 50 terms are recorded. The results up to the 14th term are reported below, the remaining terms being bigger than 10^6.
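As a cross-check on the iterative procedure, one can also truncate the q-matrix at a high level N and solve the stationary equations directly. The sketch below is again illustrative only: it uses the same assumed rate structure as the earlier simulation sketch (collision i → i + k − 2 at rate (i choose 2) c_k, branching i → i + k − 1 at rate i b_k, immigration i → i + k at rate a_k), with hypothetical rates, not the paper's two concrete examples.

```python
# Truncate the assumed BCI q-matrix at level N and solve pi Q = 0, sum(pi) = 1.
import numpy as np
from math import comb

c = {0: 2.0, 1: 1.0, 3: 0.5}   # collision rates; diagonal c_2 implicit
b = {0: 2.0, 2: 0.5}           # branching rates; diagonal b_1 implicit
a = {1: 1.0}                   # immigration rates; diagonal a_0 implicit

N = 200
Q = np.zeros((N, N))
for i in range(N):
    for k, r in c.items():      # collision: i -> i + k - 2 at rate C(i,2)*r
        j = i + k - 2
        if 0 <= j < N and j != i:
            Q[i, j] += comb(i, 2) * r
    for k, r in b.items():      # branching: i -> i + k - 1 at rate i*r
        j = i + k - 1
        if 0 <= j < N and j != i:
            Q[i, j] += i * r
    for k, r in a.items():      # immigration: i -> i + k at rate r
        j = i + k
        if 0 <= j < N and j != i:
            Q[i, j] += r
    Q[i, i] = -Q[i].sum()       # conservative diagonal on the truncated space

# Solve pi Q = 0 together with the normalization sum(pi) = 1 (least squares).
A = np.vstack([Q.T, np.ones(N)])
rhs = np.zeros(N + 1)
rhs[-1] = 1.0
pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
print(pi[:5])                   # leading stationary probabilities
```

For these rates C'(1) < 0 and B'(1) < 0, so Theorem 5.1 applies and the truncated solution approximates the unique equilibrium distribution; the truncation level N should be raised until the leading probabilities stabilize.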