1 Introduction

Recent progress in information technology has led to an explosive increase in Internet traffic. In addition to the development of the Internet of Things (IoT), the COVID-19 pandemic has driven the need for online communication, making well-networked environments essential. As a result, there is concern about a shortage of wireless resources, such as the radio bandwidth available to unlicensed users.

Cognitive wireless, a building block of 5G and Beyond-5G (B5G) technologies, is expected to alleviate the bandwidth scarcity problem. Cognitive wireless networking separates users into two classes: primary users (PUs) and secondary users (SUs). PUs are licensed and have a dedicated bandwidth, while SUs can only use that bandwidth opportunistically. There are three main types of cognitive technology: overlay access, underlay access and interweave access (Nasser et al., 2021). In underlay access, SUs are allowed to transmit concurrently with PUs over the same channels; however, their traffic must not exceed a certain threshold so as to keep the interference on PUs below an acceptable level. In overlay access, SUs may occupy the same channel simultaneously with PUs until the capacity of the channel is reached. In this case, an SU sends its data by relaying the PUs' traffic. This type of technology thus requires the cooperation of all users and may result in an invasion of PUs' privacy. In interweave access, SUs are allowed to transmit at maximum power, but only when PUs are absent. This paradigm is also known as classical cognitive radio (CR) (Nasser et al., 2021).

In this paper, we focus on interweave access. SUs must sense the availability of the channels before using the frequency bands. If an SU finds an idle channel after sensing, it occupies the channel and starts communication; otherwise, the SU must sense again to find an idle channel at a later time.

This sensing behavior of SUs resembles retrial queues; the reader can refer to Artalejo and Gómez-Corral (2008) and Falin and Templeton (1997) for surveys of retrial queues. In retrial queues, arriving customers are blocked when the servers are already fully occupied. These blocked customers instead enter a virtual waiting room, called the orbit, and retry after a random waiting time until they successfully complete their communication (Phung-Duc, 2019). In the cognitive radio model, every arriving SU is sent to the sensing pool (the orbit of the retrial queue) and senses for an idle channel.

Queueing systems for cognitive radio networks have been studied extensively (Palunčić et al., 2018). Salameh et al. (2017) considered a model with a stochastic choice of channels and a finite number of simultaneously sensing users (i.e., a finite sensing pool). Akutsu and Phung-Duc (2019) and Phung-Duc et al. (2021) assume that the size of the sensing pool is infinite.

Diffusion limits for queueing systems were studied in depth in Halfin and Whitt (1981) and Whitt (2004), and those for retrial queues were studied in Moiseev et al. (2020), Nazarov et al. (2019) and Nazarov et al. (2020, 2020b); the latter line of work uses the characteristic function approach.

In this paper, using the asymptotic-diffusion method (Moiseev et al., 2020; Nazarov et al., 2019, 2020, 2020b), we focus on the situation where it takes SUs a long time to sense the availability of channels. In this case, evaluating the number of SUs in the orbit is extremely difficult with a conventional method such as the level-dependent quasi-birth-and-death (QBD) process (Phung-Duc et al., 2010). Because the number of SUs in the orbit is large, we would need to truncate the orbit at an extremely large truncation level, denoted by \(N^*\). The complexity of the resulting algorithm is proportional to the cost of computing \(N^*\) matrix inverses and is thus extremely high. From a computational point of view, the asymptotic-diffusion method is more useful in this case and complements the level-dependent QBD approach.

As \(\sigma \) (the sensing rate) approaches 0, the number of SUs in the sensing pool diverges, but a scaled version of this number converges to a deterministic process. Furthermore, we study the second-order asymptotics, in which the scaled and centered number of SUs in the orbit converges weakly to a diffusion process. Finally, the limiting results are used to approximate stationary performance measures.

The main result of this study is a necessary stability condition for the steady-state regime, which turns out to be consistent with the sufficient condition in Phung-Duc et al. (2021). In addition, we show the uniqueness of the stationary solution of the differential equation that determines the asymptotic number of SUs in the sensing pool in the stationary regime.

The paper consists of seven sections. In Sect. 2, we describe the model and its mathematical formulation. Next, we carry out the diffusion analysis of cognitive wireless networks in Sect. 3. In Sect. 4, we prove the main results on the stability condition of our model. Using the continuous probability distribution obtained from the diffusion limit, we construct approximations for the discrete distribution in Sect. 5. Section 6 compares the simulation results with the approximation obtained from the diffusion limit. We conclude the paper in Sect. 7.

2 Model description and preliminaries

In this section, we model cognitive wireless networks as a queueing system with c servers, where PUs and SUs arrive according to Poisson processes with rates \(\lambda _1\) and \(\lambda _2\), respectively. The transmission times of PUs and SUs are exponentially distributed with parameters \(\mu _1\) and \(\mu _2\), respectively. The arrival processes of both user types and the service times are assumed to be mutually independent. A new PU can occupy a channel and transmit unless all channels are occupied by other PUs; otherwise, it is blocked and leaves the system. Thus, from the PUs' point of view, the model is an Erlang-B loss system. Arriving SUs must enter the sensing pool and sense for available channels. The sensing times of the SUs follow the exponential distribution with parameter \(\sigma \) and are independent across SUs.

When a new PU arrives at the system while all c channels are occupied by other SUs and PUs, the PU interrupts the transmission of an SU (if any) and takes over that channel. The interrupted SU must return to the sensing pool (orbit) and sense again.

Gómez-Corral et al. (2005) considered the related assumption that when all channels are busy with priority customers, a newly arriving priority unit balks. In the model of Salameh et al. (2017), the sensing pool is limited in size, whereas our model assumes an infinite sensing pool. Note that the service time distribution of an interrupted SU is the same as that of a newly arrived SU because of the memoryless property of the exponential distribution.

Let us denote:

  • \(n_1(t)\): the number of PUs that occupy channels at instant t,

  • \(n_2(t)\): the number of SUs that occupy channels at instant t,

  • i(t): the number of SUs in the sensing pool at instant t.

Let \(P(n_1, n_2, i, t) = P\left\{ n_1(t)=n_1, \, n_2(t)=n_2, \, i(t)=i \right\} \) denote the joint probability distribution of the process \(\{ (n_1(t), \, n_2(t), \, i(t) ) \mid t \ge 0 \}\). Under the assumptions of the model, the process is a three-dimensional Markov chain. The transition rate from \(x = (n_1, n_2, i)\) to y (\(x \ne y\)) is given as follows.

$$\begin{aligned} q_{x, y} = {\left\{ \begin{array}{ll} \lambda _1 &{} \textrm{if} \ y = (n_1+1, n_2, i), n_1 + n_2 \le c-1, \\ \lambda _1 &{} \textrm{if} \ y = (n_1+1, n_2-1, i+1), n_1 + n_2 = c, n_2 \ge 1, \\ \lambda _2 &{} \textrm{if} \ y = (n_1, n_2, i+1), \\ n_1 \mu _1 &{} \textrm{if} \ y= (n_1-1, n_2, i), n_1 \ge 1, \\ n_2 \mu _2 &{} \textrm{if} \ y= (n_1, n_2-1, i), n_2 \ge 1, \\ i \sigma &{} \textrm{if} \ y = (n_1, n_2 + 1, i-1), n_1 + n_2 \le c-1, \\ 0 &{} \textrm{otherwise}. \end{array}\right. } \end{aligned}$$
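To make the transition structure concrete, the following minimal Python sketch (our own illustration, not code from the paper) enumerates the outgoing transition rates \(q_{x,y}\) of a given state \(x = (n_1, n_2, i)\). The truncation level `i_max` is an assumption needed to keep the state space finite; it plays the role of the truncation level \(N^*\) discussed in the introduction.

```python
# A minimal sketch (our illustration): outgoing rates q_{x,y} of the
# three-dimensional Markov chain; i_max is an assumed orbit truncation.
def transitions(state, lam1, lam2, mu1, mu2, sigma, c, i_max):
    """Yield (next_state, rate) pairs out of x = (n1, n2, i)."""
    n1, n2, i = state
    if n1 + n2 <= c - 1:                 # PU arrival, a channel is free
        yield (n1 + 1, n2, i), lam1
    elif n2 >= 1 and i < i_max:          # PU arrival preempts an SU
        yield (n1 + 1, n2 - 1, i + 1), lam1
    if i < i_max:                        # SU arrival joins the sensing pool
        yield (n1, n2, i + 1), lam2
    if n1 >= 1:                          # PU transmission ends
        yield (n1 - 1, n2, i), n1 * mu1
    if n2 >= 1:                          # SU transmission ends
        yield (n1, n2 - 1, i), n2 * mu2
    if i >= 1 and n1 + n2 <= c - 1:      # an orbiting SU finds a free channel
        yield (n1, n2 + 1, i - 1), i * sigma
```

Iterating over all states \((n_1, n_2, i)\) with \(n_1+n_2 \le c\) and \(i \le i_{\max }\) yields the generator of the truncated chain used by the level-dependent QBD approach.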

Our goal is to obtain the scaling limits of \((n_1(t), n_2(t))\) and i(t). We solve this problem by the asymptotic-diffusion method (Moiseev et al., 2020; Nazarov et al., 2019; Nazarov et al., 2020) under the asymptotic condition when the sensing time is long: \(\sigma \rightarrow 0\).

The Kolmogorov forward equations are given as follows.

  (i)

    \(n_1 + n_2 = 0\), \(i \ge 0\)

    $$\begin{aligned} \begin{aligned} \frac{{{d}} P(0, 0, i, t)}{{{d}} t} =&-(\lambda _1 + \lambda _2 + i \sigma ) P(0, 0, i, t) + \lambda _2 P(0, 0, i-1, t) \\&+ \mu _1 P(1, 0, i, t) + \mu _2 P(0, 1, i, t). \end{aligned} \end{aligned}$$
  (ii)

    \(1 \le n_1 + n_2 \le c-1\), \(i \ge 0\)

    $$\begin{aligned} \begin{aligned} \frac{{{d}} P(n_1, n_2, i, t)}{{{d}} t} =&-(\lambda _1 + \lambda _2 + n_1 \mu _1 + n_2 \mu _2 + i \sigma ) P(n_1, n_2, i, t) \\&+ \lambda _2 P(n_1, n_2, i-1, t) + \lambda _1 P(n_1-1, n_2, i, t) \\&+ (n_1+1) \mu _1 P(n_1 +1, n_2, i, t) \\&+ (n_2+1)\mu _2 P(n_1, n_2+1, i, t) \\&+ (i+1) \sigma P(n_1, n_2-1, i+1, t). \end{aligned} \end{aligned}$$
  (iii)

    \(n_1 + n_2 = c, n_2 \ge 1\), \(i \ge 0\)

    $$\begin{aligned} \begin{aligned} \frac{{{d}} P(n_1, n_2, i, t)}{{{d}} t} =&-(\lambda _1 + \lambda _2 + n_1 \mu _1 + n_2 \mu _2)P(n_1, n_2, i, t) \\&+ \lambda _1 \{ P(n_1-1, n_2, i, t) + P(n_1-1, n_2+1, i-1, t) \} \\&+ \lambda _2 P(n_1, n_2, i-1, t) \\&+(i+1) \sigma P(n_1, n_2-1, i+1, t). \end{aligned} \end{aligned}$$
  (iv)

    \(n_1 =c\), \(i \ge 0\)

    $$\begin{aligned} \begin{aligned} \frac{{{d}} P(c, 0, i, t)}{{{d}} t} =&-(\lambda _2 +c \mu _1) P(c, 0, i, t) + \lambda _2 P(c, 0, i-1, t) \\&+\lambda _1 \{ P(c-1, 0, i, t) + P(c-1, 1, i-1, t) \}. \end{aligned} \end{aligned}$$

We note that \(P(n_1, n_2, i, t) = 0\) for \(n_1 < 0\) or \(n_2 < 0\) or \(i < 0\) and let \(j = \sqrt{-1}\) be the imaginary unit. The partial characteristic function is defined by

$$\begin{aligned} H(n_1, n_2, s, t) = \sum _{i=0}^{\infty } e^{j s i} P(n_1, n_2, i, t). \end{aligned}$$

We obtain the following differential equations.

  (i)

    \(n_1 + n_2 = 0\)

    $$\begin{aligned} \frac{\partial H(0, 0, s, t)}{\partial t} =&-(\lambda _1 + \lambda _2) H(0, 0, s, t) + e^{js} \lambda _2 H(0, 0, s, t) \nonumber \\&+ \mu _1 H(1, 0, s, t) + \mu _2 H(0, 1, s, t) + j \sigma \frac{\partial H(0, 0, s, t)}{\partial s}. \end{aligned}$$
    (1)
  (ii)

    \(1 \le n_1 + n_2 \le c-1\)

    $$\begin{aligned} \frac{{{\partial }} H(n_1, n_2, s, t)}{\partial t} =&-(\lambda _1 + \lambda _2 + n_1 \mu _1 + n_2 \mu _2) H(n_1, n_2, s, t) \nonumber \\&+ e^{js} \lambda _2 H(n_1, n_2, s, t) + \lambda _1 H(n_1-1, n_2, s, t) \nonumber \\&+ (n_1+1) \mu _1 H(n_1+1, n_2, s, t) \nonumber \\&+ (n_2+1) \mu _2 H(n_1, n_2+1, s, t) \nonumber \\&+ j \sigma \frac{\partial H(n_1, n_2, s, t)}{\partial s} - e^{-js} j \sigma \frac{\partial H(n_1, n_2-1, s, t)}{\partial s}. \end{aligned}$$
    (2)
  (iii)

    \(n_1 + n_2 =c, n_2 \ge 1\)

    $$\begin{aligned} \frac{{{\partial }} H(n_1, n_2, s, t)}{\partial t} =&-(\lambda _1 + \lambda _2 + n_1 \mu _1 + n_2 \mu _2) H(n_1, n_2, s, t) \nonumber \\&+ e^{js} \lambda _2 H(n_1, n_2, s, t) + \lambda _1 \{ H(n_1-1, n_2, s, t) \nonumber \\&+ e^{js} H(n_1-1, n_2+1, s, t) \} \nonumber \\&- e^{-j s} j \sigma \frac{\partial H(n_1, n_2-1, s, t)}{\partial s}. \end{aligned}$$
    (3)
  (iv)

    \(n_1 =c\)

    $$\begin{aligned} \frac{{{\partial }} H(c, 0, s, t)}{\partial t} =&-(\lambda _2 + c \mu _1) H(c, 0, s, t) + e^{js} \lambda _2 H(c, 0, s, t) \nonumber \\&+ \lambda _1 \{ H(c-1, 0, s, t) + e^{js} H(c-1, 1, s, t) \}. \end{aligned}$$
    (4)

By using linear finite difference operators \({\textbf{A}}\), \({\textbf{B}}\), \({\textbf{C}}\), \({{\textbf{I}}}_0\), \({{\textbf{I}}}_1\), we can rewrite (1)–(4) as follows.

$$\begin{aligned} \frac{\partial {\textbf{H}}(s, t)}{\partial t} = \left\{ {\textbf{A}} + e^{j s} (\lambda _1 {\textbf{B}} + \lambda _2 {\textbf{C}}) \right\} {\textbf{H}}(s,t) + ({\textbf{I}}_0- e^{- j s} {\textbf{I}}_1) j \sigma \frac{\partial {\textbf{H}}(s, t)}{\partial s}, \end{aligned}$$
(5)

where \({\textbf{H}}(s, t)\) is a \((c+1) \times (c+1)\) upper-left triangular matrix whose elements are \(H(n_1, n_2, s, t)\) for \(n_1 \ge 0\), \(n_2 \ge 0\), \(n_1+n_2 \le c\). The operators in (5) are defined as:

$$\begin{aligned} {{\textbf{A}}}{{\textbf{H}}}(s, t)_{n_1, n_2} = {\left\{ \begin{array}{ll} &{}-(\lambda _1 + \lambda _2)H(0,0,s,t) + \mu _1 H(1, 0, s, t) + \mu _2 H(0,1,s,t), \\ &{}\qquad (n_1 + n_2 = 0), \\ &{}-(\lambda _1 + \lambda _2 + n_1 \mu _1 + n_2 \mu _2) H(n_1, n_2, s, t) \\ &{}\quad + \lambda _1 H(n_1-1, n_2, s, t) + (n_1+1) \mu _1 H(n_1+1, n_2, s, t) \\ &{}\quad + (n_2+1) \mu _2 H(n_1, n_2+1, s, t), \ \ (1 \le n_1 + n_2 \le c-1), \\ &{}-(\lambda _1 + \lambda _2 + n_1 \mu _1 + n_2 \mu _2) H(n_1, n_2, s, t) \\ &{}\quad + \lambda _1 H(n_1-1, n_2, s, t), \ \ (n_1+n_2 = c, n_2 \ge 1), \\ &{}-(\lambda _2 + c \mu _1) H(c, 0, s, t) + \lambda _1 H(c-1, 0, s, t), \ \ (n_1 = c), \end{array}\right. } \\ {\textbf{B}} {\textbf{H}}(s, t)_{n_1, n_2} = {\left\{ \begin{array}{ll} H(n_1-1, n_2+1, s, t), &{} (n_1 + n_2 = c, n_1 \ge 1), \\ 0, &{} (\textrm{otherwise}), \end{array}\right. } \\ {{\textbf{C}}}{{\textbf{H}}}(s, t)_{n_1, n_2} = H(n_1, n_2, s, t), \ \ (0 \le n_1 + n_2 \le c), \\ {\textbf{I}}_0 {\textbf{H}}(s, t)_{n_1, n_2} = {\left\{ \begin{array}{ll} H(n_1, n_2, s, t), &{} (n_1 + n_2 \le c-1), \\ 0, &{} (\textrm{otherwise}), \end{array}\right. } \\ {\textbf{I}}_1 {\textbf{H}}(s, t)_{n_1, n_2} = {\left\{ \begin{array}{ll} H(n_1, n_2-1, s, t), &{} (1 \le n_2 \le c), \\ 0, &{} (\textrm{otherwise}). \end{array}\right. } \end{aligned}$$

Summing (1)–(4), we obtain

$$\begin{aligned} \left\{ \frac{\partial }{\partial t} \sum _{n_1+n_2 \le c} H(n_1, n_2, s, t) \right\} = (e^{js} -1) \Big \{ \lambda _1 \sum _{\begin{array}{c} n_1+n_2=c \\ n_2 \ge 1 \end{array}} H(n_1, n_2, s, t) \nonumber \\ + \lambda _2 \sum _{n_1+n_2 \le c} H(n_1, n_2, s, t) + e^{-js} j \sigma \sum _{n_1+n_2 \le c-1} \frac{\partial H(n_1, n_2, s, t)}{\partial s} \Big \}. \end{aligned}$$
(6)

Let \(\textbf{S}_1\) be the summing operator for the elements \(n_1 + n_2 = c \) and \(n_2 \ge 1\) and \(\textbf{S}_2\) be that for the elements \(n_1 + n_2 \le c-1\). Furthermore, let \(\textbf{S}\) be the total summing operator. We rewrite (6) in the following form

$$\begin{aligned} \frac{\partial }{\partial t} [\textbf{S H}(s, t)] = (e^{js} -1) \Big \{ \lambda _1 \mathbf{S_1 H} (s, t) + \lambda _2 \textbf{S H}(s, t) + e^{-js} j \sigma \frac{\partial }{\partial s} [\mathbf{S_2 H} (s, t)] \Big \}. \end{aligned}$$
(7)

3 Asymptotic-diffusion analysis

We construct the solution of the first-order asymptotics (the fluid limit (Robert, 2013)) in Sect. 3.1 and that of the diffusion limit in Sect. 3.2. The idea of the fluid limit is to scale time by a factor that, in our model, is chosen to be the sensing rate \(\sigma \). In the diffusion limit, time is scaled by the square of that factor, i.e., \(\sigma ^2\). In the fluid limit, we prove that \(\sigma i(\frac{\tau }{\sigma })\) converges to a deterministic process \(x(\tau )\). Using this result, we can approximate the stochastic process \(i(\frac{\tau }{\sigma })\) by \(\frac{1}{\sigma }x(\tau )\). In Sect. 3.2, we study the fluctuation of i(t) around its mean, for which we subtract \(\frac{1}{\sigma }x(\sigma t)\) from i(t).

3.1 First step of asymptotic analysis

In this section, we solve the system of Eqs. (5) and (7) using the asymptotic-diffusion method (Nazarov et al., 2020) under the asymptotic condition: \(\sigma \rightarrow 0\). Here, we make the following substitutions:

$$\begin{aligned} \sigma = \epsilon , \ \ \tau = \epsilon t, \ \ s = \epsilon \omega , \ \ {\textbf{H}}(s, t)= {\textbf{F}}(\omega , \tau , \epsilon ). \end{aligned}$$

It should be noted that \(\tau = \epsilon t\) is the standard time-scale change for the fluid limit, while \(s= \epsilon \omega \) corresponds to considering the characteristic function of the scaled process \(\epsilon i(t)\). With these substitutions, \({\textbf{F}}(\omega , \tau , \epsilon )\) is precisely that characteristic function.

Then, we obtain the following equations:

$$\begin{aligned} \epsilon \frac{\partial {\textbf{F}}(\omega , \tau , \epsilon )}{\partial \tau } = \{ {\textbf{A}} + e^{j \epsilon \omega } (\lambda _1 {\textbf{B}} +\lambda _2 {\textbf{C}}) \} {\textbf{F}}(\omega , \tau , \epsilon ) + j({\textbf{I}}_0 - e^{-j \epsilon \omega } {\textbf{I}}_1) \frac{\partial {\textbf{F}}(\omega , \tau , \epsilon )}{\partial \omega }, \end{aligned}$$
(8)
$$\begin{aligned} \epsilon \frac{\partial }{\partial \tau } [\textbf{S F}(\omega , \tau , \epsilon ) ] = (e^{j \epsilon \omega } - 1) \{ \lambda _1 \mathbf{S_1 F}(\omega , \tau , \epsilon ) + \lambda _2 \textbf{S F}(\omega , \tau , \epsilon ) + e^{- j \epsilon \omega } j \frac{\partial }{\partial \omega } [\mathbf{S_2 F} (\omega , \tau , \epsilon )]\}. \end{aligned}$$
(9)

Lemma 3.1

The following equality holds as \(\sigma \rightarrow 0\).

$$\begin{aligned} \lim _{\sigma \rightarrow 0} {\mathbb {E}}[e^{j \omega \sigma i(\frac{\tau }{\sigma })}] = e^{j \omega x(\tau )}, \end{aligned}$$
(10)

where \(x(\tau )\) is a solution of

$$\begin{aligned} x'(\tau )=a(x)=(\lambda _1 \mathbf{S_1} + \lambda _2 \textbf{S}) \textbf{R} - x \mathbf{S_2 R}. \end{aligned}$$
(11)

Here, \(\textbf{R}=\textbf{R}(x)\) is an upper-left triangular matrix which is a solution of the following system

$$\begin{aligned} \{ \textbf{A} + \lambda _1 \textbf{B} + \lambda _2 \textbf{C} - x(\tau ) (\textbf{I}_0 - \textbf{I}_1)\} \textbf{R} = 0, \end{aligned}$$
(12)

and satisfies the normalization condition of a probability distribution

$$\begin{aligned} \textbf{S R} = \sum _{n_1 + n_2 \le c} R(n_1, n_2, x) = 1. \end{aligned}$$
(13)

From (12) and (13), \(R(n_1, n_2, x)\) is the steady-state probability that a Markov chain is in state \((n_1, n_2)\). The transition diagram of this Markov chain is illustrated in Fig. 1 (for simplicity, we show the case of \(c=4\)). It follows from (12) that this Markov chain represents the corresponding loss system, in which the arrival rates of PUs and SUs are \(\lambda _1\) and x, respectively.

Fig. 1 Transitions among states of the Markov chain
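For numerical work, \(\textbf{R}(x)\) can be obtained directly as the stationary distribution of the loss system in Fig. 1. The following Python sketch (our own illustration; the helper names `loss_states`, `loss_stationary` and `a_of_x` are ours, not the paper's) solves (12)–(13) by linear algebra and then evaluates a(x) according to (11):

```python
import numpy as np

def loss_states(c):
    """All channel states (n1, n2) with n1 + n2 <= c."""
    return [(n1, n2) for n1 in range(c + 1) for n2 in range(c + 1 - n1)]

def loss_stationary(x, lam1, mu1, mu2, c):
    """Stationary distribution R(., ., x) of the chain in Fig. 1,
    where PUs arrive at rate lam1 and SUs at rate x."""
    states = loss_states(c)
    idx = {s: k for k, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for (n1, n2), k in idx.items():
        moves = []
        if n1 + n2 <= c - 1:
            moves += [((n1 + 1, n2), lam1), ((n1, n2 + 1), x)]
        elif n2 >= 1:                    # full system: a PU preempts an SU
            moves.append(((n1 + 1, n2 - 1), lam1))
        if n1 >= 1:
            moves.append(((n1 - 1, n2), n1 * mu1))
        if n2 >= 1:
            moves.append(((n1, n2 - 1), n2 * mu2))
        for s, r in moves:
            Q[k, idx[s]] += r
            Q[k, k] -= r
    # solve pi Q = 0 together with sum(pi) = 1 (a consistent system)
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return {s: pi[idx[s]] for s in states}

def a_of_x(x, lam1, lam2, mu1, mu2, c):
    """a(x) of (11): lam1 * S1 R + lam2 - x * S2 R."""
    R = loss_stationary(x, lam1, mu1, mu2, c)
    s1 = sum(p for (n1, n2), p in R.items() if n1 + n2 == c and n2 >= 1)
    s2 = sum(p for (n1, n2), p in R.items() if n1 + n2 <= c - 1)
    return lam1 * s1 + lam2 - x * s2
```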

Proof

Denoting \(\mathop {\lim \limits _{\epsilon \rightarrow 0}} \textbf{F}(\omega , \tau , \epsilon ) = \textbf{F}(\omega , \tau )\) and taking the limit \(\epsilon \rightarrow 0\) in (8), we have

$$\begin{aligned} (\textbf{A} + \lambda _1 \textbf{B} + \lambda _2 \textbf{C}) \textbf{F}(\omega , \tau ) + (\textbf{I}_0 - \textbf{I}_1) j \frac{\partial \textbf{F}(\omega , \tau )}{\partial \omega } = 0. \end{aligned}$$
(14)

Due to the structure of (14), similar to the scalar case, we find its solution in the form

$$\begin{aligned} \textbf{F}(\omega , \tau ) = \textbf{R} e^{j \omega x(\tau )}, \end{aligned}$$
(15)

where \(\textbf{R}\) is an upper-left triangular matrix and \(x(\tau )\) is a scalar function representing the asymptotic value of the normalized number of SUs in the sensing pool, \(\sigma i(\frac{\tau }{\sigma })\). Substituting (15) into (14), we obtain (12). Because the matrix multiplying \({\textbf{R}}\) on the left-hand side of (12) is the infinitesimal generator of the Markov chain (Fig. 1), we can choose \({\textbf{R}}\) as the stationary distribution of that Markov chain, for which (13) holds.

Taking the limit \(\epsilon \rightarrow 0\) in (9) yields

$$\begin{aligned} \textbf{S} \frac{\partial \textbf{F}(\omega , \tau )}{\partial \tau } = j \omega \left\{ \lambda _1 \mathbf{S_1 F}(\omega , \tau ) + \lambda _2 \textbf{S F}(\omega , \tau ) + j \frac{\partial }{\partial \omega }[\mathbf{S_2 F}(\omega , \tau )] \right\} . \end{aligned}$$
(16)

Substituting (15) into (16), we obtain (11). Since the scalar function \(x(\tau )\) is the asymptotic value of the normalized number of SUs in the sensing pool \(\sigma i(\frac{\tau }{\sigma })\), (10) holds. \(\square \)

Remark 3.2

We note that the solution of (11) in the case \(c=1\) is given by

$$\begin{aligned} x(\tau ) = -K \Big \{ W \Big ( N \exp (-L \tau - M - 1) \Big ) + 1 \Big \} - (\lambda _1 + \mu _2), \end{aligned}$$

where

$$\begin{aligned} K&= \frac{\mu _1 \mu _2 (\lambda _1 + \mu _2)}{\lambda _2 (\lambda _1 + \mu _1) - \mu _1 \mu _2}, \\ L&= \frac{\{ \lambda _2(\lambda _1 + \mu _1)-\mu _1 \mu _2 \}^2}{\mu _1 \mu _2 (\lambda _1 + \mu _1) (\lambda _1 + \mu _2)}, \\ M&= \frac{\lambda _2 (\lambda _1 + \mu _1) - \mu _1 \mu _2}{\mu _1 \mu _2}, \end{aligned}$$

and where N is a constant of integration determined by the initial value \(x(0)=0\), and W is the Lambert W function.
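As a sanity check, the closed form can be evaluated with SciPy's Lambert W function and compared with a direct Euler integration of (11). The sketch below is our own illustration: the expression used for a(x) is our specialization of (11) to \(c=1\), and the constant N is computed numerically from \(x(0)=0\).

```python
import numpy as np
from scipy.special import lambertw

lam1, lam2, mu1, mu2 = 1.0, 8.0, 4.0, 20.0    # example parameters (stable case)

K = mu1 * mu2 * (lam1 + mu2) / (lam2 * (lam1 + mu1) - mu1 * mu2)
L = (lam2 * (lam1 + mu1) - mu1 * mu2) ** 2 / (mu1 * mu2 * (lam1 + mu1) * (lam1 + mu2))
M = (lam2 * (lam1 + mu1) - mu1 * mu2) / (mu1 * mu2)
w0 = -(lam1 + mu2 + K) / K                    # W-value at tau = 0, from x(0) = 0
N = w0 * np.exp(w0) / np.exp(-M - 1)          # constant of integration

def x_closed(tau):
    """Closed-form x(tau) for c = 1 (principal branch of W)."""
    w = lambertw(N * np.exp(-L * tau - M - 1)).real
    return -K * (w + 1) - (lam1 + mu2)

def a(x):
    """a(x) of (11) specialized to c = 1 (our own derivation)."""
    return lam2 - x * mu1 * mu2 / ((lam1 + mu1) * (lam1 + mu2 + x))

x, dt = 0.0, 1e-3
for _ in range(int(40 / dt)):                 # forward Euler for x' = a(x)
    x += a(x) * dt
print(x_closed(40.0), x)                      # both approach kappa = 21 here
```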

3.2 Second step of asymptotic analysis

In this section, using the approach of Nazarov et al. (2020), we perform the second stage of the asymptotic-diffusion method to obtain the diffusion limit.

Let us rewrite \(\textbf{H}(s, t)\) as

$$\begin{aligned} \textbf{H}(s, t) = \textbf{H}^{(1)}(s, t) e^{j s \frac{x(\sigma t)}{\sigma }}. \end{aligned}$$

We obtain

$$\begin{aligned}&\frac{\partial \textbf{H}^{(1)}(s, t)}{\partial t} + j s a(x) \textbf{H}^{(1)}(s, t) \\&= [\textbf{A} + e^{js} (\lambda _1 \textbf{B} + \lambda _2 \textbf{C}) + x(e^{-js} \mathbf{I_1} - \mathbf{I_0})] \textbf{H}^{(1)}(s, t) \\&+ (\mathbf{I_0} - e^{-js}{} \mathbf{I_1}) j \sigma \frac{\partial \textbf{H}^{(1)}(s, t)}{\partial s},\\&\frac{\partial }{\partial t} [\textbf{SH}^{(1)}(s, t)] + jsa(x) \textbf{SH}^{(1)}(s, t) \\&= (e^{js} -1) \Big \{(\lambda _1 \mathbf{S_1} + \lambda _2 \textbf{S} -x e^{-js} \mathbf{S_2}) \textbf{H}^{(1)}(s, t) + j \sigma e^{-js} \frac{\partial }{\partial s}[\mathbf{S_2 H}^{(1)}(s, t)] \Big \}. \end{aligned}$$

We note that \(\textbf{H}^{(1)}(s, t)\) is the matrix characteristic function of the centered process \(i(t) - \frac{1}{\sigma }x(\sigma t)\). Here, \(x(\tau )\) is the asymptotic value of the normalized number of SUs in the sensing pool.

We denote \(\sigma = \epsilon ^2\) and make the following substitutions:

$$\begin{aligned} \tau = \epsilon ^2 t, \ s= \epsilon \omega , \ \textbf{H}^{(1)}(s,t) = \textbf{F}^{(1)}(\omega , \tau , \epsilon ) \end{aligned}$$

to obtain

$$\begin{aligned}&\epsilon ^2 \frac{\partial \textbf{F}^{(1)}(\omega , \tau , \epsilon )}{\partial \tau } + j \epsilon \omega a(x) \textbf{F}^{(1)}(\omega , \tau , \epsilon ) = [\textbf{A} + e^{j \epsilon \omega } (\lambda _1 \textbf{B} + \lambda _2 \textbf{C}) \nonumber \\&\qquad + x(e^{-j \epsilon \omega } \mathbf{I_1} - \mathbf{I_0})] \textbf{F}^{(1)}(\omega , \tau , \epsilon ) + j \epsilon (\mathbf{I_0} - e^{-j \epsilon \omega }{} \mathbf{I_1}) \frac{\partial \textbf{F}^{(1)}(\omega , \tau , \epsilon )}{\partial \omega }, \nonumber \\&\frac{\partial }{\partial \tau } [\textbf{SF}^{(1)}(\omega , \tau , \epsilon )] + j \epsilon \omega a(x) \textbf{SF}^{(1)}(\omega , \tau , \epsilon ) \nonumber \\&\qquad =(e^{j \epsilon \omega } -1) \Big \{(\lambda _1 \mathbf{S_1} + \lambda _2 \textbf{S} - x e^{-j \epsilon \omega } \mathbf{S_2}) \textbf{F}^{(1)}(\omega , \tau , \epsilon ) \nonumber \\&\qquad + j \epsilon e^{-j \epsilon \omega } \frac{\partial }{\partial \omega }[\mathbf{S_2 F}^{(1)}(\omega , \tau , \epsilon )] \Big \} \end{aligned}$$
(17)

It should be noted that \(\tau = \epsilon ^2 t\) is the standard time-scale change for the diffusion limit.

The following lemma states the asymptotic property.

Lemma 3.3

Let \(\Phi (\omega , \tau )\) be the characteristic function of the asymptotic process, \( \mathop {\lim \limits _{\sigma \rightarrow 0}} \sqrt{\sigma } \left\{ i(\frac{\tau }{\sigma }) - \frac{1}{\sigma } x(\tau ) \right\} \). Then, we have

$$\begin{aligned} \frac{\partial \Phi (\omega , \tau )}{\partial \tau } = a'(x) \frac{\partial \Phi (\omega , \tau )}{\partial \omega } + b(x) \frac{(j \omega )^2}{2} \Phi (\omega , \tau ), \end{aligned}$$

where a(x) is given by (11), and b(x) is given by:

$$\begin{aligned} b(x) = a(x) + 2\{ (\lambda _1 + x)\mathbf{S_1} \textbf{g}(x) + x \textbf{S}_2 \textbf{R}(x) \}, \end{aligned}$$

in which \(\textbf{g}(x)\) is an upper-left triangular matrix and a particular solution of the system of equations

$$\begin{aligned} \{ \textbf{A} + \lambda _1 \textbf{B} + \lambda _2 \textbf{C} + x(\mathbf{I_1} - \mathbf{I_0}) \} \textbf{g}(x) = a(x) \textbf{R}(x) + (x \mathbf{I_1} - \lambda _1 \textbf{B} -\lambda _2 \textbf{C}) \textbf{R}(x), \end{aligned}$$
(18)

such that

$$\begin{aligned} \textbf{S g}(x) = \sum _{n_1 + n_2 \le c} g(n_1, n_2, x) = 0. \end{aligned}$$

Proof

We consider the first equation of (17) up to \(O(\epsilon ^2)\)

$$\begin{aligned}&j \epsilon \omega a(x) \textbf{F}^{(1)}(\omega , \tau , \epsilon ) = [\textbf{A} + \lambda _1 \textbf{B} + \lambda _2 \textbf{C} + x(\mathbf{I_1} - \mathbf{I_0}) \nonumber \\&\qquad + j \epsilon \omega (\lambda _1 \textbf{B} + \lambda _2 \textbf{C} - x \mathbf{I_1})] \textbf{F}^{(1)}(\omega , \tau , \epsilon ) + j \epsilon (\mathbf{I_0} - \mathbf{I_1}) \frac{\partial \textbf{F}^{(1)}(\omega , \tau , \epsilon )}{\partial \omega } + O(\epsilon ^2). \end{aligned}$$
(19)

From the first-order result in Sect. 3.1, we find the solution of (19) in the following form

$$\begin{aligned} \textbf{F}^{(1)}(\omega , \tau , \epsilon ) = \Phi (\omega , \tau ) \{ \textbf{R}(x) + j \epsilon \omega \textbf{f}(x) \} + O(\epsilon ^2). \end{aligned}$$
(20)

Here, \(\textbf{f}(x)\) is some matrix function which we will find later. Substituting (20) into (19) yields

$$\begin{aligned} j \epsilon \omega a(x) \textbf{R}(x) =&\{ \textbf{A} + \lambda _1 \textbf{B} + \lambda _2 \textbf{C} + x(\mathbf{I_1} - \mathbf{I_0}) \} \{\textbf{R}(x) + j \epsilon \omega \textbf{f}(x) \} \\&+ j \epsilon \omega (\lambda _1 \textbf{B} + \lambda _2 \textbf{C} - x \mathbf{I_1}) \textbf{R}(x) + (\mathbf{I_0} - \mathbf{I_1}) \textbf{R}(x) j \epsilon \frac{\frac{\partial \Phi (\omega , \tau )}{\partial \omega }}{\Phi (\omega , \tau )} + O(\epsilon ^2). \end{aligned}$$

Dividing both sides by \(j \epsilon \omega \) and taking limit as \(\epsilon \rightarrow 0\), we obtain

$$\begin{aligned} a(x) \textbf{R}(x) = \{ \textbf{A} + \lambda _1 \textbf{B} + \lambda _2 \textbf{C} + x(\mathbf{I_1} - \mathbf{I_0}) \} \textbf{f}(x) \nonumber \\ + (\lambda _1 \textbf{B} + \lambda _2 \textbf{C} - x \mathbf{I_1}) \textbf{R}(x) + (\mathbf{I_0} - \mathbf{I_1}) \textbf{R}(x) \frac{\frac{\partial \Phi (\omega , \tau )}{\partial \omega }}{\omega \Phi (\omega , \tau )}. \end{aligned}$$
(21)

Applying the superposition principle, we find \(\textbf{f}(x)\) in the following form:

$$\begin{aligned} \textbf{f}(x) = C \textbf{R}(x) + \textbf{g}(x) - {\varvec{\varphi }}(x) \frac{\frac{\partial \Phi (\omega , \tau )}{\partial \omega }}{\omega \Phi (\omega , \tau )}. \end{aligned}$$
(22)

Substituting (22) into (21), we obtain

$$\begin{aligned} \{ \textbf{A} + \lambda _1 \textbf{B} + \lambda _2 \textbf{C} + x(\mathbf{I_1} - \mathbf{I_0}) \} \textbf{g} (x) =&a(x) \textbf{R}(x) + (x \mathbf{I_1} - \lambda _1 \textbf{B} - \lambda _2 \textbf{C})\textbf{R}(x), \nonumber \\ \{ \textbf{A} + \lambda _1 \textbf{B} + \lambda _2 \textbf{C} + x(\mathbf{I_1} - \mathbf{I_0}) \} {\varvec{\varphi }} (x) =&(\mathbf{I_0} - \mathbf{I_1}) \textbf{R}(x). \end{aligned}$$
(23)

Differentiating (12) by x, we have

$$\begin{aligned} \{ \textbf{A} + \lambda _1 \textbf{B} + \lambda _2 \textbf{C} - x(\tau ) (\textbf{I}_0 - \textbf{I}_1) \} \frac{d \textbf{R}(x)}{d x} - (\textbf{I}_0 - \textbf{I}_1) \textbf{R}(x)= 0. \end{aligned}$$
(24)

Because this equation has the same form as (23), we can interpret \({\varvec{\varphi }}(x)\) as the solution of (24) in the form

$$\begin{aligned} {\varvec{\varphi }}(x) = \frac{d \textbf{R}(x)}{dx}. \end{aligned}$$

We notice that \(\textbf{S} \varvec{\varphi }(x) = 0\) because of the normalization condition. We rewrite (17) up to order \(\epsilon ^3\)

$$\begin{aligned}&\epsilon ^2 \frac{\partial }{\partial \tau } [\textbf{SF}^{(1)}(\omega , \tau , \epsilon )] + j \epsilon \omega a(x) \textbf{SF}^{(1)}(\omega , \tau , \epsilon ) \\&\quad = j \epsilon \omega \left\{ ( \lambda _1 \mathbf{S_1} + \lambda _2 \textbf{S} -x \mathbf{S_2} + j \epsilon \omega x \mathbf{S_2}) \textbf{F}^{(1)}(\omega , \tau , \epsilon ) + j \epsilon \frac{\partial }{\partial \omega }[\mathbf{S_2 F}^{(1)}(\omega , \tau , \epsilon )] \right\} \\&\qquad + \frac{(j \epsilon \omega )^2}{2}(\lambda _1 \mathbf{S_1} + \lambda _2 \textbf{S} -x \mathbf{S_2}) \textbf{F}^{(1)}(\omega , \tau , \epsilon ) + O(\epsilon ^3). \end{aligned}$$

Substituting (20) into this equation, we have

$$\begin{aligned}&\epsilon ^2 \frac{\partial \Phi (\omega , \tau )}{\partial \tau } + j \epsilon a(x) \Phi (\omega , \tau ) [1 + j \epsilon \omega \textbf{Sf}(x)] \nonumber \\&\quad =\, j \epsilon \omega \{( \lambda _1 \textbf{S}_1 + \lambda _2 \textbf{S} -x \textbf{S}_2) \Phi (\omega , \tau )[\textbf{R}(x) + j \epsilon \omega \textbf{f}(x)] + j \epsilon \omega x \mathbf{S_2} \Phi (\omega , \tau ) \textbf{R}(x) \nonumber \\&\qquad + j \epsilon \frac{\partial \Phi (\omega , \tau )}{\partial \omega } \mathbf{S_2 R}(x) \} + \frac{(j \epsilon \omega )^2}{2} \Phi (\omega , \tau ) (\lambda _1 \mathbf{S_1} + \lambda _2 \textbf{S} -x \mathbf{S_2}) +O(\epsilon ^3). \end{aligned}$$
(25)

Taking into account (11) and (12), dividing (25) by \(\epsilon ^2\), and letting \(\epsilon \rightarrow 0\), we obtain

$$\begin{aligned} \frac{\partial \Phi (\omega , \tau )}{\partial \tau } + (j \omega )^2 a(x) \Phi (\omega , \tau ) \textbf{Sf}(x) = \frac{(j \omega )^2}{2} \Phi (\omega , \tau ) a(x) \\ + (j \omega )^2 \Phi (\omega , \tau ) \Big \{( \lambda _1 \mathbf{S_1} -x \mathbf{S_2}) \textbf{f}(x) + x \mathbf{S_2} \textbf{R}(x) + \frac{\partial \Phi (\omega , \tau )}{\partial \omega } \frac{1}{\omega \Phi (\omega , \tau )} \mathbf{S_2 R}(x) \Big \}. \end{aligned}$$

Substituting (22) into this equation, and combining with \(\textbf{S R}(x) \equiv 1\), \(\textbf{S g}(x) \equiv 0\), \(\textbf{S} \varvec{\varphi } (x) \equiv 0\), \(\textbf{S f}(x) \equiv 0\), we obtain the following:

$$\begin{aligned} \frac{\partial \Phi (\omega , \tau )}{\partial \tau } = \omega \frac{\partial \Phi (\omega , \tau )}{\partial \omega } \{ (\lambda _1 \mathbf{S_1} -x \mathbf{S_2}) {\varvec{\varphi }}(x) - \mathbf{S_2 R}(x) \} \nonumber \\ + \frac{(j \omega )^2}{2} \Phi (\omega , \tau ) \left\{ a(x) + 2 [(\lambda _1 \mathbf{S_1} -x \mathbf{S_2}) \textbf{g}(x) + x \mathbf{S_2} \textbf{R}(x)] \right\} . \end{aligned}$$
(26)

Differentiating (11) yields

$$\begin{aligned} a'(x) = \lambda _1 \mathbf{S_1} \frac{\partial \textbf{R}(x)}{\partial x} - x \mathbf{S_2} \frac{\partial \textbf{R}(x)}{\partial x} - \mathbf{S_2} \textbf{R}(x). \end{aligned}$$
(27)

The coefficient of \(\omega \frac{\partial \Phi (\omega , \tau )}{\partial \omega }\) in (26) matches the right-hand side of (27). Let us denote the coefficient in the second term of (26) by b(x):

$$\begin{aligned} b(x) = a(x) + 2[(\lambda _1 \mathbf{S_1} -x \mathbf{S_2}) \textbf{g}(x) + x \mathbf{S_2} \textbf{R}(x)]. \end{aligned}$$
(28)

We rewrite (26) using a(x) and b(x) as

$$\begin{aligned} \frac{\partial \Phi (\omega , \tau )}{\partial \tau } = a'(x) \frac{\partial \Phi (\omega , \tau )}{\partial \omega } + b(x) \frac{(j \omega )^2}{2} \Phi (\omega , \tau ). \end{aligned}$$

Thus, the lemma is proved. \(\square \)
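Numerically, \(\textbf{g}(x)\), and hence b(x), can be obtained by solving the singular linear system (18) under the constraint \(\textbf{S g}(x) = 0\), for instance by least squares. The sketch below is our own illustration (it reuses the hypothetical helpers `loss_states`, `loss_stationary` and `a_of_x` from the sketch in Sect. 3.1) and relies on the fact that the operator acting on \(\textbf{g}(x)\) in (18) is the same one as in (12), i.e., the transpose of the generator of the chain in Fig. 1; b(x) is then evaluated by the formula in the statement of the lemma.

```python
import numpy as np

def b_of_x(x, lam1, lam2, mu1, mu2, c):
    """Diffusion coefficient b(x) of Lemma 3.3 (our numerical sketch)."""
    states = loss_states(c)
    idx = {s: k for k, s in enumerate(states)}
    n = len(states)
    Rdict = loss_stationary(x, lam1, mu1, mu2, c)
    R = np.array([Rdict[s] for s in states])
    a = a_of_x(x, lam1, lam2, mu1, mu2, c)
    # generator Q of the chain in Fig. 1 (PU rate lam1, SU rate x)
    Q = np.zeros((n, n))
    for (n1, n2), k in idx.items():
        moves = []
        if n1 + n2 <= c - 1:
            moves += [((n1 + 1, n2), lam1), ((n1, n2 + 1), x)]
        elif n2 >= 1:
            moves.append(((n1 + 1, n2 - 1), lam1))
        if n1 >= 1:
            moves.append(((n1 - 1, n2), n1 * mu1))
        if n2 >= 1:
            moves.append(((n1, n2 - 1), n2 * mu2))
        for s, r in moves:
            Q[k, idx[s]] += r
            Q[k, k] -= r
    # right-hand side of (18): a(x) R + (x I1 - lam1 B - lam2 C) R
    rhs = np.zeros(n)
    for (n1, n2), k in idx.items():
        val = (a - lam2) * R[k]                      # a(x) R - lam2 C R
        if n2 >= 1:
            val += x * R[idx[(n1, n2 - 1)]]          # + x I1 R
        if n1 + n2 == c and n1 >= 1:
            val -= lam1 * R[idx[(n1 - 1, n2 + 1)]]   # - lam1 B R
        rhs[k] = val
    # solve for g with an extra row enforcing S g = 0
    A = np.vstack([Q.T, np.ones(n)])
    g = np.linalg.lstsq(A, np.append(rhs, 0.0), rcond=None)[0]
    s1g = sum(g[idx[s]] for s in states if sum(s) == c and s[1] >= 1)
    s2R = sum(R[idx[s]] for s in states if sum(s) <= c - 1)
    return a + 2 * ((lam1 + x) * s1g + x * s2R)
```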

3.3 Stationary distribution of the diffusion limit

Following the asymptotic-diffusion method, we construct an approximation to the stationary distribution of the number of sensing SUs. First, we perform an inverse Fourier transform as follows:

$$\begin{aligned} \frac{1}{2 \pi } \int _{- \infty }^{\infty } e^{- j \omega y} \Phi (\omega , \tau ) \, d \omega = P(y, \tau ). \end{aligned}$$

We obtain the Fokker-Planck equation for \(P(y, \tau )\) as follows:

$$\begin{aligned} \frac{\partial P(y, \tau )}{\partial \tau } = - \frac{\partial }{\partial y} \{a'(x) yP(y, \tau ) \} + \frac{1}{2} \frac{\partial ^2}{\partial y^2} \{b(x) P(y, \tau ) \}, \end{aligned}$$

where \( y(\tau ) = \mathop {\lim \limits _{\sigma \rightarrow 0}} \sqrt{\sigma } \left\{ i \left( \frac{\tau }{\sigma } \right) - \frac{1}{\sigma } x(\tau ) \right\} \), and \(\Phi (\omega , \tau )\) and \(P(y, \tau )\) are the characteristic function and the probability density function of \(y(\tau )\), respectively. The reader may refer to Risken (1996) for the details of these methodologies.

Thus, \(y(\tau )\) is a diffusion process with drift coefficient \(a'(x) y\) and diffusion coefficient b(x), and it solves the stochastic differential equation (SDE)

$$\begin{aligned} dy(\tau ) = a'(x) y \, d \tau + \sqrt{b(x)} \, d \omega (\tau ). \end{aligned}$$
(29)

Here, \(\omega (\tau )\) is the standard Brownian motion. To solve this equation, we rewrite the ordinary differential equation (11) as

$$\begin{aligned} dx(\tau ) = a(x) \, d \tau . \end{aligned}$$
(30)

We consider the stochastic process:

$$\begin{aligned} z(\tau ) = x(\tau ) + \epsilon y(\tau ). \end{aligned}$$
(31)

Because \(z(\tau ) = \mathop {\lim \limits _{\sigma \rightarrow 0}} \sigma i \left( \frac{\tau }{\sigma } \right) \), \(z(\tau )\) is associated with the number of SUs in the orbit i(t).

To derive the SDE for \(z(\tau )\), we differentiate (31) and use (29) and (30) to obtain

$$\begin{aligned} dz(\tau ) = dx(\tau ) + \epsilon dy(\tau ) = \{ a(x) + \epsilon y a'(x) \} \, d \tau + \epsilon \sqrt{b(x)} \, d \omega (\tau ). \end{aligned}$$

We rewrite the coefficients of the above equation using first-order Taylor expansions:

$$\begin{aligned} a(x) + \epsilon y a'(x) = a(x+ \epsilon y) + O(\epsilon ^2) = a(z) + O(\epsilon ^2), \\ \epsilon \sqrt{b(x)} = \epsilon \sqrt{b(x + \epsilon y) + O(\epsilon )} = \epsilon \sqrt{b(z)} + O(\epsilon ^2). \end{aligned}$$

In this way, taking into account \(\epsilon = \sqrt{\sigma }\), we obtain

$$\begin{aligned} d z(\tau ) = a(z) \, d \tau + \sqrt{\sigma b(z)} \, d \omega (\tau ). \end{aligned}$$

This shows that \(z(\tau )\) is a diffusion process with drift coefficient a(z) and diffusion coefficient \(\sigma b(z)\).
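For illustration, the SDE for \(z(\tau )\) can be simulated with the Euler–Maruyama scheme. This sketch is our own addition; `a_of_x` and `b_of_x` are the hypothetical helpers from the earlier sketches, and the reflection at zero is a crude device to keep the scaled orbit size non-negative.

```python
import numpy as np

def simulate_z(z0, tau_max, dt, sigma, lam1, lam2, mu1, mu2, c, seed=0):
    """Euler-Maruyama for dz = a(z) dtau + sqrt(sigma * b(z)) dW."""
    rng = np.random.default_rng(seed)
    z, path = z0, [z0]
    for _ in range(int(tau_max / dt)):
        drift = a_of_x(z, lam1, lam2, mu1, mu2, c)
        diff = max(sigma * b_of_x(z, lam1, lam2, mu1, mu2, c), 0.0)
        z += drift * dt + np.sqrt(diff * dt) * rng.standard_normal()
        z = max(z, 0.0)                  # keep the scaled orbit size >= 0
        path.append(z)
    return np.array(path)
```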

Letting \(\pi (z)\) denote the stationary probability density function of \(z(\tau )\), \(\pi (z)\) is the stationary solution of the Fokker-Planck equation, and thus we have

$$\begin{aligned} -\{ a(z) \pi (z) \}' + \frac{\sigma }{2} \{ b(z) \pi (z) \}'' = 0. \end{aligned}$$

For simplicity, we look for a particular solution with zero probability flux, that is,

$$\begin{aligned} -\{ a(z) \pi (z) \} + \frac{\sigma }{2} \{ b(z) \pi (z) \}' = 0. \end{aligned}$$

The general solution of the differential equation is given as follows.

$$\begin{aligned} \pi (z) = \frac{C}{b(z)} \exp \left\{ \frac{2}{\sigma } \int _0^z \frac{a(x)}{b(x)} \, dx \right\} , \end{aligned}$$
(32)

where C is the integration constant for normalizing \(\pi (z)\), given by

$$\begin{aligned} C = \left\{ \int _0^{\infty } \frac{1}{b(z)} \exp \left\{ \frac{2}{\sigma } \int _0^z \frac{a(x)}{b(x)} dx \right\} \, dz \right\} ^{-1}. \end{aligned}$$

Hence, we obtain the asymptotic probability density function \(\pi (z)\) of the scaled number of SUs in the sensing pool in the steady state, that is, of \(\sigma i(\frac{\tau }{\sigma })\) as \(\sigma \rightarrow 0\) and \(\tau \rightarrow +\infty \).

4 Stability condition

In this section, we derive the necessary stability condition that guarantees the existence of the steady state. Nazarov et al. (2020) and Phung-Duc and Kawanishi (2019) found that \(\mathop {\lim \limits _{x \rightarrow \infty }}\, a(x) < 0\) is necessary and sufficient for the stability of the orbit size in a related model. We show that this necessary stability condition is equivalent to the sufficient stability condition proved in Phung-Duc et al. (2021). Furthermore, we define \(R(n_1, n_2):= \mathop {\lim \limits _{x \rightarrow \infty }} R(n_1, n_2, x)\).

Theorem 4.1

  When \(c \ge 1\), \( \mathop {\lim \limits _{x \rightarrow \infty }} a(x) < 0\) is equivalent to

$$\begin{aligned} \frac{\lambda _2}{\mu _2} < \sum _{i=0}^c (c-i) \pi _i, \end{aligned}$$
(33)

where \(\pi _i\) is the steady-state probability that the number of transmitting PUs is i, and is given as follows:

$$\begin{aligned} \pi _i = \frac{\frac{\lambda _1^i}{i! \mu _1^i}}{\sum _{k=0}^c \frac{\lambda _1^k}{k! \mu _1^k}}. \end{aligned}$$

Proof

Applying the cut method to the transition graph, we obtain the following equations for the probabilities \(R(n_1, n_2, x)\).

For diagonal cuts when \(n_1+n_2=n \le c-1\) (see Fig. 1), we can derive

$$\begin{aligned} (\lambda _1 + x) \sum _{n_1 + n_2 = n} R(n_1, n_2, x) = \sum _{n_1+n_2 = n+1} (n_1 \mu _1 + n_2 \mu _2) R(n_1, n_2, x). \end{aligned}$$
(34)

Therefore, taking the limit \(x \rightarrow \infty \), all the probabilities \(R(n_1, n_2, x)\) for \(n_1+n_2 \le c-1\) tend to zero. Combining this with the normalization condition,

$$\begin{aligned} \lim _{x \rightarrow \infty } \sum _{n_1+n_2=c} R(n_1, n_2, x) = 1, \end{aligned}$$
(35)

we see that the probabilities \(R(n_1, n_2)\) are non-zero when \(n_1 + n_2 = c\). Furthermore, from (34), \( \mathop {\sum \limits _{n_1+n_2=n}} R(n_1, n_2, x)\) is of order \(\frac{1}{x^{c-n}}\) for \(n \le c-1\). Specifically, \(R(n_1, n_2, x)=O(\frac{1}{x})\) for \(n_1 + n_2 = c - 1\), and all other probabilities \(R(n_1, n_2, x)\) are of higher infinitesimal order.

Hence, taking the limit \(x \rightarrow \infty \) for \(n=c-1\) in (34), we obtain

$$\begin{aligned} \lim _{x \rightarrow \infty } x \sum _{n_1 + n_2 = c-1} R(n_1, n_2, x) =&\sum _{n_1+n_2 = c} (n_1 \mu _1 + n_2 \mu _2) R(n_1, n_2) \nonumber \\ =&\sum _{n_1 =0}^c \left( n_1 \mu _1 + (c- n_1) \mu _2 \right) \pi _{n_1}. \end{aligned}$$
(36)

Here, we note that the asymptotic property \( \mathop {\lim \limits _{x \rightarrow \infty }} R(n_1, n_2, x) = \pi _{n_1}\) holds for \(n_1 + n_2 = c\). In particular, \(R(c, 0, x) = \pi _c\) independently of x, because the PU process \(n_1(t)\) evolves autonomously as an Erlang-B system and \(n_1 = c\) forces \(n_2 = 0\).

Moreover, using (35), we obtain

$$\begin{aligned} \lim _{x \rightarrow \infty } \sum _{\begin{array}{c} n_1 + n_2 = c \\ n_2 \ge 1 \end{array}} R(n_1, n_2, x)&= 1 - R(c, 0) \nonumber \\&= 1 - \pi _c. \end{aligned}$$
(37)

Taking \(x \rightarrow \infty \) in (11) and substituting (36) and (37) into the result, we have

$$\begin{aligned} \lim _{x \rightarrow \infty } a(x) = \lambda _1 (1 - \pi _c) + \lambda _2 - \sum _{n_1 =0}^c \left( n_1 \mu _1 + (c- n_1) \mu _2 \right) \pi _{n_1} < 0. \end{aligned}$$

Applying Little’s law, we have the following equation.

$$\begin{aligned} \lambda _1 (1 - \pi _c) = \mu _1 \sum _{n_1=0}^c n_1 \pi _{n_1}. \end{aligned}$$

Thus, we obtain Eq. (33), and the theorem is proved. \(\square \)
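Condition (33) is straightforward to check numerically, since \(\pi _i\) is just the Erlang-B distribution of the PU-only M/M/c/c system; the snippet below is our own illustration.

```python
from math import factorial

def is_stable(lam1, lam2, mu1, mu2, c):
    """Check (33): lam2/mu2 < sum_i (c - i) * pi_i with Erlang-B pi_i."""
    w = [(lam1 / mu1) ** i / factorial(i) for i in range(c + 1)]
    pi = [v / sum(w) for v in w]
    return lam2 / mu2 < sum((c - i) * p for i, p in enumerate(pi))

print(is_stable(1.0, 8.0, 4.0, 20.0, 5))   # True for the Sect. 6 parameters
```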

Figure 2 illustrates the behavior of a(x). This function is the derivative of the normalized asymptotic number of SUs in the sensing pool. When x is sufficiently large, a(x) converges to a finite limit, and Fig. 2 shows that this limit is negative under the stability condition (33). If the parameters do not satisfy the stability condition (33), however, the limit is positive.

Fig. 2 The behavior of a(x) for \(\lambda _1=1\), \(\mu _1=4\), \(\mu _2=20\)

Theorem 4.2

  Under the stability condition, the solution of \(a(x)=0\) is unique.

Proof

Due to (11) and (33), it is clear that \(a(0)>0\) and \( \mathop {\lim \limits _ {x \rightarrow \infty }} a(x)<0\). By the intermediate value theorem, there is at least one value \(\kappa \) in \((0, \infty )\) for which \(a(\kappa ) = 0\). We now show that this solution \(\kappa \) is unique.

We rewrite the function a(x) as follows:

$$\begin{aligned} a(x) =&\, \lambda _1 \textbf{S}_1 R(x) + \lambda _2 \textbf{SR}(x) - x \mathbf{S_2 R}(x) \nonumber \\ =&\,\lambda _1 \{ 1- \mathbf{S_2 R}(x) - R(c, 0, x) \} + \lambda _2 - x \mathbf{S_2 R}(x) \nonumber \\ =&\,\lambda _1(1 - \pi _c) + \lambda _2 - (\lambda _1 + x) \mathbf{S_2 R}(x) . \end{aligned}$$
(38)

Here, \((\lambda _1 + x) \mathbf{S_2 R}(x)\) in (38) expresses the flow of the two types of users that successfully enter the system without being blocked or interrupted. In other words, \((\lambda _1 + x) \mathbf{S_2 R}(x)\) represents the rate at which arrivals increase the number of customers (PUs or SUs) in the channels, i.e., the throughput.

Thus, \((\lambda _1 + x) \mathbf{S_2 R}(x)\) increases with the offered arrival rate \(\lambda _1 + x\) (part of which may be blocked or interrupted). As a consequence, a(x) decreases as x increases, and the theorem is proved. \(\square \)

Fig. 3 The transition of the third term for \(\lambda _1=1\), \(\mu _1=4\), \(\mu _2=20\)

Figure 3 shows \(-(\lambda _1+x) \mathbf{S_2 R}(x)\) against x. We observe that this quantity decreases as x increases, and thus a(x) decreases with x.

5 Approximation of steady state distributions

We first construct the approximations for the distribution \(P(n_1, n_2)\) of the states of the channels and the mean number of sensing SUs using the first-order asymptotic analysis (Sect. 3.1).

The algorithm for finding \(\kappa \) and \(R(n_1, n_2, \kappa )\) is as follows (a numerical sketch is given after the list).

  1.

    We solve the balance Eq. (12) and the normalization condition (13) for elements \(R(n_1, n_2, x)\).

  2.

    We find all \(R(n_1, n_2, x)\) for given parameters and x.

  3.

    From (11), we have

    $$\begin{aligned} a(x) = \lambda _1 \sum _{\begin{array}{c} n_1 + n_2 = c \\ n_2 \ge 1 \end{array}} R(n_1, n_2, x) + \lambda _2 - x \sum _{n_1 + n_2 \le c-1} R(n_1, n_2, x). \end{aligned}$$
    (39)
  4.

    We solve the equation \(a(x)=0\) and obtain the solution \(x=\kappa \).
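A minimal numerical sketch of these steps (our own illustration; `a_of_x` is the hypothetical helper from the sketch in Sect. 3.1) finds \(\kappa \) by bisection, using the monotonicity of a(x) established in Sect. 4:

```python
def find_kappa(lam1, lam2, mu1, mu2, c, x_hi=1e6, tol=1e-10):
    """Unique root of a(x) = 0 under the stability condition."""
    x_lo = 0.0                       # a(0) = lam1 * S1 R + lam2 > 0
    while x_hi - x_lo > tol:
        x_mid = 0.5 * (x_lo + x_hi)
        if a_of_x(x_mid, lam1, lam2, mu1, mu2, c) > 0:
            x_lo = x_mid
        else:
            x_hi = x_mid
    return 0.5 * (x_lo + x_hi)

# For c = 1 this can be checked against Remark 5.1 below:
# find_kappa(1, 8, 4, 20, 1) should return about 8*5*21/(80-40) = 21.
```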

Using \(R(n_1, n_2, \kappa )\) as the steady-state probability, we calculate performance measures and verify the accuracy of the approximations in Sect. 6.

Remark 5.1

In case \(c=1\), the solution of \(a(x)=0\) is given by

$$\begin{aligned} \kappa = \frac{\lambda _2 (\lambda _1 + \mu _1)(\lambda _1 + \mu _2) }{\mu _1 \mu _2 - \lambda _2(\lambda _1 + \mu _1)}. \end{aligned}$$

Next, we construct an approximation \(G_A(i)\) to the probability distribution P(i) of the number of sensing SUs in the orbit by using the diffusion analysis (Sects. 3.2 and 3.3).

The algorithm for building the approximation \(G_A(i)\) is as follows (a numerical sketch is given after the list).

  1.

    Solve (18) together with \(\displaystyle \sum _{n_1 + n_2 \le c}\, g(n_1,n_2,x) = 0\) for a particular solution \(g(n_1,n_2,x)\).

  2.

    From (28), we calculate b(x) by

    $$\begin{aligned} b(x) = a(x) + 2 \Bigg [(\lambda _1 + x) \sum _{\begin{array}{c} n_1 + n_2 = c \\ n_2 \ge 1 \end{array}} g(n_1, n_2, x) + x \sum _{n_1+n_2 \le c-1} R(n_1, n_2, x) \Bigg ]. \end{aligned}$$
  3.

    To obtain an approximation for the discrete steady-state probability distribution, we evaluate the unnormalized density (32) at the points \(\sigma i\) as follows.

    $$\begin{aligned} G(i) = \frac{1}{b (\sigma i)} \exp \left\{ \frac{2}{\sigma } \int _0^{\sigma i} \frac{a(x)}{b(x)} \, dx \right\} . \end{aligned}$$
    (40)
  4.

    We obtain the probability distribution \(G_A(i)\) as follows, because i(t) takes only non-negative values.

    $$\begin{aligned} G_A(i) = \frac{G(i)}{\sum _{i=0}^{\infty } G(i)}. \end{aligned}$$
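Steps 3 and 4 can be implemented as follows (our own sketch; `a_of_x` and `b_of_x` are the hypothetical helpers from the sketches in Sect. 3). The exponent in (40) is accumulated incrementally and kept in log scale to avoid overflow before normalizing over a finite support \(0 \le i \le i_{\max }\):

```python
import numpy as np
from scipy.integrate import quad

def orbit_distribution(sigma, lam1, lam2, mu1, mu2, c, i_max=2000):
    """Approximate distribution G_A(i) of the number of SUs in the orbit."""
    args = (lam1, lam2, mu1, mu2, c)
    ratio = lambda u: a_of_x(u, *args) / b_of_x(u, *args)
    log_g = np.empty(i_max + 1)
    acc = prev = 0.0
    for i in range(i_max + 1):
        acc += quad(ratio, prev, sigma * i)[0]   # integral in (40) up to sigma*i
        prev = sigma * i
        log_g[i] = 2.0 / sigma * acc - np.log(b_of_x(sigma * i, *args))
    g = np.exp(log_g - log_g.max())              # stabilize before normalizing
    return g / g.sum()
```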

In the next section, we compare this analytic approximation with simulation results. The comparison is shown in Fig. 12.

6 Numerical experiment

In this section, we provide numerical experiments on the results derived in Sect. 5. In our experiments, for fixed \(\mu _1 = 4\), \(\mu _2 = 20\) and \(c = 5\), we examine the changes in the performance measures against \(\lambda _1\) and \(\lambda _2\). Under the same settings, we also run simulations and collect the same performance measures as those obtained by the diffusion-limit approach. The length of all simulations is set to \(10^6\) time steps, which suffices for the simulations to converge to their corresponding numerical solutions. In all figures, points marked "Sim" denote simulation results, while "\(x=\kappa \)" and "App" indicate the theoretical value and the approximation derived in Sect. 5, respectively.

Fig. 4 The average number of terminations against arrival rate of PUs (\(\lambda _2=8\))

Fig. 5 The average number of terminations against SU arrival rate (\(\lambda _1=1\))

Figures 4 and 5 illustrate the mean number of SU interruptions per SU from arrival to departure. Following Morozov et al. (2022), the average number of SU terminations \({\mathbb {E}} [N_\mathrm{{int}}]\) is given by

$$\begin{aligned} {\mathbb {E}}[N_\mathrm{{int}}] = \frac{\lambda _1}{\lambda _2} \sum _{\begin{array}{c} n_1 + n_2 = c \\ n_2 \ge 1 \end{array}} R(n_1, n_2, \kappa ). \end{aligned}$$
(41)

Although the effect of the SU arrival rate on this value is small (see Fig. 5), the effect of the PU arrival rate is significant (see Fig. 4). We also notice that the effect of the sensing rate is smaller when \(\lambda _2\) is lower (see Fig. 5), and that the differences between the simulations and the theoretical values grow as the user arrival rates increase. The closer \(\sigma \) approaches 0, the longer the sensing time is. In that limit, we can regard the system as an M/M/c/c loss system with arrival rates \(\kappa \) for SUs and \(\lambda _1\) for PUs.

Fig. 6 SU blocking probability against PU arrival rate (\(\lambda _2=8\))

Fig. 7 SU blocking probability against SU arrival rate (\(\lambda _1=1\))

Next, we consider the SU blocking probability as the PU arrival rate and the SU arrival rate change, for \(\sigma =0.1, 1, 2, 32, 64\) and 128. Here, blocking means that an SU finds all channels occupied by SUs or PUs and hence returns to the sensing pool. Figure 6 presents the SU blocking probability against \(\lambda _1\). We can see that the difference between different sensing rates is not significant, because the mean number of channels available to SUs is fixed for a given PU arrival rate. Figure 7 compares the SU blocking probability against \(\lambda _2\) while \(\lambda _1\) is fixed and \(\sigma \) varies. The difference becomes smaller as \(\lambda _2\) decreases and larger as \(\sigma \) increases, in agreement with the theoretical results.

Fig. 8 The distribution of the number of SU-occupied channels (\(c=5\))

Figure 8 depicts the distribution of the number of SU-occupied channels. We observe that few channels are occupied by SUs when \(\lambda _2\) is small. When the SU arrival rate is low, the probability distribution is less sensitive to the sensing rate, but when \(\lambda _2\) is larger, the distribution becomes increasingly sensitive to \(\sigma \).

Fig. 9 Log-plot of the number of SUs in the orbit against PU arrival rate (\(\lambda _2=8\))

Fig. 10 Log-plot of the number of SUs in the orbit against SU arrival rate (\(\lambda _1=1\))

Figures 9 and 10 show the mean number of SUs in the orbit for several sensing rates, plotted on a logarithmic scale. We use the following approximation:

$$\begin{aligned} {\mathbb {E}} [N_\mathrm{{orbit}}] \approx \frac{\kappa }{\sigma }. \end{aligned}$$

These figures demonstrate that our estimate becomes more accurate as \(\sigma \) approaches 0; the difference between the simulations and the approximation becomes negligible when \(\sigma \) is close to 0. In addition, when the arrival rates of the two user types are larger, the approximation deviates further from the simulation.

Fig. 11 The transition of the normalized number of SUs in the orbit (\(\lambda _1=1\), \(\lambda _2=8\), \(\sigma =0.1\))

Figure 11 shows the transition of the normalized number of SUs in the orbit. We note that "100 times average" denotes the average over 100 simulation runs with a different random seed for each run. The simulated paths evolve around \(x(\tau )\) over time. The figure shows that the accuracy of our approximation increases as more runs are averaged.

Fig. 12 Comparison of the simulation and the approximation results

In Fig. 12, we compare the simulation results with the approximation obtained for the probability distribution of the number of SUs in the orbit. When the sensing rate is close to 0, the accuracy is high and the approximation is reasonable. However, for a large value of \(\sigma \), we observe a jump at the starting point of the distribution and cannot obtain a reliable approximation. These observations agree with the theoretical results.

7 Conclusion

In this paper, we considered cognitive wireless networks in which secondary users (SUs) must spend time sensing for idle channels. Using asymptotic-diffusion analysis, we obtained the diffusion process and the probability density function of the number of SUs in the orbit. We proved that the stability condition obtained in Phung-Duc et al. (2021) is equivalent to the condition that the limit of the derivative of the normalized number of sensing SUs is negative. We also carried out numerical experiments and showed that the obtained approximation is suitable for various parameter settings. As future work, we plan to compare our approach with other methods such as the level-dependent QBD.