A learning automata-based adaptive uniform fractional guard channel algorithm

In this paper, we propose an adaptive call admission algorithm based on learning automata. The proposed algorithm uses a learning automaton to decide whether each incoming new call is accepted or rejected. It is shown that the adaptive algorithm converges to an equilibrium point that is also optimal for the uniform fractional guard channel policy. To study the performance of the proposed call admission policy, computer simulations were conducted. The simulation results show that the proposed algorithm satisfies the required level of QoS and that its performance is very close to that of the uniform fractional guard channel policy, which requires full knowledge of the input traffic parameters. The simulation results also confirm the analysis of the steady-state behaviour.
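To make the decision mechanism concrete, the sketch below implements a generic two-action linear reward-penalty learning automaton choosing between ACCEPT and REJECT against a toy stationary environment; the paper's exact update scheme and penalty feedback may differ, so treat this as a minimal illustration only:

```python
import random

class TwoActionLA:
    """Minimal two-action linear reward-penalty learning automaton.
    Action 0 = ACCEPT, action 1 = REJECT (labels are illustrative)."""

    def __init__(self, a=0.05):
        self.a = a              # learning rate
        self.p = [0.5, 0.5]     # action probabilities

    def choose(self):
        return 0 if random.random() < self.p[0] else 1

    def update(self, action, penalized):
        a = self.a
        if not penalized:       # reward: move probability toward chosen action
            other = 1 - action
            self.p[other] *= (1 - a)
            self.p[action] = 1 - self.p[other]
        else:                   # penalty: move probability away from chosen action
            self.p[action] *= (1 - a)
            self.p[1 - action] = 1 - self.p[action]

random.seed(0)
la = TwoActionLA()
# toy environment (not the paper's): ACCEPT penalized w.p. 0.8, REJECT w.p. 0.2
for _ in range(2000):
    act = la.choose()
    penalized = random.random() < (0.8 if act == 0 else 0.2)
    la.update(act, penalized)
print(round(la.p[0], 2), round(la.p[1], 2))
```

Under this environment the automaton's REJECT probability comes to dominate, since REJECT is penalized far less often than ACCEPT.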



References

  1. Ramjee R, Towsley D, Nagarajan R (1997) On optimal call admission control in cellular networks. Wirel Netw 3:29–41

  2. Hong D, Rappaport S (1986) Traffic modelling and performance analysis for cellular mobile radio telephone systems with prioritized and non-prioritized handoff procedures. IEEE Trans Veh Technol 35:77–92

  3. Haring G, Marie R, Puigjaner R, Trivedi K (2001) Loss formulas and their application to optimization for cellular networks. IEEE Trans Veh Technol 50:664–673

  4. Beigy H, Meybodi MR (2004) A new fractional channel policy. J High Speed Netw 13:25–36

  5. Yoon CH, Kwan C (1993) Performance of personal portable radio telephone systems with and without guard channels. IEEE J Sel Areas Commun 11:911–917

  6. Guérin R (1988) Queueing-blocking system with two arrival streams and guard channels. IEEE Trans Commun 36:153–163

  7. Li B, Li L, Li B, Sivalingam KM, Cao X-R (2004) Call admission control for voice/data integrated cellular networks: performance analysis and comparative study. IEEE J Sel Areas Commun 22:706–718

  8. Chen X, Li B, Fang Y (2005) A dynamic multiple-threshold bandwidth reservation (DMTBR) scheme for QoS provisioning in multimedia wireless networks. IEEE Trans Wirel Commun 4:583–592

  9. Beigy H, Meybodi MR (2005) A general call admission policy for next generation wireless networks. Comput Commun 28:1798–1813

  10. Beigy H, Meybodi MR (2004) Adaptive uniform fractional channel algorithms. Iran J Electr Comput Eng 3:47–53

  11. Beigy H, Meybodi MR (2005) An adaptive call admission algorithm for cellular networks. Electr Comput Eng 31:132–151

  12. Beigy H, Meybodi MR (2008) Asynchronous cellular learning automata. Automatica 44:1350–1357

  13. Beigy H, Meybodi MR (2011) Learning automata based dynamic guard channel algorithms. J Comput Electr Eng 37(4):601–613

  14. Baccarelli E, Cusani R (1996) Recursive Kalman-type optimal estimation and detection of hidden Markov chains. Signal Process 51:55–64

  15. Baccarelli E, Biagi M (2003) Optimized power allocation and signal shaping for interference-limited multi-antenna ad hoc networks. In: Lecture notes in computer science, vol 2775. Springer, pp 138–152

  16. Beigy H, Meybodi MR (2009) Cellular learning automata based dynamic channel assignment algorithms. Int J Comput Intell Appl 8(3):287–314

  17. Srikantakumar PR, Narendra KS (1982) A learning model for routing in telephone networks. SIAM J Control Optim 20:34–57

  18. Nedzelnitsky OV, Narendra KS (1987) Nonstationary models of learning automata routing in data communication networks. IEEE Trans Syst Man Cybern SMC-17:1004–1015

  19. Oommen BJ, de St Croix EV (1996) Graph partitioning using learning automata. IEEE Trans Comput 45:195–208

  20. Beigy H, Meybodi MR (2006) Utilizing distributed learning automata to solve stochastic shortest path problems. Int J Uncertain Fuzziness Knowl Based Syst 14:591–615

  21. Oommen BJ, Roberts TD (2000) Continuous learning automata solutions to the capacity assignment problem. IEEE Trans Comput 49:608–620

  22. Moradabadi B, Beigy H (2014) A new real-coded Bayesian optimization algorithm based on a team of learning automata for continuous optimization. Genet Program Evolvable Mach 15:169–193

  23. Meybodi MR, Beigy H (2001) Neural network engineering using learning automata: determining of desired size of three layer feedforward neural networks. J Fac Eng 34:1–26

  24. Beigy H, Meybodi MR (2001) Backpropagation algorithm adaptation parameters using learning automata. Int J Neural Syst 11:219–228

  25. Oommen BJ, Hashem MK (2013) Modeling the learning process of the teacher in a tutorial-like system using learning automata. IEEE Trans Syst Man Cybern Part B Cybern 43(6):2020–2031

  26. Yazidi A, Granmo OC, Oommen BJ (2013) Learning automaton based on-line discovery and tracking of spatio-temporal event patterns. IEEE Trans Syst Man Cybern Part B Cybern 43(3):1118–1130

  27. Narendra KS, Thathachar MAL (1989) Learning automata: an introduction. Prentice-Hall, Englewood Cliffs

  28. Srikantakumar P (1980) Learning models and adaptive routing in telephone and data communication networks. PhD thesis, Department of Electrical Engineering, Yale University, USA

  29. Norman MF (1972) Markov processes and learning models. Academic Press, New York

  30. Mood AM, Graybill FA, Boes DC (1963) Introduction to the theory of statistics. McGraw-Hill, New York



Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions, which improved the paper.

Author information

Correspondence to Hamid Beigy.

Appendix: Proof of Theorems and Lemmas


In this appendix, we give the proofs of the lemmas and theorems stated in the paper.

Proof of Lemma 1

Before proving the lemma, we introduce some definitions and notation. To count arriving calls, we introduce the concept of local time for each type of call. The local time for each call type starts at 0 and is incremented by 1 whenever a call of that type arrives. Let us define \(n^n\) and \(n^h\) as the local times for new and hand-off calls, respectively. Then, we define two sequences of random variables \(n^n_m\) (\(n^n_1 < n^n_2<\cdots \)) and \(n^h_m\) (\(n^h_1 < n^h_2 < \cdots \)), where \(n^n_m\) (\(n^h_m\)) is the global time at which the \(m\mathrm{th}\) new (hand-off) call arrives.

The proof for the penalty probability \(c_1(p)\) is immediate, because action \(\mathrm{ACCEPT}\) is penalized when all allocated channels are busy. Since the probability that all channels are busy equals \(P_C\), we have \(c_1(p)=P_C\). To find an expression for \(c_2(p)\), we define \(X_n\) as the indicator of the dropping of a hand-off call at hand-off local time \(n\): \(X_n=1\) if a hand-off call arriving at hand-off local time \(n^h=n\) is dropped, and \(X_n=0\) if it is accepted. Since in the interval \([n,n+1]\) it is possible that \(M \ge 0\) new calls are accepted or \(N \ge 0\) calls are completed, the state of the Markov chain describing the cell at hand-off local time \(n+1\) is independent of its state at hand-off local time \(n\) whenever \(N+M > 0\). The exceptional case \(N+M=0\) violates this independence, and we ignore it in our analysis. Therefore, \(X_1,X_2,\ldots ,X_n\) are independent and identically distributed (i.i.d.) random variables with the following first- and second-order statistics.

$$\begin{aligned} \mathrm{E}\left[ X_n\right]&= \sum _{k=0}^{C} k P_k= \rho \gamma \left[ 1-P_C\right] .\end{aligned}$$
$$\begin{aligned} \mathrm{Var}\left[ X_n\right]&= \mathrm{E} \left[ X^2_k\right] - \left( \mathrm{E} \left[ X_k\right] \right) ^2= \rho \gamma \left[ 1-P_C\right] \left[ 1+\rho \gamma P_C\right] - (\rho \gamma )^2P_{C-1}.\nonumber \\ \end{aligned}$$

By the central limit theorem, \(\bar{X}_n=\hat{B}_h=\frac{1}{n}\sum _{k=0}^nX_k\) is approximately normally distributed \((\hat{B}_h \sim N(\mu _b,\sigma _b))\) with the following mean and variance [30].

$$\begin{aligned} \mu _b&= \mathrm{E}\left[ \hat{B}_h\right] =\mathrm{E}\left[ \bar{X}_n\right] = \mathrm{E}\left[ X_n\right] = \rho \gamma \left[ 1-P_C\right] .\end{aligned}$$
$$\begin{aligned} \sigma _b&= \mathrm{Var}\left[ \hat{B}_h\right] = \mathrm{Var}\left[ \bar{X}_n\right] = \frac{\mathrm{Var}\left[ X_n\right] }{n}, \nonumber \\&= \frac{\rho \gamma \left[ 1-P_C\right] \left[ 1+\rho \gamma P_C\right] - (\rho \gamma )^2P_{C-1}}{n}. \end{aligned}$$

Thus, the penalty probability \(c_2(p)\) is equal to

$$\begin{aligned} c_2(p)&= \mathrm{Prob} \left[ \hat{B}_h < p_h\right] , \\&= \frac{1}{\sqrt{2\pi }\sigma _b}\int _{-\infty }^{p_h}e^{-\frac{1}{2}\left( \frac{x-\mu _b}{\sigma _b}\right) ^2}dx \end{aligned}$$

which completes the proof of this lemma. \(\square \)
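As a numerical illustration, the two penalty probabilities of this lemma can be evaluated directly: \(c_1(p)=P_C\), and \(c_2(p)\) is the normal CDF evaluated at \(p_h\). The sketch below assumes illustrative values of \(P_C\), \(P_{C-1}\), \(\rho \), \(\gamma \), \(p_h\) and \(n\), none of which come from the paper:

```python
import math

# illustrative traffic parameters (assumed, not from the paper)
P_C, P_Cm1 = 0.05, 0.07    # blocking probabilities P_C and P_{C-1}
rho, gamma = 10.0, 0.3
p_h, n = 2.0, 500          # hand-off QoS target and sample size

c1 = P_C                   # ACCEPT is penalized iff all channels are busy

# mean and variance of the estimator \hat B_h (central limit theorem)
mu_b = rho * gamma * (1 - P_C)
var_b = (rho * gamma * (1 - P_C) * (1 + rho * gamma * P_C)
         - (rho * gamma) ** 2 * P_Cm1) / n
sigma_b = math.sqrt(var_b)

# c2 = Prob[\hat B_h < p_h], i.e. the normal CDF at p_h
c2 = 0.5 * (1 + math.erf((p_h - mu_b) / (sigma_b * math.sqrt(2))))
print(c1, round(mu_b, 3), c2)
```

With these values \(\mu _b = 2.85\) lies well above \(p_h = 2.0\), so \(c_2(p)\) is essentially zero.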

Proof of Lemma 2

The proofs of items one through three follow directly from Eq. (7), so we give only the proof of item 4. From Eq. (7), when \(\rho < C\), we have

$$\begin{aligned} \frac{\partial c_1(p)}{\partial p_1}&= (1-a)P_C\left[ \frac{C}{\gamma }-\rho (1-P_C)\right] >0, \end{aligned}$$
$$\begin{aligned} \frac{\partial c_1(p)}{\partial p_2}&= -(1-a)P_C\left[ \frac{C}{\gamma }-\rho (1-P_C)\right] <0. \end{aligned}$$

Using Eq. (7), we obtain

$$\begin{aligned} \frac{\partial c_2(p)}{\partial p_2}&= \frac{1}{\sigma _b \sqrt{2\pi }}\left[ e^{\frac{-1}{2} \left( \frac{p_h - \mu _b}{\sigma _b}\right) ^2}\left\{ \left( \frac{\mu _b-p_h}{\sigma _b}\right) \frac{\partial \sigma _b}{\partial p_2}-\frac{\partial \mu _b}{\partial p_2}\right\} \right. \nonumber \\&\quad \left. -\frac{2}{\sigma _b}\frac{\partial \sigma _b}{\partial p_2}\int _{-\infty }^{p_h}e^{\frac{-1}{2} \left( \frac{x - \mu _b}{\sigma _b}\right) ^2} dx\right] , \\ \frac{\partial c_2(p)}{\partial p_1}&= \frac{-1}{\sigma _b \sqrt{2\pi }}\left[ e^{\frac{-1}{2} \left( \frac{p_h - \mu _b}{\sigma _b}\right) ^2}\left\{ \left( \frac{\mu _b-p_h}{\sigma _b}\right) \frac{\partial \sigma _b}{\partial p_1}-\frac{\partial \mu _b}{\partial p_1}\right\} \right. \nonumber \\&\quad \left. -\frac{2}{\sigma _b}\frac{\partial \sigma _b}{\partial p_1}\int _{-\infty }^{p_h}e^{\frac{-1}{2} \left( \frac{x - \mu _b}{\sigma _b}\right) ^2} dx\right] . \end{aligned}$$

Increasing \(p_2\) decreases the probability of accepting new calls, and hence the number of busy channels decreases. Therefore, the dropping probability of hand-off calls decreases, that is, \(c_2(p)=\mathrm{Prob} \left[ \hat{B}_h < p_h\right] \) increases. Thus, we have

$$\begin{aligned} \frac{\partial c_2(p)}{\partial p_2}&> 0 \end{aligned}$$
$$\begin{aligned} \frac{\partial c_2(p)}{\partial p_1}&< 0. \end{aligned}$$

Moreover, by choosing proper values for the parameters, the condition \(\frac{\partial c_2(p)}{\partial p_2} > 0\) is satisfied. Eq. (9) follows from Eqs. (22) and (24), Eq. (10) follows from Eqs. (22) and (25), and Eq. (11) follows from Eqs. (23) and (24). This completes the proof of this lemma. \(\square \)

Proof of Lemma 3

Consider \(f(p)\) at its two end points

$$\begin{aligned} f(p) = \left\{ \begin{array}{lll} c_2(0,1) &{} p_1=0 \\ -c_1(1,0) &{} p_1=1. \end{array} \right. \end{aligned}$$

Since \(f(p)\) is a continuous function of \(p_1\) and \(p_2\), there exists at least one \(p^*\) such that \(f(p^*)=0\). To prove the uniqueness of \(p^*\), we compute the derivative of \(f(p)\) with respect to \(p_1\) and, using Lemma 2, obtain

$$\begin{aligned} \frac{\partial f(p)}{\partial p_1}&= \frac{\partial c_2(p)}{\partial p_1} - \left( 1+p_1\right) \left( c_1+c_2\right) ,\\&< 0. \end{aligned}$$

Since the derivative of \(f(p)\) with respect to \(p_1\) is negative, \(f(p)\) is a strictly decreasing function of \(p_1\). Thus, there exists one and only one point \(p^*\) at which \(f(p)\) crosses zero, and hence the lemma follows. \(\square \)
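Since \(f\) is continuous and strictly decreasing in \(p_1\) with a sign change over \([0,1]\), the unique zero \(p^*\) can be located numerically by bisection. A sketch with an illustrative stand-in for \(f\) (the real \(f\) depends on the traffic parameters):

```python
def bisect_root(f, lo=0.0, hi=1.0, tol=1e-10):
    """Find the unique zero of a continuous, strictly decreasing f on [lo, hi],
    assuming f(lo) > 0 > f(hi)."""
    assert f(lo) > 0 > f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid        # zero lies to the right of mid
        else:
            hi = mid        # zero lies to the left of mid
    return 0.5 * (lo + hi)

# illustrative strictly decreasing stand-in with f(0) > 0 and f(1) < 0
f = lambda p1: 0.3 - p1 ** 2
p_star = bisect_root(f)
print(round(p_star, 6))  # close to sqrt(0.3) ≈ 0.547723
```

The same strict monotonicity that guarantees uniqueness in the lemma is exactly what makes bisection converge to a single point.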

Proof of Lemma 4

Define \(p^2=p^Tp\) for vector \(p\). Let \(p=p_1\) and

$$\begin{aligned} g(p) = \left\{ \begin{array}{ll} \frac{w(p)}{p^*-p} &{} p \ne p^* \\ \left. -\frac{\partial w(p)}{\partial p}\right| _{p=p^*} &{} p=p^* \end{array} \right. \end{aligned}$$

Since \(w(p) < 0\) when \(p > p^*\) and \(w(p) > 0\) when \(p<p^*\), \(g(p)\) is positive and continuous on the interval \([0,1]\). Hence, there exists an \(R > 0\) such that \(g(p) \ge R\). Thus, we have

$$\begin{aligned} \left[ p^* - p(n)\right] w(p(n))&= \left[ p^*-p(n)\right] ^2g(p(n)), \nonumber \\&\ge R \left[ p^*-p(n)\right] ^2. \end{aligned}$$

for all probabilities \(p\). Then, computing

$$\begin{aligned} \left[ p(n+1)-p^*\right] ^2= \left[ p(n)-p^*\right] ^2 + 2\left[ p(n)-p^*\right] \Delta p(n) + \Delta p^2(n) \end{aligned}$$

and taking expectations on both sides, cancelling \(\mathrm{E}\left[ p(n)-p^*\right] ^2\), and dividing by \(2a\), we obtain

$$\begin{aligned} \mathrm{E}\left[ \left\{ p(n)-p^*\right\} \frac{\Delta p(n)}{a}\right] + \frac{a}{2} \mathrm{E} \left[ \frac{\Delta p^2(n)}{a^2}\right] =0, \end{aligned}$$


or, equivalently,

$$\begin{aligned} \mathrm{E}\left[ \left\{ p(n)-p^*\right\} w\left( p(n)\right) \right] + \frac{a}{2} \mathrm{E} \left[ \tilde{S} \left( p(n)\right) \right] =0. \end{aligned}$$

Since all variables are bounded, \(\tilde{S} \left( p(n)\right) \) is also bounded; thus, there exists a \(K > 0\) such that \(\mathrm{E}\left[ \tilde{S} \left( p(n)\right) \right] \le K\). Hence, we obtain

$$\begin{aligned} \mathrm{E}\left[ \left\{ p^*-p(n)\right\} w\left( p(n)\right) \right]&= \frac{a}{2} \mathrm{E} \left[ \tilde{S} \left( p(n)\right) \right] , \\&\le K a. \end{aligned}$$

Using Eq. (28), we obtain

$$\begin{aligned} \mathrm{E} \left[ p^*-p(n)\right] ^2&\le K \mathrm{E} \left[ \left\{ p^* - p(n)\right\} w(p(n))\right] ,\\&\le Ka, \\&= O(a). \end{aligned}$$

and hence the lemma.\(\square \)
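The \(O(a)\) bound of this lemma can be checked empirically by simulating a scalar stochastic approximation recursion and comparing steady-state mean-squared errors for two step sizes. The drift \(w(p)=p^*-p\) and the \(\pm 1\) noise below are illustrative assumptions, not the paper's exact environment:

```python
import random

def steady_mse(a, p_star=0.6, n_steps=20000, seed=1):
    """Steady-state estimate of E[(p - p*)^2] for the recursion
    p(n+1) = p(n) + a*(w(p) + noise), with w(p) = p* - p."""
    rng = random.Random(seed)
    p, acc, count = 0.5, 0.0, 0
    for n in range(n_steps):
        noise = rng.choice([-1.0, 1.0])     # bounded zero-mean noise
        p += a * ((p_star - p) + noise)
        p = min(max(p, 0.0), 1.0)           # keep p a valid probability
        if n > n_steps // 2:                # discard the transient
            acc += (p - p_star) ** 2
            count += 1
    return acc / count

mse_big, mse_small = steady_mse(0.04), steady_mse(0.01)
print(mse_big, mse_small)
```

Shrinking the step size \(a\) by a factor of four shrinks the steady-state mean-squared error by roughly the same factor, consistent with \(\mathrm{E}\left[ p^*-p(n)\right] ^2 = O(a)\).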

Proof of Lemma 5

To prove Eq. (15), let us define

$$\begin{aligned} \zeta =\frac{\mathrm{E} \left[ \Delta z(n)|z(n)\right] }{\sqrt{a}}=\frac{\mathrm{E} \left[ \Delta p(n)|z(n)\right] }{a} = w\left( p(n)\right) -w\left( p^*\right) . \end{aligned}$$

Since \(w(.)\) is Lipschitz, we have \(|w\left( p(n)\right) -w\left( p^*\right) | \le K |p(n) - p^*|,\) where \(K>0\) is a constant. Using Eq. (29), we obtain

$$\begin{aligned} |\zeta | \le K |p(n) - p^*| \le K \sqrt{a} |z(n)|. \end{aligned}$$


Define

$$\begin{aligned} h(\lambda )=w(x+\lambda (y-x)) \end{aligned}$$

where \(\lambda \in [0,1]\). It follows that

$$\begin{aligned} h'(\lambda )&= \frac{\partial h(\lambda )}{\partial \lambda }, \nonumber \\&= w'(x+\lambda (y-x))(y-x). \end{aligned}$$

Since \(h'(.)\) is continuous, we have

$$\begin{aligned} w(y)-w(x) = h(1) - h(0) = \int _0^1w'(x+\lambda (y-x))[y-x]\,d\lambda . \end{aligned}$$

Subtracting \(w'(x)(y-x)\) from both sides of the above equation, we obtain

$$\begin{aligned} w(y)-w(x)-w'(x)[y-x]=\int _0^1\left[ w'(x+\lambda (y-x))-w'(x)\right] [y-x]d\lambda \end{aligned}$$

Since \(w'(.)\) is Lipschitz with constant \(\beta \), we obtain

$$\begin{aligned} w(y)-w(x)-w'(x)(y-x) \le \frac{\beta }{2}|y-x|^2. \end{aligned}$$

Substituting \(y=p(n)\) and \(x=p^*\) in the above equation, we obtain

$$\begin{aligned} w(p(n))-w(p^*)-w'(p^*)(p(n)-p^*)&\le K |p(n)-p^*|^2, \nonumber \\ w(p(n))-w(p^*)-\sqrt{a}w'(p^*)z(n)&\le K a |z(n)|^2. \end{aligned}$$

Using Eqs. (29) and (30) and Lemma 4, we obtain

$$\begin{aligned} |\zeta -\sqrt{a}w'(p^*)z(n)|&\le K a |z(n)|^2, \nonumber \\&\le K |p(n)-p^*|^2, \nonumber \\&\le K a. \end{aligned}$$

Multiplying both sides of the above inequality by \(\sqrt{a}\), we obtain

$$\begin{aligned} \left| \sqrt{a}\zeta -aw'(p^*)z(n)\right| \le K a^{3/2}, \end{aligned}$$


or, equivalently,

$$\begin{aligned} \left| \mathrm{E} [\Delta z(n)|z(n)]- aw'(p^*)z(n)\right| \le K \sqrt{a}, \end{aligned}$$

which implies Eq. (15). To derive Eq. (16), let us define

$$\begin{aligned} \eta =\frac{\mathrm{E} \left[ \Delta z^2(n)|z(n)\right] }{a} = S(p(n))= \tilde{S}(p(n)) + \zeta ^2. \end{aligned}$$

Subtracting \(\tilde{S}(p^*)\) from both sides of the above equation and applying the triangle inequality, we obtain

$$\begin{aligned} |\eta - \tilde{S}(p^*)|&= |\tilde{S}(p(n)) + \zeta ^2 - \tilde{S}(p^*)| \nonumber \\&\le |\tilde{S}(p(n)) - \tilde{S}(p^*)|+ |\zeta ^2|. \end{aligned}$$

Since \(\tilde{S}(.)\) is Lipschitz, we have

$$\begin{aligned} |\tilde{S}(p(n)) - \tilde{S}(p^*)| \le K |p(n) - p^*|. \end{aligned}$$

Substituting Eq. (30) into Eq. (35), we obtain

$$\begin{aligned} |\eta - \tilde{S}(p^*)| \le K |p(n) - p^*| + K |p(n) - p^*|^2 \end{aligned}$$

Using Lemma 4, we have \(\mathrm{E}\left[ p(n)-p^*\right] ^2 \le Ka \) and \(\mathrm{E}\left| p(n)-p^*\right| \le K\sqrt{a} \). Thus, \(\mathrm{E}|\eta - \tilde{S}(p^*)| \rightarrow 0\) as \(a \rightarrow 0\), which confirms Eq. (16). Equation (17) follows by observing that

$$\begin{aligned} \mathrm{E} \left[ \left. \left| \frac{\Delta p(n) }{a}\right| ^3\right| p(n)=p\right] =\xi (p) < \xi < \infty \end{aligned}$$

From the above bound, we obtain

$$\begin{aligned} \mathrm{E} \left[ \left. \left| \frac{\Delta p(n)}{a}\right| ^3\right| p(n) \right]&< \xi \nonumber \\ \mathrm{E} \left[ \left. \frac{|\Delta z(n)|^3}{a^{3/2}}\right| p(n)\right]&< \xi \nonumber \\ \mathrm{E} \left[ \left. |\Delta z(n)|^3\right| p(n)\right]&< \xi a ^{3/2} \end{aligned}$$

where \(\xi a ^{3/2} \rightarrow 0\) as \(a \rightarrow 0\). This completes the proof of this lemma. \(\square \)

Proof of Theorem 2

Let \(h(u)=\mathrm{E}\left[ e^{iuz(n)}\right] \) be the characteristic function of \(z(n)\). Then, using the third-order Taylor expansion of \(e^{iu}\) for real \(u\), we obtain

$$\begin{aligned} \mathrm{E}\left[ \left. e^{iu\Delta z(n)}\right| z(n)\right]&= 1 + iu \mathrm{E}[\Delta z(n)|z(n)] - \frac{u^2}{2} \mathrm{E}\left[ \left. \Delta z^2(n)\right| z(n)\right] \nonumber \\&\quad + k |u|^3 \mathrm{E}\left[ \left. |\Delta z(n)|^3 \right| z(n)\right] , \end{aligned}$$

where \(k \le 1/6\); thus

$$\begin{aligned} h(u)&= \mathrm{E}\left[ e^{iuz(n+1)}\right] , \nonumber \\&= \mathrm{E}\left[ e^{iuz(n)} \mathrm{E}\left( \left. e^{iu\Delta z(n)}\right| z(n)\right) \right] ,\nonumber \\&= h(u) + iu \mathrm{E}\left[ e^{iuz(n)}\mathrm{E}\left\{ \Delta z(n)|z(n)\right\} \right] , \nonumber \\&- \frac{u^2}{2} \mathrm{E}\left[ e^{iuz(n)}\mathrm{E}\left\{ \left. \Delta z^2(n)\right| z(n)\right\} \right] + k |u|^3 \mathrm{E}\left[ e^{iuz(n)}\mathrm{E}\left\{ \left. |\Delta z(n)|^3 \right| z(n)\right\} \right] . \nonumber \\ \end{aligned}$$

Cancelling \(h(u)\) and dividing by \(u\) yields

$$\begin{aligned}&i \mathrm{E}\left[ e^{iuz(n)}\mathrm{E}\left\{ \Delta z(n)|z(n)\right\} \right] - \frac{u}{2} \mathrm{E}\left[ e^{iuz(n)}\mathrm{E}\left\{ \left. \Delta z^2(n)\right| z(n)\right\} \right] \nonumber \\&\qquad +\, k |u|^2 \mathrm{E}\left[ e^{iuz(n)}\mathrm{E}\left\{ \left. |\Delta z(n)|^3 \right| z(n)\right\} \right] =0. \end{aligned}$$

Thus, using the estimates of Lemma 5, we have

$$\begin{aligned}&i a w'(p^*)\mathrm{E}\left[ e^{iuz(n)}z(n)\right] -\frac{u}{2} \tilde{S}(p^*) \mathrm{E}\left[ e^{iuz(n)}\right] +\mathrm{E}\left[ o(a)\right] \nonumber \\&\qquad +\,u\mathrm{E}\left[ o(a)\right] +u^2\mathrm{E}\left[ o(a)\right] =0. \end{aligned}$$

From Eqs. (15) and (17), it is evident that \(\mathrm{E}[|z(n)|] < \infty \) when \(a\) is small or

$$\begin{aligned} a w'(p^*)\frac{dh(u)}{du}-\frac{u}{2} \tilde{S}(p^*) h(u)+\mathrm{E}\left[ o(a)\right] +u\mathrm{E}\left[ o(a)\right] +u^2\mathrm{E}\left[ o(a)\right] =0. \end{aligned}$$

Dividing the above equation by \(aw'(p^*)\) and using the fact that \(w'(p^*)<0\), we obtain

$$\begin{aligned} \frac{dh(u)}{du}+u\frac{\tilde{S}(p^*)}{2 \left| w'(p^*)\right| } h(u)+\epsilon (u) = 0, \end{aligned}$$


where

$$\begin{aligned} \varphi = \sup _u \frac{|\epsilon (u)|}{1+u^2} \rightarrow 0, \end{aligned}$$

as \(a \rightarrow 0\). Since \(h(0)=1\), it follows that

$$\begin{aligned} h(u)=e^{-\frac{(u\sigma )^2}{2}}\left( 1-\int _0^ue^{\frac{(\sigma x)^2}{2}}\epsilon (x)dx\right) , \end{aligned}$$

where \(\sigma ^2=\frac{\tilde{S}(p^*)}{2 \left| w'(p^*)\right| }\). But we have

$$\begin{aligned} \left| \int _0^{|u|}e^{\frac{(\sigma x)^2}{2}}\epsilon (x)dx\right| \le \varphi \int _0^{|u|}e^{\frac{(\sigma x)^2}{2}}\left( 1+x^2\right) dx \rightarrow 0, \end{aligned}$$

as \(a \rightarrow 0\); thus

$$\begin{aligned} h(u) \rightarrow e^{-\frac{(u\sigma )^2}{2}}. \end{aligned}$$

Since a characteristic function determines its distribution uniquely, and \(e^{-\frac{(u\sigma )^2}{2}}\) is the characteristic function of \(N(0,\sigma ^2)\), we obtain

$$\begin{aligned} z(n) \sim N(0,\sigma ^2), \end{aligned}$$

and hence the theorem. \(\square \)
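The asymptotic variance \(\sigma ^2=\tilde{S}(p^*)/(2\left| w'(p^*)\right| )\) of Theorem 2 can be checked by simulation. With the illustrative choices \(w(p)=p^*-p\) (so \(w'(p^*)=-1\)) and unit-variance noise (so \(\tilde{S}(p^*)=1\)), the normalized error \(z=(p-p^*)/\sqrt{a}\) should have variance close to \(1/2\):

```python
import random

def z_variance(a, p_star=0.6, n_steps=200000, seed=7):
    """Estimate Var[z] for z = (p - p*)/sqrt(a) under the recursion
    p(n+1) = p(n) + a*(w(p) + noise), with w(p) = p* - p and noise
    of unit variance (illustrative assumptions)."""
    rng = random.Random(seed)
    p, acc, count = p_star, 0.0, 0
    for n in range(n_steps):
        noise = rng.choice([-1.0, 1.0])    # zero-mean, variance 1
        p += a * ((p_star - p) + noise)
        if n > n_steps // 10:              # discard the transient
            acc += (p - p_star) ** 2
            count += 1
    return acc / (count * a)               # Var[p - p*] / a = Var[z]

var_z = z_variance(0.01)
sigma2 = 1.0 / 2.0   # theoretical S_tilde(p*) / (2*|w'(p*)|)
print(round(var_z, 3), sigma2)
```

For small \(a\) the empirical variance of \(z(n)\) settles near the predicted \(\sigma ^2\), even though the driving noise here is two-valued rather than Gaussian.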

Proof of Theorem 3

In the equilibrium state, the average penalty rates of the two actions are equal, i.e., \(f_1(p^*)=f_2(p^*),\) which yields \(c_1\pi ^*=c_2(1-\pi ^*)\). Thus, we have

$$\begin{aligned} \pi ^* = \frac{\delta }{\delta +P_C}, \end{aligned}$$

where \(\delta =\mathrm{Prob} \left[ \hat{B}_h < p_h\right] \). Thus, the average number of blocked new calls, \(\bar{N}_n\), is equal to

$$\begin{aligned} \bar{N}_n&= \lambda _n \left[ 1-\pi ^*(1-P_C)\right] , \nonumber \\&= \lambda _n (1+\delta ) \frac{P_C}{P_C+\delta }. \end{aligned}$$

Computing the derivative of \(\bar{N}_n\) with respect to \(\delta \) gives

$$\begin{aligned} \frac{\partial \bar{N}_n}{\partial \delta }&= - \lambda _n \frac{P_C(1-P_C)}{(P_C+\delta )^2},\nonumber \\&< 0. \end{aligned}$$

Thus, \(\bar{N}_n\) is a strictly decreasing function of \(\delta \). Since the adaptive UFC algorithm gives higher priority to hand-off calls, it attempts to minimize their dropping probability. Using this fact and Eq. (40), it is evident that \(\bar{N}_n\) is minimized, which in turn minimizes the blocking probability of new calls, and hence the theorem follows.\(\square \)
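The monotonicity argument of Theorem 3 is easy to verify numerically: \(\bar{N}_n(\delta )\) decreases strictly in \(\delta \) whenever \(0<P_C<1\). Illustrative values of \(\lambda _n\) and \(P_C\) are assumed below:

```python
# Theorem 3's expression: N_bar = lambda_n * (1 + delta) * P_C / (P_C + delta),
# evaluated with illustrative lambda_n and P_C (not from the paper).
lam_n, P_C = 5.0, 0.05

def blocked_new_calls(delta):
    return lam_n * (1 + delta) * P_C / (P_C + delta)

vals = [blocked_new_calls(d) for d in (0.01, 0.1, 0.5, 0.9)]
print([round(v, 4) for v in vals])
assert all(x > y for x, y in zip(vals, vals[1:]))  # strictly decreasing in delta
```

The decrease matches the sign of the derivative computed in the proof, which is proportional to \(-P_C(1-P_C)\).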


About this article


Cite this article

Beigy, H., Meybodi, M.R. A learning automata-based adaptive uniform fractional guard channel algorithm. J Supercomput 71, 871–893 (2015) doi:10.1007/s11227-014-1330-7



Keywords

  • Reinforcement learning
  • Learning automata
  • Uniform fractional guard channel policy
  • Adaptive uniform fractional guard channel policy