
SN Computer Science 1:38

Diffusion Approximation for an Open Queueing Network with Limited Number of Customers and Time-Dependent Service

  • D. Kopats
  • M. Matalytski
Original Research
Part of the following topical collections:
  1. Modelling methods in Computer Systems, Networks and Bioinformatics

Abstract

The object of this research is an open queueing network (QN) with a single class of customers, in which the total number of customers is limited. Service parameters depend on time, and the routing of customers is determined by an arbitrary stochastic transition probability matrix, which also depends on time. The service times of customers in each queue of the system are exponentially distributed with FIFO service, and it is assumed that a birth and death process generates and destroys the customers. The random vector that determines the network state, forming a Markov random process, is introduced. The purpose of the research is an asymptotic analysis of this Markov process, describing the queueing network state with a large number of customers, obtained from a system of differential equations, and used to find the mean relative number of customers in the network queues at any time. The results are illustrated with a specific example. This approach can be used to model processes of customer service in insurance companies, banks, logistics companies and other cyber-economical or service organizations.

Keywords

Expected revenues · Limited number of customers · Diffusion approximation · Distribution density for the expected revenue

Introduction

It is well known that various real systems modelled by queueing networks, such as computer systems, networks or Cloud servers, have time-dependent parameters [7, 8, 12]. In the process of their design and optimisation [9], one needs to be able to determine their time-dependent probability characteristics.

Exact results for non-stationary state probabilities of Markov networks are difficult to obtain because of the high dimensionality of the systems of difference-differential equations which they satisfy. To find these probabilities under high load conditions, the diffusion approximation method is often used [1, 2, 3, 6].

Queueing networks (QN) are mathematical models of various economical and technical systems, associated with the service of customers, jobs or network packets [4, 5, 10]. Although the total number of customers in such systems may be limited, this number changes over time, which allows us to use models of open QN with a limited number of customers.

Consider an open Markov network composed of \(n+1\) queueing systems, in which the total number of customers is limited by K. Suppose that the service parameters of this network depend on time t. Let the number of service lines in system \(S_i\) be an integer-valued function \(m_i(t)\), \(i=\overline{1,n}\). The probability that a customer in a service line of system \(S_i\) completes service during the time interval \([t, t + \Delta t)\) equals \(\mu _i(t) \Delta t + o(\Delta t)\). Customers are selected for service according to the FIFO discipline. When service of a customer in QS \(S_i\) is completed, the customer moves with probability \(p_{ij}(t)\) to the queue of QS \(S_j\). The transition matrix \(P(t)=||p_{ij}(t)||\) is the matrix of transition probabilities of an irreducible Markov chain and depends on time, \(0 \le p_{ij}(t) \le 1\). In addition, we assume that the number of customers in the system \(S_0\) is determined not only by the functions \(\mu _0(t), ~p_{i0}(t), m_0(t)\), but also by a birth and death process, which generates new customers with intensity \(\lambda ^+ (t)\) and destroys existing ones with intensity \(\lambda ^- (t)\). Hence, the object under study is an open QN whose total number of customers is limited and varies in accordance with the birth and death process occurring in the system \(S_0\). The network state is determined by the vector
$${\mathbf {k}}(t)=(k_0(t),k_1(t),\ldots,k_n(t)),$$
(1)
where \(k_i(t)\) is the number of customers in the system \(S_i\) at time t. In view of the above, vector (1) is a Markov random process with continuous time and a finite number of states. Obviously, the total number of customers served in the network at time t equals \(K(t) = \sum\nolimits_{{i = 0}}^{n} {k_{i} } (t)\). We carry out an asymptotic analysis of the Markov process (1) with a large number of customers, using the technique proposed in [5, 6], supposing that the QN operates under a high-load regime: the value K(t) is sufficiently large, but bounded, \(0 \ll K(t) < K\). In this article a second-order partial differential equation is derived, namely the Kolmogorov–Fokker–Planck equation for the probability density of the investigated process. From it, a system of non-homogeneous ordinary differential equations of the first order for the mean values of the components of the network state vector is obtained. The solution of this system allows us to find the mean relative number of customers in each QS at any time of interest.

The Derivation of the System of Differential Equations for the Average Relative Number of Customers in the Network Systems

Theorem 1

In the asymptotic case of large K, the density p(x,t) of the variable \(\mathbf {\xi (t)}=(\xi _0(t),\xi _1(t),\ldots,\xi _n(t))=\Biggl (\frac{\mathbf {k}(t)}{K}\Biggr )=\Biggl (\frac{k_0(t)}{K},\frac{k_1(t)}{K},\ldots,\frac{k_n(t)}{K}\Biggr )\) satisfies the Kolmogorov–Fokker–Planck equation:
$$\begin{aligned} \frac{\partial p(x,t)}{\partial t} & = - \sum _{i=0}^{n} \frac{\partial }{\partial x_i} (A_i(x,t)p(x,t))\nonumber \\&\quad +\frac{\epsilon }{2}\sum _{i,j=0}^{n} \frac{\partial ^2}{\partial x_i \partial x_j}(B_{ij}(x,t)p(x,t)) \end{aligned}$$
(2)
at points of existence of derivatives, in which
$$\begin{aligned} A_{0} (x,t) & = \sum\limits_{{j = 0}}^{n} {\mu _{j} } (t)\min (l_{j} (t),x_{j} )p_{{j0}} (t) \\ & \quad - \lambda _{0}^{ - }(t) x_{0} + \lambda _{0}^{ + } (t)\left(1 - \sum\limits_{{i = 0}}^{n} {x_{i} } \right), \\ \end{aligned}$$
(3)
$$A_{i} (x,t) = \sum\limits_{{j = 0}}^{n} {\mu _{j} } (t)\min (l_{j} (t),x_{j} )p_{{ji}} (t),$$
(4)
$$\begin{aligned} B_{{00}} (x,t) & = \sum\limits_{{j = 0}}^{n} {\mu _{j} } (t)\min (l_{j} (t),x_{j} )p_{{0j}} (t) \\ & \quad + \lambda _{0}^{ - }(t) x_{0} + \lambda _{0}^{ + } (t)\left(1 - \sum\limits_{{i = 0}}^{n} {x_{i} } \right), \\ \end{aligned}$$
(5)
$$B_{{ii}} (x,t) = \sum\limits_{{j = 0}}^{n} {\mu _{j} } (t)\min (l_{j} (t),x_{j} )p_{{ij}} (t),$$
(6)
$$B_{{ij}} (x,t) = \mu _{i} (t)\min (l_{i} (t),x_{i} )p_{{ij}} (t),$$
(7)
$$p_{{ji}} (t) = \left\{ \begin{array}{ll} p_{ji}(t), & i \ne j,\\ p_{ji}(t) - 1, & i = j,\end{array} \right. \qquad l_{j} (t) = \frac{m_{j}(t)}{K}.$$
(8)
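To make the coefficients concrete, the drift vector (3)–(4) can be evaluated numerically. The following is a minimal Python sketch, not code from the paper: the function name and argument layout are ours, and the birth term is taken with a positive sign in the drift of component 0, as the expansion in the proof indicates.

```python
import numpy as np

def drift(x, t, mu, l, P, lam_plus, lam_minus):
    """Drift vector A(x, t) of equations (3)-(4).

    x, mu, l are length-(n+1) arrays (component 0 = system S_0) and P is
    the routing matrix, all already evaluated at time t; t is kept in the
    signature only for symmetry with the paper's notation."""
    P_tilde = P - np.eye(len(x))             # diagonal shift of (8)
    served = mu * np.minimum(l, x)           # mu_j(t) * min(l_j(t), x_j)
    A = served @ P_tilde                     # A_i = sum_j served_j * p~_ji
    # Birth-and-death terms act on component 0 only.
    A[0] += lam_plus * (1.0 - x.sum()) - lam_minus * x[0]
    return A
```

Note that the routing part of the drift conserves mass (the rows of the shifted matrix sum to zero), so the total drift \(\sum_i A_i\) reduces to the birth-and-death contribution alone.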

Proof

Let \(I_i\) denote the \((n+1)\)-vector whose components all equal zero except the i-th, which equals 1, \(i=\overline{0,n}\). Consider all possible transitions to the state \({\mathbf {k}}(t+ \Delta t)=({\mathbf {k}},t+ \Delta t )\) of the process \({\mathbf {k}}(t)\) during time \(\Delta t\):
  • from state \(({\mathbf {k}}+I_i-I_j,t)\) can get to \(({\mathbf {k}},t+ \Delta t )\) with probability \(\mu _{i} (t)\min (m_{i} (t),\) \(k_i(t)+1)p_{ij}(t) \Delta t+ o(\Delta t), i,j= \overline{0,n}\)

  • from state \(({\mathbf {k}}+I_0,t)\) can get to \(({\mathbf {k}},t+ \Delta t )\) with probability \(\lambda ^-_0(t)(k_0(t)+1)\Delta t+ o(\Delta t),\)

  • from state \(({\mathbf {k}}-I_0,t)\) can get to \(({\mathbf {k}},t+ \Delta t )\) with probability \(\lambda _{0}^{ + } (t)\left( {K - \sum\nolimits_{{i = 0}}^{n} k_{i}(t) + 1} \right)\Delta t + o(\Delta t),\)

  • from state \(({\mathbf {k}},t)\) can get to \(({\mathbf {k}},t+ \Delta t )\) with probability

    \(1 - \left[ {\sum\nolimits_{{i,j = 0}}^{n} {\mu _{i} } (t)\min (m_{i} (t),k_{i} (t))p_{{ij}} (t) + \lambda _{0}^{ - } (t)k_{0} (t) + \lambda _{0}^{ + } (t)\left(K - \sum\nolimits_{{i = 0}}^{n} {k_{i} } (t)\right)} \right]\Delta t + o(\Delta t)\),

    and from other states with probability \(o(\Delta t)\).
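The transition list above can be turned into a small simulation of the chain \(\mathbf{k}(t)\). Below is a crude Gillespie-style sketch (all names are ours, not from the paper); since it freezes the time-dependent rates between jumps, it is only an approximation when the parameters vary quickly.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(k0, T, mu, m, P, lam_plus, lam_minus, K):
    """Simulate k(t) with the transition rates listed above: service
    completions S_i -> S_j, births into S_0 and deaths from S_0.
    mu, m, P, lam_plus, lam_minus are callables of t.  Time-dependent
    rates are frozen between jumps, so this is only approximate."""
    k = np.array(k0, dtype=int)
    t, n1 = 0.0, len(k0)
    while t < T:
        serv = mu(t) * np.minimum(m(t), k)      # service completion rates
        birth = lam_plus(t) * (K - k.sum())     # new customer enters S_0
        death = lam_minus(t) * k[0]             # a customer leaves S_0
        rates = np.concatenate([(serv[:, None] * P(t)).ravel(),
                                [birth, death]])
        total = rates.sum()
        if total <= 0:                          # nothing can move
            break
        t += rng.exponential(1.0 / total)       # time to the next event
        ev = rng.choice(rates.size, p=rates / total)
        if ev == rates.size - 2:
            k[0] += 1                           # birth
        elif ev == rates.size - 1:
            k[0] -= 1                           # death
        else:
            i, j = divmod(ev, n1)               # service transition S_i -> S_j
            k[i] -= 1
            k[j] += 1
    return k
```

By construction every rate vanishes at the boundary of the state space, so the simulated state keeps \(k_i(t)\ge 0\) and \(\sum_i k_i(t)\le K\).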

From the formula of total probability, we obtain a system of difference equations for the state probabilities \(P({\mathbf {k}},t)\):
$$P({\mathbf{k}},t + \Delta t) =$$
$$\sum\limits_{{i,j = 0}}^{n} P ({\mathbf{k}} + I_{i} - I_{j} ,t)(\mu _{i} (t)\min (m_{i} (t),k_{i} (t) + 1)p_{{ij}} (t)\Delta t + o(\Delta t))$$
$$+ P({\mathbf{k}} + I_{0} ,t)(\lambda _{0}^{ - } (t)(k_{0} (t) + 1)\Delta t + o(\Delta t))$$
$$+ P({\mathbf{k}} - I_{0} ,t)\left( {\lambda _{0}^{ + } (t)\left( {K - \sum\limits_{{i = 0}}^{n} k_{i}(t) + 1} \right)\Delta t + o(\Delta t)} \right)$$
$$+ P({\mathbf{k}},t)\left(1 - \left[ {\sum\limits_{{i,j = 0}}^{n} {\mu _{i} } (t)\min (m_{i} (t),k_{i} (t))p_{{ij}} (t)} \right.\right.$$
$$\left.\left. { + \lambda _{0}^{ - } (t)k_{0} (t) + \lambda _{0}^{ + } (t)\left( {K - \sum\limits_{{i = 0}}^{n} {k_{i} } (t)} \right)} \right]\Delta t + o(\Delta t)\right).$$
Therefore, the system of difference-differential Kolmogorov equations for these probabilities is
$$\frac{{{\text{d}}P({\mathbf{k}},t)}}{{{\text{d}}t}} = \sum\limits_{{i,j = 0}}^{n} {\mu _{i} } (t)\min (m_{i} (t),k_{i} (t))p_{{ij}} (t)(P({\mathbf{k}} + I_{i} - I_{j} ,t) - P({\mathbf{k}},t))$$
$$+ \sum\limits_{{i,j = 0}}^{n} {(\mu _{i} (t)\min (m_{i} (t)} ,k_{i} (t) + 1) - \mu _{i} (t)\min (m_{i} (t),k_{i} (t))p_{{ij}} (t)P({\mathbf{k}} + I_{i} - I_{j} ,t)$$
$$+ \lambda _{0}^{ - } (t)k_{0} (t)(P({\mathbf{k}} + I_{0} ,t) - P({\mathbf{k}},t)) + \lambda _{0}^{ - } (t)P({\mathbf{k}} + I_{0} ,t)$$
$$+ \lambda _{0}^{ + } (t)\left(K - \sum\limits_{{i = 0}}^{n} {k_{i} } (t)\right)(P({\mathbf{k}} - I_{0} ,t) - P({\mathbf{k}},t)) + \lambda _{0}^{ + } (t)P({\mathbf{k}} - I_{0} ,t).$$
(9)
Solving the system (9) in analytic form is difficult. We shall, therefore, consider the asymptotic case of a large number of customers in the network, that is, we assume that \(K \gg 1\). To find the probability distribution of the random vector \(\mathbf{k}(t)\), we pass to relative variables and consider the vector \(\xi (t)\). Possible values of this vector at a fixed t belong to the bounded closed set
$$G = \left\{ {{\mathbf{x}} = (x_{0} ,x_{1} , \ldots ,x_{n} ):x_{i} > 0,i = \overline{{0,n}} ,~\sum\limits_{{i = 0}}^{n} {x_{i} } \le 1} \right\},$$
which contains the nodes of an \((n +1)\)-dimensional lattice at distance \(\epsilon =\frac{1}{K}\) from each other. As K increases, the “filling density” of the set G by possible values of the components of the vector \(\xi (t)\) grows, and it becomes possible to assume that \(\xi (t)\) has a continuous distribution with probability density function p(x,t), which satisfies the asymptotic relation
$$\begin{aligned} K^{n+1}P(\mathbf {k},t)=K^{n+1}P(\mathbf {x}K,t) \xrightarrow[K \rightarrow \infty ]{} p(x,t). \end{aligned}$$
We hence use the approximation function:
$$\begin{aligned} K^{n+1}P(\mathbf {k},t)=K^{n+1}P(\mathbf {x}K,t)= p(x,t), \quad x \in G. \end{aligned}$$
Rewriting the system of equations (9) for the density p(x,t), we obtain:
$$\frac{{{\text{d}}p({\mathbf{x}},t)}}{{{\text{d}}t}} = \sum\limits_{{i,j = 0}}^{n} {\mu _{i} } (t)\min (l_{i} (t),x_{i} (t))p_{{ij}} (t)(p({\mathbf{x}} + I_{i} - I_{j} ,t) - p({\mathbf{x}},t))$$
$$+ \sum\limits_{{i,j = 0}}^{n} {\mu _{i} } (t)\frac{{\partial \min (l_{i} (t),x_{i} (t))}}{{\partial x_{i} }}p_{{ij}} (t)p({\mathbf{x}} + I_{i} - I_{j} ,t)$$
$$+ \lambda _{0}^{ - } (t)x_{0} (t)(p({\mathbf{x}} + I_{0} ,t) - p({\mathbf{x}},t)) + \lambda _{0}^{ - } (t)p({\mathbf{x}} + I_{0} ,t)$$
$$+ \lambda _{0}^{ + } (t)\left( {1 - \sum\limits_{{i = 0}}^{n} {x_{i} } (t)} \right)(p({\mathbf{x}} - I_{0} ,t) - p({\mathbf{x}},t)) + \lambda _{0}^{ + } (t)p({\mathbf{x}} - I_{0} ,t).$$
Using that \(\epsilon K=1\) , we obtain the following representation:
$$\frac{\partial p({\mathbf{x}},t)}{\partial t} = \sum\limits_{{i,j = 0}}^{n} {\mu _{i} } (t)\min (l_{i} (t),x_{i} )p_{{ij}} (t)\left[ \left( \frac{\partial p({\mathbf{x}},t)}{\partial x_{i} } - \frac{\partial p({\mathbf{x}},t)}{\partial x_{j} } \right) \right.$$
$$\left. + \frac{ \epsilon }{2}\left( \frac{\partial ^{2} p({\mathbf{x}},t)}{\partial x_{i}^{2} } - 2\frac{\partial ^{2} p({\mathbf{x}},t)}{\partial x_{i} \partial x_{j} } + \frac{\partial ^{2} p({\mathbf{x}},t)}{\partial x_{j}^{2} } \right) \right]$$
$$+ \sum\limits_{{i,j = 0}}^{n} {\mu _{i} } (t)\frac{\partial \min (l_{i} (t),x_{i} )}{\partial x_{i} }p_{{ij}} (t)\left[ p({\mathbf{x}},t) + \epsilon \left( \frac{\partial p({\mathbf{x}},t)}{\partial x_{i} } - \frac{\partial p({\mathbf{x}},t)}{\partial x_{j} } \right) \right.$$
$$\left. + \frac{ \epsilon ^{2} }{2}\left( \frac{\partial ^{2} p({\mathbf{x}},t)}{\partial x_{i}^{2} } - 2\frac{\partial ^{2} p({\mathbf{x}},t)}{\partial x_{i} \partial x_{j} } + \frac{\partial ^{2} p({\mathbf{x}},t)}{\partial x_{j}^{2} } \right) \right]$$
$$+ \lambda _{0}^{ - } (t)x_{0} \left[ \frac{\partial p({\mathbf{x}},t)}{\partial x_{0} } + \frac{ \epsilon }{2}\frac{\partial ^{2} p({\mathbf{x}},t)}{\partial x_{0}^{2} } \right] + \lambda _{0}^{ - } (t)\left[ p({\mathbf{x}},t) + \epsilon \frac{\partial p({\mathbf{x}},t)}{\partial x_{0} } + \frac{ \epsilon ^{2} }{2}\frac{\partial ^{2} p({\mathbf{x}},t)}{\partial x_{0}^{2} } \right]$$
$$+ \lambda _{0}^{ + } (t)\left( 1 - \sum\limits_{{i = 0}}^{n} {x_{i} } \right)\left[ - \frac{\partial p({\mathbf{x}},t)}{\partial x_{0} } + \frac{ \epsilon }{2}\frac{\partial ^{2} p({\mathbf{x}},t)}{\partial x_{0}^{2} } \right]$$
$$+ \lambda _{0}^{ + } (t)\left[ p({\mathbf{x}},t) - \epsilon \frac{\partial p({\mathbf{x}},t)}{\partial x_{0} } + \frac{ \epsilon ^{2} }{2}\frac{\partial ^{2} p({\mathbf{x}},t)}{\partial x_{0}^{2} } \right] + o( \epsilon ^{2} ).$$
(10)
Using the notation (3)–(8) gives equation (2), which completes the proof. \(\square\)
Then, according to [11], the expectations \(n_i(t)=M(\xi _i(t)), i=\overline{0,n}\), accurate to terms of order \(O(\epsilon ^2)\), are determined from the system of differential equations:
$$\frac{{{\text{d}}n_{i} (t)}}{{{\text{d}}t}} = A_{i} (n(t),t),\quad i = \overline{{0,n}} .$$
(11)
From (3) and (4), it is obvious that the right-hand sides of (11) are continuous piecewise-linear functions. Such systems are conveniently solved by dividing the phase space and finding solutions in the regions where the right-hand sides are linear.
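Alternatively, system (11) can be integrated numerically without splitting the phase space, since standard ODE solvers cope with the continuous piecewise-linear field directly. A minimal SciPy sketch for a hypothetical two-system network follows; all numerical values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, n, mu, l, P, lam_plus, lam_minus):
    """Right-hand side A_i(n, t) of system (11).  mu, l, P, lam_plus and
    lam_minus are callables of t; min() makes the field piecewise linear."""
    P_tilde = P(t) - np.eye(len(n))          # diagonal shift of (8)
    served = mu(t) * np.minimum(l(t), n)     # effective service rates
    A = served @ P_tilde
    A[0] += lam_plus(t) * (1.0 - n.sum()) - lam_minus(t) * n[0]
    return A

# Hypothetical two-system network (S_0, S_1); all values illustrative.
sol = solve_ivp(
    rhs, (0.0, 5.0), [0.3, 0.1],
    args=(lambda t: np.array([1.0, 0.8]),          # mu(t)
          lambda t: np.array([0.2, 0.2]),          # l(t)
          lambda t: np.array([[0.0, 1.0],
                              [1.0, 0.0]]),        # P(t)
          lambda t: 0.5,                           # lambda_0^+(t)
          lambda t: 0.4))                          # lambda_0^-(t)
```

Because each rate vanishes at the corresponding boundary, the trajectory remains in the simplex \(n_i\ge 0\), \(\sum_i n_i\le 1\), which is a useful sanity check on any implementation.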
Let \(\varOmega (t) = \{0,1,2,\ldots,n \}\) be the index set of the components of n(t). We divide it into two disjoint subsets \(\varOmega _0(t),\varOmega _1(t)\): \(\varOmega _0(t)= \{i: l_i(t)<n_i(t) \le 1 \}\), \(\varOmega _1(t)= \{j: 0<n_j(t) \le l_j(t) \}\). For fixed t, the number of partitions is \(2^{n+1}\). Each partition defines a region of \(G(t)= \{ n(t): n_i(t)\ge 0, \sum _{i=0}^{n}n_i(t) \le 1 \}\) of the form \(G_\tau (t)= \{n(t): l_i(t)<n_i(t) \le 1, i \in \varOmega _0(t),~0<n_j(t) \le l_j(t), j \in \varOmega _1(t), \sum _{c=0}^{n}n_c(t) \le 1 \},\) \(\tau = 1,2,\ldots,2^{n+1},\) \(\bigcup _\tau G_\tau (t)=G(t).\) The system of equations (11) can be written explicitly in each region of this phase-space subdivision. Consider the region \(A: \varOmega _0(t)= \{0\},\varOmega _1(t)= \{1,2,\ldots,n \}\), which corresponds to the absence (on average) of queues in the systems \(S_1,S_2,\ldots,S_n\) and the presence of a queue in the system \(S_0\). The system of differential equations (11) in this region takes the form:
$$n^{\prime}_{0} (t) = \sum\limits_{{j = 1}}^{n} {\mu _{j} } (t)n_{j} (t)p_{{j0}} (t) + \mu _{0} (t)l_{0} (t)p_{{00}} (t) - \lambda _{0}^{ - }(t) n_{0} (t) + \lambda _{0}^{ + } (t)\left( {1 - \sum\limits_{{i = 0}}^{n} {n_{i} } (t)} \right),$$
(12)
$$n'_i(t)=\sum _{j=1}^{n} \mu _j(t)n_j(t)p_{ji}(t)+\mu _0(t)l_0(t)p_{0i}(t), \quad i=\overline{1,n}.$$
A procedure that implements such calculations has been developed in the Mathematica computer algebra system. Below, it is used in one example of calculating the mean relative number of customers in the network systems, which serves as a mathematical model of processing customer requests in an insurance company.

Example

Consider a QN consisting of 6 QS \(S_{0} ,S_{1} ,S_{2} ,S_{3} ,S_{4} ,S_{5}\), wherein \(K = 100000\). Define the following transition probabilities between the QS: \(p_{{05}} (t) = 0.2\cos ^{2} (3t);\) \(p_{{04}} (t) = 0.2\sin ^{2} (3t);\) \(p_{{03}} (t) = 0.4\cos ^{2} t;\) \(p_{{02}} (t) = 0.4\sin ^{2} t;\) \(p_{{01}} (t) = 0.2\sin ^{2} (2t);\) \(p_{{00}} (t) = 0.2\cos ^{2} (2t);\) \(p_{{i0}} (t) = 1;\) \(p_{{ij}} = 0\) in the other cases; \(l_{0} (t) = \frac{{[5\sin (5t)]}}{{10000}}\), where \([\cdot ]\) denotes the integer part; \(N_{0} (0) = 17000;\) \(N_{1} (0) = 13000;\) \(N_{2} (0) = 25000;\) \(N_{3} (0) = 23000;\) \(N_{4} (0) = 12000;\) \(N_5(0)= 10000.\) Assume that \(n_{i} (0) = 0, i = \overline{{1,5}}\); \(\mu _{0} = 5/K,\mu _{1} = 0.4,\mu _{2} = 0.3,\mu _{3} = 0.4,\mu _{4} = 0.5,\mu _{5} = 0.5\). Solving the system (12) in the package Mathematica, we obtain:
$$\begin{aligned} N_0(t) & = (1.3^{-t}+1.2^{-t})(t^2-5t)+17000;\\ N_1(t) & = (e^{-t}+1.2^{-t})(-t^2-5t)+13000; \\ N_2(t) & = (e^{-t}+0.5^{-t})(-t^3-7t^2+8t)+25000;\\ N_3(t) & = (0.7^{-t}+1.3^{-t})(t^2-0.7t)+23000; \\ N_4(t) & = (0.9^{-t}+3^{-t})(-t^2-0.7t)+12000;\\ N_5(t) & = (0.9^{-t}+3^{-t}+0.5^{-t})(t^2-1.1t)+10000. \end{aligned}$$
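Such results can also be checked numerically by integrating the region-A system (12) for this example. The sketch below is hedged in several respects: the birth and death intensities \(\lambda_0^{\pm}\) are not stated in the example, so the values used here are our assumptions; the comma decimals of the source are read as decimal points; and equations (12) are applied over the whole time range, even where the trajectory might leave region A.

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 100_000
mu = np.array([5.0 / K, 0.4, 0.3, 0.4, 0.5, 0.5])
n_init = np.array([17000, 13000, 25000, 23000, 12000, 10000]) / K  # N_i(0)/K

def P(t):
    """Routing matrix of the example (decimal points assumed)."""
    P = np.zeros((6, 6))
    P[0] = [0.2 * np.cos(2 * t) ** 2, 0.2 * np.sin(2 * t) ** 2,
            0.4 * np.sin(t) ** 2, 0.4 * np.cos(t) ** 2,
            0.2 * np.sin(3 * t) ** 2, 0.2 * np.cos(3 * t) ** 2]
    P[1:, 0] = 1.0
    return P

l0 = lambda t: np.floor(5 * np.sin(5 * t)) / 10_000  # l_0(t), [.] = integer part
lam_plus = lam_minus = 0.1                           # assumed: not given in the example

def rhs(t, n):
    served = mu * n                                  # no queues in S_1..S_5 (region A)
    served[0] = mu[0] * min(l0(t), n[0])             # queue present in S_0
    A = served @ (P(t) - np.eye(6))
    A[0] += lam_plus * (1.0 - n.sum()) - lam_minus * n[0]
    return A

sol = solve_ivp(rhs, (0.0, 10.0), n_init, max_step=0.05)
N = K * sol.y                                        # back to absolute numbers
```

Plotting the rows of `N` against `sol.t` gives time-dependent curves of the mean numbers of customers that can be compared qualitatively with the closed-form expressions above.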

Conclusions

In this paper, a Markov queueing network with a limited number of customers of a single type has been investigated. The number of customers in the network varies in accordance with a birth-and-death process. To obtain a system of differential equations for the mean number of customers in each system, the method of diffusion approximation is applied, which allows one to find the mean numbers with high accuracy when the number of customers is large.

The results may be useful in modelling and optimization of customer service in computer systems, networks and in insurance companies, banks, logistics companies and other socio-economic and cyber-physical organizations [8, 12].

References

  1. Abdelrahman OH, Gelenbe E. Time and energy in team-based search. Phys Rev E. 2013;87:032125. https://doi.org/10.1103/PhysRevE.87.032125.
  2. Gelenbe E. Probabilistic models of computer systems. Part II: diffusion approximations, waiting times and batch arrivals. Acta Inform. 1979;12(4):285–303.
  3. Gelenbe E. Search in unknown random environments. Phys Rev E. 2010;82(6):061112.
  4. Gelenbe E, Muntz RR. Probabilistic models of computer systems—part I (exact results). Acta Inform. 1976;7(1):35–60.
  5. Gelenbe E, Pujolle G. Introduction to queueing networks. New York: Wiley; 1998.
  6. Kobayashi H. Application of the diffusion approximation to queueing networks. J ACM. 1974;21:456–69.
  7. Majewski K. Single class queueing networks with discrete and fluid customers on the time interval R. 2000;36:405.
  8. Matalytski M. Mathematical analysis of HM-networks and its application in transport logistics. Autom Remote Control. 1998;70:1683.
  9. Medvedev GA. On the optimization of the closed queueing system. Izvestiia Akademii Nauk SSSR Tekhnicheskaya Kibernetika. 1975;6:199.
  10. Medvedev GA. Closed queuing systems and their optimization. Izvestiia Akademii Nauk SSSR Tekhnicheskaya Kibernetika. 1978;6:199.
  11. Paraev YI. Introduction to statistical dynamics in control of processes and filtering. Moscow: Sovetskoye Radio; 1976.
  12. Rusilko TV, Matalytski MA. Queuing network models of claims processing in insurance companies. Saarbrucken; 2012.

Copyright information

© Springer Nature Singapore Pte Ltd 2019

Authors and Affiliations

  1. Yanka Koupala Grodno State University, Grodno, Belarus
  2. Czestochowa University of Technology, Czestochowa, Poland
