
1 Introduction

The behavior of various systems, including communication and computer systems as well as production and manufacturing processes, can be represented and analyzed through queuing network models to evaluate their performance [10, 25, 26]. System performance analysis consists in deriving a set of particular characteristics [2, 6, 7, 9]. This usually includes the queue length distribution and various performance indicators such as response time, throughput and utilization.

Tandem queues are widely used in the design, capacity planning and performance evaluation of computer and communication systems, call centers, flexible manufacturing systems, etc. Examples of their application to real systems, such as a two-transmitter communication network with DBA (Dynamic Bandwidth Allocation) and a service facility with front and back room operations, can be found in [24] and [1], respectively.

The theory behind tandem queues is well developed, see, e.g., [3-5, 11, 22, 23]. However, there is still great interest in more complicated setups involving blocking and deadlocking phenomena as well as different mechanisms for offering services. In particular, a two-node tandem queuing model with a BMAP (Batch Markovian Arrival Process) input flow and a non-exponential service time distribution is described in [8]. Additionally, systems with finite capacity queues under various blocking mechanisms and scheduling constraints are analyzed by the author in [12-21].

In [12, 13], closed-type, multi-center computer networks with different blocking strategies are investigated and measures of effectiveness based on Quality of Service (QoS) are studied. Markovian and semi-Markovian approaches for the analysis of open tandem networks with blocking are presented in [14, 16, 18, 20, 21]. Some two-stage tandem queues with blocking and an optional feedback are presented in [17, 19]. In such systems, feedback means that a task may return, with a fixed probability, to the first node of the tandem immediately after service at the second one. Tandems with feedback are usually more complex than those without, and they are mostly investigated under the assumption of a stationary Poisson arrival process and exponentially distributed service times. Blocking and deadlocking phenomena in an open, series-linked network model with HOL (head-of-line) priority feedback service were investigated and presented by the author in [15].

The rest of this paper is organized as follows: Sect. 2 presents and explains the analytical model. Section 3 analyzes a tandem network with blocking, feedback service and deadlock phenomena. Procedures for calculating performance measures and Quality of Service (QoS) parameters are presented in Sect. 4. Numerical results obtained using our solution technique are given in Sect. 5. Finally, Sect. 6 concludes the paper.

2 Model Specification and Description

The general outline of the model is as follows:

  • The arrival process from the source station is Poisson.

  • Each station consists of a single server and a buffer.

  • Both stations provide exponentially distributed service.

  • The deadlock resolution time is exponentially distributed with rate μd.

  • The scheduling discipline is FCFS.

  • The buffers have finite capacities m1 and m2.

Figure 1 presents a simplified two-server tandem setup of the proposed model. Tasks arrive from the source at station A according to a Poisson process with rate λ and are processed in FIFO order. The service proceeds as follows: a task first receives an exponentially distributed service with rate μA at station A; after service completion at station A, it proceeds to station B (exponentially distributed service with rate μB); once it finishes at station B, it is sent back to station A for re-processing with probability 1 − σ. We also assume that tasks, after being processed at station B, may leave the network with probability σ.

Fig. 1. Two-server tandem configuration with feedback.

The successive service times at both servers are mutually independent and do not depend on the state of the network. Finite capacity buffers (with capacities m1 and m2) are placed in front of each server. A task, upon service completion at server A, attempts with probability 1 − σ to join server B. If server B is full at that moment, the first server must hold the completed task and consequently becomes blocked (i.e., it is not available to serve incoming tasks) until the second server completes its service. The nature of the service process in this case depends only on the service rate at station B, which allows one to treat such a task as occupying an additional place in buffer B. Similarly, if the first buffer (with capacity m1), ahead of the first station, is full, then the source station or server B becomes blocked. In this case, the nature of the service process depends only on the service rate at station A, and we can treat these tasks as occupying additional places in buffer A. There can be a maximum of m1 + 3 tasks assigned to the first servicing station, including the tasks at the source and at server B that can become blocked. Likewise, there can be a maximum of m2 + 2 tasks assigned to station B, with a task blocked at server A (see Fig. 2).

Fig. 2. State transition diagram for the tandem with blocking and deadlock.

In this special type of multi-stage network with blocking, a deadlock may occur. For example, suppose that the buffer of server A is full and that server A is additionally blocked by server B. A deadlock will occur if the task in service at server B must be sent to server A upon completion of its service. We assume that a deadlock is detected instantaneously and resolved, after some delay, by simultaneously exchanging both blocked tasks at a mean rate μd.

Generally, blocking and deadlocking phenomena are important mechanisms for controlling and regulating the intensity of tasks arriving from the source to the tandem stations. The arrival rate to the first server depends on the state of the network and on a blocking factor that reduces the rate at which the source sends traffic to this server.

3 Exact Analysis

Markov processes constitute the fundamental theory underlying queuing systems and provide a very powerful and descriptive means for the analysis of dynamic computer networks. Each queuing system can, in principle, be mapped onto an instance of a Markov process and then mathematically evaluated in terms of this process. Under general assumptions, a continuous-time homogeneous Markov chain can represent a tandem network. The queuing network model reaches a steady-state condition and the underlying Markov chain has a stationary state distribution. Moreover, such a queuing network with finite capacity queues has a finite state space. The solution of the Markov chain representation may then be computed, and the desired performance characteristics, such as queue length distributions, utilizations and throughputs, obtained directly from the stationary probability vector. In addition, features such as deadlocks, blocking and feedback service may be incorporated into the Markov chain representation, although doing so increases the size of the state space.

In theory, any Markov model can be solved numerically. In particular, the solution algorithm for Markov queuing networks with blocking, feedback service and deadlocks is a five-step procedure (a small illustrative sketch is given after the list):

  1. Definition of the series network state space (choosing a state space representation).

  2. Enumeration of all the transitions that can possibly occur among the states.

  3. Definition of the transition rate matrix Q that describes the network evolution (generating the transition rate matrix).

  4. Solution of the linear system of global balance equations to derive the stationary state distribution vector (computing the appropriate probability vector).

  5. Computation of the average performance indices from the probability vector.
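To fix ideas, the five steps can be illustrated on a deliberately tiny example, a single M/M/1/1 station rather than the tandem itself; the code below is a sketch added for this presentation, not part of the original solution, and the tandem-specific construction follows in the remainder of this section.

```python
# Toy illustration of the five-step procedure on an M/M/1/1 station
# (one server, no waiting room), not the tandem model itself.
import numpy as np

lam, mu = 1.0, 10.0

# 1. state space: 0 = server idle, 1 = server busy
states = [0, 1]

# 2. possible transitions: 0 -> 1 (arrival, rate lam), 1 -> 0 (departure, rate mu)
# 3. transition rate matrix Q (rows sum to zero)
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# 4. stationary vector: solve x Q = 0 together with sum(x) = 1
A = Q.T.copy()
A[-1, :] = 1.0            # replace one (redundant) balance equation by the normalization
b = np.array([0.0, 1.0])
x = np.linalg.solve(A, b)

# 5. performance index: server utilization, here simply P(busy)
rho = x[1]
print(rho)                # equals lam / (lam + mu) for this toy chain
```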

In this type of network, we may denote its state by the pair (i, j), where i represents the number of tasks at server A and j denotes the number at server B (including the tasks in service, in blocking, or in deadlock). For non-negative integer values of i and j, (i, j) represents a feasible state of this queuing network, and pi,j denotes the equilibrium probability of that state. These states and the possible transitions among them are shown in Fig. 2. The flux into a state of the model is given by all arrows into the corresponding state, and the flux out of the state is determined from the set of all outgoing arrows from the state. The arrows indicate the only transitions that are possible for this model. Transitions from top to bottom represent a change of state due to an arrival from the source station. Diagonal transitions from left to right or from right to left represent a change of state due to a task completing service at server A or at server B. Finally, transitions indicated by bottom-to-top arrows represent a change of state due to departures from the network, which occur at rate μAσ. The state diagram of the blocking network (see Fig. 2) contains all possible non-blocked states (marked by ovals) as well as the blocking states (marked by rectangles). Additionally, one special state, which represents the deadlock, is marked by a bold rectangle. The number of states in the blocking network is the sum of all possible non-blocking states plus all the blocking states: (m2 + 2)(m1 + 3) + m1 + 2 + m2 + 2. Based on an analysis of the state space diagram, the process of constructing the steady-state equations of the Markov model can be divided into several independent steps, which describe similar, repeatable patterns. For server A, the steady-state blocking equations are:
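To make the state-space construction concrete, the following sketch (added for this presentation) enumerates the states just described, with the index ranges read off Eqs. (1)–(5); the mapping of state pairs (i, j) to vector positions is an assumption of this sketch rather than part of the original model.

```python
# A minimal sketch: enumeration of the state space of the tandem with
# blocking and deadlock.  The index ranges follow the state classification
# in the text and Eqs. (1)-(5); m1 and m2 are the buffer capacities.

def enumerate_states(m1, m2):
    states = []
    # non-blocked states: i tasks at station A, j tasks at station B
    for i in range(m1 + 2):              # i = 0, ..., m1+1
        for j in range(m2 + 2):          # j = 0, ..., m2+1
            states.append((i, j))
    # server A blocked by a full station B: j = m2+2
    for i in range(m1 + 2):              # i = 0, ..., m1+1
        states.append((i, m2 + 2))
    # source blocked by a full buffer A: i = m1+2
    for j in range(m2 + 2):              # j = 0, ..., m2+1
        states.append((m1 + 2, j))
    # server B blocked by a full buffer A: i = m1+3
    for j in range(m2 + 1):              # j = 0, ..., m2
        states.append((m1 + 3, j))
    # deadlock state
    states.append((m1 + 3, m2 + 2))
    index = {s: k for k, s in enumerate(states)}
    return states, index

if __name__ == "__main__":
    states, index = enumerate_states(m1=10, m2=5)
    # len(states) equals (m2+2)(m1+3) + m1+2 + m2+2 = 110 for m1=10, m2=5
    print(len(states), "states")
```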

$$ \begin{aligned} \left( {\lambda \, + \, \mu^{B} } \right)\cdot \, p_{0,m2 + 2} & = \mu^{A} \left( {1 - \sigma } \right)\cdot \, p_{1,m2 + 1} \\ \left( {\lambda \, + \, \mu_{{}}^{B} } \right) \, \cdot \, p_{i,m2 + 2} & = \, \lambda \, \cdot \, p_{i - 1,m2 + 2} + \, \mu^{A} \left( {1 - \sigma } \right) \, \cdot \, p_{i + 1,m2 + 1} \;\;{\text{for}}\;\;i \, = \, 1, \, \cdots , \, m1 \\ \mu_{{}}^{B} \cdot \, p_{m1 + 1,m2 + 2} & = \lambda \cdot \, p_{m1,m2 + 2} + \mu^{A} \left( {1 - \sigma } \right)\cdot \, p_{m1 + 2,m2 + 1} \\ \end{aligned} $$
(1)

For the source, the blocking equations are:

$$ \begin{aligned} (\mu^{A} \sigma \, + \, \mu^{A} \left( {1 - \sigma } \right))\cdot \, p_{m1 + 2,j} & = \lambda \cdot \, p_{m1 + 1,j} \;\;{\text{for}}\;\;j \, = \, 0, \, \cdots , \, m2 \\ (\mu^{A} \sigma \, + \, \mu^{A} \left( {1 - \sigma } \right))\cdot \, p_{m1 + 2,m2 + 1} & = \lambda \cdot \, p_{m1 + 1,m2 + 1} + \mu_{{}}^{d} \cdot \, p_{m1 + 3,m2 + 2} \\ \end{aligned} $$
(2)

For the server B, the blocking equations are:

$$ (\mu^{A} \sigma \, + \, \mu^{A} \left( {1 - \sigma } \right))\cdot \, p_{m1 + 3,j} = \mu_{{}}^{B} \cdot \, p_{m1 + 1,j + 1} \;\;{\text{for}}\;\;j \, = \, 0, \, \cdots , \, m2 $$
(3)

For the deadlock, the steady-state blocking equation is:

$$ \mu_{{}}^{d} \cdot \, p_{m1 + 3,m2 + 2} = \mu^{B} \cdot \, p_{m1 + 1,m2 + 2} $$
(4)

The steady-state equations for all states without blocking are:

$$ \begin{aligned} & \lambda \cdot p_{0,0} = \mu^{A} \sigma \cdot p_{1,0} \\ & \left( {\lambda + \mu^{B} } \right)\cdot p_{0,j} = \mu^{A} \left( {1 - \sigma } \right)\cdot p_{1,j - 1} + \mu^{A} \sigma \cdot p_{1,j} \quad {\text{for}}\;\;j = 1, \cdots , m2 + 1 \\ & \left( {\lambda + \mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right)} \right)\cdot p_{i,0} = \lambda \cdot p_{i - 1,0} + \mu^{B} \cdot p_{i - 1,1} + \mu^{A} \sigma \cdot p_{i + 1,0} \quad {\text{for}}\;\;i = 1, \cdots , m1 \\ & \left( {\lambda + \mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right) + \mu^{B} } \right)\cdot p_{i,j} = \lambda \cdot p_{i - 1,j} + \mu^{B} \cdot p_{i - 1,j + 1} + \mu^{A} \sigma \cdot p_{i + 1,j} + \mu^{A} \left( {1 - \sigma } \right)\cdot p_{i + 1,j - 1} \\ & \quad {\text{for}}\;\;i = 1, \cdots , m1, \, j = 1, \cdots , m2 + 1 \\ & \left( {\lambda + \mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right)} \right)\cdot p_{m1 + 1,0} = \lambda \cdot p_{m1,0} + \mu^{B} \cdot p_{m1,1} + \mu^{A} \sigma \cdot p_{m1 + 2,0} + \mu^{A} \sigma \cdot p_{m1 + 3,0} \\ & \left( {\lambda + \mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right) + \mu^{B} } \right)\cdot p_{m1 + 1,j} = \lambda \cdot p_{m1,j} + \mu^{B} \cdot p_{m1,j + 1} + \mu^{A} \sigma \cdot p_{m1 + 2,j} + \mu^{A} \sigma \cdot p_{m1 + 3,j} + \mu^{A} \left( {1 - \sigma } \right)\cdot p_{m1 + 3,j - 1}\\ & \quad {\text{for}}\;\;j = 1, \cdots , m2 \\ & \left( {\lambda + \mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right) + \mu^{B} } \right)\cdot p_{m1 + 1,m2 + 1} = \lambda \cdot p_{m1,m2 + 1} + \mu^{B} \cdot p_{m1,m2 + 2} + \mu^{A} \sigma \cdot p_{m1 + 2,m2 + 1} + \mu^{A} \left( {1 - \sigma } \right)\cdot p_{m1 + 3,m2} \\ \end{aligned} $$
(5)
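To illustrate how these balance equations can be turned into a numerical problem, the sketch below (an assumed encoding added for this presentation, not code from the original work) writes Eqs. (1)–(4) as rows of a linear system A·p = 0, using the state enumeration from the previous sketch; the rows for the non-blocked states, Eq. (5), would be filled in the same way.

```python
# Sketch: the blocking equations (1)-(4) as rows of a linear system A.p = 0.
# Each balance equation contributes one row: the total outflow rate goes on
# the diagonal entry of the state, the inflow rates enter with a minus sign
# in the columns of their source states.
import numpy as np

# example parameter values taken from Sect. 5
lam, muA, muB, mud, sigma = 1.0, 10.0, 5.0, 1.5, 0.3
m1, m2 = 10, 5

# state enumeration as in the previous sketch
states = ([(i, j) for i in range(m1 + 2) for j in range(m2 + 2)]   # non-blocked
          + [(i, m2 + 2) for i in range(m1 + 2)]                   # server A blocked
          + [(m1 + 2, j) for j in range(m2 + 2)]                   # source blocked
          + [(m1 + 3, j) for j in range(m2 + 1)]                   # server B blocked
          + [(m1 + 3, m2 + 2)])                                    # deadlock
index = {s: k for k, s in enumerate(states)}
A = np.zeros((len(states), len(states)))

def balance(state, out_rate, inflows):
    """Add one balance equation: out_rate * p[state] = sum(rate * p[src])."""
    r = index[state]
    A[r, index[state]] += out_rate
    for rate, src in inflows:
        A[r, index[src]] -= rate

# Eq. (1): server A blocking states (i, m2+2)
balance((0, m2 + 2), lam + muB, [(muA * (1 - sigma), (1, m2 + 1))])
for i in range(1, m1 + 1):
    balance((i, m2 + 2), lam + muB,
            [(lam, (i - 1, m2 + 2)), (muA * (1 - sigma), (i + 1, m2 + 1))])
balance((m1 + 1, m2 + 2), muB,
        [(lam, (m1, m2 + 2)), (muA * (1 - sigma), (m1 + 2, m2 + 1))])

# Eq. (2): source blocking states (m1+2, j); note muA*sigma + muA*(1-sigma) = muA
for j in range(m2 + 1):
    balance((m1 + 2, j), muA, [(lam, (m1 + 1, j))])
balance((m1 + 2, m2 + 1), muA,
        [(lam, (m1 + 1, m2 + 1)), (mud, (m1 + 3, m2 + 2))])

# Eq. (3): server B blocking states (m1+3, j)
for j in range(m2 + 1):
    balance((m1 + 3, j), muA, [(muB, (m1 + 1, j + 1))])

# Eq. (4): deadlock state
balance((m1 + 3, m2 + 2), mud, [(muB, (m1 + 1, m2 + 2))])

# The rows for the non-blocked states, Eq. (5), are added analogously; the
# completed matrix coincides with -Q^T of the generator Q used in Eq. (6).
```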

A queuing network with feedback, blocking and deadlock, under appropriate assumptions, is formulated here as a Markov process. The stationary probability vector can be obtained from equations (1)–(5) by using numerical methods for linear systems of equations. The generation of the rate matrix Q can now be accomplished by going through the list of states, generating all the feasible transitions out of each state and the associated rates of each transition. For Markov processes in steady state, we have:

$$ \varvec{xQ} \, = \, 0 $$
(6)

where x is the stationary probability vector whose l-th element xl is the steady-state probability that the system is in state l. Vector x can be obtained from (6) together with the normalization condition over all network states \( \sum {x_{l} } = 1 \), using standard equation-solving techniques.
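A minimal sketch of this last computational step, assuming the generator matrix Q has already been assembled (e.g. from the balance equations above), is given below; one redundant balance equation is replaced by the normalization condition before solving.

```python
import numpy as np

def stationary_vector(Q):
    """Solve x Q = 0 together with sum(x) = 1.

    Q is the (n x n) transition rate matrix; the system x Q = 0 alone is
    singular, so one (redundant) balance equation is replaced by the
    normalization condition before solving.
    """
    n = Q.shape[0]
    A = Q.T.copy()          # balance equations for the row vector x
    A[-1, :] = 1.0          # replace the last equation by sum(x) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```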

4 Selected Performance Measures

When steady-state probabilities are known, one can easily obtain various performance measures. For example, the procedures for calculating the Quality of Service (QoS) parameters use the steady-state probabilities in the following manner:

  1.

    Deadlock probability pdead:

    $$ p_{dead} = p_{m1 + 3,m2 + 2} $$
    (7)
  2.

    Source node blocking probability pblS:

    $$ p_{blS} = \sum\limits_{j = 0}^{m2 + 1} {p_{m1 + 2,j} } $$
    (8)
  3.

    Server A blocking probability pblA:

    $$ p_{blA} = \sum\limits_{i = 0}^{m1} {p_{i,m2 + 2} } $$
    (9)
  4.

    Server B blocking probability pblB:

    $$ p_{blB} = \sum\limits_{j = 0}^{m2} {p_{m1 + 3,j} } $$
    (10)
  5.

    Simultaneously source and node A blocking probability pblSA:

    $$ p_{blSA} = p_{m1 + 1,m2 + 2} $$
    (11)
  6.

    Network blocking probability pbl:

    $$ p_{bl} = p_{blA} + p_{blB} + p_{blS} + p_{m1 + 1,m2 + 2} $$
    (12)
  7.

    Idle probability pidle:

    $$ p_{idle} = p_{0,0} $$
    (13)
  8.

    Server A utilization ρA:

    $$ \rho_{A} = \sum\limits_{i = 1}^{m1 + 2} {} \sum\limits_{j = 0}^{m2 + 1} {1 \cdot p_{i,j} } + \sum\limits_{j = 0}^{m2} {1 \cdot p_{m1 + 3,j} } + \sum\limits_{i = 0}^{m1 + 1} {1 \cdot p_{i,m2 + 2} } $$
    (14)
  9.

    Server B utilization ρB:

    $$ \rho_{B} = \sum\limits_{i = 0}^{m1 + 2} {} \sum\limits_{j = 1}^{m2 + 1} {1 \cdot p_{i,j} } + \sum\limits_{i = 0}^{m1 + 1} {1 \cdot p_{i,m2 + 2} } + \sum\limits_{j = 0}^{m2} {1 \cdot p_{m1 + 3,j} }$$
    (15)
  10.

    The mean deadlocking time:

    $$ t_{dead} = 2 \cdot p_{m1 + 3,m2 + 2} \cdot \frac{1}{{\mu^{d} }} $$
    (16)
  11.

    The mean blocking time in server A:

    $$ t_{blA} = (\sum\limits_{i = 0}^{m1 + 1} {1 \cdot p_{i,m2 + 2} } ) \cdot \frac{1}{{\mu^{B} }} $$
    (17)
  12.

    The mean blocking time in server B:

    $$ t_{blB} = (\sum\limits_{j = 0}^{m2} {1 \cdot p_{m1 + 3,j} } ) \cdot \frac{1}{{\mu^{A} }} $$
    (18)
  13.

    The mean blocking time in source node:

    $$ t_{blS} = (\sum\limits_{j = 0}^{m2 + 1} {1 \cdot p_{m1 + 2,j} } + 1 \cdot p_{m1 + 1,m2 + 2} ) \cdot \frac{1}{{\mu^{A} }} $$
    (19)
  14.

    The network blocking time:

    $$ t_{bl} = t_{blA} + t_{blB} + t_{blS} $$
    (20)
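The formulas (7)–(20) translate directly into code once the steady-state probabilities are available. The sketch below (added for this presentation) assumes a dictionary p mapping state pairs (i, j) to their stationary probabilities, with missing pairs treated as probability 0; function and variable names are illustrative, not from the original paper.

```python
def qos_measures(p, m1, m2, muA, muB, mud):
    """Compute the QoS measures (7)-(20) from steady-state probabilities.

    p is a dict {(i, j): probability}; absent states count as 0.
    """
    def pr(i, j):
        return p.get((i, j), 0.0)

    p_dead = pr(m1 + 3, m2 + 2)                                   # (7)
    p_blS  = sum(pr(m1 + 2, j) for j in range(m2 + 2))            # (8)
    p_blA  = sum(pr(i, m2 + 2) for i in range(m1 + 1))            # (9)
    p_blB  = sum(pr(m1 + 3, j) for j in range(m2 + 1))            # (10)
    p_blSA = pr(m1 + 1, m2 + 2)                                   # (11)
    p_bl   = p_blA + p_blB + p_blS + p_blSA                       # (12)
    p_idle = pr(0, 0)                                             # (13)

    rho_A = (sum(pr(i, j) for i in range(1, m1 + 3)
                          for j in range(m2 + 2))
             + sum(pr(m1 + 3, j) for j in range(m2 + 1))
             + sum(pr(i, m2 + 2) for i in range(m1 + 2)))         # (14)
    rho_B = (sum(pr(i, j) for i in range(m1 + 3)
                          for j in range(1, m2 + 2))
             + sum(pr(i, m2 + 2) for i in range(m1 + 2))
             + sum(pr(m1 + 3, j) for j in range(m2 + 1)))         # (15)

    t_dead = 2.0 * pr(m1 + 3, m2 + 2) / mud                       # (16)
    t_blA  = sum(pr(i, m2 + 2) for i in range(m1 + 2)) / muB      # (17)
    t_blB  = sum(pr(m1 + 3, j) for j in range(m2 + 1)) / muA      # (18)
    t_blS  = (sum(pr(m1 + 2, j) for j in range(m2 + 2))
              + pr(m1 + 1, m2 + 2)) / muA                         # (19)
    t_bl   = t_blA + t_blB + t_blS                                # (20)

    return {"p_dead": p_dead, "p_blS": p_blS, "p_blA": p_blA,
            "p_blB": p_blB, "p_blSA": p_blSA, "p_bl": p_bl,
            "p_idle": p_idle, "rho_A": rho_A, "rho_B": rho_B,
            "t_dead": t_dead, "t_blA": t_blA, "t_blB": t_blB,
            "t_blS": t_blS, "t_bl": t_bl}
```

Given the stationary vector x from Sect. 3 and the ordered state list, the dictionary p can be built simply as p = dict(zip(states, x)).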

5 Numerical Results

In this section, we present some numerical examples to study the effect of the parameters on selected performance characteristics. We concentrate on several important performance descriptors, such as the probability that the tandem network is deadlocked or blocked, the blocking probabilities for servers A and B, and various time measures. Different values of the parameters are taken for the task inter-arrival rate, the service intensities and the feedback probability. To demonstrate this, the following configuration was chosen: the inter-arrival rate λ from the source station to server A is varied within a range from 0.5 to 5.0. The service rates of server A and server B are μA = 10.0 and μB = 5.0, respectively. The deadlock resolving rate μd is 1.5, the departure probability σ is chosen as 0.3, and the buffer capacities are m1 = 10 and m2 = 5.

For this model with deadlock and blocking, the following results were obtained; the majority of them are presented in Fig. 3 and Table 1.

Fig. 3. Graphs of QoS parameters, where pbloc-A is the server A blocking probability, pbloc-B is the server B blocking probability, pdead is the deadlock probability, pbl is the overall network blocking probability, and utility-A and utility-B are the server utilization coefficients.

Table 1. The time measures versus the mean inter-arrival rate

Figure 3 and Table 1 illustrate the dependence of the blocking and deadlock probabilities and their time measures on the chosen parameters. In this example, the utilization of the tandem network is determined by a single parameter, i.e. by increasing the inter-arrival rate while keeping the other parameters fixed. The results in Fig. 3 and Table 1 also clearly show that the deadlock and blocking phenomena must be taken into account, because variation of the inter-arrival rate drastically changes the QoS and time measures of the tandem network with feedback.

Sensitivity analysis of the proposed tandem model is performed with respect to the effect of changes in the parameters μA, μB, μd, σ, m1 and m2 (with λ unchanged; see the investigation in the previous example) on the deadlock and blocking probabilities, the time measures and the node utilizations. The data parameters for the sensitivity analysis are: λ = 1.0, μA = 0.5, μB = 0.5, μd = 0.25, σ = 0.5, m1 = 10, m2 = 5. The deadlock probabilities and the other measures are computed for parameter variations of −40%, −20%, 0%, +20% and +40% and are presented in Table 2.
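The construction of this sensitivity experiment can be sketched as follows; the perturbation grid mirrors the ±40% variations above, while the point at which the tandem model of Sects. 3–4 would be solved for each configuration is only indicated by a comment (no specific solver is assumed here).

```python
# Sketch of the sensitivity-analysis grid: each parameter is varied by
# -40%, -20%, 0%, +20%, +40% around the base configuration while the
# remaining parameters (and lambda = 1.0) are kept fixed.
base = {"muA": 0.5, "muB": 0.5, "mud": 0.25, "sigma": 0.5, "m1": 10, "m2": 5}
factors = [0.6, 0.8, 1.0, 1.2, 1.4]

for name in base:
    for f in factors:
        cfg = dict(base)
        value = base[name] * f
        # buffer capacities stay integer after perturbation
        cfg[name] = round(value) if name in ("m1", "m2") else value
        # here the tandem model would be solved for cfg (Sects. 3-4) and the
        # deadlock/blocking measures recorded, as summarized in Table 2
        print(name, f, cfg)
```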

Table 2. Sensitivity analysis of the proposed tandem model

From Table 2 it is clear that the probability and time measures are highly affected by small changes in the parameters μA, μd and σ. A moderate effect was observed when the parameter μB changes from −40% to +40%, and a minimal effect was observed when the parameters m1 and m2 vary from −40% to +40%.

6 Conclusions

In this paper, the problem of analytical (mathematical) modelling and calculation of the stationary state probabilities for a two-server tandem network with recycling (feedback), task blocking and deadlocking is investigated. Deadlock and task blocking probabilities and some other fundamental performance characteristics of such a network are derived, followed by numerical examples. The results confirm the importance of a special treatment of models with blocking, deadlock and feedback service, which justifies this research. The results can be used for capacity planning and performance evaluation of real-time computer networks in which deadlock, blocking and feedback are present.