
Bayesian change point problem for traffic intensity in \(M/E_r/1\) queueing model

  • Saroja Kumar Singh
  • Sarat Kumar Acharya
Original Paper

Abstract

In this paper, we study the change point problem for the \(M/E_r/1\) queueing system. Bayesian estimators of the parameters and the change point are derived under different loss functions, using both an informative prior (beta) and a non-informative prior (Jeffreys). An empirical Bayes procedure is also used to estimate the parameters. A simulation study and an analysis of real-life data are given to illustrate the results.

Keywords

Change point · Bayesian estimation · Squared error loss function · Precautionary loss function · General entropy loss function

Mathematics Subject Classification

60K25 · 62F15

1 Introduction

The change point problem can be considered one of the central problems of statistical inference, linking together statistical control theory, the theory of estimation and hypothesis testing, classical and Bayesian approaches, and fixed-sample and sequential procedures. It is very often the case that observations are taken sequentially over time or can be intrinsically ordered in some other fashion. The basic question is therefore whether the observations represent independent and identically distributed random variables or whether at least one change in the distribution law has taken place. This is the fundamental problem of statistical control theory, of testing the stationarity of stochastic processes, of estimating the current position of a time series, and so on. Accordingly, a survey of all the major developments in statistical theory and methodology connected with this very general outlook on the change point problem would require a review of statistical quality control, astrophysics, switching regression problems, finance, bioinformatics, inventory, queueing control, etc.

The problem of testing and estimating change points in queueing theory has attracted much attention in the literature. Jain (1995) studied the change point problem for the traffic intensity of the \(M/M/1\) queue in the case of a changing arrival rate. Acharya and Villarreal (2013) studied change point estimation of the traffic intensity for a changing service rate in the \(M/M/1/m\) queueing system.

Besides maximum likelihood estimation, the Bayesian method is a very useful technique for estimating parameters. For Bayesian estimation of queueing parameters, we refer to Muddapur (1972), McGrath et al. (1987), McGrath and Singpurwalla (1987), Thiruvaiyaru and Basawa (1992), Armero (1994), Chowdhury and Maiti (2014), Almeida and Cruz (2017), etc. In all these articles, independent random variables such as the number of arrivals, the number of service completions, the initial queue length, interarrival times and service times were observed, and beta and gamma prior distributions were used to estimate the traffic intensity, arrival rate and service rate in a single queue.

Bayesian inferential methods can play an important role in the study of such change point problems. In general, an investigator first performs a test to detect a change and, if one is indicated, the change point is then estimated under a specified loss function. Chernoff and Zacks (1964), Broemeling (1972), Smith (1975), Guttman and Menzefricke (1982), Raftery and Akman (1986), Barry and Hartigan (1993) and Lee (1998) have studied the change point problem using Bayesian methods for normal distributions, general sequences of random variables, Poisson processes, exponential families of random variables, etc. Jain (2001) obtained the Bayesian estimator of the change point of the interarrival time distribution in the \(E_k/G/c\) queueing model under the squared error loss function. However, no such study has been carried out for the \(M/E_r/1\) queue; indeed, rather little work on the Bayesian change point problem seems to exist in queueing theory.

The relation of the Erlang distribution to the exponential allows one to describe queueing models in which a service consists of a series of identical exponential phases. For example, in performing a laboratory test, a lab technician must carry out 5 steps, each taking the same mean time (say, \(\frac{1}{5\mu }\)), with the times distributed exponentially; if the input process is Poisson, we have an \(M/E_5/1\) model. A change in the arrival process is of particular importance, since the number of customers arriving at the system may not be the same throughout the day.

In this paper, we study the change point problem for the \(M/E_r/1\) queue with a changing arrival rate, by observing the number of arrivals during the service of the \((t+1)^{\text {st}}\) customer. The model of interest and preliminaries are given in Sect. 2. Section 3 deals with the Bayesian estimation of the change point \(\tau \) and of the traffic intensities before and after the change, i.e., \(\rho \) and \(\rho _1\), using the informative beta prior and the non-informative Jeffreys prior under different loss functions, viz. the squared error loss function (SELF), the precautionary loss function (PLF) and the general entropy loss function (GELF). In Sect. 4 empirical Bayes estimators are obtained. A numerical example illustrating the results is given in Sect. 5, while Sect. 6 provides an analysis of data on aircraft arrival times. Finally, concluding remarks are given in Sect. 7.

2 Preliminaries

In the \(M/E_r/1\) queueing system, service times are Erlang distributed with pdf,
$$\begin{aligned} f(t|r,\mu )=\frac{\mu r}{\varGamma (r)} (\mu rt)^{r-1}\mathrm{e}^{-\mu rt}, \end{aligned}$$
(1)
with mean \(1/\mu \), where \(\mu >0\) and \(r\in N\) (known), the interarrival times being exponentially distributed with mean \(1/\lambda \). The Erlang service model may also be thought of as a model with service in r exponential stages, the service rate at each stage being \(r\mu \).
Let \(A_t\) denote the number of arrivals during the service of the \((t+1)^{\text {st}}\) customer. Then \(A_t\) has a negative binomial distribution, given by,
$$\begin{aligned} p(A_t=x) = \genfrac(){0.0pt}0{x+r-1}{x} \left( \frac{\rho }{\rho + r} \right) ^x \left( \frac{r}{\rho + r} \right) ^r; \quad x=0,1,2,\ldots . \end{aligned}$$
(2)
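The pmf (2) is the standard negative binomial law with \(r\) successes and success probability \(r/(r+\rho )\). As a quick sketch (not part of the paper; names are illustrative), it can be sampled with numpy, whose negative binomial generator counts failures before the \(r\)-th success:

```python
# Sketch: sampling A_t of Eq. (2). numpy's negative_binomial(n, p) returns the
# number of failures before the n-th success, so n = r and p = r / (r + rho)
# reproduce Eq. (2); the sample mean should be close to E(A_t) = rho.
import numpy as np

rng = np.random.default_rng(1)
rho, r = 0.75, 2                      # traffic intensity, Erlang shape (known)
draws = rng.negative_binomial(r, r / (r + rho), size=100_000)
print(draws.mean())                   # approx. 0.75
```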
Change point problem
Let us consider the case where the interarrival time distribution a(t) is assumed to change after some unknown point \(\tau \), where \(1\le \tau \le n-1\). Thus,
$$\begin{aligned} a_i(t)= {\left\{ \begin{array}{ll} 1- \exp (-\lambda t), &\quad {\text {if}} ~~ i = 1,2,\ldots ,\tau \\ 1- \exp (-\lambda _1 t), &\quad {\text {if}} ~~ i= \tau +1,\tau +2,\ldots ,n. \end{array}\right. } \end{aligned}$$
(3)
Let \(x_1,x_2,\ldots ,x_{\tau },x_{\tau +1},\ldots ,x_n\) be the numbers of arrivals during the first n service times. Then, under (3), (2) becomes,
$$\begin{aligned} p(A_t=x_i)= {\left\{ \begin{array}{ll} \genfrac(){0.0pt}0{x_i+r-1}{x_i} \left( \frac{\rho }{\rho + r} \right) ^{x_i} \left( \frac{r}{\rho + r} \right) ^r; &\quad {\text {if}} ~ i = 1,2,\ldots ,\tau \\ \genfrac(){0.0pt}0{x_i+r-1}{x_i} \left( \frac{\rho _1}{\rho _1 + r} \right) ^{x_i} \left( \frac{r}{\rho _1 + r} \right) ^r; & \quad {\text {if}} ~ i= \tau +1,\tau +2,\ldots ,n. \end{array}\right. } \end{aligned}$$
(4)
Then the likelihood function is given by,
$$\begin{aligned} L(\tau ,\rho ,\rho _1)&=\prod _{i=1}^{\tau } \genfrac(){0.0pt}0{x_i+r-1}{x_i} \left( \frac{\rho }{\rho + r} \right) ^{x_i} \left( \frac{r}{\rho + r} \right) ^r \nonumber \\&\quad \times \prod _{i=\tau +1}^{n}\genfrac(){0.0pt}0{x_i+r-1}{x_i} \left( \frac{\rho _1}{\rho _1 + r} \right) ^{x_i} \left( \frac{r}{\rho _1 + r} \right) ^r. \end{aligned}$$
(5)
The log-likelihood function is,
$$\begin{aligned} \ell \equiv {\text {log}} ~L(\rho , \rho _1, \tau )&= \text {const.} + S_{\tau } ~{\text {log}} ~\rho - (S_{\tau } + \tau r) ~{\text {log}}~(r+\rho ) \nonumber \\&\quad + S_{n-\tau } ~{\text {log}}~\rho _1 - (S_{n-\tau }+ (n-\tau ) r) ~{\text {log}}~(r+\rho _1), \end{aligned}$$
(6)
where \(S_{\tau }=\sum _{i=1}^{\tau }x_i\) and \(S_{n-\tau }=\sum _{i=\tau +1}^{n}x_{i}\). Taking partial derivatives w.r.t. \(\rho \) and \(\rho _1\) in (6), and solving the equations \(\frac{\partial \ell }{\partial \rho }=0\) and \(\frac{\partial \ell }{\partial \rho _1}=0\), the maximum likelihood estimators of \(\rho \) and \(\rho _1\) are obtained as,
$$\begin{aligned} {\hat{\rho }}_{ML}&= \frac{S_{\tau }}{\tau } ={\bar{x}}_{\tau } \quad {\text {and}} \quad {\hat{\rho }}_{1ML} = \frac{S_{n-\tau }}{n-\tau } ={\bar{x}}_{n-\tau }. \end{aligned}$$
(7)
Therefore, the maximum likelihood estimator \({\hat{\tau }}_{ML}\) is the value of \(\tau \) which maximizes the log-likelihood function (6) (see Hinkley 1970).
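As a sketch of this recipe (an assumed implementation, not the authors' code): for each candidate \(\tau \) the segment means (7) are plugged into (6), and \({\hat{\tau }}_{ML}\) is the maximizer of the resulting profile log-likelihood.

```python
# Sketch: profile log-likelihood (6) with rho, rho_1 at their MLEs (7); the
# constant term is dropped since it does not depend on tau. Assumes both
# segments contain at least one arrival (S_tau, S_{n-tau} > 0).
import numpy as np

def profile_loglik(x, tau, r):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s1, s2 = x[:tau].sum(), x[tau:].sum()          # S_tau and S_{n-tau}
    rho, rho1 = s1 / tau, s2 / (n - tau)           # Eq. (7)
    return (s1 * np.log(rho) - (s1 + tau * r) * np.log(r + rho)
            + s2 * np.log(rho1) - (s2 + (n - tau) * r) * np.log(r + rho1))

def mle_change_point(x, r):
    """tau_hat_ML maximizes (6) over 1 <= tau <= n-1 (Hinkley 1970)."""
    return max(range(1, len(x)), key=lambda t: profile_loglik(x, t, r))
```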
In the next section, we will use a special function, namely the Gauss hypergeometric function \(\,_2F_1(\alpha , \beta , \gamma ; z)\) (see Abramowitz and Stegun 1964, chap. 15), with integral representation,
$$\begin{aligned} \,_2F_1(\alpha , \beta , \gamma ; z)= \frac{1}{{B}(\beta , \gamma -\beta )} \int _{0}^{1} t^{\beta -1} (1-t)^{\gamma -\beta -1} (1-tz)^{-\alpha } ~\mathrm{d}t, \quad \gamma> \beta >0, \end{aligned}$$
(8)
where \(\alpha \), \(\beta \) and \(\gamma \) are the parameters of the function.

3 Bayesian estimation

In this section, we obtain the Bayes estimator of \(\tau \), \(\rho \) and \(\rho _1\) under three different loss functions. The beta prior as an informative prior and the Jeffreys prior as a non-informative prior have been taken into consideration to estimate the parameters.

3.1 Beta prior

Let \(\tau \), \(\rho \) and \(\rho _1\) be assumed to have independent priors of the following form:
$$\begin{aligned} \pi (\tau )&= \frac{1}{n-1}, ~ ~ \tau \in \{1,2,3,\ldots ,n-1\}; \nonumber \\ \pi (\rho )&\propto \rho ^{a-1} (1-\rho )^{b-1}, ~ 0< \rho<1; \nonumber \\ \pi (\rho _1)&\propto \rho _1^{a_1-1} (1-\rho _1)^{b_1-1}, ~ 0< \rho _1 <1, \end{aligned}$$
(9)
where a, b, \(a_1\), \(b_1\) > 0.
Then the joint posterior density of \(\tau \), \(\rho \) and \(\rho _1\) is given by,
$$\begin{aligned} \pi (\tau ,\rho ,\rho _1)&= k ~\rho ^{S_{\tau }+a-1} (1-\rho )^{b-1} (r+\rho )^{-(S_{\tau }+\tau r)} \nonumber \\&\quad \times \rho _1^{S_{n-\tau }+a_1-1} (1-\rho _1)^{b_1-1} (r+\rho _1)^{-(S_{n-\tau }+(n-\tau ) r)}, \end{aligned}$$
(10)
where k is a constant such that,
$$\begin{aligned} \sum _{\tau =1}^{n-1} \int _{0}^{1} \int _{0}^{1} \pi (\tau ,\rho , \rho _1)\mathrm{d}\rho \mathrm{d}\rho _1 &= 1\Rightarrow \frac{1}{k}= \sum _{\tau =1}^{n-1} \int _{0}^{1} \int _{0}^{1}\rho ^{S_{\tau }+a-1} (1-\rho )^{b-1} (r+\rho )^{-(S_{\tau }+\tau r)} \nonumber \\&\quad \times \rho _1^{S_{n-\tau }+a_1-1} (1-\rho _1)^{b_1-1} (r+\rho _1)^{-(S_{n-\tau }+(n-\tau ) r)} ~\mathrm{d}\rho ~\mathrm{d}\rho _1 \nonumber \\&= r^{-(\sum _{i=1}^{n}x_i+nr)} \nonumber \\&\quad \times \sum _{\tau =1}^{n-1} B(S_{\tau }+a,b) ~\,_2F_1 \bigg (S_{\tau }+\tau r, S_{\tau }+a, S_{\tau }+a+b; -\frac{1}{r} \bigg ) \nonumber \\&\quad \cdot B(S_{n-\tau }+a_1,b_1) ~\,_2F_1 \bigg (S_{n-\tau }+(n-\tau ) r, S_{n-\tau }+a_1, S_{n-\tau }+a_1+b_1; -\frac{1}{r} \bigg ) \nonumber \\&= r^{-(\sum _{i=1}^{n}x_i+nr)} \sum _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r), \end{aligned}$$
(11)
where,
$$\begin{aligned} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r)&= B(S_{\tau }+a,b) ~\,_2F_1 \left( S_{\tau }+\tau r, S_{\tau }+a, S_{\tau }+a+b; -\frac{1}{r} \right) \nonumber \\&\quad \cdot B(S_{n-\tau }+a_1,b_1) \,_2F_1 \left( S_{n-\tau }+(n-\tau ) r, S_{n-\tau }+a_1, S_{n-\tau }+a_1+b_1; -\frac{1}{r} \right) . \end{aligned}$$
(12)
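A sketch of (12) in Python (illustrative names, not the authors' code, which was in R): `scipy.special.hyp2f1` evaluates the Gauss hypergeometric function of (8), so \({\mathscr {K}}\) is a product of two beta–\(\,_2F_1\) factors. For large \(S_{\tau }\) the beta factors underflow, so in practice a log-scale variant may be needed.

```python
# Sketch of Eq. (12): K(a, b, a1, b1, tau, n, r) for observed counts x.
from scipy.special import beta, hyp2f1

def K(a, b, a1, b1, tau, n, r, x):
    s1 = sum(x[:tau])                              # S_tau
    s2 = sum(x[tau:n])                             # S_{n-tau}
    return (beta(s1 + a, b)
            * hyp2f1(s1 + tau * r, s1 + a, s1 + a + b, -1.0 / r)
            * beta(s2 + a1, b1)
            * hyp2f1(s2 + (n - tau) * r, s2 + a1, s2 + a1 + b1, -1.0 / r))
```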
The marginal posterior distributions of \(\tau \), \(\rho \) and \(\rho _1\) are computed as,
$$\begin{aligned} \pi (\tau |x)&= \int \limits _{0}^{1} \int \limits _{0}^{1} \pi (\rho , \rho _1, \tau ) \mathrm{d}\rho \mathrm{d}\rho _1 \nonumber \\&= k \int \limits _{0}^{1} \int \limits _{0}^{1} \rho ^{S_{\tau }+a-1} (1-\rho )^{b-1} (r+\rho )^{-(S_{\tau }+\tau r)} \nonumber \\&\quad \times \rho _1^{S_{n-\tau }+a_1-1} (1-\rho _1)^{b_1-1} (r+\rho _1)^{-(S_{n-\tau }+(n-\tau ) r)} \mathrm{d}\rho ~ \mathrm{d}\rho _1 \nonumber \\&=\frac{ {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r)}{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r)}, \end{aligned}$$
(13)
$$\begin{aligned} \pi (\rho |x)&= \sum _{\tau =1}^{n-1} \int \limits _{0}^{1} \pi (\rho , \rho _1, \tau ) \mathrm{d}\rho _1 \nonumber \\&= k \sum _{\tau =1}^{n-1} \rho ^{S_{\tau }+a-1} (1-\rho )^{b-1} (r+\rho )^{-(S_{\tau }+\tau r)} \nonumber \\&\quad \times \int \limits _{0}^{1}\rho _1^{S_{n-\tau }+a_1-1} (1-\rho _1)^{b_1-1} (r+\rho _1)^{-(S_{n-\tau }+(n-\tau ) r)} \mathrm{d}\rho _1 \nonumber \\&=\frac{ \sum \nolimits _{\tau =1}^{n-1} \rho ^{S_{\tau }+a-1} (1-\rho )^{b-1}\left( 1+\frac{\rho }{r}\right) ^{-(S_{\tau }+\tau r)} B(S_{n-\tau }+a_1,b_1) \,_2F_1 \left( S_{n-\tau }+(n-\tau ) r, S_{n-\tau }+a_1, S_{n-\tau }+a_1+b_1; -\frac{1}{r} \right) }{\sum \nolimits _{\tau =1}^{n-1}{\mathscr {K}} (a, b, a_1, b_1, \tau , n, r)}. \end{aligned}$$
(14)
Similarly,
$$\begin{aligned} \pi (\rho _1|x)&= \frac{ \sum \nolimits _{\tau =1}^{n-1} \rho _1^{S_{n-\tau }+a_1-1} (1-\rho _1)^{b_1-1} \left( 1+\frac{\rho _1}{r}\right) ^{-(S_{n-\tau }+(n-\tau ) r)} B(S_{\tau }+a,b) ~\,_2F_1 \left( S_{\tau }+\tau r, S_{\tau }+a, S_{\tau }+a+b; -\frac{1}{r} \right) }{\sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r) }. \end{aligned}$$
(15)

3.2 Jeffreys prior

Jeffreys prior is defined as,
$$\begin{aligned} \pi _1(\theta ) \propto [I(\theta )]^{\frac{1}{2}}, \end{aligned}$$
(16)
where \(I(\theta )\) is the Fisher information.
From Eq. (2), we have,
$$\begin{aligned} \log p(x | \rho )&= \text {Const.}+x \log \rho -(x+r)\log (r+\rho ). \end{aligned}$$
The first and second partial derivatives of \(\log p(x|\rho )\) w.r.t. \(\rho \) are,
$$\begin{aligned} \frac{\partial \log p(x|\rho )}{\partial \rho }&= \frac{x}{\rho } - \frac{x+r}{r+\rho } \\ \frac{\partial ^2 \log p(x|\rho )}{\partial \rho ^2}&= -\frac{x}{\rho ^2} + \frac{x+r}{(r+\rho )^{2}}. \end{aligned}$$
Fisher information is given by,
$$\begin{aligned} I(\rho )&=E\left( -\frac{\partial ^2 \log p(x|\rho )}{\partial \rho ^2}\right) = \frac{E(x)}{\rho ^2} - \frac{E(x)+r}{(r+\rho )^{2}} \\&= \frac{1}{\rho } - \frac{1}{r+\rho } = \frac{r}{\rho (r+\rho )} \quad (\text {since} ~E(x)=\rho ). \end{aligned}$$
Hence,
$$\begin{aligned} \pi _1(\rho )\propto \rho ^{-\frac{1}{2}} \left( 1+\frac{\rho }{r}\right) ^{-\frac{1}{2}}. \end{aligned}$$
(17)
Assume that,
$$\begin{aligned} \pi _1(\tau )&=\frac{1}{n-1},\quad \tau \in \{1,2,3,\ldots ,n-1\}\nonumber \\ \pi _1(\rho )&\propto \rho ^{-\frac{1}{2}} \left( 1+\frac{\rho }{r}\right) ^{-\frac{1}{2}}, \quad 0< \rho<1; \nonumber \\ \pi _1(\rho _1)&\propto \rho _1^{-\frac{1}{2}} \left( 1+\frac{\rho _1}{r}\right) ^{-\frac{1}{2}}, \quad 0< \rho _1 <1. \end{aligned}$$
(18)
Then the joint posterior density of \(\tau \), \(\rho \) and \(\rho _1\) is given by,
$$\begin{aligned} \pi _1(\tau ,\rho ,\rho _1)&= k_1 ~\rho ^{S_{\tau }+\frac{1}{2}-1} (r+\rho )^{-\left( S_{\tau }+\tau r+\frac{1}{2}\right) } \nonumber \\&\quad \times \rho _1^{S_{n-\tau }+\frac{1}{2}-1} (r+\rho _1)^{-\left( S_{n-\tau }+(n-\tau ) r + \frac{1}{2}\right) }, \end{aligned}$$
(19)
where \(k_1\) is a constant such that,
$$\begin{aligned} \sum _{\tau =1}^{n-1} \int _{0}^{1} \int _{0}^{1} \pi _1(\tau ,\rho , \rho _1)\mathrm{d}\rho \mathrm{d}\rho _1&= 1\Rightarrow \frac{1}{k_1}= \sum _{\tau =1}^{n-1} \int _{0}^{1} \int _{0}^{1}\rho ^{S_{\tau }+\frac{1}{2}-1} (r+\rho )^{-\left( S_{\tau }+\tau r +\frac{1}{2}\right) } \nonumber \\&\quad \times \rho _1^{S_{n-\tau }+\frac{1}{2}-1} (r+\rho _1)^{-\left( S_{n-\tau }+(n-\tau ) r + \frac{1}{2}\right) } ~\mathrm{d}\rho ~\mathrm{d}\rho _1 \nonumber \\&= r^{-\left( \sum _{i=1}^{n}x_i+nr+1\right) } \nonumber \\&\quad \times \sum _{\tau =1}^{n-1} B\left( S_{\tau }+\frac{1}{2}, 1\right) ~\,_2F_1\left( S_{\tau }+\tau r + \frac{1}{2}, S_{\tau }+\frac{1}{2}, S_{\tau }+\frac{3}{2}; -\frac{1}{r} \right) \nonumber \\&\quad \cdot B\left( S_{n-\tau }+\frac{1}{2}, 1\right) ~\,_2F_1\left( S_{n-\tau }+(n-\tau ) r+\frac{1}{2}, S_{n-\tau }+\frac{1}{2}, S_{n-\tau }+\frac{3}{2}; -\frac{1}{r} \right) \nonumber \\&= r^{-\left( \sum _{i=1}^{n}x_i+nr+1\right) } \sum _{\tau =1}^{n-1} {\mathscr {K}}_1 \left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r \right) , \end{aligned}$$
(20)
where,
$$\begin{aligned} {\mathscr {K}}_1 \bigg (\frac{1}{2},1,\frac{1}{2},1,\tau ,n,r \bigg )&= B\left( S_{\tau }+\frac{1}{2}, 1\right) \,_2F_1 \left( S_{\tau }+\tau r +\frac{1}{2}, S_{\tau }+\frac{1}{2}, S_{\tau }+\frac{3}{2}; -\frac{1}{r} \right) \nonumber \\&\quad \cdot B\left( S_{n-\tau }+ \frac{1}{2},1\right) \,_2F_1 \left( S_{n-\tau }+(n-\tau ) r +\frac{1}{2}, S_{n-\tau }+\frac{1}{2}, S_{n-\tau }+\frac{3}{2}; -\frac{1}{r} \right) . \end{aligned}$$
(21)
The marginal posterior distributions of \(\tau \), \(\rho \) and \(\rho _1\) are computed as,
$$\begin{aligned} \pi _1(\tau |x)&= \int \limits _{0}^{1} \int \limits _{0}^{1} \pi _1(\rho , \rho _1, \tau ) \mathrm{d}\rho \mathrm{d}\rho _1 \nonumber \\&= k_1 \int \limits _{0}^{1} \int \limits _{0}^{1} \rho ^{S_{\tau }+\frac{1}{2}-1} (r+\rho )^{-\left( S_{\tau }+\tau r + \frac{1}{2}\right) } \nonumber \\&\quad \times \rho _1^{S_{n-\tau }+\frac{1}{2}-1} (r+\rho _1)^{-\left( S_{n-\tau }+(n-\tau ) r +\frac{1}{2}\right) } \mathrm{d}\rho ~ \mathrm{d}\rho _1 \nonumber \\&= \frac{{\mathscr {K}}_1\left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r \right) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r \right) }, \end{aligned}$$
(22)
$$\begin{aligned} \pi _1(\rho |x)&= \sum _{\tau =1}^{n-1} \int _{0}^{1} \pi _1(\rho , \rho _1, \tau ) \mathrm{d}\rho _1 \nonumber \\&= k_1 \sum _{\tau =1}^{n-1} \rho ^{S_{\tau }+\frac{1}{2}-1} (r+\rho )^{-\left( S_{\tau }+\tau r + \frac{1}{2}\right) } \nonumber \\&\quad \times \int \limits _{0}^{1}\rho _1^{S_{n-\tau }+\frac{1}{2}-1} (r+\rho _1)^{-\left( S_{n-\tau }+(n-\tau ) r + \frac{1}{2}\right) } \mathrm{d}\rho _1 \nonumber \\&=\frac{ \sum \nolimits _{\tau =1}^{n-1} \rho ^{S_{\tau }+\frac{1}{2}-1} \left( 1+\frac{\rho }{r}\right) ^{-\left( S_{\tau }+\tau r + \frac{1}{2}\right) } B\left( S_{n-\tau }+\frac{1}{2}, 1 \right) \,_2F_1 \left( S_{n-\tau }+(n-\tau ) r +\frac{1}{2}, S_{n-\tau }+\frac{1}{2}, S_{n-\tau }+\frac{3}{2}; -\frac{1}{r} \right) }{\sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r \right) }. \end{aligned}$$
(23)
Similarly,
$$\begin{aligned} \pi _1(\rho _1|x)&= \frac{ \sum \nolimits _{\tau =1}^{n-1} \rho _1^{S_{n-\tau }+\frac{1}{2}-1} \left( 1+\frac{\rho _1}{r}\right) ^{-\left( S_{n-\tau }+(n-\tau ) r + \frac{1}{2}\right) } B\left( S_{\tau }+\frac{1}{2}, 1 \right) ~\,_2F_1 \left( S_{\tau }+\tau r + \frac{1}{2}, S_{\tau }+\frac{1}{2}, S_{\tau }+\frac{3}{2}; -\frac{1}{r} \right) }{\sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r \right) }. \end{aligned}$$
(24)

3.3 Squared error loss function (SELF)

Let us consider the widely used squared error loss function (SELF) which is symmetric and is given by
$$\begin{aligned} L_1({\hat{\theta }}_B)= ({\hat{\theta }}_B-\theta )^2, \end{aligned}$$
(25)
where \(\theta \) and \({\hat{\theta }}_B\) are the parameter and its estimator, respectively. Minimizing \(E(L_1({\hat{\theta }}_B))\), i.e., solving \(\frac{\mathrm{d}E(L_1({\hat{\theta }}_B))}{\mathrm{d}{\hat{\theta }}_B}=0\), we get,
$$\begin{aligned} {\hat{\theta }}_{B}= E(\theta |x). \end{aligned}$$
(26)
Under SELF, for the beta prior, the Bayes estimators of \(\tau \), \(\rho \) and \(\rho _1\) are obtained as follows:
$$\begin{aligned} {\hat{\tau }}_{BS}^{B}&= \sum _{\tau =1}^{n-1} \tau ~\pi (\tau |x) =\frac{ \sum _{\tau =1}^{n-1} \tau ~{\mathscr {K}} (a, b, a_1, b_1, \tau , n, r)}{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r) }, \end{aligned}$$
(27)
$$\begin{aligned} {\hat{\rho }}_{BS}^{B}&= \int _{0}^{1} \rho ~ \pi (\rho |x) \mathrm{d}\rho \nonumber \\&= \left[ \sum \limits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r) \right] ^{-1} \nonumber \\&\quad \times \sum \limits _{\tau =1}^{n-1} B(S_{n-\tau }+a_1,b_1) \,_2F_1 \left( S_{n-\tau }+(n-\tau ) r, S_{n-\tau }+a_1, S_{n-\tau }+a_1+b_1; -\frac{1}{r} \right) \nonumber \\&\quad \cdot \int _{0}^{1}\rho ^{S_{\tau }+a} (1-\rho )^{b-1}\left( 1+\frac{\rho }{r}\right) ^{-\left( S_{\tau }+\tau r\right) } ~\mathrm{d}\rho \nonumber \\&= \frac{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a+1, b, a_1, b_1, \tau , n, r) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r)}. \end{aligned}$$
(28)
Similarly,
$$\begin{aligned} {\hat{\rho }}_{1BS}^{B}= \frac{\sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1+1, b_1, \tau , n, r) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r) }. \end{aligned}$$
(29)
Under SELF, for Jeffreys prior, the Bayes estimators of \(\tau \), \(\rho \) and \(\rho _1\) are,
$$\begin{aligned} {\hat{\tau }}_{BS}^{J} = \frac{ \sum _{\tau =1}^{n-1} \tau ~{\mathscr {K}}_1 \left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r \right) }{ \sum \nolimits _{\tau =1}^{n-1}{\mathscr {K}}_1\left( \frac{1}{2},1,\frac{1}{2},1,\tau ,n,r \right) }, \end{aligned}$$
(30)
$$\begin{aligned} {\hat{\rho }}_{BS}^{J} = \frac{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{3}{2}, 1,\frac{1}{2},1, \tau , n, r\right) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r\right) }, \end{aligned}$$
(31)
$$\begin{aligned} {\hat{\rho }}_{1BS}^{J} =\frac{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{1}{2},1,\frac{3}{2}, 1, \tau , n, r\right) }{\sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{1}{2},1,\frac{1}{2},1, \tau , n, r\right) }. \end{aligned}$$
(32)
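Combining (13) with (27)–(29), the SELF estimators are ratios of sums of \({\mathscr {K}}\) values. Below is a minimal sketch using the `K` helper sketched after (12) (illustrative names; the Jeffreys case is obtained analogously from \({\mathscr {K}}_1\)):

```python
# Sketch of Eqs. (27)-(29): Bayes estimators under SELF for the beta prior.
import numpy as np

def bayes_self_beta(x, r, a, b, a1, b1):
    n = len(x)
    taus = np.arange(1, n)
    k0 = np.array([K(a, b, a1, b1, t, n, r, x) for t in taus])
    tau_hat = (taus * k0).sum() / k0.sum()                                    # (27)
    rho_hat = sum(K(a + 1, b, a1, b1, t, n, r, x) for t in taus) / k0.sum()   # (28)
    rho1_hat = sum(K(a, b, a1 + 1, b1, t, n, r, x) for t in taus) / k0.sum()  # (29)
    return tau_hat, rho_hat, rho1_hat
```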

3.4 Precautionary loss function (PLF)

Norstrom (1996) introduced an alternative asymmetric loss function and presented a general class of precautionary loss functions as a special case. These loss functions approach infinity near the origin, which prevents underestimation and thus yields conservative estimators; this is especially desirable when low arrival rates are being estimated, where underestimation may have serious consequences.

A very useful and simple asymmetric precautionary loss function (PLF) is given by,
$$\begin{aligned} L_2({\hat{\theta }}_B)= \frac{\left( {\hat{\theta }}_B-\theta \right) ^2}{{\hat{\theta }}_B}. \end{aligned}$$
(33)
Minimizing \(E(L_2({\hat{\theta }}_B))\), i.e., solving \(\frac{\mathrm{d}E(L_2({\hat{\theta }}_B))}{\mathrm{d}{\hat{\theta }}_B}=0\), we get,
$$\begin{aligned} {\hat{\theta }}_{B}=\left[ E(\theta ^2|x)\right] ^{\frac{1}{2}}. \end{aligned}$$
(34)
Under PLF, for the beta prior, the Bayes estimators of \(\tau \), \(\rho \) and \(\rho _1\) are obtained as follows:
$$\begin{aligned} {\hat{\tau }}_{BP}^{B}&= \left[ \sum _{\tau =1}^{n-1} \tau ^2 ~\pi (\tau |x)\right] ^{\frac{1}{2}} = \left[ \frac{ \sum _{\tau =1}^{n-1} \tau ^2 ~{\mathscr {K}} (a, b, a_1, b_1, \tau , n, r)}{\sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r)} \right] ^{\frac{1}{2}}, \end{aligned}$$
(35)
$$\begin{aligned} {\hat{\rho }}_{BP}^{B}&= \left[ \int _{0}^{1} \rho ^2 ~ \pi (\rho |x) ~\mathrm{d}\rho \right] ^{\frac{1}{2}} \nonumber \\&= \left[ \sum \limits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r) \right] ^{-\frac{1}{2}} \nonumber \\&\quad \times \left[ \sum \limits _{\tau =1}^{n-1} B(S_{n-\tau }+a_1,b_1) \,_2F_1 \left( S_{n-\tau }+(n-\tau ) r, S_{n-\tau }+a_1, S_{n-\tau }+a_1+b_1; -\frac{1}{r} \right) \right. \nonumber \\&\quad \left. \cdot \int _{0}^{1}\rho ^{S_{\tau }+a +1} (1-\rho )^{b-1}\left( 1+\frac{\rho }{r}\right) ^{-(S_{\tau }+\tau r)} ~\mathrm{d}\rho \right] ^{\frac{1}{2}} \nonumber \\&= \left[ \frac{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a+2, b, a_1, b_1, \tau , n, r) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r) } \right] ^{\frac{1}{2}}. \end{aligned}$$
(36)
Similarly,
$$\begin{aligned} {\hat{\rho }}_{1BP}^{B} = \left[ \frac{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1+2, b_1, \tau , n, r) }{\sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r) } \right] ^{\frac{1}{2}}. \end{aligned}$$
(37)
Under PLF, for Jeffreys prior, the Bayes estimators of \(\tau \), \(\rho \) and \(\rho _1\) are,
$$\begin{aligned} {\hat{\tau }}_{BP}^{J}&= \left[ \frac{ \sum _{\tau =1}^{n-1} \tau ^2 ~{\mathscr {K}}_1\left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r\right) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r\right) } \right] ^{\frac{1}{2}}, \end{aligned}$$
(38)
$$\begin{aligned} {\hat{\rho }}_{BP}^{J}&= \left[ \frac{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1 \left( \frac{5}{2},1,\frac{1}{2},1,\tau ,n,r\right) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r\right) } \right] ^{\frac{1}{2}} \end{aligned}$$
(39)
$$\begin{aligned} {\hat{\rho }}_{1BP}^{J}&= \left[ \frac{\sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1 \left( \frac{1}{2},1,\frac{5}{2},1,\tau ,n,r \right) }{\sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{1}{2},1,\frac{1}{2},1,\tau ,n,r\right) }\right] ^{\frac{1}{2}} \end{aligned}$$
(40)

3.5 General entropy loss function (GELF)

The symmetric squared error loss function is sometimes found inappropriate, and asymmetric loss functions have therefore received much attention recently. Calabria and Pulcini (1994) proposed the general entropy loss function defined by
$$\begin{aligned} L_3\left( {\hat{\theta }}_B\right) =\left( \frac{{\hat{\theta }}_B}{\theta }\right) ^{\gamma } - \gamma \mathrm{ln}\left( \frac{{\hat{\theta }}_B}{\theta }\right) - 1; \quad \gamma \ne 0. \end{aligned}$$
(41)
This loss function is a generalization of the entropy loss function used by many authors when the shape parameter \(\gamma \) is taken equal to 1. It may be noted that when \(\gamma > 0\), a positive error \(({\hat{\theta }}_B > \theta )\) causes more serious consequences than a negative error, and vice versa.
Minimizing \(E(L_3({\hat{\theta }}_B))\), i.e., solving \(\frac{\mathrm{d}E(L_3({\hat{\theta }}_B))}{\mathrm{d}{\hat{\theta }}_B}=0\), we get,
$$\begin{aligned} {\hat{\theta }}_{B}=[E(\theta ^{-\gamma }|x)]^{-\frac{1}{\gamma }} \end{aligned}$$
(42)
provided that \(E( \theta ^{-\gamma }|x)\) exists and is finite. It can be shown that for \(\gamma = -1\) the Bayes estimator (42) coincides with the Bayes estimator under the squared error loss function, and for \(\gamma = -2\) with that under the precautionary loss function.
Under GELF, for the beta prior, the Bayes estimators of \(\tau \), \(\rho \) and \(\rho _1\) are obtained as follows:
$$\begin{aligned} {\hat{\tau }}_{BE}^{B} =&\left[ \sum _{\tau =1}^{n-1} \tau ^{-\gamma } ~\pi (\tau |x) \right] ^{-\frac{1}{\gamma }} = \left[ \frac{ \sum _{\tau =1}^{n-1} \tau ^{-\gamma } ~{\mathscr {K}} (a, b, a_1, b_1, \tau , n, r)}{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r)}\right] ^{-\frac{1}{\gamma }}, \end{aligned}$$
(43)
$$\begin{aligned} {\hat{\rho }}_{BE}^{B}&= \left[ \int _{0}^{1} \rho ^{-\gamma } ~ \pi (\rho |x) ~\mathrm{d}\rho \right] ^{- \frac{1}{\gamma }} \nonumber \\&= \left[ \sum \limits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r) \right] ^{\frac{1}{\gamma }} \nonumber \\&\quad \times \left[ \sum \limits _{\tau =1}^{n-1}B(S_{n-\tau }+a_1,b_1) \,_2F_1 \left( S_{n-\tau }+(n-\tau ) r, S_{n-\tau }+a_1, S_{n-\tau }+a_1+b_1; -\frac{1}{r} \right) \right. \nonumber \\&\quad \left. \cdot \int _{0}^{1}\rho ^{S_{\tau }+a -\gamma -1} (1-\rho )^{b-1}\left( 1+\frac{\rho }{r}\right) ^{-(S_{\tau }+\tau r)} ~\mathrm{d}\rho \right] ^{-\frac{1}{\gamma }}\nonumber \\&= \left[ \frac{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a-\gamma , b, a_1, b_1, \tau , n, r) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r) } \right] ^{- \frac{1}{\gamma }} . \end{aligned}$$
(44)
Similarly,
$$\begin{aligned} {\hat{\rho }}_{1BE}^{B}= \left[ \frac{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1 - \gamma , b_1, \tau , n, r) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}} (a, b, a_1, b_1, \tau , n, r) } \right] ^{- \frac{1}{\gamma }}. \end{aligned}$$
(45)
Under GELF, for Jeffreys prior, the Bayes estimators of \(\tau \), \(\rho \) and \(\rho _1\) are,
$$\begin{aligned} {\hat{\tau }}_{BE}^{J}&= \left[ \frac{ \sum _{\tau =1}^{n-1} \tau ^{-\gamma } ~{\mathscr {K}}_1(\frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r) }{\sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1(\frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r)}\right] ^{-\frac{1}{\gamma }}, \end{aligned}$$
(46)
$$\begin{aligned} {\hat{\rho }}_{BE}^{J}&= \left[ \frac{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1 \left( \frac{1}{2}-\gamma , 1, \frac{1}{2}, 1, \tau , n, r\right) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1\left( \frac{1}{2},1,\frac{1}{2},1, \tau , n, r \right) } \right] ^{- \frac{1}{\gamma }}, \end{aligned}$$
(47)
$$\begin{aligned} {\hat{\rho }}_{1BE}^{J}&= \left[ \frac{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1 \left( \frac{1}{2}, 1, \frac{1}{2} - \gamma , 1, \tau , n, r \right) }{ \sum \nolimits _{\tau =1}^{n-1} {\mathscr {K}}_1 \left( \frac{1}{2}, 1, \frac{1}{2}, 1, \tau , n, r \right) } \right] ^{- \frac{1}{\gamma }}. \end{aligned}$$
(48)
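Note that (36)–(37) and (44)–(45) differ from (28)–(29) only in a shift of the relevant argument of \({\mathscr {K}}\) and in the outer power, so one helper covers both PLF and GELF; a sketch with illustrative names:

```python
# Sketch: rho_hat under PLF (shift=2, power=1/2, Eq. (36)) or GELF
# (shift=-gamma, power=-1/gamma, Eq. (44)); shifting a1 instead of a gives
# the corresponding estimators (37) and (45) of rho_1.
def bayes_rho_beta(x, r, a, b, a1, b1, shift, power):
    n = len(x)
    num = sum(K(a + shift, b, a1, b1, t, n, r, x) for t in range(1, n))
    den = sum(K(a, b, a1, b1, t, n, r, x) for t in range(1, n))
    return (num / den) ** power
```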
Table 1

Maximum likelihood and Bayes estimates of \(\tau \), \(\rho \) and \(\rho _1\) for \(r=2\) with different sample sizes n and change points \(\tau \), with beta prior (MSE in parentheses)

| \((n,\tau )\) | Estimate | ML | BS | BP | BE |
| --- | --- | --- | --- | --- | --- |
| (6, 4) | \({\hat{\tau }}\) | 4.0000 (1.7005) | 3.5104 (1.4930) | 3.6826 (1.3209) | 3.8324 (1.1711) |
|  | \({\hat{\rho }}\) | 0.7490 (0.4735) | 0.7683 (0.0283) | 0.7764 (0.0300) | 0.7837 (0.0387) |
|  | \({\hat{\rho }}_1\) | 0.9360 (1.1854) | 0.9275 (0.0283) | 0.9293 (0.0300) | 0.9309 (0.0316) |
| (8, 5) | \({\hat{\tau }}\) | 5.0000 (1.9108) | 4.5271 (2.4849) | 4.8383 (2.1691) | 5.0917 (1.9156) |
|  | \({\hat{\rho }}\) | 0.7993 (0.4102) | 0.7657 (0.0283) | 0.7737 (0.0332) | 0.7814 (0.0361) |
|  | \({\hat{\rho }}_1\) | 0.9300 (1.1558) | 0.9273 (0.0283) | 0.9291 (0.0300) | 0.9308 (0.0316) |
| (10, 6) | \({\hat{\tau }}\) | 6.0000 (2.8940) | 5.6265 (3.7284) | 6.0324 (3.1571) | 6.4112 (2.7421) |
|  | \({\hat{\rho }}\) | 0.7636 (0.4769) | 0.7735 (0.0686) | 0.7818 (0.0663) | 0.7886 (0.0671) |
|  | \({\hat{\rho }}_1\) | 0.8910 (1.1624) | 0.9282 (0.0500) | 0.9299 (0.0458) | 0.9313 (0.0479) |
| (20, 10) | \({\hat{\tau }}\) | 10.0000 (1.0598) | 9.1607 (1.2181) | 11.7856 (2.6136) | 12.6372 (2.6286) |
|  | \({\hat{\rho }}\) | 0.7543 (0.0498) | 0.7455 (1.0438) | 0.7804 (0.1189) | 0.7894 (0.0265) |
|  | \({\hat{\rho }}_1\) | 0.8947 (1.0500) | 0.9264 (0.0383) | 0.9281 (0.0265) | 0.9208 (0.0223) |
| (50, 20) | \({\hat{\tau }}\) | 20.0000 (0.9313) | 20.7967 (1.0018) | 19.3452 (1.2238) | 22.4433 (1.7340) |
|  | \({\hat{\rho }}\) | 0.7648 (0.0413) | 0.7504 (0.6874) | 0.7517 (0.0541) | 0.7823 (0.0167) |
|  | \({\hat{\rho }}_1\) | 0.9000 (0.7900) | 0.9026 (0.0317) | 0.9006 (0.0172) | 0.9009 (0.0171) |
| (100, 50) | \({\hat{\tau }}\) | 50.0000 (0.5054) | 50.9953 (0.9216) | 51.2005 (1.1064) | 51.4028 (1.2907) |
|  | \({\hat{\rho }}\) | 0.7732 (0.0112) | 0.7503 (0.0531) | 0.7504 (0.0331) | 0.7805 (0.0053) |
|  | \({\hat{\rho }}_1\) | 0.9000 (0.5100) | 0.9053 (0.0184) | 0.9015 (0.0089) | 0.9015 (0.0091) |

Table 2

Maximum likelihood and Bayes estimates of \(\tau \), \(\rho \) and \(\rho _1\) for \(r=3\) with different sample sizes n and change points \(\tau \), with beta prior (MSE in parentheses)

| \((n,\tau )\) | Estimate | ML | BS | BP | BE |
| --- | --- | --- | --- | --- | --- |
| (6, 4) | \({\hat{\tau }}\) | 4.0000 (1.8573) | 3.5107 (1.4928) | 3.6826 (1.3211) | 3.8321 (1.1716) |
|  | \({\hat{\rho }}\) | 0.7644 (0.4382) | 0.7682 (0.0283) | 0.7762 (0.0332) | 0.7835 (0.0388) |
|  | \({\hat{\rho }}_1\) | 0.8820 (1.0490) | 0.9276 (0.0283) | 0.9294 (0.0300) | 0.9311 (0.0317) |
| (8, 5) | \({\hat{\tau }}\) | 5.0000 (2.0521) | 4.5211 (2.4862) | 4.8279 (2.1796) | 5.0819 (2.9253) |
|  | \({\hat{\rho }}\) | 0.7953 (0.3864) | 0.7674 (0.0283) | 0.7754 (0.0332) | 0.7827 (0.0387) |
|  | \({\hat{\rho }}_1\) | 0.8860 (1.0664) | 0.9276 (0.0283) | 0.9294 (0.0300) | 0.9310 (0.0316) |
| (10, 6) | \({\hat{\tau }}\) | 6.0000 (2.1659) | 5.5427 (3.3696) | 5.9902 (3.0221) | 6.3483 (2.6634) |
|  | \({\hat{\rho }}\) | 0.7853 (0.3215) | 0.7678 (0.0300) | 0.7757 (0.0346) | 0.7829 (0.0400) |
|  | \({\hat{\rho }}_1\) | 0.8960 (1.0607) | 0.9272 (0.0283) | 0.9290 (0.0300) | 0.9306 (0.0316) |
| (20, 10) | \({\hat{\tau }}\) | 10.0000 (2.4659) | 9.5226 (1.1956) | 11.6893 (1.8046) | 13.2304 (2.5583) |
|  | \({\hat{\rho }}\) | 0.7368 (0.0629) | 0.7563 (0.0170) | 0.7643 (0.0184) | 0.7719 (0.0196) |
|  | \({\hat{\rho }}_1\) | 0.9064 (0.7700) | 0.9020 (0.0084) | 0.9029 (0.0087) | 0.9040 (0.0085) |
| (50, 20) | \({\hat{\tau }}\) | 20.0000 (2.3757) | 19.4710 (1.6394) | 19.8417 (1.1314) | 22.9731 (2.1317) |
|  | \({\hat{\rho }}\) | 0.7500 (0.0432) | 0.7693 (0.0080) | 0.7885 (0.0057) | 0.7816 (0.0086) |
|  | \({\hat{\rho }}_1\) | 0.8925 (0.9100) | 0.9811 (0.0067) | 0.9016 (0.0068) | 0.9121 (0.0069) |
| (100, 50) | \({\hat{\tau }}\) | 50.0000 (2.5416) | 50.6890 (2.3354) | 50.8703 (2.3180) | 51.0499 (2.3007) |
|  | \({\hat{\rho }}\) | 0.7474 (0.0197) | 0.7804 (0.0031) | 0.7804 (0.0053) | 0.7805 (0.0031) |
|  | \({\hat{\rho }}_1\) | 0.9000 (1.1700) | 0.9015 (0.0034) | 0.9020 (0.0046) | 0.9116 (0.0034) |

Table 3

Maximum likelihood and Bayes estimates of \(\tau \), \(\rho \) and \(\rho _1\) for \(r=5\) with different sample sizes n and change points \(\tau \), with beta prior (MSE in parentheses)

| \((n,\tau )\) | Estimate | ML | BS | BP | BE |
| --- | --- | --- | --- | --- | --- |
| (6, 4) | \({\hat{\tau }}\) | 4.0000 (2.0211) | 3.5054 (1.4989) | 3.6773 (1.3271) | 3.8272 (1.1773) |
|  | \({\hat{\rho }}\) | 0.7794 (0.4163) | 0.7685 (0.0283) | 0.7765 (0.0332) | 0.7873 (0.0388) |
|  | \({\hat{\rho }}_1\) | 0.9410 (1.0155) | 0.9276 (0.0283) | 0.9294 (0.0300) | 0.9310 (3.0316) |
| (8, 5) | \({\hat{\tau }}\) | 5.0000 (2.2902) | 4.5265 (2.4819) | 4.8342 (2.1742) | 5.0887 (1.9194) |
|  | \({\hat{\rho }}\) | 0.7740 (0.3714) | 0.7673 (0.0300) | 0.7753 (0.0332) | 0.7826 (0.0387) |
|  | \({\hat{\rho }}_1\) | 0.8960 (1.0243) | 0.9274 (0.0283) | 0.9292 (0.0300) | 0.9390 (0.0316) |
| (10, 6) | \({\hat{\tau }}\) | 6.0000 (2.3738) | 5.5245 (3.4895) | 5.9720 (3.0419) | 6.3311 (2.6820) |
|  | \({\hat{\rho }}\) | 0.7824 (0.3166) | 0.7673 (0.0316) | 0.7752 (0.0346) | 0.7824 (0.0400) |
|  | \({\hat{\rho }}_1\) | 0.9060 (1.0813) | 0.9277 (0.0283) | 0.9295 (0.0300) | 0.9311 (0.0316) |
| (20, 10) | \({\hat{\tau }}\) | 10.0000 (1.8687) | 9.6669 (2.8192) | 11.6617 (1.2150) | 12.4098 (1.2243) |
|  | \({\hat{\rho }}\) | 0.7543 (0.0645) | 0.7707 (0.0394) | 0.7312 (0.0046) | 0.7826 (0.0414) |
|  | \({\hat{\rho }}_1\) | 0.9000 (0.2900) | 0.6922 (0.0407) | 0.9276 (0.1792) | 0.9284 (0.0233) |
| (50, 20) | \({\hat{\tau }}\) | 20.0000 (2.2585) | 20.5122 (1.2169) | 18.6127 (1.1056) | 22.1659 (1.4154) |
|  | \({\hat{\rho }}\) | 0.7500 (0.0205) | 0.7749 (0.0236) | 0.7804 (0.0045) | 0.7811 (0.0054) |
|  | \({\hat{\rho }}_1\) | 0.8959 (0.7100) | 0.9824 (0.0089) | 0.9829 (0.0961) | 0.9833 (0.0070) |
| (100, 50) | \({\hat{\tau }}\) | 50.0000 (3.6470) | 49.4401 (3.7720) | 49.2488 (2.9835) | 52.5099 (3.6022) |
|  | \({\hat{\rho }}\) | 0.7499 (0.0175) | 0.7765 (0.0093) | 0.7852 (0.0018) | 0.7733 (0.0024) |
|  | \({\hat{\rho }}_1\) | 0.9000 (1.0900) | 0.9201 (0.0061) | 0.9202 (0.0082) | 0.9204 (0.0063) |

4 Empirical Bayes estimator

In this section, we obtain empirical Bayes estimators of the traffic intensities \(\rho \) and \(\rho _1\) under the loss functions discussed in the previous section. We apply the method of moments to estimate the hyper-parameters a, b, \(a_1\) and \(b_1\) [see, for instance, Chowdhury (2010, p. 49)]. We take the conditional distribution approach and equate the conditional expectations and conditional variances with the corresponding sample moments.

From (2), i.e., \(X|\rho \sim NB\left( r, \frac{r}{r+\rho }\right) \), we get,
$$\begin{aligned} E_{X|\rho }(X|\rho )=\rho \end{aligned}$$
(49)
$$\begin{aligned} var(X|\rho )=\frac{\rho (\rho +r)}{r}. \end{aligned}$$
(50)
So,
$$\begin{aligned} E(X)=E_{\rho }E_{X|\rho }(X|\rho )=E(\rho )=\frac{a}{a+b}, \end{aligned}$$
(51)
and,
$$\begin{aligned} var(X)&=E_{\rho }[var(X|\rho )]+var[E_{X|\rho }(X|\rho )], \nonumber \\&=E\left[ \frac{\rho (\rho +r)}{r}\right] +var(\rho )\nonumber \\&=E(\rho )[1-E(\rho )]+\frac{r+1}{r}E(\rho ^2) \nonumber \\&=\frac{[ra(a+b)+a^2](a+b+1)+ab(r+1)}{r(a+b)^2(a+b+1)}. \end{aligned}$$
(52)
Suppose the first-order sample raw moment and second-order central moment are \(m_{11}^{'}\) and \(m_{21}\) for the sample \((x_1, x_2,\ldots ,x_{\tau })\). Then equating \(m_{11}^{'}\) and \(m_{21}\) with (51) and (52), respectively, we get,
$$\begin{aligned} m_{11}^{'}=\frac{a}{a+b}, \end{aligned}$$
(53)
and,
$$\begin{aligned} m_{21}=\frac{[ra(a+b)+a^2](a+b+1)+ab(r+1)}{r(a+b)^2(a+b+1)}. \end{aligned}$$
(54)
Solving (53) and (54) simultaneously, we get,
$$\begin{aligned} {\hat{b}}=\frac{{\hat{a}}(1-m_{11}^{'})}{m_{11}^{'}}. \end{aligned}$$
(55)
Here, \({\hat{b}}\) is positive provided \(m_{11}^{'}< 1\), and \({\hat{a}}\) is the positive root of the cubic equation,
$$\begin{aligned} Aa^3+Ba^2+C=0, \end{aligned}$$
(56)
in which,
$$\begin{aligned} A&=\frac{r}{m_{11}^{'2}}+\frac{1}{m_{11}^{'}} \\ B&=\frac{2r}{m_{11}^{'2}}+\frac{1}{m_{11}^{'}}-r \\ C&=-m_{21}. \end{aligned}$$
If the first-order sample raw moment and the second-order sample central moment of the sample \((x_{\tau +1}, x_{\tau +2},\ldots ,x_n)\) are \(m_{12}^{'}\) and \(m_{22}\), then in the same fashion we have,
$$\begin{aligned} {\hat{b}}_1=\frac{{\hat{a}}_1\left( 1-m_{12}^{'}\right) }{m_{12}^{'}} \end{aligned}$$
(57)
with \(m_{12}^{'}< 1\), where \({\hat{a}}_1\) is the positive root of the equation,
$$\begin{aligned} A_1a^3_1+B_1a^2_1+C_1=0, \end{aligned}$$
(58)
where,
$$\begin{aligned} A_1&=\frac{r}{m_{12}^{'2}}+\frac{1}{m_{12}^{'}} \\ B_1&=\frac{2r}{m_{12}^{'2}}+\frac{1}{m_{12}^{'}}-r \\ C_1&=-m_{22}. \end{aligned}$$
Substituting \({\hat{a}}\), \({\hat{b}}\), \({\hat{a}}_1\) and \({\hat{b}}_1\) for a, b, \(a_1\) and \(b_1\) in (27)–(29), (35)–(37) and (43)–(45), we get the empirical Bayes estimators \({\hat{\tau }}^{E}\), \({\hat{\rho }}^{E}\) and \({\hat{\rho }}_1^{E}\) of \(\tau \), \(\rho \) and \(\rho _1\).
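A sketch of this moment-matching step (illustrative names; `numpy.roots` takes coefficients in decreasing degree, and the linear coefficient of (56) is zero):

```python
# Sketch of Eqs. (55)-(56): hyper-parameters (a_hat, b_hat) for one segment;
# applying it to x[tau:] with Eqs. (57)-(58) gives (a1_hat, b1_hat).
import numpy as np

def mom_hyperparams(x_seg, r):
    m1 = np.mean(x_seg)                    # first raw moment m'_11; needs m1 < 1
    m2 = np.var(x_seg)                     # second central moment m_21
    A = r / m1**2 + 1.0 / m1
    B = 2.0 * r / m1**2 + 1.0 / m1 - r
    C = -m2
    roots = np.roots([A, B, 0.0, C])       # A a^3 + B a^2 + C = 0, Eq. (56)
    a_hat = next(z.real for z in roots if abs(z.imag) < 1e-8 and z.real > 0)
    b_hat = a_hat * (1.0 - m1) / m1        # Eq. (55)
    return a_hat, b_hat
```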
Table 4

Bayes estimates of \(\tau \), \(\rho \) and \(\rho _1\) for \(r=2\) with different sample sizes n and change points \(\tau \), for Jeffreys prior (MSE in parentheses)

| \((n,\tau )\) | Estimate | BS | BP | BE |
| --- | --- | --- | --- | --- |
| (6, 4) | \({\hat{\tau }}\) | 3.5143 (2.2882) | 3.6829 (2.8104) | 3.8305 (1.0350) |
|  | \({\hat{\rho }}\) | 0.7503 (0.0945) | 0.7564 (0.0586) | 0.7611 (0.0376) |
|  | \({\hat{\rho }}_1\) | 0.8919 (0.1957) | 0.9115 (0.1351) | 0.9124 (0.0985) |
| (8, 5) | \({\hat{\tau }}\) | 4.5392 (1.3191) | 4.847 (1.8656) | 5.1072 (0.9736) |
|  | \({\hat{\rho }}\) | 0.7522 (0.0834) | 0.7580 (0.0518) | 0.7624 (0.0335) |
|  | \({\hat{\rho }}_1\) | 0.9076 (0.1819) | 0.8739 (0.1265) | 0.9211 (0.0931) |
| (10, 6) | \({\hat{\tau }}\) | 5.6197 (1.1716) | 6.4758 (1.1705) | 6.7294 (0.9206) |
|  | \({\hat{\rho }}\) | 0.7526 (0.0856) | 0.7632 (0.0483) | 0.7664 (0.0727) |
|  | \({\hat{\rho }}_1\) | 0.8966 (0.1501) | 0.8968 (0.1904) | 0.9136 (0.1095) |
| (20, 10) | \({\hat{\tau }}\) | 10.1745 (0.8761) | 9.8925 (1.0676) | 10.3488 (0.6923) |
|  | \({\hat{\rho }}\) | 0.7471 (0.0602) | 0.7556 (0.0517) | 0.7616 (0.0288) |
|  | \({\hat{\rho }}_1\) | 0.8768 (0.0932) | 0.9062 (0.1006) | 0.9235 (0.0075) |
| (50, 20) | \({\hat{\tau }}\) | 20.1753 (0.7321) | 19.9398 (0.8524) | 20.6558 (0.4221) |
|  | \({\hat{\rho }}\) | 0.7502 (0.0456) | 0.7595 (0.0460) | 0.7653 (0.0089) |
|  | \({\hat{\rho }}_1\) | 0.9005 (0.0082) | 0.8878 (0.0080) | 0.9016 (0.0058) |
| (100, 50) | \({\hat{\tau }}\) | 50.0917 (0.6779) | 50.2960 (0.7282) | 50.4974 (0.2591) |
|  | \({\hat{\rho }}\) | 0.7529 (0.0190) | 0.7530 (0.0090) | 0.7530 (0.0050) |
|  | \({\hat{\rho }}_1\) | 0.8932 (0.0070) | 0.9062 (0.0067) | 0.9102 (0.0060) |

Table 5

Bayes estimates of \(\tau \), \(\rho \) and \(\rho _1\) for \(r=3\) with different sample sizes n and change points \(\tau \), for Jeffreys prior (MSE in parentheses)

| \((n,\tau )\) | Estimate | BS | BP | BE |
| --- | --- | --- | --- | --- |
| (6, 4) | \({\hat{\tau }}\) | 3.5164 (2.2582) | 3.6852 (2.0463) | 3.8329 (2.4259) |
|  | \({\hat{\rho }}\) | 0.7510 (0.0901) | 0.7570 (0.0561) | 0.7616 (0.0363) |
|  | \({\hat{\rho }}_1\) | 0.8998 (0.1909) | 0.8678 (0.1319) | 0.8617 (0.0963) |
| (8, 5) | \({\hat{\tau }}\) | 4.5286 (2.3319) | 4.8352 (1.8013) | 5.0900 (1.8258) |
|  | \({\hat{\rho }}\) | 0.7535 (0.0753) | 0.7592 (0.0463) | 0.7634 (0.0297) |
|  | \({\hat{\rho }}_1\) | 0.8837 (0.1696) | 0.8877 (0.1175) | 0.9032 (0.0862) |
| (10, 6) | \({\hat{\tau }}\) | 5.5178 (2.1622) | 5.9843 (1.4728) | 6.3574 (1.2989) |
|  | \({\hat{\rho }}\) | 0.7550 (0.0685) | 0.7604 (0.0423) | 0.7644 (0.0275) |
|  | \({\hat{\rho }}_1\) | 0.8824 (0.1463) | 0.8916 (0.1020) | 0.9210 (0.0753) |
| (20, 10) | \({\hat{\tau }}\) | 9.8639 (1.3783) | 10.5230 (1.4323) | 10.8829 (1.1104) |
|  | \({\hat{\rho }}\) | 0.7472 (0.0746) | 0.7555 (0.0507) | 0.7615 (0.0491) |
|  | \({\hat{\rho }}_1\) | 0.8997 (0.1027) | 0.9321 (0.0942) | 0.9456 (0.0380) |
| (50, 20) | \({\hat{\tau }}\) | 20.1861 (0.9302) | 20.2915 (0.9534) | 20.8192 (0.9327) |
|  | \({\hat{\rho }}\) | 0.7499 (0.0498) | 0.7587 (0.0406) | 0.7645 (0.0428) |
|  | \({\hat{\rho }}_1\) | 0.8869 (0.0106) | 0.9019 (0.0103) | 0.9229 (0.0197) |
| (100, 50) | \({\hat{\tau }}\) | 50.8886 (0.3161) | 51.0704 (0.2986) | 51.2496 (0.2815) |
|  | \({\hat{\rho }}\) | 0.7529 (0.0090) | 0.7529 (0.0090) | 0.7630 (0.0090) |
|  | \({\hat{\rho }}_1\) | 0.8992 (0.0087) | 0.9032 (0.0087) | 0.9081 (0.0086) |

Table 6

Bayes estimates of \(\tau \), \(\rho \) and \(\rho _1\) for \(r=5\) with different sample sizes n and change points \(\tau \), for Jeffreys prior (MSE in parentheses)

| \((n,\tau )\) | Estimate | BS | BP | BE |
| --- | --- | --- | --- | --- |
| (6, 4) | \({\hat{\tau }}\) | 3.5153 (2.2862) | 3.6824 (1.8123) | 3.8289 (1.4392) |
|  | \({\hat{\rho }}\) | 0.7521 (0.0857) | 0.7580 (0.0534) | 0.7624 (0.0346) |
|  | \({\hat{\rho }}_1\) | 0.8750 (0.1888) | 0.8757 (0.1307) | 0.8761 (0.0955) |
| (8, 5) | \({\hat{\tau }}\) | 4.5536 (2.2350) | 4.8621 (1.7970) | 5.1176 (1.3779) |
|  | \({\hat{\rho }}\) | 0.7551 (0.0701) | 0.7605 (0.0432) | 0.7646 (0.0280) |
|  | \({\hat{\rho }}_1\) | 0.9350 (0.1596) | 0.8975 (0.1100) | 0.8764 (0.0805) |
| (10, 6) | \({\hat{\tau }}\) | 5.5452 (2.0381) | 6.0006 (1.4402) | 6.3661 (1.3128) |
|  | \({\hat{\rho }}\) | 0.7557 (0.0688) | 0.7609 (0.0431) | 0.7648 (0.0284) |
|  | \({\hat{\rho }}_1\) | 0.8956 (0.1410) | 0.8961 (0.0977) | 0.8866 (0.0720) |
| (20, 10) | \({\hat{\tau }}\) | 9.8243 (1.9504) | 10.2897 (1.0464) | 10.9609 (1.6908) |
|  | \({\hat{\rho }}\) | 0.7435 (0.0479) | 0.7695 (0.0366) | 0.7623 (0.0356) |
|  | \({\hat{\rho }}_1\) | 0.8964 (0.1360) | 0.9045 (0.1754) | 0.9302 (0.1151) |
| (50, 20) | \({\hat{\tau }}\) | 20.2816 (1.5693) | 20.1785 (0.9097) | 20.5243 (1.0702) |
|  | \({\hat{\rho }}\) | 0.7505 (0.0465) | 0.7590 (0.0428) | 0.7649 (0.0433) |
|  | \({\hat{\rho }}_1\) | 0.8809 (0.1027) | 0.9156 (0.1090) | 0.9341 (0.0876) |
| (100, 50) | \({\hat{\tau }}\) | 50.6512 (0.6505) | 50.7733 (0.4891) | 50.9911 (0.2902) |
|  | \({\hat{\rho }}\) | 0.75573 (0.0401) | 0.76392 (0.0383) | 0.76945 (0.0373) |
|  | \({\hat{\rho }}_1\) | 0.8956 (0.0701) | 0.8941 (0.0645) | 0.9014 (0.0693) |

5 Numerical example

In this section, the estimates of \(\tau \), \(\rho \) and \(\rho _1\) are computed under SELF, PLF and GELF for \(\lambda =3\), \(\lambda _1=3.6\) and \(\mu =4\), i.e., \(\rho =0.75\) and \(\rho _1=0.9\). Since the shape parameter r of the Erlang distribution is assumed known, the effect of r has also been studied. We used the R software to carry out the simulation study.

For given \(\rho =0.75\), \(\rho _1=0.9\) and \(r=2, 3, 5\), random samples of sizes \(n=6\), 8, 10, 20, 50, 100 are generated from the negative binomial distribution. Since values \(E(\rho ) \ge 0.5\) are commonly encountered in queueing, we chose the hyper-parameters of the beta distribution so that \(a \ge b\) and \(a_1 \ge b_1\). For the chosen combinations of hyper-parameters (a, b) and \((a_1, b_1)\), viz. (10, 3) and (18, 1.4), and for given \(\gamma =-3\), change points \(\tau =4, 5, 6, 10, 20, 50\) and sample sizes n, the Bayes estimates of the change point \(\tau \) and of the traffic intensities before \((\rho )\) and after \((\rho _1)\) the change are computed. The procedure is repeated 1000 times, and the estimates obtained for different r are tabulated in Tables 1, 2 and 3. The Bayes estimates of \(\tau \), \(\rho \) and \(\rho _1\) computed with the Jeffreys prior are shown in Tables 4, 5 and 6. Empirical Bayes estimates of the traffic intensities and the change point are summarized in Tables 7, 8 and 9. Throughout the tables, the values in parentheses are the mean square errors (MSEs) of the estimates.
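The authors' simulations were done in R; the following is a minimal sketch (in Python, reusing the helpers sketched in Sects. 2–3, with illustrative names) of a single replication of this design:

```python
# Sketch: one replication for (n, tau) = (20, 10), r = 2, rho = 0.75,
# rho_1 = 0.9, hyper-parameters (a, b) = (10, 3) and (a1, b1) = (18, 1.4).
import numpy as np

rng = np.random.default_rng(2024)
r, n, tau, rho, rho1 = 2, 20, 10, 0.75, 0.90
x = np.concatenate([
    rng.negative_binomial(r, r / (r + rho), size=tau),        # before change
    rng.negative_binomial(r, r / (r + rho1), size=n - tau),   # after change
])
tau_ml = mle_change_point(x, r)                                  # Sect. 2
tau_bs, rho_bs, rho1_bs = bayes_self_beta(x, r, 10, 3, 18, 1.4)  # Sect. 3.3
# Repeating this 1000 times and averaging squared errors gives the MSEs.
```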
Table 7

Empirical Bayes estimates of \(\tau \), \(\rho \) and \(\rho _1\) for \(r=2\) with different sample sizes n and change points \(\tau \) (MSE in parentheses)

| \((n,\tau )\) | Estimate | BS | BP | BE |
| --- | --- | --- | --- | --- |
| (6, 4) | \({\hat{\tau }}\) | 3.9075 (0.0143) | 3.9055 (0.0150) | 3.9041 (0.0156) |
|  | \({\hat{\rho }}\) | 0.7317 (0.0621) | 0.7315 (0.0621) | 0.7514 (0.0621) |
|  | \({\hat{\rho }}_1\) | 0.8784 (0.0099) | 0.8784 (0.0099) | 0.8984 (0.0098) |
| (8, 5) | \({\hat{\tau }}\) | 4.8220 (0.0196) | 4.8170 (0.0208) | 4.8136 (0.0218) |
|  | \({\hat{\rho }}\) | 0.7303 (0.0428) | 0.7503 (0.0331) | 0.7603 (0.0332) |
|  | \({\hat{\rho }}_1\) | 0.8876 (0.0078) | 0.8972 (0.0074) | 0.8970 (0.0077) |
| (10, 6) | \({\hat{\tau }}\) | 5.9234 (0.0069) | 5.9226 (0.0071) | 5.9221 (0.0072) |
|  | \({\hat{\rho }}\) | 0.7382 (0.0262) | 0.7351 (0.0167) | 0.7581 (0.0169) |
|  | \({\hat{\rho }}_1\) | 0.9045 (0.0052) | 0.8940 (0.0059) | 0.9038 (0.0063) |
| (20, 10) | \({\hat{\tau }}\) | 9.6153 (0.0059) | 9.7593 (0.0068) | 10.8483 (0.0049) |
|  | \({\hat{\rho }}\) | 0.7697 (0.0160) | 0.7501 (0.0104) | 0.7603 (0.0094) |
|  | \({\hat{\rho }}_1\) | 0.8895 (0.0032) | 0.8872 (0.0028) | 0.9086 (0.0026) |
| (50, 20) | \({\hat{\tau }}\) | 19.9377 (0.0042) | 20.0193 (0.0047) | 20.0463 (0.0060) |
|  | \({\hat{\rho }}\) | 0.7559 (0.0059) | 0.7579 (0.0076) | 0.7624 (0.0071) |
|  | \({\hat{\rho }}_1\) | 0.8910 (0.0043) | 0.8965 (0.0019) | 0.8918 (0.0016) |
| (100, 50) | \({\hat{\tau }}\) | 50.7255 (0.0024) | 50.9318 (0.0035) | 51.1352 (0.0032) |
|  | \({\hat{\rho }}\) | 0.7591 (0.0008) | 0.7590 (0.0011) | 0.7695 (0.0013) |
|  | \({\hat{\rho }}_1\) | 0.9094 (0.0015) | 0.9094 (0.0016) | 0.9190 (0.0017) |

Table 8

Empirical Bayes estimates of \(\tau \), \(\rho \) and \(\rho _1\) for \(r=3\) with different sample sizes n and change points \(\tau \) (MSE in parentheses)

| \((n,\tau )\) | Estimate | BS | BP | BE |
| --- | --- | --- | --- | --- |
| (6, 4) | \({\hat{\tau }}\) | 3.9545 (0.0091) | 3.9535 (0.0097) | 3.9527 (0.0102) |
|  | \({\hat{\rho }}\) | 0.7560 (0.0622) | 0.7505 (0.0622) | 0.7605 (0.0622) |
|  | \({\hat{\rho }}_1\) | 0.8768 (0.0528) | 0.8868 (0.0485) | 0.8953 (0.0492) |
| (8, 5) | \({\hat{\tau }}\) | 4.9138 (0.0070) | 4.9110 (0.0073) | 4.9089 (0.0076) |
|  | \({\hat{\rho }}\) | 0.7514 (0.0657) | 0.7564 (0.0676) | 0.7615 (0.0686) |
|  | \({\hat{\rho }}_1\) | 0.8784 (0.0526) | 0.8884 (0.0584) | 0.9084 (0.0614) |
| (10, 6) | \({\hat{\tau }}\) | 6.3464 (0.0055) | 6.3254 (0.0059) | 6.3115 (0.0061) |
|  | \({\hat{\rho }}\) | 0.7586 (0.0060) | 0.7594 (0.0062) | 0.7687 (0.0064) |
|  | \({\hat{\rho }}_1\) | 0.8851 (0.0046) | 0.8947 (0.0052) | 0.9045 (0.0056) |
| (20, 10) | \({\hat{\tau }}\) | 10.7677 (0.0054) | 10.5489 (0.0063) | 10.4186 (0.0057) |
|  | \({\hat{\rho }}\) | 0.7576 (0.0528) | 0.7543 (0.0485) | 0.7632 (0.0492) |
|  | \({\hat{\rho }}_1\) | 0.8872 (0.0033) | 0.8926 (0.0035) | 0.9062 (0.0036) |
| (50, 20) | \({\hat{\tau }}\) | 19.9049 (0.0038) | 20.3204 (0.0036) | 19.9137 (0.0043) |
|  | \({\hat{\rho }}\) | 0.7560 (0.0026) | 0.7491 (0.0029) | 0.7632 (0.0031) |
|  | \({\hat{\rho }}_1\) | 0.9086 (0.0018) | 0.9010 (0.0019) | 0.9163 (0.0019) |
| (100, 50) | \({\hat{\tau }}\) | 50.5358 (0.0023) | 50.7201 (0.0021) | 50.9023 (0.0040) |
|  | \({\hat{\rho }}\) | 0.7593 (0.0018) | 0.7599 (0.0018) | 0.7673 (0.0019) |
|  | \({\hat{\rho }}_1\) | 0.9095 (0.0010) | 0.9095 (0.0011) | 0.9187 (0.0010) |

Table 9

Empirical Bayes estimates of \(\tau \), \(\rho \) and \(\rho _1\) for \(r=5\) with different sample sizes n and change points \(\tau \) (MSE in parentheses)

| \((n,\tau )\) | Estimate | BS | BP | BE |
| --- | --- | --- | --- | --- |
| (6, 4) | \({\hat{\tau }}\) | 3.9302 (0.0110) | 3.9288 (0.0115) | 3.9678 (0.0120) |
|  | \({\hat{\rho }}\) | 0.7506 (0.0522) | 0.7489 (0.0468) | 0.7604 (0.0508) |
|  | \({\hat{\rho }}_1\) | 0.8768 (0.0430) | 0.8860 (0.0206) | 0.8962 (0.0207) |
| (8, 5) | \({\hat{\tau }}\) | 4.9133 (0.0092) | 4.9105 (0.0098) | 4.9086 (0.0103) |
|  | \({\hat{\rho }}\) | 0.7572 (0.0518) | 0.7569 (0.0496) | 0.7685 (0.0481) |
|  | \({\hat{\rho }}_1\) | 0.8819 (0.0283) | 0.8914 (0.0157) | 0.9012 (0.0094) |
| (10, 6) | \({\hat{\tau }}\) | 6.0763 (0.0053) | 6.0828 (0.0044) | 6.0882 (0.0097) |
|  | \({\hat{\rho }}\) | 0.7565 (0.0081) | 0.7504 (0.0098) | 0.7603 (0.0092) |
|  | \({\hat{\rho }}_1\) | 0.8870 (0.0186) | 0.8968 (0.0109) | 0.9068 (0.0084) |
| (20, 10) | \({\hat{\tau }}\) | 9.7357 (0.0079) | 10.2786 (0.0028) | 10.2985 (0.0082) |
|  | \({\hat{\rho }}\) | 0.7331 (0.0039) | 0.7556 (0.0088) | 0.7578 (0.0088) |
|  | \({\hat{\rho }}_1\) | 0.8935 (0.0082) | 0.8924 (0.0090) | 0.8980 (0.0067) |
| (50, 20) | \({\hat{\tau }}\) | 19.8329 (0.0046) | 20.7505 (0.0030) | 20.0088 (0.0067) |
|  | \({\hat{\rho }}\) | 0.7519 (0.0029) | 0.7501 (0.0032) | 0.7636 (0.0033) |
|  | \({\hat{\rho }}_1\) | 0.8900 (0.0038) | 0.9035 (0.0038) | 0.9144 (0.0037) |
| (100, 50) | \({\hat{\tau }}\) | 50.1398 (0.0041) | 49.8319 (0.0018) | 50.7694 (0.0061) |
|  | \({\hat{\rho }}\) | 0.7533 (0.0024) | 0.7595 (0.0028) | 0.7655 (0.0030) |
|  | \({\hat{\rho }}_1\) | 0.9035 (0.0030) | 0.9032 (0.0033) | 0.9168 (0.0034) |

6 Aircraft arrival times

For data analysis purposes, we use aircraft arrival times collected from a low-altitude transitional control sector for the period from noon through 8 p.m. on April 30, 1969. We have taken this data set from Hsu (1979). It consists of 213 arrival times, say \(T_1, T_2,\ldots ,T_{213}\), within this period. Hsu (1979) showed that the data are exponential with independent observations and concluded that there was no change point in the data. For illustration purposes, we induce a change point as follows. We compute the 213 interarrival times \(t_i=T_{i}-T_{i-1}\), \(i=1,2,\ldots ,213\), where it is assumed that the initial customer arrives at time \(T_0=0\), so that the first interarrival time is \(t_1=T_1\). The interarrival times \(t_{101}, t_{102},\ldots ,t_{213}\) are then multiplied by the constant factor 0.3. Hence, we have,
$$\begin{aligned} a_i(t)= {\left\{ \begin{array}{ll} 1- \exp (-\lambda t), &\quad {\text {if}} ~~ i = 1,2,\ldots ,100 \\ 1- \exp (-\lambda _1 t), & \quad {\text {if}} ~~ i= 101,102,\ldots ,213, \end{array}\right. } \end{aligned}$$
where \(\lambda _1=0.3\times \lambda \).
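A sketch of this construction and of the scan for the change point (illustrative names; the Hsu (1979) arrival times, and the translation of the rescaled interarrival times into per-service arrival counts, are not reproduced here):

```python
# Sketch: induce the change by scaling the tail interarrival times, then
# evaluate the posterior (13) of tau on the resulting arrival counts x,
# reusing the K helper sketched after Eq. (12).
import numpy as np

def induce_change(arrival_times, factor=0.3, cut=100):
    t = np.diff(arrival_times, prepend=0.0)   # interarrival times, t_1 = T_1
    t[cut:] *= factor                         # lambda_1 = 0.3 * lambda
    return t

def posterior_tau(x, r, a, b, a1, b1):
    n = len(x)
    k = np.array([K(a, b, a1, b1, t, n, r, x) for t in range(1, n)])
    return k / k.sum()                        # pi(tau | x), Eq. (13)
```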
From Fig. 1, it is seen that the change occurs at interarrival time \(\tau =97\). The posterior probability and the posterior mean peak at \(\tau =97\), as is clear from Fig. 2.
Fig. 1 Plot of the interarrival times

Fig. 2 Posterior means and posterior probabilities of a change

7 Concluding remarks

We discussed the estimation of the change point and of the traffic intensity before and after the change in the \(M/E_r/1\) queueing model using a Bayesian approach. The informative beta prior as well as the non-informative Jeffreys prior are taken for the traffic intensities, and a uniform prior for the change point. Empirical Bayes estimation of the parameters is also proposed. As a real-life example, an analysis of aircraft arrival time data is provided. From the tables, it is observed that the estimates obtained using the beta and Jeffreys priors, as well as the empirical Bayes procedure, perform better than the maximum likelihood estimates in terms of MSE, with the empirical Bayes estimates performing best. All the estimates approach the true parameter values, and the MSE decreases in every case as the sample size increases. This indicates that the estimation procedure is sensitive enough to detect a change in \(\rho \) at a very early stage, which would be helpful in taking precautionary measures as necessary.

As a matter of further investigation, one could consider single-server queueing systems such as \(M/G/1\), \(E_r/M/1\), \(G/M/1\) and \(GI/G/1\), and multi-server queues such as \(M/G/c\), \(E_r/M/c\), \(G/M/c\) and \(GI/G/c\).


Acknowledgements

The authors are thankful to the anonymous referees for their valuable comments, which led to significant improvement of the paper.

References

  1. Abramowitz, M., & Stegun, I. A. (1964). Handbook of mathematical functions. New York: Dover.
  2. Acharya, S. K., & Villarreal, C. E. (2013). Change point estimation of service rate in \(M/M/1\) queue. International Journal of Mathematics in Operational Research, 5(1), 110–120.
  3. Almeida, M. A. C., & Cruz, F. R. B. (2017). A note on Bayesian estimation of traffic intensity in single-server Markovian queues. Communications in Statistics – Simulation and Computation. https://doi.org/10.1080/03610918.2017.1353614.
  4. Armero, C. (1994). Bayesian inference in Markovian queues. Queueing Systems, 15, 419–426.
  5. Barry, D., & Hartigan, J. A. (1993). A Bayesian analysis for change point problems. Journal of the American Statistical Association, 88(421), 309–319.
  6. Broemeling, L. D. (1972). Bayesian procedure for detecting a change in a sequence of random variables. Metron, 30, 214–227.
  7. Calabria, R., & Pulcini, G. (1994). An engineering approach to Bayes estimation for the Weibull distribution. Microelectronics Reliability, 34, 789–802.
  8. Chernoff, H., & Zacks, S. (1964). Estimating the current mean of a normal distribution which is subject to change in time. The Annals of Mathematical Statistics, 35(3), 999–1018.
  9. Chowdhury, S., & Maiti, S. (2014). Bayesian estimation of traffic intensity in an \(M/E_r/1\) queueing model. Research & Reviews: Journal of Statistics, 1, 99–106.
  10. Chowdhury, S. (2010). Estimation in queueing models. Ph.D. thesis, Calcutta University. http://hdl.handle.net/10603/159874.
  11. Guttman, I., & Menzefricke, U. (1982). On the use of loss functions in the change point problem. Annals of the Institute of Statistical Mathematics, 34, 319–326.
  12. Hinkley, D. V. (1970). Inference about the change-point in a sequence of random variables. Biometrika, 57, 1–17.
  13. Hsu, D. A. (1979). Detecting shifts of parameter in gamma sequences with applications to stock prices and air traffic flow analysis. Journal of the American Statistical Association, 74, 31–40.
  14. Jain, S. (1995). Estimating changes in traffic intensity for \(M/M/1\) queueing systems. Microelectronics Reliability, 35(11), 1395–1400.
  15. Jain, S. (2001). Estimating the change point of Erlang interarrival time distribution. INFOR, 39(2), 200–207.
  16. Lee, C. B. (1998). Bayesian analysis of a change point in exponential families with application. Computational Statistics & Data Analysis, 27, 195–208.
  17. McGrath, M. F., Gross, D., & Singpurwalla, N. D. (1987). A subjective Bayesian approach to the theory of queues I—Modelling. Queueing Systems, 1, 317–333.
  18. McGrath, M. F., & Singpurwalla, N. D. (1987). A subjective Bayesian approach to the theory of queues II—Inference and information in \(M/M/1\) queues. Queueing Systems, 1, 335–353.
  19. Muddapur, M. V. (1972). Bayesian estimation of parameters in some queueing models. Annals of the Institute of Statistical Mathematics, 24, 327–331.
  20. Norstrom, J. G. (1996). The use of precautionary loss functions in risk analysis. IEEE Transactions on Reliability, 45(3), 400–403.
  21. Raftery, A. E., & Akman, V. E. (1986). Bayesian analysis of a Poisson process with a change-point. Biometrika, 73, 85–89.
  22. Smith, A. F. M. (1975). A Bayesian approach to inference about a change-point in a sequence of random variables. Biometrika, 62, 407–416.
  23. Thiruvaiyaru, D., & Basawa, I. V. (1992). Empirical Bayes estimation for queueing systems and networks. Queueing Systems, 11, 179–202.

Copyright information

© Japanese Federation of Statistical Science Associations 2018

Authors and Affiliations

  1. P. G. Department of Statistics, Sambalpur University, Odisha, India
