
1 Introduction

Consider the activity of an organization with a branch network. Let these branches operate in different regional centers, under the control of one common center, and follow a common strategy. One of the control problems is the comparison of the efficiency of several branches. This information is needed, in particular, to select a rational method of resource distribution, namely, to decide into which branches resources should be allocated.

The solution of this problem can be based on a comparison of numerical efficiency indicators of these branches. In the present paper, such an indicator is the quantity of sales (or consumer demand) of the corresponding product in different branches. We assume that the quantities of sales are random variables whose distribution depends on some unknown parameters. Let the comparison between parameters define the efficiency preferences among branches. The main problem is then to construct a rule for ranking the branches on the basis of a set of samples of small volume.

We consider this problem from the point of view of mathematical statistics. Our solution is based on Lehmann's theory of multiple decision statistical procedures [1] and tests of Neyman structure [2]. Lehmann's theory of multiple decision statistical procedures rests on three points: the choice of the generating hypotheses, the compatibility of the family of tests for the generating hypotheses with the decision space of the original hypotheses, and the additivity of the loss function.

In the present paper, we formulate the problem of comparing branch efficiency as a multiple decision problem. Then we discuss the choice of generating hypotheses and the compatibility condition, and investigate the condition of additivity of the loss function and the condition of unbiasedness.

The paper is organized as follows. In Sect. 2, we give an example of a real trading (distribution) network that we will use throughout the paper to illustrate the theoretical findings. In Sect. 3, the mathematical formulation of the problem is presented. In Sect. 4, a multiple decision procedure for the solution of the stated problem is described. In Sect. 5, the condition of additivity of the loss function and the condition of unbiasedness are analyzed. In the concluding Sect. 6, we summarize the main results of the paper.

2 Example of Real Distribution Network

Let us consider the activity of a trading network with the main office in the regional center and with branches in different regional cities. We suppose this trading network merchandises the same expensive product, for example, cars. In this case, the ratio between the number of sales and the total number of customers will be an efficiency indicator. Let us assume that such a network has worked for some years (as a whole structure) and the management team of the network faces the problem of choosing a rational strategy for the network development. A comparison of the branches' efficiency results is necessary to choose such a strategy.

The numbers of sales and the total numbers of customers are presented in Table 1. Some of the branches started to work later, namely, branches 5, 6, and 7 started to work in 2001 and branch 8 started to work in 2002. The row NUM corresponds to the total number of customers.

Table 1 The numbers of sales and total number of customers

Suppose that the numbers of sales are random variables \(X^{i}\ (i=1,\ldots,8)\). In this case, the data in the table \(x_{j}^{i}\ (i=1,\ldots,N;\ j=1,\ldots,m_{i})\) are observations of these random variables. As the indicator of branch efficiency we consider the ratio of the number of those who make a purchase to the potentially possible number of buyers in the corresponding city.

3 Mathematical Model and Formulation of the Problem

Let N be the number of branches, \(n^{i}_{j}\) the potential number of buyers in city i in year j, and m_i the number of observations for city i, i=1,2,…,N. Define random variables with Bernoulli distributions

$$ \xi_{kj}^{i}=\begin{cases} 0, &P(\xi_{kj}^{i}=0)=q_{kj}^{i}\\ 1, &P(\xi_{kj}^{i}=1)=p_{kj}^{i} \end{cases} $$
(1)

where \(p_{kj}^{i}+q_{kj}^{i}=1;\ k=1,\ldots,n^{i};\ i=1,\ldots,N;\ j=1,\ldots,m_{i}\). In our setting, the random variable \(\xi_{kj}^{i}\) describes the behavior of buyer k in city i in year j, and \(p_{kj}^{i}\) is the probability that buyer k from city i makes a purchase in year j. Therefore, the random variable

$$X_{j}^{i}=\sum_{k=1}^{n^i}\xi_{kj}^{i} $$

describes the number of sales in city i in year j. We use the following notation: \(a_{j}^{i}=E(X_{j}^{i})=\sum_{k=1}^{n^{i}}p_{kj}^{i}\), \(X^{i}=(X^{i}_{1},X^{i}_{2},\ldots,X^{i}_{m_{i}})\), \(x^{i}=(x^{i}_{1},x^{i}_{2},\ldots,x^{i}_{m_{i}})\). The problem is investigated in this paper under the following assumptions:

  • \(n_{j}^{i}=n^{i},\ \forall j=1,\ldots,m_{i}\), where m_i is the number of years of observations for branch i. We assume n^i is known (it can be a fixed percentage of the city population).

  • \(a_{j}^{i}=a^{i}\ (j=1,\ldots,m_{i})\).

  • The random variables \(X_{j}^{i}\) are independent for j=1,…,m_i; i=1,…,N.

  • The random variable \(X_{j}^{i}\) has a normal distribution from the class \(N(a^{i},(\sigma^{i})^{2})\ \forall j=1,\ldots,m_{i}\).

  • The indicator of efficiency (consumer demand) for city i is:

    $$p^{i}=\frac{a^{i}}{n^{i}}=\sum_{k=1}^{n^i}\frac{p_{k}^{i}}{n^{i}} $$
  • The relation between the variance parameters is:

    $$\frac{{\sigma^i}^2}{{\sigma^j}^2}=\frac{n^{i}}{n^{j}} $$

A discussion of these assumptions is given in [3].
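As an illustration of the model above, yearly sales counts can be simulated directly as sums of Bernoulli trials. The following sketch (branch sizes, demand level, and sample counts are invented for the illustration) checks that, for equal demand indicators, the sample variance ratio tracks the ratio \(n^{1}/n^{2}\) assumed in the last bullet:

```python
import random

random.seed(0)

def simulate_sales(n, p, m):
    """m yearly sales counts X_j, each a sum of n Bernoulli(p) trials."""
    return [sum(1 for _ in range(n) if random.random() < p)
            for _ in range(m)]

# Two hypothetical branches with equal demand indicator p but different
# potential-buyer counts n^1, n^2 (all numbers invented).
p = 0.1
n1, n2 = 400, 100
m = 2000
x1 = simulate_sales(n1, p, m)
x2 = simulate_sales(n2, p, m)

def sample_var(x):
    mean = sum(x) / len(x)
    return sum((v - mean) ** 2 for v in x) / (len(x) - 1)

mean1 = sum(x1) / len(x1)
var1, var2 = sample_var(x1), sample_var(x2)
# Var(X^i) = n^i p (1 - p), so the variance ratio tracks n^1 / n^2.
print(var1 / var2, n1 / n2)
```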

The problem of ranking the branches by efficiency can be formulated as a multiple decision problem of choosing among L hypotheses:

$$ \begin{array}{l} H_{1}{:}\ p^{1}=p^{2}=\cdots=p^{N}\\ H_2{:}\ p^{1}>p^{2}=\cdots=p^{N}\\ H_3{:}\ p^{1}<p^{2}=\cdots=p^{N}\\ H_4{:}\ p^{1}=p^{2}>p^3=\cdots=p^{N}\\ \vdots\\ H_{L}{:}\ p^{1}<p^{2}<\cdots<p^{N} \end{array} $$
(2)

Hypothesis \(H_{1}\) means that the efficiencies of all branches are equal. Hypothesis \(H_{2}\) means that branch 1 is the most efficient while all other branches have equal efficiencies, and so on. Hypothesis \(H_{L}\) means that the branches are ranked in efficiency according to their ordinal numbers.

Note that the total number of hypotheses is:

$$ L=\sum_{r=1}^{N}\sum _{k_{1}+k_{2}+\ldots+k_{r}=N}\frac{N!}{k_{1}!k_{2}!\cdots k_{r}!} $$
(3)

This number grows very fast: if N=2 then L=3, if N=3 then L=13, if N=4 then L=75, if N=5 then L=541, and so on.
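This count can be verified numerically. The sketch below computes L via the standard recurrence for ordered set partitions, which is equivalent to formula (3) (the function name is ours):

```python
from math import comb

def num_hypotheses(N):
    """Number L of ranking hypotheses in (2): the number of ordered set
    partitions of N branches (formula (3)), via the recurrence
    a(n) = sum_{k=1}^{n} C(n, k) * a(n - k), with a(0) = 1."""
    a = [1] * (N + 1)
    for n in range(1, N + 1):
        a[n] = sum(comb(n, k) * a[n - k] for k in range(1, n + 1))
    return a[N]

print([num_hypotheses(N) for N in range(2, 6)])  # [3, 13, 75, 541]
```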

4 Compatibility Conditions and Multiple Decision Statistical Procedure

The problem of choosing one of L hypotheses can be studied in the framework of the theory of statistical decision functions [4, 5]. In [1], a constructive way to obtain the solution is given. Research in this and similar directions continues to this day. A detailed bibliography is presented in [6, 7]. An application of multiple decision theory to market graph construction is given in [8].

The main objective of the present paper is to apply the method from [1] to the problem of branch ranking and to illustrate the adequacy of this method for this type of problem.

To construct a test with L decisions for problem (2), we can apply the method described in [1], where the test with L decisions is constructed from tests with K<L decisions. One can use natural tests with K=3, so the problem with L decisions is reduced to a set of three-decision problems. Each of these can in turn be reduced to two ordinary hypothesis testing problems [2].

Consider the following set of three-decision problems (the three-decision generating problems for (2)):

$$ H_{1}^{ij}{:}\ p^{i}<p^{j},\quad H_{2}^{ij}{:}\ p^{i}=p^{j},\quad H_{3}^{ij}{:}\ p^{i}>p^{j} $$
(4)

where \(p^{i}\), \(p^{j}\) are the indicators of efficiency for the pair of branches (i,j). According to [1], the following test for problem (4) can be applied (see [3] for more details):

$$ \delta_{ij}\bigl(x^{i},x^{j} \bigr)=\left \{\begin{array}{l@{\quad }l} d_{1}^{ij},&\mbox{if}\ t_{ij}(x^{i},x^{j})<c_{1}\\ d_{2}^{ij},&\mbox{if}\ c_{1}\leq t_{ij}(x^{i},x^{j})\leq c_{2}\\ d_{3}^{ij},&\mbox{if}\ t_{ij}(x^{i},x^{j})>c_{2}\\ \end{array} \right . $$
(5)

where the test statistic \(t_{ij}\) has the form:

$$ t_{ij}\bigl(x^{i},x^{j}\bigr)= \frac{(\frac{\overline{x^{i}}}{n^{i}}-\frac{\overline{x^{j}}}{n^{j}})/\sqrt{\frac{1}{m_{i}n^{i}}+ \frac{1}{m_{j}n^{j}}}}{\sqrt{(\sum_{k=1}^N\frac{1}{n^{k}}\sum_{l=1}^{m_{k}}(x_{l}^{k}-\overline{x^{k}})^{2})/(\sum_{k=1}^Nm_{k}-N)}} $$
(6)

Here \(d_{k}^{ij}\) is the decision to accept \(H^{ij}_{k}\), k=1,2,3, and \(\overline{x^{i}}=\frac{1}{m_{i}}\sum_{j=1}^{m_{i}}x_{j}^{i}\). The constants \(c_{1},c_{2}\) are defined from the equations:

$$ \begin{aligned} P \bigl(t_{ij}\bigl(X^{i},X^{j}\bigr)<c_{1}|p^{i}=p^{j} \bigr)&=\alpha_{ij} \\ P\bigl(t_{ij}\bigl(X^{i},X^{j}\bigr)>c_{2}|p^{i}=p^{j} \bigr)&=\alpha_{ji} \end{aligned} $$
(7)
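A direct implementation of the statistic (6) and the three-decision rule (5) can be sketched as follows. The data and the critical constants in the usage example are invented for illustration; in practice \(c_{1},c_{2}\) are determined from (7):

```python
import math

def t_stat(xs, ns, i, j):
    """Statistic (6) comparing the demand indicators of branches i and j
    (zero-based indices here; the paper numbers branches from 1)."""
    N = len(xs)
    mean = [sum(x) / len(x) for x in xs]
    # Pooled variance estimate over all N branches, using the assumed
    # relation sigma_i^2 / sigma_j^2 = n^i / n^j.
    ss = sum((1 / ns[k]) * sum((v - mean[k]) ** 2 for v in xs[k])
             for k in range(N))
    s2 = ss / (sum(len(x) for x in xs) - N)
    num = (mean[i] / ns[i] - mean[j] / ns[j]) / math.sqrt(
        1 / (len(xs[i]) * ns[i]) + 1 / (len(xs[j]) * ns[j]))
    return num / math.sqrt(s2)

def decide(t, c1, c2):
    """Three-decision rule (5); c1 < c2 are the constants from (7)."""
    if t < c1:
        return "d1"  # accept H1: p^i < p^j
    if t <= c2:
        return "d2"  # accept H2: p^i = p^j
    return "d3"      # accept H3: p^i > p^j

# Invented toy data: two branches, three years of observations each.
xs = [[10, 12, 11], [30, 29, 31]]
ns = [100, 300]
t = t_stat(xs, ns, 0, 1)
print(round(t, 4), decide(t, -2.776, 2.776))
```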

Problem (4) is a three-decision problem. To construct the test (5), we use the following generating hypotheses (see details in [3]):

$$ \begin{aligned} &h_{ij}{:}\ p^{i}\geq p^{j}\quad \mbox{vs.}\quad k_{ij}{:}\ p^{i}<p^{j} \\ &h_{ji}{:}\ p^{j}\geq p^{i}\quad \mbox{vs.}\quad k_{ji}{:}\ p^{j}<p^{i} \end{aligned} $$
(8)

and combine two well-known unbiased tests for them.

The test (5) is valid under the additional condition of compatibility of the tests for the generating hypotheses, which in our case reads

$$P_{\theta}\bigl(\delta_{ij}(x)=d_{1}\bigr)+P_{\theta}\bigl(\delta_{ij}(x)=d_{2}\bigr)+P_{\theta}\bigl(\delta_{ij}(x)=d_{3}\bigr)=1,\quad \theta=p^{i}-p^{j} $$

This is equivalent to \(c_{1}<c_{2}\), i.e., \(\alpha_{ij}+\alpha_{ji}<1\) (see [1, 8], where the compatibility condition is discussed). For the case \(\alpha_{ij}=\alpha_{ji}\) we have \(c_{2}=-c_{1}\). Now we consider the compatibility conditions for problem (2) with generating hypotheses (4). According to Wald's decision theory [4], a nonrandomized multiple decision statistical procedure for problem (2) is a partition of the sample space into L regions. The construction of the test for the L-decision problem (2) from the tests (5) faces a compatibility problem that has no solution in our case. Indeed, consider problem (2) for the case N=3 and \(\alpha_{ij}=\alpha\), ∀i,j. Then we have 13 regions in the parameter space (13 hypotheses):

$$ \begin{array}{l@{\quad }l@{\quad }l} H_1{:}\ p^1=p^2=p^3, & H_2{:}\ p^1>p^2=p^3, & H_3{:}\ p^1<p^2=p^3 \\ H_4{:}\ p^1=p^2>p^3, & H_5{:}\ p^1=p^2<p^3, & H_6{:}\ p^1=p^3<p^2 \\ H_7{:}\ p^1=p^3>p^2, & H_8{:}\ p^1<p^2<p^3, & H_9{:}\ p^1<p^3<p^2 \\ H_{10}{:}\ p^2<p^1<p^3, & H_{11}{:}\ p^2<p^3<p^1, & H_{12}{:}\ p^3<p^1<p^2 \\ H_{13}{:}\ p^3<p^2<p^1 \\ \end{array} $$
(9)

If we combine the tests (5), then the sample space is divided into \(3^{3}=27\) regions, which are (we put \(c=c_{2}=-c_{1}\)):

$$\begin{aligned} &\begin{array}{l@{\quad }l@{\quad }l}D_1=\left \{ \begin{array}{l} t_{12}<-c\\ t_{13}<-c\\ t_{23}<-c\\ \end{array} \right \};& D_2=\left \{ \begin{array}{l} t_{12}<-c\\ t_{13}<-c\\ |t_{23}|\leq c\\ \end{array} \right \};& D_3=\left \{ \begin{array}{l} t_{12}<-c\\ t_{13}<-c\\ t_{23}>c\\ \end{array} \right \}\\ D_4=\left \{ \begin{array}{l} t_{12}<-c\\ |t_{13}|\leq c\\ t_{23}<-c\\ \end{array} \right \};& D_5=\left \{ \begin{array}{l} t_{12}<-c\\ |t_{13}|\leq c\\ |t_{23}|\leq c\\ \end{array} \right \};& D_6=\left \{ \begin{array}{l} t_{12}<-c\\ |t_{13}|\leq c\\ t_{23}>c\\ \end{array} \right \}\\ D_7=\left \{ \begin{array}{l} t_{12}<-c\\ t_{13}>c\\ t_{23}<-c\\ \end{array} \right \};& D_8=\left \{ \begin{array}{l} t_{12}<-c\\ t_{13}>c\\ |t_{23}|\leq c\\ \end{array} \right \};& D_9=\left \{ \begin{array}{l} t_{12}<-c\\ t_{13}>c\\ t_{23}>c\\ \end{array} \right \}\\ D_{10}=\left \{ \begin{array}{l} |t_{12}|\leq c\\ t_{13}<-c\\ t_{23}<-c\\ \end{array} \right \};& D_{11}=\left \{ \begin{array}{l} |t_{12}|\leq c\\ t_{13}<-c\\ |t_{23}|\leq c\\ \end{array} \right \};& D_{12}=\left \{ \begin{array}{l} |t_{12}|\leq c\\ t_{13}<-c\\ t_{23}>c\\ \end{array} \right \}\\ D_{13}=\left \{ \begin{array}{l} |t_{12}|\leq c\\ |t_{13}|\leq c\\ t_{23}<-c\\ \end{array} \right \};& D_{14}=\left \{ \begin{array}{l} |t_{12}|\leq c\\ |t_{13}|\leq c\\ |t_{23}|\leq c\\ \end{array} \right \};& D_{15}=\left \{ \begin{array}{l} |t_{12}|\leq c\\ |t_{13}|\leq c\\ t_{23}>c\\ \end{array} \right \}\\ D_{16}=\left \{ \begin{array}{l} |t_{12}|\leq c\\ t_{13}>c\\ t_{23}<-c\\ \end{array} \right \};& D_{17}=\left \{ \begin{array}{l} |t_{12}|\leq c\\ t_{13}>c\\ |t_{23}|\leq c\\ \end{array} \right \};& D_{18}=\left \{ \begin{array}{l} |t_{12}|\leq c\\ t_{13}>c\\ t_{23}>c\\ \end{array} \right \}\\ D_{19}=\left \{ \begin{array}{l} t_{12}>c\\ t_{13}<-c\\ t_{23}<-c\\ \end{array} \right \};& D_{20}=\left \{ \begin{array}{l} t_{12}>c\\ t_{13}<-c\\ |t_{23}|\leq c\\ \end{array} \right \};& D_{21}=\left \{ 
\begin{array}{l} t_{12}>c\\ t_{13}<-c\\ t_{23}>c\\ \end{array} \right \}\\ D_{22}=\left \{ \begin{array}{l} t_{12}>c\\ |t_{13}|\leq c\\ t_{23}<-c\\ \end{array} \right \};& D_{23}=\left \{ \begin{array}{l} t_{12}>c\\ |t_{13}|\leq c\\ |t_{23}|\leq c\\ \end{array} \right \};& D_{24}=\left \{ \begin{array}{l} t_{12}>c\\ |t_{13}|\leq c\\ t_{23}>c\\ \end{array} \right \}\\ D_{25}=\left \{ \begin{array}{l} t_{12}>c\\ t_{13}>c\\ t_{23}<-c\\ \end{array} \right \};& D_{26}=\left \{ \begin{array}{l} t_{12}>c\\ t_{13}>c\\ |t_{23}|\leq c\\ \end{array} \right \};& D_{27}=\left \{ \begin{array}{l} t_{12}>c\\ t_{13}>c\\ t_{23}>c\\ \end{array} \right \} \end{array} \end{aligned}$$

From (6), under the additional assumption that \(m_{i}=m\), \(n^{i}=n\), i=1,2,…,N, one has:

$$ \begin{array}{l} t_{13}=t_{12}+t_{23}\\ t_{12}=t_{13}+t_{32}\\ t_{23}=t_{21}+t_{13}\\ \end{array} $$
(10)

Therefore, the sample regions \(D_{4},D_{7},D_{8},D_{12},D_{16},D_{20},D_{21},D_{24}\) are empty. This means that the decision function induced by (5) has 27−8=19 decisions. On the other hand, the decision function for problem (9) must have only 13 decisions. Note that the regions \(D_{5},D_{11},D_{13},D_{15},D_{17},D_{23}\) in the sample space do not have corresponding nonempty regions in the parameter space. For example, the region \(D_{5}\) in the sample space corresponds to the empty region \(p^{1}<p^{2};p^{1}=p^{3};p^{2}=p^{3}\) in the parameter space. At the same time, one has (this is true in the general case too)

$$P\bigl(|t_{ij}|\leq c,|t_{jk}|\leq c,|t_{ik}|>c\bigr)>0 $$

which means that the probability of accepting the decision \(p^{i}=p^{j};p^{j}=p^{k}\) but \(p^{i}\neq p^{k}\) is not equal to zero.
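The linear dependence (10) behind the emptiness of the listed regions can be checked numerically. The sketch below (synthetic data; equal \(m_{i}\) and \(n^{i}\), as assumed in (10)) re-implements the statistic (6) and verifies that \(t_{13}=t_{12}+t_{23}\) up to rounding:

```python
import math
import random

random.seed(1)

def t_stat(xs, ns, i, j):
    # Compact re-implementation of statistic (6) for this check.
    N = len(xs)
    mean = [sum(x) / len(x) for x in xs]
    ss = sum((1 / ns[k]) * sum((v - mean[k]) ** 2 for v in xs[k])
             for k in range(N))
    s2 = ss / (sum(len(x) for x in xs) - N)
    num = (mean[i] / ns[i] - mean[j] / ns[j]) / math.sqrt(
        1 / (len(xs[i]) * ns[i]) + 1 / (len(xs[j]) * ns[j]))
    return num / math.sqrt(s2)

# Equal m_i = m and n^i = n for all three branches, as assumed in (10).
m, n = 5, 1000
xs = [[random.gauss(100.0, 10.0) for _ in range(m)] for _ in range(3)]
ns = [n, n, n]

t12 = t_stat(xs, ns, 0, 1)
t13 = t_stat(xs, ns, 0, 2)
t23 = t_stat(xs, ns, 1, 2)
print(abs(t13 - (t12 + t23)))  # zero up to floating-point rounding
```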

To handle the incompatibility problem, we reformulate the initial ranking problem (2) in the following way. First, we introduce the notation

$$\begin{aligned} & p^{i}\stackrel{\varDelta }{=}p^{j} \quad \Longleftrightarrow \quad \bigl|p^{i}-p^{j}\bigr|\leq \varDelta \\ & p^{i}\stackrel{\varDelta }{<}p^{j} \quad \Longleftrightarrow\quad p^{i}+\varDelta <p^{j} \\ &p^{i}\stackrel{\varDelta }{>}p^{j} \quad \Longleftrightarrow\quad p^{i}>p^{j}+\varDelta \end{aligned}$$

where Δ is a fixed positive number. Next, we formulate a new multiple decision problem of choosing among M hypotheses:

$$ \begin{array}{l} H^\prime_{1}{:}\ p^{i}\stackrel{\varDelta }{=}p^{j},\quad \forall i,j=1,\ldots,N\\ H^\prime_2{:}\ p^{1}\stackrel{\varDelta }{>}p^{i},\quad i=2,\ldots,N;\quad p^{i}\stackrel{\varDelta }{=}p^{j},\quad \forall i,j=2,\ldots,N\\ H^\prime_3{:}\ p^{1}\stackrel{\varDelta }{>}p^{2};p^{2}\stackrel{\varDelta }{>}p^{i},\quad i=3,\ldots, N;\quad p^{i}\stackrel{\varDelta }{=}p^{j},\quad \forall i,j=3,\ldots,N \\ \vdots\\ H^\prime_{M}{:}\ p^{1}\stackrel{\varDelta }{<}p^{2};\quad p^2\stackrel{\varDelta }{<}p^3;\quad \ldots,\quad p^{N-1}\stackrel{\varDelta }{<}p^N\\ \end{array} $$
(11)
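The Δ-relations underlying (11) can be coded as a small helper (a sketch; the function name is ours). Note that the three relations partition the parameter space for each pair:

```python
def delta_compare(pi, pj, delta):
    """Classify a pair of demand indicators by the Delta-relations of (11)."""
    if pi + delta < pj:
        return "<"   # p^i <Δ p^j
    if pi > pj + delta:
        return ">"   # p^i >Δ p^j
    return "="       # |p^i - p^j| <= Delta

print(delta_compare(0.10, 0.20, 0.05))  # <
print(delta_compare(0.10, 0.12, 0.05))  # =
```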

To solve problem (11), we introduce the following set of three-decision generating problems

$$ \begin{array}{l} H^{\prime{(ij)}}_{1}{:}\ p^{i}\stackrel{\varDelta }{<}p^{j} \\ H^{\prime{(ij)}}_{2}{:}\ p^{i}\stackrel{\varDelta }{=}p^{j} \\ H^{\prime{(ij)}}_{3}{:}\ p^{i}\stackrel{\varDelta }{>}p^{j} \\ \end{array} $$
(12)

The test (5) can be applied to problem (12) with the constants \(c_{1},c_{2}\) defined from the equations:

$$ \begin{array}{l} P\bigl(t_{ij}\bigl(X^{i},X^{j}\bigr)<c_{1}|p^{i}+\varDelta =p^{j}\bigr)=\alpha_{ij}\\ P\bigl(t_{ij}\bigl(X^{i},X^{j}\bigr)>c_{2}|p^{i}=p^{j}+\varDelta \bigr)=\alpha_{ji}\\ \end{array} $$
(13)

In this setting, there is a one-to-one correspondence between the associated partition regions in the parameter and sample spaces. For example, for the case N=3, problem (11) can be written as:

$$ \begin{array}{l@{\quad }l@{\quad }l} H^\prime_1{:}\ p^1\stackrel{\varDelta }{<} p^2\stackrel{\varDelta }{<}p^3& H^\prime_2{:}\ p^1\stackrel{\varDelta }{<}p^2\stackrel{\varDelta }{=}p^3 &H^\prime_3{:}\ p^1\stackrel{\varDelta }{<}p^3\stackrel{\varDelta }{<}p^2\\ H^\prime_5{:}\ p^1\stackrel{\varDelta }{\leq}p^3\stackrel{\varDelta }{\leq}p^2 & H^\prime_6{:}\ p^1\stackrel{\varDelta }{=}p^3\stackrel{\varDelta }{<}p^2 & H^\prime_9{:}\ p^3\stackrel{\varDelta }{<}p^1\stackrel{\varDelta }{<}p^2\\ H^\prime_{10}{:}\ p^1\stackrel{\varDelta }{=}p^2\stackrel{\varDelta }{<}p^3 & H^\prime_{11}{:}\ p^1\stackrel{\varDelta }{\leq}p^2\stackrel{\varDelta }{\leq}p^3 & H^\prime_{13}{:}\ p^2\stackrel{\varDelta }{\leq}p^1 \stackrel{\varDelta }{\leq}p^3\\ H^\prime_{14}{:}\ p^1\stackrel{\varDelta }{=}p^3\stackrel{\varDelta }{=}p^2 & H^\prime_{15}{:}\ p^3\stackrel{\varDelta }{\leq}p^1\stackrel{\varDelta }{\leq}p^2 & H^\prime_{17}{:}\ p^3\stackrel{\varDelta }{\leq}p^2\stackrel{\varDelta }{\leq}p^1\\ H^\prime_{18}{:}\ p^3\stackrel{\varDelta }{<}p^1\stackrel{\varDelta }{=}p^2 & H^\prime_{19}{:}\ p^2\stackrel{\varDelta }{<}p^1\stackrel{\varDelta }{<}p^3 & H^\prime_{22}{:}\ p^2\stackrel{\varDelta }{<}p^1\stackrel{\varDelta }{=}p^3\\ H^\prime_{23}{:}\ p^2\stackrel{\varDelta }{\leq}p^3\stackrel{\varDelta }{\leq}p^1 & H^\prime_{25}{:}\ p^2\stackrel{\varDelta }{<}p^3\stackrel{\varDelta }{<}p^1 & H^\prime_{26}{:}\ p^2\stackrel{\varDelta }{=}p^3\stackrel{\varDelta }{<}p^1\\ H^\prime_{27}{:}\ p^3\stackrel{\varDelta }{<}p^2\stackrel{\varDelta }{<}p^1 & &\\ \end{array} $$
(14)

where \(p^{i}\stackrel{\varDelta }{\leq}p^{j}\stackrel{\varDelta }{\leq}p^{k}\) means \(|p^{i}-p^{j}|\leq \varDelta \), \(|p^{j}-p^{k}|\leq \varDelta \), and \(p^{i}+\varDelta <p^{k}\). It is easy to see that there exists a one-to-one correspondence \(D_{i} \longleftrightarrow H'_{i}\) between the partition of the sample space (see above) and the partition of the parameter space (14).

Note that the number M of hypotheses in (11) is larger than the number of hypotheses in (2).

We illustrate our findings with a practical example.

Example

The results of applying the multiple decision procedure for problem (11) to the data from Table 1 with \(\varDelta =10^{-6}\) are given in Table 2.

Table 2 The results of testing the three-decision generating problems (12)

According to (11), the accepted decisions are:

$$ \begin{aligned} &p^{1}>p^{i}+ \varDelta ,\quad i=3,4,5,6; \\ &p^{2}>p^{i}+\varDelta , \quad i=3,5,6; \\ &p^{4}>p^{3}+\varDelta ; \\ &p^6>p^3+\varDelta ; \\ &p^{7}>p^{i}+\varDelta ,\quad i=1,2,3,4,5,6,8; \\ &\bigl|p^{i}-p^{j}\bigr|<\varDelta , \quad (i,j)=(1,2),(1,8),(2,4),(2,8),(3,5),(4,5), \\ &\quad \phantom{|p^{i}-p^{j}|<\varDelta , \ (i,j)= } (4,6),(4,8),(5,6),(5,8),(6,8) \end{aligned} $$
(15)

The general conclusion about the branch efficiency indicators can be written as follows:

$$ p^{3}\stackrel{\varDelta }{\leq}p^{5} \stackrel{\varDelta }{\leq}p^{6}\stackrel{\varDelta }{\leq}p^{4} \stackrel{\varDelta }{\leq}p^{8}\stackrel{\varDelta }{\leq}p^{2} \stackrel{\varDelta }{\leq}p^{1}\stackrel{\varDelta }{<}p^{7} $$
(16)
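As a consistency check, the strict pairwise decisions from (15) can be compared against the chain (16) programmatically. The sketch below encodes the decisions listed above and verifies that every strict pair respects the chain order:

```python
# Strict pairwise decisions p^i > p^j + Delta accepted in (15),
# written as (more efficient branch, less efficient branch).
strict = ([(1, k) for k in (3, 4, 5, 6)]
          + [(2, k) for k in (3, 5, 6)]
          + [(4, 3), (6, 3)]
          + [(7, k) for k in (1, 2, 3, 4, 5, 6, 8)])

# The chain (16), listed from least to most efficient branch.
chain = [3, 5, 6, 4, 8, 2, 1, 7]
pos = {b: r for r, b in enumerate(chain)}

# Every strict pairwise decision must respect the chain order.
consistent = all(pos[w] > pos[l] for w, l in strict)
print(consistent)  # True
```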

5 Statistical Optimality of Ranking

In this section, we discuss some properties of the constructed multiple decision statistical procedure. In particular, we show that this procedure is optimal in the class of unbiased multiple decision statistical procedures. To prove this fact, we follow the method proposed in [1].

First, we show that the test (5) with the constants defined by (13) is optimal in the class of unbiased statistical procedures for problem (12).

The generating hypotheses for problem (12) are:

$$ \begin{array}{l} h^\prime_{ij}{:}\ p^i\leq p^j+\varDelta \quad \mbox{vs.}\quad k^\prime_{ij}{:}\ p^i>p^j+\varDelta \\ h^\prime_{ji}{:}\ p^j\leq p^i+\varDelta \quad \mbox{vs.}\quad k^\prime_{ji}{:}\ p^j>p^i+\varDelta \\ \end{array} $$
(17)

The uniformly most powerful unbiased tests for problems (17) are [2]:

$$\begin{aligned} \delta^\prime_{ij} \bigl(x^{i},x^{j}\bigr)&=\left \{\begin{array}{l@{\quad }l} d^\prime_{ij},&\mbox{if}\ t_{ij}(x^{i},x^{j})>c_{2}\\ d_{ij},&\mbox{if}\ t_{ij}(x^{i},x^{j})<c_{2}\\ \end{array} \right . \end{aligned}$$
(18)
$$\begin{aligned} \delta^\prime_{ji} \bigl(x^{i},x^{j}\bigr)&=\left \{\begin{array}{l@{\quad }l} d^\prime_{ji},&\mbox{if}\ t_{ji}(x^{i},x^{j})>c_{1}\\ d_{ji},&\mbox{if}\ t_{ji}(x^{i},x^{j})<c_{1}\\ \end{array} \right . \end{aligned}$$
(19)

where \(d^{\prime}_{ij}\) (\(d_{ij}\)) is the decision of rejection (acceptance) of hypothesis \(h^{\prime}_{ij}\), and the constants \(c_{1},c_{2}\) are defined by (13). If \(\alpha_{ij}+\alpha_{ji}\leq 1\), then the compatibility condition is satisfied (see Sect. 4). Note that the power functions of the tests (18)–(19) are continuous.

Consider the loss function for the three-decision test (5). To simplify the argument, we drop the index (i,j) in the notation. Let \(w_{lk}\) be the loss from accepting decision \(d_{k}\) when \(H'_{l}\) is true, l,k=1,2,3, with \(w_{ll}=0\); let \(a_{1},a_{2}\) be the losses from rejecting \(h^{\prime}_{ij}\) and \(h^{\prime}_{ji}\) when they are true, and \(b_{1},b_{2}\) the losses from accepting \(h^{\prime}_{ij}\) and \(h^{\prime}_{ji}\) when \(k^{\prime}_{ij}\) and \(k^{\prime}_{ji}\) are true. We can evaluate the losses as follows. Suppose the company has a fund s to be invested in the development of the branches. The investment strategy is to invest in the more efficient branch and to split the investment if the branches are equally efficient. In this case, if hypothesis \(H'_{1}\) is true, then the losses from decisions \(d_{1},d_{2},d_{3}\) are \(w_{11}=0\), \(w_{12}=s/2\), \(w_{13}=s\). If hypothesis \(H'_{2}\) is true, then the losses are \(w_{21}=s/2\), \(w_{22}=0\), \(w_{23}=s/2\). If hypothesis \(H'_{3}\) is true, then the losses are \(w_{31}=s\), \(w_{32}=s/2\), \(w_{33}=0\). Therefore, the following relations hold:

$$ w_{12}=b_2;\quad w_{13}=a_1+b_2;\quad w_{23}=a_1;\quad w_{21}=a_2;\quad w_{31}=a_2+b_1;\quad w_{32}=b_1 $$
(20)

These are exactly the additivity conditions for the loss function in [1]. The additivity conditions imply

$$ w_{13}=w_{12}+w_{23};\quad w_{31}=w_{32}+w_{21} $$
(21)
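The investment-fund losses above and the implied relations (21) can be checked directly (a sketch with s normalized to 1):

```python
# Losses w_lk for the three-decision problem, from the investment example:
# a fund s goes to the branch judged more efficient and is split on a tie.
s = 1.0
w = {(1, 1): 0.0,   (1, 2): s / 2, (1, 3): s,
     (2, 1): s / 2, (2, 2): 0.0,   (2, 3): s / 2,
     (3, 1): s,     (3, 2): s / 2, (3, 3): 0.0}

# The additivity relations (21).
print(w[(1, 3)] == w[(1, 2)] + w[(2, 3)])  # True
print(w[(3, 1)] == w[(3, 2)] + w[(2, 1)])  # True
```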

We have thus established that the compatibility condition for the tests of the generating hypotheses is satisfied, the loss function is additive, and the power functions of the tests (18)–(19) are continuous. Therefore, combining the most powerful unbiased two-decision tests, we obtain an optimal statistical procedure in the class of unbiased three-decision statistical procedures.

The next step is to consider the general problem (11)–(12). It was shown above that the tests (5) for problem (11) are compatible and optimal in the class of unbiased three-decision statistical procedures. The condition of additivity of the loss function is:

$$ w(\varTheta, \delta)=\sum_{i<j}w(\varTheta,\delta_{ij}) $$
(22)

where \(\delta_{ij}\) is the statistical procedure (5) for problem (12) with the constants defined by (13), and \(\varTheta =(p^{1},\ldots,p^{N})\). This condition means that the total loss is the sum of the losses from the statistical procedures for the generating three-decision problems.

To illustrate relation (22), we consider the case N=3; the general case can be treated in the same way. The multiple decision problem (11) for the case N=3 is (14), where M=19.

Case 1.

The true decision and the taken decision differ in one pair of branches: one adjacent-hypotheses error. For example, suppose hypothesis \(H^{\prime}_{14}\) is true, but the decision \(d^{\prime}_{11}\) is accepted. In this case, one has \(w_{14,11}=w_{21}^{13}\). This is the loss from accepting the wrong decision \(p^{1}\stackrel{\varDelta }{<}p^{3}\) when \(p^{1}\stackrel{\varDelta }{=}p^{3}\) is true. The intersection of the closures of the parametric domains of these two hypotheses is not empty. We call this type of error an adjacent-hypotheses error. This type of error can be coded by 1–2 (hypotheses \(H_{1}\) and \(H_{2}\)), 2–1 (hypotheses \(H_{2}\) and \(H_{1}\)), 2–3 (hypotheses \(H_{2}\) and \(H_{3}\)), or 3–2 (hypotheses \(H_{3}\) and \(H_{2}\)).

Case 2.

The true decision and the taken decision differ in one pair of branches: one separated-hypotheses error. For example, suppose hypothesis \(H^{\prime}_{1}\) is true, but the decision \(d^{\prime}_{3}\) is accepted. In this case, one has \(w_{1,3}=w_{13}^{23}\). This is the loss from accepting the wrong decision \(p^{3}\stackrel{\varDelta }{<}p^{2}\) when \(p^{2}\stackrel{\varDelta }{<}p^{3}\) is true. The intersection of the closures of the parametric domains of these two hypotheses is empty. We call this type of error a separated-hypotheses error. This type of error can be coded by 3–1 (hypotheses \(H_{3}\) and \(H_{1}\)) or 1–3 (hypotheses \(H_{1}\) and \(H_{3}\)). Note that this type of error is more serious than an adjacent-hypotheses error.

Case 3.

The true decision and the taken decision differ in two pairs of branches: two adjacent-hypotheses errors. For example, suppose hypothesis \(H^{\prime}_{1}\) is true, but the decision \(d^{\prime}_{11}\) is taken. In this case, the additivity condition gives \(w_{1,11}=w_{12}^{12}+w_{12}^{23}\). This means that we have an error of type 1–2 in the comparison of branches 1 and 2 and an error of type 1–2 in the comparison of branches 2 and 3.

Case 4.

The true decision and the taken decision differ in two pairs of branches: two separated-hypotheses errors. For example, suppose hypothesis \(H^{\prime}_{2}\) is true, but the decision \(d^{\prime}_{26}\) is taken. In this case, one has \(w_{2,26}=w_{13}^{13}+w_{13}^{21}\). This means that we have an error of type 1–3 in the comparison of branches 1 and 3 and an error of type 1–3 in the comparison of branches 2 and 1.

Case 5.

The true decision and the taken decision differ in two pairs of branches: one adjacent-hypotheses error and one separated-hypotheses error. For example, suppose hypothesis \(H^{\prime}_{2}\) is true, but the decision \(d^{\prime}_{23}\) is taken. In this case, one has \(w_{2,23}=w_{13}^{12}+w_{12}^{13}\). This means that we have an error of type 1–3 in the comparison of branches 1 and 2 and an error of type 1–2 in the comparison of branches 1 and 3.

Case 6.

The true decision and the taken decision differ in three pairs of branches: two adjacent-hypotheses errors and one separated-hypotheses error. For example, suppose hypothesis \(H^{\prime}_{1}\) is true, but the decision \(d^{\prime}_{23}\) is taken. In this case, one has \(w_{1,23}=w_{13}^{12}+w_{12}^{13}+w_{12}^{23}\). This means that we have an error of type 1–3 in the comparison of branches 1 and 2, an error of type 1–2 in the comparison of branches 1 and 3, and an error of type 1–2 in the comparison of branches 2 and 3.

Case 7.

The true decision and the taken decision differ in three pairs of branches: three separated-hypotheses errors. For example, suppose hypothesis \(H^{\prime}_{1}\) is true, but the decision \(d^{\prime}_{27}\) is taken. In this case, one has \(w_{1,27}=w_{13}^{12}+w_{13}^{13}+w_{13}^{23}\). This means that we have an error of type 1–3 in all three comparisons: branches 1 and 2, branches 1 and 3, and branches 2 and 3.

The condition of additivity of the loss function in our problem means that a larger weight is attached to losses resulting from a taken decision that lies far from the true decision. We have thus established that the compatibility condition for the tests of the generating hypotheses for problem (11) is satisfied and that the loss function is additive. Therefore, combining optimal unbiased three-decision statistical procedures, we obtain an optimal statistical procedure in the class of unbiased statistical procedures with M decisions.

We end this section with a discussion of unbiasedness in the three-decision case. Following [2], we call the statistical procedure δ(x) unbiased if, for any θ, θ′∈Ω,

$$ E_{\theta}w\bigl(\theta',\delta(x)\bigr)\geq E_{\theta}w\bigl(\theta,\delta(x)\bigr) $$
(23)

In our case \(\theta =p^{i}-p^{j}\). For the statistical procedure (5), the conditional risk is:

$$ E_{\theta}w\bigl(\theta,\delta(x)\bigr)=\begin{cases} w_{12}P_{\theta}(\delta(x)=d_{2})+w_{13}P_{\theta}(\delta(x)=d_{3}), &\mbox{if } p^i\stackrel{\varDelta }{<}p^j\\ w_{21}P_{\theta}(\delta(x)=d_{1})+w_{23}P_{\theta}(\delta(x)=d_{3}), &\mbox{if } p^i\stackrel{\varDelta }{=}p^j\\ w_{31}P_{\theta}(\delta(x)=d_{1})+w_{32}P_{\theta}(\delta(x)=d_{2}), &\mbox{if } p^i\stackrel{\varDelta }{>}p^j \end{cases} $$
(24)

Therefore, from (21) and the relation

$$P_{\theta}\bigl(\delta(x)=d_{1}\bigr)+P_{\theta}\bigl(\delta(x)=d_{2}\bigr)+P_{\theta}\bigl(\delta(x)=d_{3}\bigr)=1 $$

the statistical procedure δ(x) for the problem (12) is unbiased if and only if:

$$ \begin{cases} P_{\theta}(\delta(x)=d_{1})\geq \frac{w_{12}}{w_{12}+w_{21}}=\alpha_{i,j}, &\mbox{if } p^{i}\stackrel{\varDelta }{<}p^{j}\\ P_{\theta}(\delta(x)=d_{3})\geq \frac{w_{32}}{w_{23}+w_{32}}=\alpha_{j,i}, &\mbox{if }p^{i}\stackrel{\varDelta }{>}p^{j} \\ P_{\theta}(\delta(x)=d_{1})\leq \frac{w_{12}}{w_{12}+w_{21}}=\alpha_{i,j}, &\mbox{if }p^{i}\stackrel{\varDelta }{=}p^{j} \\ P_{\theta}(\delta(x)=d_{3})\leq \frac{w_{32}}{w_{23}+w_{32}}=\alpha_{j,i}, &\mbox{if }p^{i}\stackrel{\varDelta }{=}p^{j} \\ P_{\theta}(\delta(x)=d_{2})+\frac{w_{31}+w_{13}}{w_{21}+w_{12}}P_{\theta}(\delta(x)=d_{3})\leq \frac{w_{31}}{w_{12}+w_{21}} &\mbox{if }p^{i}\stackrel{\varDelta }{<}p^{j} \\ P_{\theta}(\delta(x)=d_{2})+\frac{w_{31}+w_{13}}{w_{23}+w_{32}}P_{\theta}(\delta(x)=d_{1})\leq \frac{w_{13}}{w_{32}+w_{23}} &\mbox{if }p^{i}\stackrel{\varDelta }{>}p^{j} \\ \end{cases} $$
(25)

The first four conditions in (25) are the usual restrictions on the probabilities of wrong decisions. The last two conditions in (25) are restrictions on linear combinations of the probabilities of wrong decisions when hypothesis \(H_{1}^{\prime(ij)}\) or \(H_{3}^{\prime(ij)}\) is true.

6 Conclusions

The problem of ranking branch efficiency is considered as a multiple decision problem. A solution of this problem is given on the basis of multiple decision theory. As a result, a corresponding multiple decision statistical procedure is constructed. This multiple decision statistical procedure is obtained as a combination of three-decision statistical procedures. It is shown that these three-decision statistical procedures are optimal in the class of unbiased statistical procedures, and as a consequence the multiple decision statistical procedure is optimal in the class of unbiased multiple decision statistical procedures.