12.1. Introduction

We will use the same notations as in the previous chapters. Lower-case letters x, y, … will denote real scalar variables, whether mathematical or random. Capital letters X, Y, … will be used to denote real matrix-variate mathematical or random variables, whether square or rectangular matrices are involved. A tilde will be placed on top of letters such as \(\tilde {x},\tilde {y},\tilde {X},\tilde {Y}\) to denote variables in the complex domain. Constant matrices will for instance be denoted by A, B, C. A tilde will not be used on constant matrices unless the point is to be stressed that the matrix is in the complex domain. The determinant of a square matrix A will be denoted by |A| or det(A) and, in the complex case, the absolute value or modulus of the determinant of A will be denoted as |det(A)|. When matrices are square, their order will be taken as p × p, unless specified otherwise. When A is a full rank matrix in the complex domain, then AA* is Hermitian positive definite, where an asterisk designates the complex conjugate transpose of a matrix. Additionally, dX will indicate the wedge product of all the distinct differentials of the elements of the matrix X. Thus, letting the p × q matrix X = (x ij) where the x ij’s are distinct real scalar variables, \(\mathrm {d}X=\wedge _{i=1}^p\wedge _{j=1}^q\mathrm {d}x_{ij}\). For the complex matrix \(\tilde {X}=X_1+iX_2,\ i=\sqrt {(-1)}\), where X 1 and X 2 are real, \(\mathrm {d}\tilde {X}=\mathrm {d}X_1\wedge \mathrm {d}X_2\).

Historically, classification problems arose in anthropological studies. By taking a set of measurements on skeletal remains, anthropologists wanted to classify them as belonging to a certain racial group such as being of African or European origin. The measurements might have been of the following type: x 1 =  width of the skull, x 2 =  volume of the skull, x 3 =  length of the thigh bone, x 4 =  width of the pelvis, and so on. Let the measurements be represented by a p × 1 vector X, with X′ = (x 1, …, x p) where a prime denotes the transpose. Nowadays, classification procedures are employed in all types of problems occurring in various contexts. For example, consider the situation of a battery of tests in an entrance examination to admit students into a professional program such as medical sciences, law studies, engineering science or management studies. Based on the p × 1 vector of test scores, a statistician would like to classify an applicant as to whether or not he/she belongs to the group of applicants who will successfully complete a given program. This is a 2-group situation. If a third category is added such as those who are expected to complete the program with flying colors, this will become a 3-group situation. In general, one will have a k-group situation when an individual is classified into one of k classes.

Table 12.1: Cost of misclassification C(i|j)

                         True population j
  Classified into i      π 1              π 2
  π 1                    C(1|1) = 0       C(1|2) > 0
  π 2                    C(2|1) > 0       C(2|2) = 0

Let us begin with the 2-group situation. The problem consists of classifying the p × 1 vector X into one of two groups, classes or categories. Let the categories be denoted by population π 1 and population π 2. This means X will either belong to π 1 or to π 2, no other options being considered. The p × 1 vector X may be taken as a point in a p-space R p or p-dimensional Euclidean space \(\Re ^p\). In a two-group situation when it is decided that the candidate either belongs to the population π 1 or the population π 2, two subspaces A 1 and A 2 within the p-space R p are determined: A 1 ⊂ R p and A 2 ⊂ R p, with A 1 ∩ A 2 = O (the empty set), and a decision rule can be symbolically written as A = (A 1, A 2). If X falls in A 1, the candidate is classified into π 1 and if X falls in A 2, then the candidate is classified into π 2. In other words, X ∈ A 1 means the individual is classified into population π 1 and X ∈ A 2 means that the individual is classified into population π 2. The regions A 1 and A 2 or the rule A = (A 1, A 2) are not known beforehand. These are to be determined by employing certain decision rules. Criteria for determining A 1 and A 2 will be subsequently put forward. Let us now consider the consequences. When a decision is made to classify X as coming from π 1, either the decision is correct or the decision is erroneous. If the population is actually π 1 and the decision rule classifies X into π 1, then the decision is correct. If X is classified into π 2 when in reality the population is π 1, then a mistake has been committed or a misclassification occurred. Misclassification will involve penalties, costs or losses. Let such a penalty, cost or loss of classifying an individual into group i when he/she actually belongs to group j be denoted by C(i|j). In a 2-group situation, i and j can only equal 1 or 2. That is, C(1|2) > 0 and C(2|1) > 0 are the costs of misclassifying, whereas C(1|1) = 0 and C(2|2) = 0 since there is no cost or penalty associated with correct decisions. Table 12.1 summarizes this discussion.

12.2. Probabilities of Classification

The vector random variable corresponding to the observation vector X may have its own probability/density function. The real scalar variables as well as the observations on them will be denoted by the lower-case letters x 1, …, x p. When dealing with the probability/density function of X, X is taken as a vector random variable, whereas when looked upon as a point in the p-space R p, X is deemed to be an observation vector. The p × 1 vector X may have a probability/density function P(X). In a 2-group or two-class situation, P(X) is either P 1(X), the population density of π 1, or P 2(X), the population density of π 2. For convenience, it will be assumed that X is of the continuous type, the derivations in the discrete case being analogous. In the 2-group situation, P(X) can only be P 1(X) or P 2(X). What is then the probability of achieving a correct classification under the rule A = (A 1, A 2)? If the sample point X falls in A 1, we classify the candidate as belonging to π 1, and if the true population is also π 1, then a correct decision is made. In that instance, the corresponding probability is

$$\displaystyle \begin{aligned} Pr\{1|1,A\}=\int_{A_1}P_1(X)\mathrm{d}X{} \end{aligned} $$
(12.2.1)

where dX = dx 1 ∧dx 2 ∧… ∧dx p, A = (A 1, A 2) denoting one decision rule or one given set of subspaces of the p-space R p. The probability of misclassification in this case is

$$\displaystyle \begin{aligned} Pr\{2|1,A\}=\int_{A_2}P_1(X)\mathrm{d}X.{} \end{aligned} $$
(12.2.2)

Similarly, the probabilities of correctly selecting and misclassifying P 2(X) are respectively given by

$$\displaystyle \begin{aligned} Pr\{2|2,A\}=\int_{A_2}P_2(X)\mathrm{d}X{} \end{aligned} $$
(12.2.3)

and

$$\displaystyle \begin{aligned} Pr\{1|2,A\}=\int_{A_1}P_2(X)\mathrm{d}X.{} \end{aligned} $$
(12.2.4)

In a Bayesian setting, there is a prior probability q 1 of selecting the population π 1 and q 2 of selecting the population π 2, with q 1 + q 2 = 1. Then, what will be the probability of drawing an observation from π 1 and misclassifying it as belonging to π 2? It is \(q_1\times Pr\{2|1,A\}=q_1\int _{A_2}P_1(X)\mathrm {d}X\) and, similarly, the probability of drawing an observation from π 2 and misclassifying it as coming from π 1 is \(q_2\times Pr\{1|2,A\}=q_2\int _{A_1}P_2(X)\mathrm {d}X\), with the respective costs of misclassifications being C(2|1) = C(2|1, A) and C(1|2) = C(1|2, A). What is then the expected cost of misclassification? It is the sum of the costs multiplied by the corresponding probabilities. Thus,

$$\displaystyle \begin{aligned} \mbox{the expected cost }=q_1\,C(2|1)Pr\{2|1,A\}+q_2\,C(1|2)Pr\{1|2,A\}.{} \end{aligned} $$
(12.2.5)

So, an advantageous criterion to rely on when setting up A 1 and A 2 would consist in minimizing the expected cost as given in (12.2.5), and a rule could be devised for determining A 1 and A 2 accordingly. This actually corresponds to Bayes’ rule. How can one interpret this expected cost? For example, in the case of admitting students to a particular program of study based on a vector X of test scores, it is the cost of admitting potentially incompetent students or students who would not have successfully completed the program of study and training them, plus the projected cost of losing good students who would have successfully completed the program of study.
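As a small numerical illustration of (12.2.5), the following sketch in Python evaluates the expected cost of a candidate rule A; the function name and all numerical values are hypothetical and serve only as an example.

# Sketch: expected cost of misclassification under a rule A, as in (12.2.5).
# All numerical values below are hypothetical.
def expected_cost(q1, q2, c21, c12, p21, p12):
    # q1, q2: prior probabilities of pi_1 and pi_2 (q1 + q2 = 1)
    # c21 = C(2|1), c12 = C(1|2): costs of misclassification
    # p21 = Pr{2|1, A}, p12 = Pr{1|2, A}: misclassification probabilities under A
    return q1 * c21 * p21 + q2 * c12 * p12

print(expected_cost(q1=0.6, q2=0.4, c21=10.0, c12=5.0, p21=0.1, p12=0.2))  # 1.0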

If prior probabilities q 1 and q 2 are not involved, then the expected cost of misclassifying an observation from π 1 as coming from π 2 is

$$\displaystyle \begin{aligned} C(2|1)Pr\{2|1,A\}\equiv E_1(A),{} \end{aligned} $$
(12.2.6)

and the expected cost of misclassifying an observation from π 2 as coming from π 1 is

$$\displaystyle \begin{aligned} C(1|2)Pr\{1|2,A\}\equiv E_2(A).{} \end{aligned} $$
(12.2.7)

We would like to have E 1(A) and E 2(A) as small as possible. In this case, a procedure, rule or criterion A = (A 1, A 2) corresponds to determining suitable subspaces A 1 and A 2 in the p-space R p. If there is another procedure \(A^{(j)}=(A_1^{(j)},A_2^{(j)})\) such that E 1(A) ≤ E 1(A (j)) and E 2(A) ≤ E 2(A (j)), then procedure A is said to be as good as A (j), and if at least one of these inequalities is strict, then A is preferable to A (j). If procedure A is preferable to all other available procedures A (j), j = 1, 2, …, A is said to be admissible. We are seeking an admissible class {A} of procedures.
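The comparison of procedures described above can be written out directly in terms of the expected costs E 1(⋅) and E 2(⋅); a minimal sketch (with hypothetical numbers) follows.

# Sketch: is procedure A as good as / preferable to A_j?  The comparison uses
# E1 = C(2|1) Pr{2|1, .} and E2 = C(1|2) Pr{1|2, .}; all numbers are hypothetical.
def compare(E1_A, E2_A, E1_Aj, E2_Aj):
    as_good = (E1_A <= E1_Aj) and (E2_A <= E2_Aj)
    preferable = as_good and ((E1_A < E1_Aj) or (E2_A < E2_Aj))
    return as_good, preferable

print(compare(E1_A=0.05, E2_A=0.08, E1_Aj=0.05, E2_Aj=0.10))  # (True, True)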

12.3. Two Populations with Known Distributions

Let π 1 and π 2 be the two populations. Let P 1(X) and P 2(X) be the known p-variate probability/density functions associated with π 1 and π 2, respectively. That is, P 1(X) and P 2(X) are two p-variate probability/density functions which are fully known in the sense that all their parameters are known in addition to their functional forms. Consider the Bayesian situation where it is assumed that the prior probabilities q 1 and q 2 of selecting π 1 and π 2, respectively, are known. Suppose that a particular p-vector X is at hand. What is the probability that this given X is an observation from π 1? This probability is q 1 P 1(X) if X is discrete or q 1 P 1(X)dX if X is continuous. What is the probability that the given vector X is an observation vector either from π 1 or from π 2? This probability is q 1 P 1(X) + q 2 P 2(X) or [q 1 P 1(X) + q 2 P 2(X)]dX. What is then the probability that the vector X at hand is from P 1(X), given that it is an observation vector from π 1 or π 2? As this is a conditional statement, it is given by the following in the discrete or continuous case:

$$\displaystyle \begin{aligned} \frac{q_1P_1(X)}{q_1P_1(X)+q_2P_2(X)} \ \ \mbox{ or }\ \ &\frac{q_1P_1(X)\mathrm{d}X}{[q_1P_1(X)+q_2P_2(X)]\mathrm{d}X} \\ &=\frac{q_1P_1(X)}{q_1P_1(X)+q_2P_2(X)}{} \end{aligned} $$
(12.3.1)

where dX, which is the wedge product of differentials and positive in this case, cancels out. If the conditional probability that a given X is an observation from π 1 is larger than or equal to the conditional probability that the given vector X is an observation from π 2 and if we assign X to π 1, then the chance of misclassification is reduced. Our main objective is to minimize the probability of misclassification and then come up with a decision rule. This statement is equivalent to the following: If

$$\displaystyle \begin{aligned} \frac{q_1P_1(X)}{q_1P_1(X)+q_2P_2(X)}\ge \frac{q_2P_2(X)}{q_1P_1(X)+q_2P_2(X)}\Rightarrow q_1P_1(X)\ge q_2P_2(X){} \end{aligned} $$
(12.3.2)

then we assign X to π 1, meaning that our subspace A 1 is specified by the following rule:

$$\displaystyle \begin{aligned} A_1: q_1P_1(X)\ge q_2P_2(X)&\Rightarrow \frac{P_1(X)}{P_2(X)}\ge \frac{q_2}{q_1} \\ A_2: q_1P_1(X)<q_2P_2(X)&\Rightarrow \frac{P_1(X)}{P_2(X)}<\frac{q_2}{q_1}.{} \end{aligned} $$
(12.3.3)

Note that if q 1 P 1(X) = q 2 P 2(X), then X can be assigned to either π 1 or π 2; however, we have assigned it to π 1 for convenience. Observe that it is assumed that q 1 P 1(X) + q 2 P 2(X)≠0, q 1 > 0, q 2 > 0 and q 1 + q 2 = 1 in (12.3.2). The conditional probability appearing in (12.3.2) can also be written as

$$\displaystyle \begin{aligned} \frac{q_iP_i(X)}{q_1P_1(X)+q_2P_2(X)}\!=\!\frac{\eta_iP_i(X)}{\eta_1P_1(X)+\eta_2P_2(X)},\,\eta_i> 0,\,\eta_1+\eta_2=\eta>0,\,\frac{\eta_i}{\eta}=q_i, \, i=1,2, \end{aligned}$$

for any such weight functions η i, i = 1, 2.

If the observation is from π 1 : P 1(X), then the expected cost of misclassification is q 1 P 1(X)C(2|1) + q 2 P 2(X)C(2|2) = q 1 P 1(X)C(2|1) since C(i|i) = 0, i = 1, 2. Similarly, the expected cost of misclassifying the observation X from π 2 : P 2(X) is q 2 P 2(X)C(1|2). If P 1(X) is our preferred distribution, then we would like the associated expected cost of misclassification to be the lesser one, that is,

$$\displaystyle \begin{aligned} q_1P_1(X)C(2|1)&<q_2P_2(X)C(1|2)\mbox{ in }A_2\Rightarrow \\ \frac{P_1(X)}{P_2(X)}&<\frac{q_2C(1|2)}{q_1C(2|1)}\mbox{ in }A_2\mbox{ or } \\ \frac{P_1(X)}{P_2(X)}&\ge\frac{q_2C(1|2)}{q_1C(2|1)}\mbox{ in }A_1,{} \end{aligned} $$
(12.3.4)

which is the same rule as in (12.3.3) where q 1 is replaced by q 1 C(2|1) and q 2, by q 2 C(1|2).
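A minimal sketch of the rule (12.3.4): given the two fully known densities, the prior probabilities and the costs, an observation is assigned to π 1 exactly when the density ratio P 1(X)∕P 2(X) reaches the threshold q 2 C(1|2)∕[q 1 C(2|1)]. The densities below are placeholder callables; the exponential example reuses the parameters appearing later in Example 12.3.2.

# Sketch of the classification rule (12.3.4).  P1 and P2 are the known densities,
# supplied here as placeholder callables.
import math

def classify(x, P1, P2, q1, q2, c12, c21):
    # Assign to pi_1 (return 1) if P1(x)/P2(x) >= q2*C(1|2)/(q1*C(2|1)), else to pi_2.
    threshold = (q2 * c12) / (q1 * c21)
    return 1 if P1(x) >= threshold * P2(x) else 2

# two univariate exponential densities with theta_1 = 10 and theta_2 = 5,
# equal priors and equal costs
P1 = lambda x: math.exp(-x / 10.0) / 10.0
P2 = lambda x: math.exp(-x / 5.0) / 5.0
print(classify(6.0, P1, P2, q1=0.5, q2=0.5, c12=1.0, c21=1.0))  # 2, since 6 < 10 ln 2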

12.3.1. Best procedure

It can be established that the procedure A = (A 1, A 2) in (12.3.3) is the best one for minimizing the probability of misclassification. To this end, consider any other procedure \(A^{(j)}=(A_1^{(j)},A_2^{(j)}),\ j=1,2,\ldots .\) The probability of misclassification under the procedure A (j) is the following:

$$\displaystyle \begin{aligned} q_1&\int_{A_2^{(j)}}P_1(X)\mathrm{d}X+q_2\int_{A_1^{(j)}}P_2(X)\mathrm{d}X \\ &=\int_{A_2^{(j)}}[q_1P_1(X)-q_2P_2(X)]\mathrm{d}X+q_2\int_{A_1^{(j)}\cup A_2^{(j)}}P_2(X)\mathrm{d}X.{}\end{aligned} $$
(12.3.5)

If \(A_1^{(j)}\cup A_2^{(j)}=R_p\), then \(\int _{A_1^{(j)}\cup A_2^{(j)}}P_2(X)\mathrm {d}X=1\); it is otherwise a given positive constant. However, q 1 P 1(X) − q 2 P 2(X) can be negative, zero or positive, whereas the left-hand side of (12.3.5) is a positive probability. Accordingly, the left-hand side is minimized when \(A_2^{(j)}\) is taken to be the set of points X for which

$$\displaystyle \begin{aligned} q_1P_1(X)-q_2P_2(X)<0\Rightarrow \frac{P_1(X)}{P_2(X)}< \frac{q_2}{q_1}, \end{aligned}$$
(i)

which actually is the rejection region A 2 of the procedure A = (A 1, A 2). Hence, the procedure A = (A 1, A 2) minimizes the probabilities of misclassification; in other words, it is the best procedure. If cost functions are also involved, then (i) becomes the following:

$$\displaystyle \begin{aligned} \frac{P_1(X)}{P_2(X)}<\frac{C(1|2)\,q_2}{C(2|1)\,q_1}. \end{aligned}$$
(ii)

The region where q 1 P 1(X) − q 2 P 2(X) = 0 or q 1 C(2|1)P 1(X) − q 2 C(1|2)P 2(X) = 0 need not be empty and the probability over this set need not be zero. If

$$\displaystyle \begin{aligned} Pr\left\{\frac{P_1(X)}{P_2(X)}=\frac{q_2\,C(1|2)}{q_1\,C(2|1)}\Big|\pi_i\right\}=0,\ i=1,2,{} \end{aligned} $$
(12.3.6)

it can also be shown that the above Bayes procedure A = (A 1, A 2) is unique. This is stated as a theorem:

Theorem 12.3.1

Let q 1 be the prior probability of drawing an observation X from the population π 1 with probability/density function P 1(X) and let q 2 be the prior probability of selecting an observation X from the population π 2 with probability/density function P 2(X). Let the cost or loss associated with misclassifying an observation from π 1 as coming from π 2 be C(2|1) and the cost of misclassifying an observation from π 2 as originating from π 1 be C(1|2). Letting

$$\displaystyle \begin{aligned} Pr\left\{\frac{P_1(X)}{P_2(X)}=\frac{C(1|2)\,q_2}{C(2|1)\,q_1}\Big|\pi_i\right\}=0,\ i=1,2, \end{aligned}$$

the classification rule given by A = (A 1, A 2) of (12.3.4) is unique and best in the sense that it minimizes the probabilities of misclassification.

Example 12.3.1

Let π 1 and π 2 be two univariate exponential populations whose parameters are θ 1 and θ 2 with θ 1 ≠ θ 2. Let the prior probability of drawing an observation from π 1 be \(q_1=\frac {1}{2}\) and that of selecting an observation from π 2 be \(q_2=\frac {1}{2}\). Let the costs or loss associated with misclassifications be C(2|1) = C(1|2). Compute the regions and probabilities of misclassification if (1): a single observation x is drawn; (2): iid observations x 1, …, x n are drawn.

Solution 12.3.1

(1). In this case, one observation is drawn and the populations are

$$\displaystyle \begin{aligned}P_i(x)=\frac{1}{\theta_i}{\mathrm{e}}^{-\frac{x}{\theta_i}},\ x\ge 0,\ \theta_i>0,\ i=1,2. \end{aligned}$$

Consider the following inequality on the support of the density:

$$\displaystyle \begin{aligned}\frac{P_1(x)}{P_2(x)}\ge \frac{C(1|2)\,q_2}{C(2|1)\,q_1}=1, \end{aligned}$$

or equivalently,

$$\displaystyle \begin{aligned}\frac{\theta_2}{\theta_1}{\mathrm{e}}^{-x(\frac{1}{\theta_1}-\frac{1}{\theta_2})}\ge 1\Rightarrow {\mathrm{e}}^{-x(\frac{1}{\theta_1}-\frac{1}{\theta_2})}\ge \frac{\theta_1}{\theta_2}.\end{aligned}$$

On taking logarithms, we have

$$\displaystyle \begin{aligned} -x\Big(\frac{1}{\theta_1}-\frac{1}{\theta_2}\Big)\ge \ln \frac{\theta_1}{\theta_2}&\Rightarrow x\Big(\frac{1}{\theta_2}-\frac{1}{\theta_1}\Big)\ge \ln\frac{\theta_1}{\theta_2}\\ &\Rightarrow x\ge \frac{\theta_1\theta_2}{\theta_1-\theta_2}\ln \frac{\theta_1}{\theta_2}\ \,\mbox{ for }\theta_1>\theta_2.\end{aligned} $$

Letting θ 1 > θ 2, the steps in the case θ 1 < θ 2 being parallel, we have

$$\displaystyle \begin{aligned}x\ge k,\ \ k=\frac{\theta_1\theta_2}{\theta_1-\theta_2}\ln \frac{\theta_1}{\theta_2}.\end{aligned}$$

Accordingly,

$$\displaystyle \begin{aligned}A_1: x\ge k \mbox{ and }A_2: x<k. \end{aligned}$$

The probabilities of misclassification are:

$$\displaystyle \begin{aligned} P(2|1)&=\int_{A_2}P_1(x){\mathrm{d}}x=\int_{x=0}^{k}\frac{1}{\theta_1}{\mathrm{e}}^{-\frac{x}{\theta_1}}{\mathrm{d}}x=1-{\mathrm{e}}^{-\frac{k}{\theta_1}}\\ P(1|2)&=\int_{x=k}^{\infty}\frac{1}{\theta_2}{\mathrm{e}}^{-\frac{x}{\theta_2}}{\mathrm{d}}x={\mathrm{e}}^{-\frac{k}{\theta_2}}\,.\end{aligned} $$

Solution 12.3.1

(2). In this case, X′ = (x 1, …, x n) and

$$\displaystyle \begin{aligned}P_i(X)=\prod_{j=1}^n\frac{1}{\theta_i}{\mathrm{e}}^{-\frac{x_j}{\theta_i}}=\frac{1}{\theta_i^n}{\mathrm{e}}^{-\frac{u}{\theta_i}},\ i=1,2, \end{aligned}$$

where \(u=\sum _{j=1}^nx_j\) is gamma distributed with the parameters (n, θ i), i = 1, 2. The density of u is then given by

$$\displaystyle \begin{aligned}g_i(u)=\frac{1}{\theta_i^n\varGamma(n)}u^{n-1}{\mathrm{e}}^{-\frac{u}{\theta_i}},\ i=1,2. \end{aligned}$$

Proceeding as above, for θ 1 > θ 2, A 1 : u ≥ k 1 and \( A_2: u<k_1,\ k_1=\frac {\theta _1\theta _2}{\theta _1-\theta _2}\ln [\frac {\theta _1}{\theta _2}]^n=nk\) where k is as given in Solution 12.3.1(1). Consequently, the probabilities of misclassification are as follows:

$$\displaystyle \begin{aligned} P(2|1)&=\int_{u=0}^{k_1}\frac{u^{n-1}}{\theta_1^n\varGamma(n)}{\mathrm{e}}^{-\frac{u}{\theta_1}}{\mathrm{d}}u=\int_0^{\frac{k_1}{\theta_1}}\frac{u^{n-1}}{\varGamma(n)}{\mathrm{e}}^{-u}{\mathrm{d}}u\\ P(1|2)&=\int_{k_1}^{\infty}\frac{u^{n-1}}{\theta_2^n\varGamma(n)}{\mathrm{e}}^{-\frac{u}{\theta_2}}{\mathrm{d}}u=\int_{\frac{k_1}{\theta_2}}^{\infty}\frac{u^{n-1}}{\varGamma(n)}{\mathrm{e}}^{-u}{\mathrm{d}}u\end{aligned} $$

where the integrals can be expressed in terms of incomplete gamma functions or determined by using integration by parts.
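The probabilities just derived are regularized incomplete gamma integrals, so they can be evaluated directly; the following sketch does this for arbitrary θ 1 > θ 2 and n, using scipy (the θ values in the call are those of Example 12.3.2 below, with n = 1).

# Sketch: misclassification probabilities of Example 12.3.1 via the gamma distribution function.
import math
from scipy.stats import gamma

def misclassification_probs(theta1, theta2, n):
    # theta1 > theta2; rule A_1: u >= k1, A_2: u < k1, with u = x_1 + ... + x_n
    k1 = n * theta1 * theta2 / (theta1 - theta2) * math.log(theta1 / theta2)
    p21 = gamma.cdf(k1, a=n, scale=theta1)        # P(2|1) = P(u < k1 | pi_1)
    p12 = 1.0 - gamma.cdf(k1, a=n, scale=theta2)  # P(1|2) = P(u >= k1 | pi_2)
    return p21, p12

print(misclassification_probs(theta1=10.0, theta2=5.0, n=1))  # (0.5, 0.25)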

Example 12.3.2

Assume that no prior probabilities or costs are involved. Suppose that in a certain clinic, the waiting time before a customer is attended to depends upon the manager on duty. If manager M 1 is on duty, the expected waiting time is 10 minutes, and if manager M 2 is on duty, the expected waiting time is 5 minutes. Assume that the waiting times are exponentially distributed with expected waiting time equal to θ i, i = 1, 2. On a particular day (1): a customer had to wait 6 minutes before she was attended to, (2): three customers had to wait 6, 6 and 8 minutes, respectively. Which of M 1 and M 2 was likely to be on duty on that day?

Solution 12.3.2

(1). In this case, θ 1 = 10, θ 2 = 5 and the populations are exponential with parameters θ 1 and θ 2, respectively. Thus, \(k=\frac {\theta _1\theta _2}{\theta _1-\theta _2}\ln \frac {\theta _1}{\theta _2}=\frac {(10)(5)}{10-5}\ln \frac {10}{5}=10\ln 2\), \(\frac {k}{\theta _1}=\frac {10\ln 2}{10}=\ln 2, \ \frac {k}{\theta _2}=2\ln 2=\ln 4\), \({\mathrm{e}}^{-\frac {k}{\theta _1}}={\mathrm{e}}^{-\ln 2}=\frac {1}{2}=0.5\), and \({\mathrm{e}}^{-\frac {k}{\theta _2}}\) \(={\mathrm{e}}^{-\ln 4}=\frac {1}{4}=0.25\). In (1): the observed value of \(x=6<10(\ln 2)=10(0.69314718056)\approx 6.9315\). Accordingly, we classify x to M 2, that is, the manager M 2 was likely to be on duty. Thus,

$$\displaystyle \begin{aligned} P(2|2,A) &=\mbox{The probability of making a correct decision }\\ &=\int_{x<k}P_2(x){\mathrm{d}}x=\int_0^kP_2(x){\mathrm{d}}x =\int_0^k\frac{1}{5}{\mathrm{e}}^{-\frac{x}{5}}{\mathrm{d}}x\\ &=1-{\mathrm{e}}^{-\ln 4}=1-\frac{1}{4}=0.75;\\ P(2|1,A)&=\mbox{Probability of misclassification or making an incorrect decision}\\ &=\int_0^kP_1(x){\mathrm{d}}x=\int_0^k\frac{1}{10}{\mathrm{e}}^{-\frac{x}{10}}{\mathrm{d}}x =1-{\mathrm{e}}^{-\frac{k}{10}}=1-{\mathrm{e}}^{-\ln 2}=\frac{1}{2}=0.5.\end{aligned} $$

Solution 12.3.2

(2). Here, u = 6 + 6 + 8 = 20, n = 3 and \(k_1=\frac {\theta _1\theta _2}{\theta _1-\theta _2}\,n\,\ln \frac {\theta _1}{\theta _2}=\frac {(10)(5)}{10-5}3\ln \frac {10}{5}=30\ln 2\). Since \(30\ln 2\approx 20.795\) and the observed value of u is 20, u < k 1, and we assign the sample to π 2 or to P 2(X) or M 2, with \(\frac {k_1}{\theta _2}=\frac {30\ln 2}{5}=6\ln 2\) and \(\frac {k_1}{\theta _1}=\frac {30\ln 2}{10}=3\ln 2\). Thus,

$$\displaystyle \begin{aligned} P(2|2,A)&=\mbox{Probability of making a correct classification decision }\\ &= Pr\{u<k_1|P_2(X)\} =\int_0^{k_1}\frac{u^{n-1}}{\theta_2^{n}\varGamma(n)}{\mathrm{e}}^{-\frac{u}{\theta_2}}{\mathrm{d}}u\\ &=\int_0^{6\ln 2}\frac{v^2{\mathrm{e}}^{-v}}{\varGamma(3)}{\mathrm{d}}v,\ \ {\mathrm{with}}\ \varGamma(3)=2!=2.\end{aligned} $$

Integrating by parts,

$$\displaystyle \begin{aligned}\int v^2{\mathrm{e}}^{-v}{\mathrm{d}}v=-[v^2+2v+2]{\mathrm{e}}^{-v}.\end{aligned}$$

Then,

$$\displaystyle \begin{aligned} \frac{1}{2}\int_0^{6\ln 2}v^2{\mathrm{e}}^{-v}{\mathrm{d}}v&=-\{[v^2/2+v+1]{\mathrm{e}}^{-v}\}_0^{6\ln 2}=1-\frac{1}{64}[(6\ln 2)^2/2+(6\ln 2)+1]\\ &\approx 1-\frac{1}{64}[13.807]\approx 0.785,\ {\mathrm{and}}\end{aligned} $$
$$\displaystyle \begin{aligned} P(2|1,A)&=\mbox{Probability of misclassification }\\ &=\int_0^{k_1}P_1(X){\mathrm{d}}X =\frac{1}{2}\int_0^{3\ln 2}v^2{\mathrm{e}}^{-v}{\mathrm{d}}v =-[\tfrac{1}{2}v^2+v+1]{\mathrm{e}}^{-v}\big|{}_{0}^{3\ln 2}\\ &=1-\frac{1}{2^3}[(3\ln 2)^2/2+(3\ln 2)+1] \approx 0.345.\end{aligned} $$
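The numerical values in Solution 12.3.2(2) can be checked directly with the gamma distribution function; a brief sketch:

# Check of Solution 12.3.2(2): n = 3, k1 = 30 ln 2, theta_1 = 10, theta_2 = 5.
import math
from scipy.stats import gamma

k1 = 30.0 * math.log(2.0)             # ~ 20.794; the observed u = 20 < k1, so assign to pi_2
print(gamma.cdf(k1, a=3, scale=5.0))  # P(2|2, A) ~ 0.784
print(gamma.cdf(k1, a=3, scale=10.0)) # P(2|1, A) ~ 0.345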

Example 12.3.3

Let the two populations π 1 and π 2 be univariate normal with mean values μ 1 and μ 2, respectively, and the same variance σ 2, that is, P 1(x) : N 1(μ 1, σ 2) and P 2(x) : N 1(μ 2, σ 2). Let the prior probabilities of drawing an observation from these populations be \(q_1=\frac {1}{2}\) and \(q_2=\frac {1}{2}\), respectively, and the costs or loss involved with misclassification be C(1|2) = C(2|1). Determine the regions of misclassification and the corresponding probabilities of misclassification if (1): a single observation x is available; (2): iid observations x 1, …, x n are available, from π 1 or π 2.

Solution 12.3.3

(1). If one observation is available,

$$\displaystyle \begin{aligned}P_i(x)=\frac{1}{\sigma\sqrt{2\pi}} {\mathrm{e}}^{-\frac{(x-\mu_i)^2}{2\sigma^2}},\ -\infty<x<\infty,\ -\infty<\mu_i<\infty,\ \sigma>0. \end{aligned}$$

Consider the regions

$$\displaystyle \begin{aligned} A_1: \frac{P_1(x)}{P_2(x)}\ge \frac{C(1|2)\,q_2}{C(2|1)\,q_1}=1&\Rightarrow {\mathrm{e}}^{-\frac{1}{2\sigma^2}[(x-\mu_1)^2-(x-\mu_2)^2]}\ge 1\\ &\Rightarrow -\frac{1}{2\sigma^2}\Big[(x-\mu_1)^2-(x-\mu_2)^2\Big]\ge 0.\end{aligned} $$

Now, note that

$$\displaystyle \begin{aligned} -[(x-\mu_1)^2-(x-\mu_2)^2]&=2x(\mu_1-\mu_2)-(\mu_1^2-\mu_2^2)\ge 0\Rightarrow\\ x&\ge \frac{1}{2}\frac{(\mu_1^2-\mu_2^2)}{(\mu_1-\mu_2)}=\frac{1}{2}(\mu_1+\mu_2)\mbox{ for }\mu_1>\mu_2\Rightarrow\\ A_1:x&\ge \frac{1}{2}(\mu_1+\mu_2)\mbox{ and }A_2: x<\frac{1}{2}(\mu_1+\mu_2).\end{aligned} $$

The probabilities of misclassification are the following for \(k=\frac {1}{2}(\mu _1+\mu _2)\):

$$\displaystyle \begin{aligned} P(2|1)&=\int_{-\infty}^k\frac{1}{\sigma\sqrt{2\pi}}{\mathrm{e}}^{-\frac{(x-\mu_1)^2}{2\sigma^2}}{\mathrm{d}}x=\varPhi\Big(\frac{k-\mu_1}{\sigma}\Big)\\ P(1|2)&=\int_k^{\infty}\frac{1}{\sigma\sqrt{2\pi}}{\mathrm{e}}^{-\frac{(x-\mu_2)^2}{2\sigma^2}}{\mathrm{d}}x=1-\varPhi\Big(\frac{k-\mu_2}{\sigma}\Big)\end{aligned} $$

where Φ(⋅) is the distribution function of a univariate standard normal density and \(k=\frac {1}{2}(\mu _1+\mu _2)\).

Solution 12.3.3

(2). In this case, x 1, …, x n are iid and X′ = (x 1, …, x n). The multivariate densities are

$$\displaystyle \begin{aligned}P_i(X)=\frac{1}{\sigma^n(\sqrt{2\pi})^n}{\mathrm{e}}^{-\frac{1}{2\sigma^2}\sum_{j=1}^n(x_j-\mu_i)^2}=\frac{{\mathrm{e}}^{-\frac{1}{2\sigma^2}(\sum_{j=1}^n(x_j-\bar{x})^2+n(\bar{x}-\mu_i)^2)}}{\sigma^n(\sqrt{2\pi})^n},\ i=1,2, \end{aligned}$$

where \(\bar {x}=\frac {1}{n}\sum _{j=1}^nx_j\). Hence for μ 1 > μ 2,

$$\displaystyle \begin{aligned}A_1: \frac{P_1(X)}{P_2(X)}\ge 1\Rightarrow {\mathrm{e}}^{-\frac{n}{2\sigma^2}[(\bar{x}-\mu_1)^2-(\bar{x}-\mu_2)^2]}\ge 1. \end{aligned}$$

Taking logarithms and simplifying, we have

$$\displaystyle \begin{aligned} -\frac{n}{2\sigma^2}[(\bar{x}-\mu_1)^2-(\bar{x}-\mu_2)^2]&\ge 0\Rightarrow\\ \bar{x}&\ge \frac{\mu_1^2-\mu_2^2}{2(\mu_1-\mu_2)}=\frac{1}{2}(\mu_1+\mu_2)\mbox{ for }\mu_1>\mu_2\end{aligned} $$

where

$$\displaystyle \begin{aligned}\bar{x}\sim N_1\Big(\mu_i,\frac{\sigma^2}{n}\Big),\ i=1,2.\end{aligned}$$

Therefore the probabilities of misclassification are the following:

$$\displaystyle \begin{aligned} P(2|1)&=\int_{-\infty}^k\frac{\sqrt{n}}{\sigma\sqrt{2\pi}}{\mathrm{e}}^{-\frac{n}{2\sigma^2}(\bar{x}-\mu_1)^2}{\mathrm{d}}\bar{x}=\varPhi\Big(\frac{\sqrt{n}(\mu_2-\mu_1)}{2\sigma}\Big)\\ P(1|2)&=\int_k^{\infty}\frac{\sqrt{n}}{\sigma\sqrt{2\pi}}{\mathrm{e}}^{-\frac{n}{2\sigma^2}(\bar{x}-\mu_2)^2}{\mathrm{d}}\bar{x}=1-\varPhi\Big(\frac{\sqrt{n}(\mu_1-\mu_2)}{2\sigma}\Big)\end{aligned} $$

where \(k=\frac {1}{2}(\mu _1+\mu _2)\) and Φ(⋅) is the distribution function of a univariate standard normal random variable.

Example 12.3.4

Assume that no prior probabilities or costs are involved. A tuber crop called tapioca is planted by farmers. While farmer F 1 applies a standard fertilizer to the soil to enhance the growth of the tapioca plants, farmer F 2 does not apply any fertilizer and lets the plants grow naturally. At harvest time, a tapioca plant is pulled up with all its tubers attached to the bottom of the stem. The upper part of the stem is cut off and the lower part with its tubers is put out for sale. Tuber yield per plant, x, is measured by weighing the lower part of the stem with the tubers attached. It is known from past experience that x is normally distributed with mean value μ 1 = 5 and variance σ 2 = 1 for F 1 type farms, that is, x ∼ N 1(μ 1 = 5, σ 2 = 1)|F 1, and that for F 2 type farms, x ∼ N 1(μ 2 = 3, σ 2 = 1)|F 2, the weights being measured in kilograms. A road-side vendor is selling tapioca and his collection is either from F 1 type farms or F 2 type farms, but not both. A customer picked (1): one stem with its tubers attached weighing 4.2 kg; (2): a random sample of four stems respectively weighing 6, 4, 3 and 5 kg. To which type of farms will you classify the observations in (1) and (2)?

Solution 12.3.4

(1). The decision is based on \(k=\frac {1}{2}(\mu _1+\mu _2)=\frac {1}{2}(5+3)=4\). In this case, the decision rule A = (A 1, A 2) is such that A 1 : x ≥ k and A 2 : x < k for μ 1 > μ 2. Note that \(\frac {k-\mu _1}{\sigma }=k-\mu _1=4-5=-1\) and \(\frac {k-\mu _2}{\sigma }=(4-3)=1\). As the observed x is 4.2 > 4 = k, we classify x into P 1(X) : N 1(μ 1, 1). Moreover,

$$\displaystyle \begin{aligned} P(1|1,A)&=\mbox{Probability of making a correct classification decision }\\ &=Pr\{x\ge k|P_1(x)\} =\int_k^{\infty}\frac{{\mathrm{e}}^{-\frac{1}{2}(x-\mu_1)^2}}{\sqrt{(2\pi)}}{\mathrm{d}}x\\ &=\int_{-1}^{\infty}\frac{{\mathrm{e}}^{-\frac{1}{2}u^2}}{\sqrt{(2\pi)}}{\mathrm{d}}u =0.5+\int_0^1\frac{{\mathrm{e}}^{-\frac{1}{2}u^2}}{\sqrt{(2\pi)}}{\mathrm{d}}u \approx 0.84, \end{aligned} $$

and

$$\displaystyle \begin{aligned} P(1|2,A)&=\mbox{Probability of misclassification }\\ &=Pr\{x\ge k|P_2(x)\} =\int_k^{\infty}\frac{{\mathrm{e}}^{-\frac{1}{2}(x-\mu_2)^2}}{\sqrt{(2\pi)}}{\mathrm{d}}x =\int_1^{\infty}\frac{{\mathrm{e}}^{-\frac{1}{2}u^2}}{\sqrt{(2\pi)}}{\mathrm{d}}u\approx 0.16.\end{aligned} $$

Solution 12.3.4

(2). In this case, \(\bar {x}=\frac {1}{4}(6+4+3+5)=4.5,\ n=4\), \(\bar {x}\sim N(\mu _i,\frac {1}{n}),\ i=1,2\), \(\frac {(k-\mu _1)}{\sigma /\sqrt {n}}=2(4-5)=-2 \) and \( \frac {(k-\mu _2)}{\sigma /\sqrt {n}}=2(4-3)=2\). Since the observed \(\bar {x}\) is 4.5 > 4 = k, we assign the sample to P 1(X) : N(μ 1, 1), the criterion being \(A_1:\bar {x}\ge k\) and \(A_2:\bar {x}<k\). Additionally,

$$\displaystyle \begin{aligned} P(1|1,A)&=\mbox{Probability of a correct classification }\\ &=Pr\{\bar{x}\ge k|P_1(X)\} =\int_k^{\infty}\frac{\sqrt{n}\,{\mathrm{e}}^{-\frac{n}{2}(\bar{x}-\mu_1)^2}}{\sqrt{(2\pi)}}{\mathrm{d}}\bar{x} =\int_{-2}^{\infty}\frac{{\mathrm{e}}^{-\frac{1}{2}u^2}}{\sqrt{(2\pi)}}{\mathrm{d}}u\\ &=0.5+\int_0^2\frac{{\mathrm{e}}^{-\frac{1}{2}u^2}}{\sqrt{(2\pi)}}{\mathrm{d}}u \approx 0.98,\end{aligned} $$

and

$$\displaystyle \begin{aligned} P(1|2,A)&=\mbox{Probability of misclassification }\\ &=Pr\{\bar{x}\ge k|P_2(X)\} =\int_k^{\infty}\frac{\sqrt{n}\,{\mathrm{e}}^{-\frac{n}{2}(\bar{x}-\mu_2)^2}}{\sqrt{(2\pi)}}{\mathrm{d}}\bar{x} =\int_2^{\infty}\frac{{\mathrm{e}}^{-\frac{1}{2}u^2}}{\sqrt{(2\pi)}}{\mathrm{d}}u\approx 0.023.\end{aligned} $$
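The probabilities quoted in Solution 12.3.4 follow from the standard normal distribution function and can be checked as follows:

# Check of Solution 12.3.4: mu_1 = 5, mu_2 = 3, sigma = 1, k = (mu_1 + mu_2)/2 = 4.
from scipy.stats import norm

# (1) single observation: x = 4.2 >= k, so classify into pi_1
print(1 - norm.cdf(4.0, loc=5.0, scale=1.0))  # P(1|1, A) ~ 0.841
print(1 - norm.cdf(4.0, loc=3.0, scale=1.0))  # P(1|2, A) ~ 0.159

# (2) n = 4: x_bar = 4.5 >= k, so classify into pi_1; x_bar has standard deviation 1/2
print(1 - norm.cdf(4.0, loc=5.0, scale=0.5))  # P(1|1, A) ~ 0.977
print(1 - norm.cdf(4.0, loc=3.0, scale=0.5))  # P(1|2, A) ~ 0.023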

Example 12.3.5

Let π 1 and π 2 be two p-variate real nonsingular normal populations sharing the same covariance matrix, π 1 : N p(μ (1), Σ), Σ > O, and π 2 : N p(μ (2), Σ), Σ > O, whose mean values are such that μ (1) ≠ μ (2). Let the prior probabilities be q 1 = q 2 and the cost functions be C(1|2) = C(2|1). Consider a single p-vector X to be classified into π 1 or π 2. Determine the regions of misclassification and the corresponding probabilities.

Solution 12.3.5

The p-variate real normal densities are the following:

$$\displaystyle \begin{aligned} P_i(X)=\frac{1}{(2\pi)^{\frac{p}{2}}|\varSigma|{}^{\frac{1}{2}}}\mathrm{e}^{-\frac{1}{2}(X-\mu^{(i)})'\varSigma^{-1}(X-\mu^{(i)})} \end{aligned}$$
(i)

for i = 1, 2, Σ > O, μ (1) ≠ μ (2). Consider the inequality

$$\displaystyle \begin{aligned} \frac{P_1(X)}{P_2(X)}&\ge \frac{C(1|2)\, q_2}{C(2|1)\, q_1}=1\Rightarrow\\ &\mathrm{e}^{-\tfrac{1}{2}[(X-\mu^{(1)})'\varSigma^{-1}(X-\mu^{(1)})-(X-\mu^{(2)})'\varSigma^{-1}(X-\mu^{(2)})]}\ge 1.\end{aligned} $$

Taking logarithms, we have

$$\displaystyle \begin{aligned} -\tfrac{1}{2}[(X-\mu^{(1)})'\varSigma^{-1}(X-\mu^{(1)})-(X-\mu^{(2)})'\varSigma^{-1}(X-\mu^{(2)})]&\ge 0\Rightarrow\\ (\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}X-\tfrac{1}{2}(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}+\mu^{(2)})&\ge 0.\end{aligned} $$

Let

$$\displaystyle \begin{aligned} u=(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}X-\tfrac{1}{2}(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}+\mu^{(2)}).{} \end{aligned} $$
(12.3.7)

Then, u has a univariate normal distribution since it is a linear function of the components of X, which is a p-variate normal. Thus,

$$\displaystyle \begin{aligned} \mathrm{Var}(u)&=\mathrm{Var}[(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}X]\\ &=(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}\mathrm{Cov}(X)\varSigma^{-1}(\mu^{(1)}-\mu^{(2)})\\ &=(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}-\mu^{(2)})=\varDelta^2{} \end{aligned} $$
(12.3.8)

where Δ 2 is Mahalanobis’ distance. The mean values of u under π 1 and π 2 are respectively,

$$\displaystyle \begin{aligned} E(u)|\pi_1&=(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}E(X)|\pi_1-\tfrac{1}{2}(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}+\mu^{(2)})\\ &=\tfrac{1}{2}(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}-\mu^{(2)})=\tfrac{1}{2}\varDelta^2,{} \end{aligned} $$
(12.3.9)
$$\displaystyle \begin{aligned} E(u)|\pi_2&=(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}\mu^{(2)}-\tfrac{1}{2}(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}+\mu^{(2)})\\ &=\tfrac{1}{2}(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(2)}-\mu^{(1)})=-\tfrac{1}{2}\varDelta^2,{} \end{aligned} $$
(12.3.10)

so that

$$\displaystyle \begin{aligned} u&\sim N_1(\tfrac{1}{2}\varDelta^2,\varDelta^2)\mbox{ under }\pi_1,\\ u&\sim N_1(-\tfrac{1}{2}\varDelta^2,\varDelta^2)\mbox{ under }\pi_2.{} \end{aligned} $$
(12.3.11)

Accordingly, the regions of misclassification are

$$\displaystyle \begin{aligned} A_2: u<0|\pi_1 :u\sim N_1(\tfrac{1}{2}\varDelta^2,\varDelta^2)\ \ \mathrm{and} \ \ A_1: u\ge 0|\pi_2 :u\sim N_1(-\tfrac{1}{2}\varDelta^2,\varDelta^2),{} \end{aligned} $$
(12.3.12)

and the probabilities of misclassification are as follows:

$$\displaystyle \begin{aligned} P(2|1)&=\int_{-\infty}^0\frac{1}{\varDelta\sqrt{2\pi}}\mathrm{e}^{-\frac{1}{2\varDelta^2}(u-\frac{1}{2}\varDelta^2)^2}\mathrm{d}u \\ &=\int_{-\infty}^{\frac{0-\frac{1}{2}\varDelta^2}{\varDelta}}\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-\frac{t^2}{2}}\mathrm{d}t=\varPhi(-\tfrac{1}{2}\varDelta) \end{aligned} $$
(ii)
$$\displaystyle \begin{aligned} P(1|2)&=\int_0^{\infty}\frac{1}{\varDelta\sqrt{2\pi}}\mathrm{e}^{-\frac{1}{2\varDelta^2}(u+\frac{1}{2}\varDelta^2)^2}\mathrm{d}u \\ &=\int_{\frac{0+\frac{1}{2}\varDelta^2}{\varDelta}}^{\infty}\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-\frac{t^2}{2}}\mathrm{d}t=1-\varPhi(\tfrac{1}{2}\varDelta) \end{aligned} $$
(iii)

where Φ(⋅) denotes the distribution function of a univariate standard normal variable.

Note 12.3.1

If no conditions are imposed on the prior probabilities, q 1 and q 2, or on the costs of misclassification, C(2|1) and C(1|2), then the regions are determined as \(A_1: u\ge k, \ k=\ln \frac {C(1|2)\, q_2}{C(2|1)\, q_1},\) and A 2 : u < k. In this case, the probabilities of misclassification will be \(\varPhi \big (\frac {k-\frac {1}{2}\varDelta ^2}{\varDelta }\big )\) and \(1-\varPhi \big (\frac {k+\frac {1}{2}\varDelta ^2}{\varDelta }\big ),\) respectively.

Note 12.3.2

If the prior probabilities q 1 and q 2 are not known, we may assume that the two populations π 1 and π 2 are equally likely to be chosen or, equivalently, that \(q_1=q_2=\frac {1}{2}\), in which instance \(k=\ln \frac {C(1|2)}{C(2|1)}\). In general, when q 1, q 2, C(2|1) and C(1|2) are known, the correct decisions are to assign the vector X at hand to π 1 in the region A 1 and to π 2 in the region A 2, where A 1 : u ≥ k and \( A_2: u<k\) with \(k=\ln \frac {q_2\, C(1|2)}{q_1\,C(2|1)}\), and

$$\displaystyle \begin{aligned} u=(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}X-\tfrac{1}{2}(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}+\mu^{(2)}) \end{aligned}$$

whose first term, namely (μ (1) − μ (2))′Σ −1 X, is known as the linear discriminant function, which is utilized to discriminate or to separate two p-variate populations, not necessarily normally distributed, having mean value vectors μ (1) and μ (2) and sharing the same covariance matrix Σ > O.
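A minimal sketch of the procedure of Example 12.3.5 and Notes 12.3.1–12.3.2: compute u and Δ 2 from μ (1), μ (2) and Σ, classify a given X, and report the misclassification probabilities. The mean vectors, the covariance matrix and the observation used in the call are hypothetical.

# Sketch: linear discriminant classification for two p-variate normal populations
# with a common covariance matrix.  The numerical mu's, Sigma and X are hypothetical.
import numpy as np
from scipy.stats import norm

def discriminant(X, mu1, mu2, Sigma, k=0.0):
    # Returns (u, Delta^2, region); region 1 means u >= k, i.e., assign X to pi_1.
    Sinv = np.linalg.inv(Sigma)
    d = mu1 - mu2
    u = d @ Sinv @ X - 0.5 * d @ Sinv @ (mu1 + mu2)   # as in (12.3.7)
    Delta2 = d @ Sinv @ d                             # Mahalanobis' distance, (12.3.8)
    return u, Delta2, (1 if u >= k else 2)

mu1, mu2 = np.array([2.0, 1.0]), np.array([0.0, 0.0])
Sigma = np.array([[2.0, 1.0], [1.0, 2.0]])
u, Delta2, region = discriminant(np.array([1.5, 0.2]), mu1, mu2, Sigma)
Delta = np.sqrt(Delta2)
print(region, norm.cdf(-Delta / 2), 1 - norm.cdf(Delta / 2))  # P(2|1) and P(1|2) when k = 0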

Example 12.3.6

Assume that no prior probabilities or costs are involved. Applicants to a certain training program are given tests to evaluate their aptitude for languages and aptitude for science. Let the test scores be denoted by x 1 and x 2, respectively. Let X′ = [x 1, x 2] be the corresponding bivariate score vector. After completing the training program, their aptitudes are tested again. Let \({X^{(1)}}'=[x_1^{(1)},x_2^{(1)}]\) be the score vector in the group of successful trainees and let \({X^{(2)}}'=[x_1^{(2)},x_2^{(2)}]\) be the score vector in the group of unsuccessful trainees. From previous experience of conducting such tests over the years, it is known that X (1) ∼ N 2(μ (1), Σ), Σ > O, and X (2) ∼ N 2(μ (2), Σ), Σ > O, where

Then (1): one applicant taken at random before the training program started obtained the test scores ; (2): three applicants chosen at random before the training program started had the following scores:

In (1), classify X 0 to π 1 or π 2 and in (2), classify the entire sample of three vectors into π 1 or π 2.

Solution 12.3.6

Let us compute certain quantities which are needed to answer the questions:

Hence,

$$\displaystyle \begin{aligned} u&=(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}X-\frac{1}{2}(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}+\mu^{(2)})\\ &=2x_1-2x_2-4;\end{aligned} $$
$$\displaystyle \begin{aligned} u|\pi_1~\sim N_1(\tfrac{1}{2}\varDelta^2,\varDelta^2),~ u|\pi_2~\sim N_1(-\tfrac{1}{2}\varDelta^2,\varDelta^2); \end{aligned}$$
$$\displaystyle \begin{aligned} A_1:u \ge 0, A_2:u<0. \end{aligned}$$

Since, in (1), the observed vector is X 0′ = [4, 1], the observed u is u = 2x 1 − 2x 2 − 4 = 8 − 2 − 4 = 2 > 0 and we classify the observed X 0 into \(\pi _1: N_1(\frac {1}{2}\varDelta ^2,\varDelta ^2)\), the criterion being A 1 : u ≥ 0 and A 2 : u < 0. Thus,

$$\displaystyle \begin{aligned} P(1|1,A)&=\mbox{Probability of making a correct classification decision}\\ &=Pr\{u\ge 0|\pi_1\} =\int_0^{\infty}\frac{\mathrm{e}^{-\frac{1}{2\varDelta^2}(u-\frac{1}{2}\varDelta^2)^2}}{\varDelta\sqrt{(2\pi)}}\mathrm{d}u=\int_{-\frac{\varDelta}{2}}^{\infty}\frac{\mathrm{e}^{-\frac{1}{2}v^2}}{\sqrt{(2\pi)}}\mathrm{d}v\\ &=\int_{-1}^{\infty}\frac{\mathrm{e}^{-\frac{1}{2}v^2}}{\sqrt{(2\pi)}}\mathrm{d}v=0.5+\int_0^1\frac{\mathrm{e}^{-\frac{1}{2}v^2}}{\sqrt{(2\pi)}}\mathrm{d}v \approx 0.841;\end{aligned} $$
$$\displaystyle \begin{aligned} P(1|2,A)&=\mbox{Probability of misclassification }\\ &=\int_0^{\infty}\frac{\mathrm{e}^{-\frac{1}{2\varDelta^2}(u+\frac{1}{2}\varDelta^2)^2}}{\varDelta\sqrt{(2\pi)}}\mathrm{d}u =\int_1^{\infty}\frac{\mathrm{e}^{-\frac{1}{2}v^2}}{\sqrt{(2\pi)}}\mathrm{d}v\approx 0.159.\end{aligned} $$

When solving (2), the entire sample is to be classified. Proceeding as in the derivation of the criterion u in case (1), it is seen that for the problem at hand, X 0 will be replaced by \(\bar {X}\), the average of the sample vectors or the sample mean value vector, and then u will become \(u_1=2\bar {x}_1-2\bar {x}_2-4\) where \(\bar {X}'=[\bar {x}_1,\bar {x}_2]\). Thus, we require the sample average:

This means that \(\bar {x}_1=\frac {12}{3}=4,\ \bar {x}_2=\frac {4}{3}\), and the observed \(u_1=2\bar {x}_1-2\bar {x}_2-4=8-\frac {8}{3}-4>0\). Hence, we classify the whole sample to π 1 as the criterion is A 1 : u 1 ≥ 0 and A 2 : u 1 < 0. Since \(\bar {X}\) is normally distributed with \(E[\bar {X}]=\mu ^{(i)}\) and \( \mathrm {Cov}(\bar {X})=\frac {1}{n}\varSigma ,\ i=1,2,\) where n is the sample size, the densities of u 1 under π 1 and π 2 are the following:

$$\displaystyle \begin{aligned} u_1|\pi_1&\sim N_1(\tfrac{1}{2}\varDelta^2,\tfrac{1}{3}\varDelta^2),\ n=3,\\ u_1|\pi_2&\sim N_1(-\tfrac{1}{2}\varDelta^2,\tfrac{1}{3}\varDelta^2).\end{aligned} $$

Moreover,

$$\displaystyle \begin{aligned} P(1|1,A)&=\mbox{Probability of making a correct classification decision }\\ &=Pr\{u_1\ge 0|\pi_1\} =\int_0^{\infty}\frac{\sqrt{3}}{\varDelta\sqrt{(2\pi)}}\mathrm{e}^{-\frac{3}{2\varDelta^2}(u_1-\frac{1}{2}\varDelta^2)^2}\mathrm{d}u_1\\ &=\int_{-\frac{\sqrt{3}\varDelta}{2}}^{\infty}\frac{\mathrm{e}^{-\frac{1}{2}v^2}}{\sqrt{(2\pi)}}\mathrm{d}v=\int_{-\sqrt{3}}^{\infty}\frac{\mathrm{e}^{-\frac{1}{2}v^2}}{\sqrt{(2\pi)}}\mathrm{d}v =0.5+\int_0^{\sqrt{3}}\frac{\mathrm{e}^{-\frac{1}{2}v^2}}{\sqrt{(2\pi)}}\mathrm{d}v\approx 0.958\end{aligned} $$

and

$$\displaystyle \begin{aligned} P(1|2,A)&=\mbox{Probability of misclassification }\\ &=Pr\{u_1\ge 0|\pi_2\} =\int_0^{\infty}\frac{\sqrt{3}}{\varDelta\sqrt{(2\pi)}}\mathrm{e}^{-\frac{3}{2\varDelta^2}(u_1+\frac{1}{2}\varDelta^2)^2}\mathrm{d}u_1\\ &=\int_{\sqrt{3}}^{\infty}\frac{\mathrm{e}^{-\frac{1}{2}v^2}}{\sqrt{(2\pi)}}\mathrm{d}v\approx 0.042.\end{aligned} $$

12.4. Linear Discriminant Function

Let X be a p × 1 vector and B a p × 1 arbitrary constant vector, B′ = (b 1, …, b p). Consider the arbitrary linear function w = B′X. Then, the mean value and variance of w are the following: E(w) = B′E(X) and Var(w) = Var(B′X) = B′Cov(X)B = B′ΣB where Σ > O is the covariance matrix of X. Suppose that X could be from a p-variate real population π 1 with mean value vector μ (1) or from the p-variate real population π 2 with mean value vector μ (2). Suppose that both the populations π 1 and π 2 have the same covariance matrix Σ > O. Then, a measure of discrimination or separation between π 1 and π 2 is |B′μ (1) − B′μ (2)|, measured in units of the standard deviation \(\sqrt {\mathrm {Var}(w)}\), and we seek the choice of B that is best with respect to this measure. Taking the squared distance, let

$$\displaystyle \begin{aligned} \delta=\frac{[B'\mu^{(1)}-B'\mu^{(2)}]^2}{B'\varSigma B}=\frac{[B'(\mu^{(1)}-\mu^{(2)})]^2}{B'\varSigma B}=\frac{B'(\mu^{(1)}-\mu^{(2)})(\mu^{(1)}-\mu^{(2)})'B}{B'\varSigma B}{} \end{aligned} $$
(12.4.1)

since the square of a scalar quantity is the scalar quantity times its transpose, B′(μ (1) − μ (2)) being a scalar quantity. Accordingly, we will maximize δ as specified in (12.4.1). This will be achieved by selecting a particular B in such a way that δ attains a maximum which corresponds to the maximum distance between π 1 and π 2. Without any loss of generality, we may assume that B′ΣB = 1, so that only the numerator in (12.4.1) need be maximized, subject to the condition B′ΣB = 1. Let λ denote a Lagrangian multiplier and

$$\displaystyle \begin{aligned} \eta=B'(\mu^{(1)}-\mu^{(2)})(\mu^{(1)}-\mu^{(2)})'B-\lambda(B'\varSigma B-1). \end{aligned}$$

Let us take the partial derivative of η with respect to the vector B and equate the result to a null vector (the reader may refer to Chap. 1 for the derivative of a scalar variable with respect to a vector variable):

$$\displaystyle \begin{aligned} \frac{\partial \eta}{\partial B}=O&\Rightarrow 2(\mu^{(1)}-\mu^{(2)})(\mu^{(1)}-\mu^{(2)})'B-2\lambda\varSigma B=O \\ &\Rightarrow \varSigma^{-1}(\mu^{(1)}-\mu^{(2)})(\mu^{(1)}-\mu^{(2)})'B=\lambda B. \end{aligned} $$
(i)

Note that (μ (1) − μ (2))′B ≡ α is a scalar quantity and B is a specific vector coming from (i) and hence we may write (i) as

$$\displaystyle \begin{aligned} B=\frac{\alpha}{\lambda}\varSigma^{-1}(\mu^{(1)}-\mu^{(2)})\equiv c\,\varSigma^{-1}(\mu^{(1)}-\mu^{(2)}) \end{aligned}$$
(ii)

where c is a real scalar quantity. Observe that δ as given in (12.4.1) will remain the same if B is multiplied by any scalar quantity. Thus, we may take c = 1 in (ii) without any loss of generality. The linear discriminant function then becomes

$$\displaystyle \begin{aligned} B'X=(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}X,{} \end{aligned} $$
(12.4.2)

and when B′X is as given in (12.4.2), δ as defined in (12.4.1), can be expressed as follows:

$$\displaystyle \begin{aligned} \delta&=\frac{(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}-\mu^{(2)})(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}-\mu^{(2)})} {(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}-\mu^{(2)})} \\ &=(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}-\mu^{(2)})=\varDelta^2\equiv\mbox{ Mahalanobis' distance} \\ &=\mathrm{Var}(w)=\mbox{ Variance of the discriminant function}.{} \end{aligned} $$
(12.4.3)

This δ is also the generalized squared distance between the vectors μ (1) and μ (2) or the squared distance between the vectors \(\varSigma ^{-\frac {1}{2}}\mu ^{(1)}\) and \(\varSigma ^{-\frac {1}{2}}\mu ^{(2)}\) in the mathematical sense (Euclidean distance). Hence Mahalanobis’ distance between two p-variate populations with different mean value vectors and the same covariance matrix is a measure of discrimination or separation between the populations, and the linear discriminant function is given in (12.4.2). Hence for an observed value X, if u = (μ (1) − μ (2))′Σ −1 X > 0 when μ (1), μ (2) and Σ are known, then we choose population π 1 with mean value μ (1), and if u < 0, then we select population π 2 with mean value μ (2). When u = 0, both π 1 and π 2 are equally favored.
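One can verify numerically that the choice B = Σ −1(μ (1) − μ (2)) indeed maximizes δ of (12.4.1), the maximum being Δ 2: the sketch below (with hypothetical μ’s and Σ) compares δ at this B with δ evaluated at many random choices of B.

# Numerical check of Sect. 12.4: delta(B) of (12.4.1) is maximized at B = Sigma^{-1}(mu1 - mu2),
# with maximum value Delta^2.  The mu's and Sigma below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = np.array([2.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])
Sigma = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 1.5]])
d = mu1 - mu2
Sinv = np.linalg.inv(Sigma)

def delta(B):
    return float((B @ d) ** 2 / (B @ Sigma @ B))

B_opt = Sinv @ d
Delta2 = float(d @ Sinv @ d)
print(delta(B_opt), Delta2)                                                        # equal
print(max(delta(rng.standard_normal(3)) for _ in range(10000)) <= Delta2 + 1e-12)  # True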

Example 12.4.1

In a small township, there is only one grocery store. The town is laid out on the East and West sides of the sole main road. We will refer to the villagers as East-enders and West-enders. These townspeople shop only once a week for groceries. The grocery store owner found that the East-enders and West-enders have somewhat different buying habits. Consider the following items: x 1 =  grain items in kilograms, x 2 =  vegetable items in kilograms, x 3 =  dairy products in kilograms, and let [x 1, x 2, x 3] = X′ where X is the vector of weekly purchases. Then, the expected quantities bought by the East-enders and West-enders are E(X) = μ (1) and E(X) = μ (2), respectively, with the common covariance matrix Σ > O. From past history, the grocery store owner determined that

Consider the following situations: (1) A customer walked in and bought x 1 = 1 kg of grain items, x 2 = 2 kg of vegetable items, and x 3 = 1 kg of dairy products. Is she likely to be an East-ender or West-ender? (2): Another customer bought the three types of items in the quantities (10, 1, 1), respectively. Is she more likely to be an East-ender than a West-ender?

Solution 12.4.1

The inverse of the covariance matrix, μ (1) − μ (2), as well as other relevant quantities are the following:

In (1), X′ = (1, 2, 1) and since

we classify this customer as a West-ender from her buying pattern. In (2),

so that, given her purchases, this customer is classified as an East-ender.

12.5. Classification When the Population Parameters are Unknown

We now consider the classification problem involving two populations π 1 and π 2 for which the parameters of the corresponding densities are unknown. Since the structure of the parameters in these general densities P 1(X) and P 2(X) is not known, we will present a specific example: Consider the two p-variate normal populations of Example 12.3.5. Let π 1 : N p(μ (1), Σ) and π 2 : N p(μ (2), Σ), which share the same positive definite covariance matrix Σ. Suppose that we have a single observation vector X to be classified into π 1 or π 2. When the parameters μ (1), μ (2) and Σ are unknown, we will have to estimate them from some training samples. But, for a problem such as classifying skeletal remains, one does not have samples from the respective ancestral groups. Nevertheless, one can obtain training samples from living racial groups, and so, secure estimates of the parameters involved. Assume that we have simple random samples of sizes n 1 and n 2 from N p(μ (1), Σ) and N p(μ (2), Σ), respectively. Denote the sample values by \(X_1^{(1)},\ldots ,X_{n_1}^{(1)},\) and \(X_1^{(2)},\ldots ,X_{n_2}^{(2)}\), and let \(\bar {X}^{(1)}\) and \(\bar {X}^{(2)}\) be the sample averages. That is,

$$\displaystyle \begin{aligned} \bar{X}^{(i)}=\frac{1}{n_i}\sum_{j=1}^{n_i}X_j^{(i)},\ i=1,2.{} \end{aligned} $$
(12.5.1)

Let the sample matrices be denoted by bold-faced letters where the p × n 1 matrix X (1) and the p × n 2 matrix X (2) are the sample matrices and let \(\bar {\mathbf {X}}^{\mathbf {(1)}}\) and \(\bar {\mathbf {X}}^{\mathbf {(2)}}\) be the matrices of sample means. Thus, we have

$$\displaystyle \begin{aligned} {\mathbf{X}}^{\mathbf{(i)}}=[X_1^{(i)},\ldots,X_{n_i}^{(i)}],\ \ \bar{\mathbf{X}}^{\mathbf{(i)}}=[\bar{X}^{(i)},\ldots,\bar{X}^{(i)}],\ i=1,2.{} \end{aligned} $$
(12.5.2)

Then, the sample sum of products matrices are

$$\displaystyle \begin{aligned} S_i&=({\mathbf{X}}^{\mathbf{(i)}}-\bar{\mathbf{X}}^{\mathbf{(i)}})({\mathbf{X}}^{\mathbf{(i)}} - \bar{\mathbf{X}}^{\mathbf{(i)}})',\ i=1,2; \\ S_m&=(s_{ij}^{(m)}),\ s_{ij}^{(m)}=\sum_{k=1}^{n_m}(x_{ik}^{(m)}-\bar{x}_i^{(m)})(x_{jk}^{(m)}-\bar{x}_j^{(m)}),\ m=1,2,\ S=S_1+S_2.{} \end{aligned} $$
(12.5.3)

The unbiased estimators of μ (1), μ (2) and Σ are respectively \(\bar {X}^{(1)},\bar {X}^{(2)}\) and \(\frac {S}{n_{(2)}}=\frac {S_1+S_2}{n_{(2)}}, \ n_{(2)}=n_1+n_2-2\). The criteria for classification, the regions, the statistic, and so on, are available from Example 12.3.5. That is,

$$\displaystyle \begin{aligned} A_1: u\ge k,~ A_2: u<k,~ k=\ln\frac{C(1|2)q_2}{C(2|1)q_1}, \end{aligned}$$

where

$$\displaystyle \begin{aligned} u=X'\varSigma^{-1}(\mu^{(1)}-\mu^{(2)})-\frac{1}{2}(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}+\mu^{(2)}). \end{aligned}$$

Note that q 1 and q 2 are the prior probabilities of selecting the populations π 1 and π 2 and C(1|2) and C(2|1) are the costs or loss associated with misclassification. We will assume that q 1, q 2, C(1|2) and C(2|1) are all known but the parameters μ (1), μ (2) and Σ are estimated by their unbiased estimators. Denoting the estimator of u as v, we obtain the following criterion, assuming that we have one p-vector X to be classified into π 1 or π 2:

$$\displaystyle \begin{aligned} A_1:v&\ge k,~A_2:v<k,~k=\ln\frac{q_2C(1|2)}{q_1C(2|1)}, \\ v&=n_{(2)}X'S^{-1}(\bar{X}^{(1)}-\bar{X}^{(2)})-n_{(2)}\tfrac{1}{2}(\bar{X}^{(1)}-\bar{X}^{(2)})'S^{-1}(\bar{X}^{(1)}+\bar{X}^{(2)})\\ &=n_{(2)}[X-\tfrac{1}{2}(\bar{X}^{(1)}+\bar{X}^{(2)})]'S^{-1}(\bar{X}^{(1)}-\bar{X}^{(2)}).{} \end{aligned} $$
(12.5.4)

As it turns out, it already proves quite challenging to obtain the exact distribution of v as given in (12.5.4) where X is a single p-vector either from π 1 or from π 2.
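A sketch of the plug-in criterion (12.5.4): the parameters are estimated from two training samples and v is evaluated for a new vector X. The training data below are simulated and purely hypothetical; with equal priors and equal costs, the threshold is k = 0.

# Sketch: the criterion v of (12.5.4) with estimated parameters.
# The two training samples are simulated here only for illustration.
import numpy as np

rng = np.random.default_rng(1)
p, n1, n2 = 3, 40, 50
mu1, mu2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
X1 = rng.multivariate_normal(mu1, np.eye(p), size=n1).T   # p x n1 training sample from pi_1
X2 = rng.multivariate_normal(mu2, np.eye(p), size=n2).T   # p x n2 training sample from pi_2

xbar1, xbar2 = X1.mean(axis=1), X2.mean(axis=1)
S1 = (X1 - xbar1[:, None]) @ (X1 - xbar1[:, None]).T      # sample sum of products matrices
S2 = (X2 - xbar2[:, None]) @ (X2 - xbar2[:, None]).T
S, n_2 = S1 + S2, n1 + n2 - 2

def v(X):
    # v of (12.5.4); assign X to pi_1 when v >= k = ln(q2 C(1|2) / (q1 C(2|1)))
    return n_2 * (X - 0.5 * (xbar1 + xbar2)) @ np.linalg.inv(S) @ (xbar1 - xbar2)

X_new = np.array([0.8, 0.1, 0.2])
print(v(X_new), 1 if v(X_new) >= 0.0 else 2)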

12.5.1. Some asymptotic results

Before considering asymptotic properties of u and v as defined above, let us recall certain results obtained in earlier chapters. Let the p × 1 vectors Y j, j = 1, …, n, be iid vectors from some population for which E[Y j] = μ and Cov(Y j) = Σ > O, j = 1, …, n. Let the sample matrix, the matrix of sample means (wherein the sample mean is \(\bar {Y}=\frac {1}{n}\sum _{j=1}^nY_j\)) and the sample sum of products matrix S be as follows:

$$\displaystyle \begin{aligned} \mathbf{Y}&=[Y_1,\ldots,Y_n],\ \bar{\mathbf{Y}}=[\bar{Y},\ldots,\bar{Y}],\ S=(s_{ij}), \ s_{ij}=\sum_{k=1}^n(y_{ik}-\bar{y}_i)(y_{jk}-\bar{y}_j), \\ S&=[\mathbf{Y}-\bar{\mathbf{Y}}][\mathbf{Y}-\bar{\mathbf{Y}}]'= \mathbf{Y}\,[\,I_n-JJ'/n\,]\,\mathbf{Y}',\ Y_j^{\prime}=[y_{1j},y_{2j},\ldots,y_{pj}], \end{aligned} $$
(i)

where J is an n × 1 vector of unities. Since a matrix of the form \(\mathbf {Y}-\bar {\mathbf {Y}}\) is present, we may let μ = O without any loss of generality in the following computations since \(Y_j-\bar {Y}=(Y_j-\mu )-(\bar {Y}-\mu )\). Note that \(B=B'=I_n-\frac {1}{n}JJ'=B^2\) and hence, B is idempotent and of rank n − 1. Since B = B′, there exists an orthonormal matrix Q such that Q′BQ = diag(1, …, 1, 0) = D, QQ′ = I, Q′Q = I, the diagonal elements being 1’s and 0 since B = B 2 and of rank n − 1. Then,

$$\displaystyle \begin{aligned} S&=\mathbf{Y}\,Q\,\mathrm{diag}(1,\ldots,1,0)\,Q'\,\mathbf{Y}'=\mathbf{Y}QDD'Q'\mathbf{Y}', \\ D&=\mathrm{diag}(1,\ldots,1,0).\end{aligned} $$
(ii)

Consider \(\varSigma ^{-\frac {1}{2}}S\varSigma ^{-\frac {1}{2}}\). Let \(U_j=\varSigma ^{-\frac {1}{2}}Y_j,\ j=1,\ldots ,n,\) where Y j is the j-th column of Y and it is assumed that μ = O. Observe that E[U j] = O, Cov(U j) = I p, j = 1, …, n, and the U j’s are uncorrelated. Letting U = [U 1, …, U n], (ii) implies that

$$\displaystyle \begin{aligned} \varSigma^{-\frac{1}{2}}S\varSigma^{-\frac{1}{2}}=\mathbf{U}QDDQ'\mathbf{U}'. \end{aligned}$$
(iii)

Denoting by U (j) the j-th row of U, it follows that the elements of U (j) are iid real scalar variables with mean value zero and variance 1. Consider the transformation V (j) = U (j) Q; then E[V (j)] = O and Cov[V (j)] = I n, j = 1, …, p, the V (j)’s being uncorrelated. Let V be the p × n matrix whose rows are V (j), j = 1, …, p. Let the columns of V be V j, j = 1, …, n, that is, V = [V 1, …, V n]. Then, (iii) implies the following:

$$\displaystyle \begin{aligned} \varSigma^{-\frac{1}{2}}S\varSigma^{-\frac{1}{2}}&=\mathbf{V}DD'\mathbf{V}' = \{[V_1,\ldots,V_n]D\}\{[V_1,\ldots,V_n]D\}' \\ &=[V_1,\ldots,V_{n-1},O][V_1,\ldots,V_{n-1},O]'=V_1V_1^{\prime}+\cdots+V_{n-1}V_{n-1}^{\prime}\Rightarrow\\ E[\varSigma^{-\frac{1}{2}}S\varSigma^{-\frac{1}{2}}]&=E[V_1V_1^{\prime}]+\cdots+E[V_{n-1}V_{n-1}^{\prime}]=I_p+\cdots+I_p=(n-1)I_p\Rightarrow\\ E[S]&=(n-1)\varSigma\ \mbox{or }\ E\Big[\frac{S}{n-1}\Big]=\varSigma.\end{aligned} $$
(iv)

Additionally,

$$\displaystyle \begin{aligned} \mathrm{Cov}(\bar{Y})&=\frac{1}{n^2}\mathrm{Cov}[Y_1+\cdots+Y_n]=\frac{1}{n^2}[\mathrm{Cov}(Y_1)+\cdots+\mathrm{Cov}(Y_n)]\\ &=\frac{1}{n^2}[\varSigma+\cdots+\varSigma]=\frac{n}{n^2}\varSigma=\frac{\varSigma}{n}\to O\mbox{ as }n\to\infty,\end{aligned} $$
(v)

when Σ is finite with respect to any norm of Σ, namely ∥Σ∥ < ∞. Appealing to the extended Chebyshev inequality, this shows that the unbiased estimator of μ, namely \(\bar {Y}\), converges to μ in probability, that is,

$$\displaystyle \begin{aligned} Pr(\bar{Y}\to\mu)\to 1\mbox{ when }n\to\infty\mbox{ or }\lim_{n\to\infty}Pr(\bar{Y}\to \mu)=1. \end{aligned}$$
(vi)

An unbiased estimator of Σ is \(\hat {\varSigma }=\frac {S}{n-1}\) with \(E[\hat {\varSigma }]=\varSigma \). Will \(\hat {\varSigma }\) also converge to Σ in probability when n →∞? In order to establish this, we require the covariance structure of the elements in S. For arbitrary populations, it is somewhat difficult to verify this result; however, it is rather straightforward for normal populations. We will examine this aspect next.
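A brief simulation, with a hypothetical normal population, illustrating the two convergence statements discussed above: as n grows, \(\bar {Y}\) approaches μ and S∕(n − 1) approaches Σ.

# Simulation sketch: consistency of the sample mean and of S/(n-1).
# The normal population (mu, Sigma below) is hypothetical.
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.7], [0.7, 1.0]])

for n in (50, 500, 5000):
    Y = rng.multivariate_normal(mu, Sigma, size=n)   # n x p sample
    ybar = Y.mean(axis=0)
    S = (Y - ybar).T @ (Y - ybar)                    # sample sum of products matrix
    print(n, np.linalg.norm(ybar - mu), np.linalg.norm(S / (n - 1) - Sigma))  # both shrink with n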

12.5.2. Another method

Let the p × 1 vectors X j, j = 1, …, n, be a simple random sample of size n from a population having a real N p(μ, Σ), Σ > O, distribution. Letting S denote the sample sum of products matrix, S will be distributed as a Wishart matrix with m = n − 1 degrees of freedom and Σ > O as its parameter matrix, whose density is

$$\displaystyle \begin{aligned} f(S)=\frac{1}{2^{\frac{mp}{2}}|\varSigma|{}^{\frac{m}{2}}\varGamma_p(\frac{m}{2})}|S|{}^{\frac{m}{2}-\frac{p+1}{2}}\mathrm{e}^{-\frac{1}{2}\mathrm{tr}(\varSigma^{-1}S)},\ S>O,\ m\ge p; \end{aligned}$$
(i)

the reader may also refer to real matrix-variate gamma density discussed in Chap. 5. This is usually written as S ∼ W p(m, Σ), Σ > O. Letting \(S_{(*)}=\varSigma ^{-\frac {1}{2}}S\varSigma ^{-\frac {1}{2}}\), S (∗) ∼ W p(m, I). Consider the transformation S (∗) = TT′ where T = (t ij) is a lower triangular matrix whose diagonal elements are positive, that is, t ij = 0, i < j, and t jj > 0, j = 1, …, p. It was explained in Chaps. 1 and 3 that the t ij’s are mutually independently distributed with the t ij’s such that i > j distributed as standard normal variables and \(t_{jj}^2,\) as a chisquare variable having m − (j − 1) degrees of freedom. The j-th diagonal element of TT′ is of the form \(t_{j1}^2+\cdots +t_{jj-1}^2+t_{jj}^2\) where \(t_{jk}^2\sim \chi ^2_1\), for k = 1, …, j − 1, that is, the square of a real standard normal variable. Thus, the j-th diagonal element is distributed as \(\chi ^2_1+\cdots +\chi ^2_1+\chi ^2_{m-(j-1)}\sim \chi ^2_m\) since all the individual chisquare variables are independently distributed, in which case the resulting number of degrees of freedom is the sum of the degrees of freedom of the chisquares. Now, noting that for a \(\chi ^2_{\nu }\),

$$\displaystyle \begin{aligned} E[\chi^2_{\nu}]=\nu\mbox{ and }\mathrm{Var}(\chi^2_{\nu})=2\,\nu, \end{aligned}$$
(ii)

the expected value of each of the diagonal elements in TT′, which are the diagonal elements in S (∗), will be m = n − 1. The non-diagonal elements in TT′ are sums of terms of the form t ik t jk, i ≠ j, whose expected values are E[t ik t jk] = E[t ik]E[t jk] = 0 since, for i ≠ j, at least one of the two factors is a below-diagonal element with expected value zero and the t ij’s are mutually independent; hence, all the non-diagonal elements will have zero as their expected values. Accordingly,

$$\displaystyle \begin{aligned} E[S_{(*)}]=\mathrm{diag}(m,\ldots,m)\Rightarrow E\Big[\frac{S_{(*)}}{m}\Big]=I\Rightarrow E\Big[\frac{S}{m}\Big]=\varSigma\, ,\ m=n-1, \end{aligned}$$
(iii)

and the estimator \(\hat {\varSigma }=\frac {S}{m}\) is unbiased for Σ, m being equal to n − 1. Now, let us examine the covariance structure of S (∗). Let W denote a single vector comprising all the distinct elements of S (∗) = TT′ and consider its covariance structure. In this vector of order \(\frac {p(p+1)}{2}\times 1\), express all the original t ij’s and t jj’s in terms of standard normal and chisquare variables. Let \(z_1,\ldots ,z_{\frac {p(p-1)}{2}}\) be the standard normal variables and y 1, …, y p denote the chisquare variables. Then, each element of Cov(W) = E{[W − E(W)][W − E(W)]′} will be a sum of terms of the type

$$\displaystyle \begin{aligned}{}[\mathrm{Var}(y_k)][\mathrm{Var}(z_j)]=\mathrm{Var}(y_k)=[\mbox{twice the number of degrees of freedom of }y_k], \end{aligned}$$
(iv)

which happens to be a linear function of m. Our estimator being \(\hat {\varSigma }=\frac {S}{m}=\varSigma ^{\frac {1}{2}}\frac {S_{(*)}}{m}\varSigma ^{\frac {1}{2}}\), the covariance structure of \(\frac {S_{(*)}}{m}\) which is \(\frac {1}{m^2}\mathrm {Cov}(W)\) tends to O when m →∞, since each element of Cov(W) is of the form a m + b where a and b are real scalars, so that \(\frac {a\,m+b}{m^2}\to 0\) as m →∞, or equivalently, as n →∞ since m = n − 1. Thus, it follows from an extended version of Chebyshev’s inequality that

$$\displaystyle \begin{aligned} Pr\Big(\frac{S}{m}\to\varSigma\Big)\to 1\mbox{ as }m\to\infty\mbox{ or as } n\to\infty\ {\mathrm{since}\ } m=n-1. \end{aligned}$$
(v)

These last two results are stated next as a theorem.

Theorem 12.5.1

Let the p × 1 vectors X j, j = 1, …, n, be iid with E[X j] = μ and Cov(X j) = Σ, j = 1, …, n. Assume that Σ is finite in the sense that ∥Σ∥ < ∞. Then, letting \(\bar {X}=\frac {1}{n}\sum _{j=1}^nX_j\) denote the sample mean,

$$\displaystyle \begin{aligned} Pr(\bar{X}\to\mu) \to 1\mathit{\mbox{ as }}n\to\infty.{} \end{aligned} $$
(12.5.5)

Further, letting X j ∼ N p(μ, Σ), Σ > O,

$$\displaystyle \begin{aligned} Pr\Big(\hat{\varSigma}=\frac{S}{m}\to \varSigma\Big)\to 1\mathit{\mbox{ as }}m\to\infty\mathit{\mbox{ or as }}n\to\infty\ {\mathrm{since}\ } m=n-1.{} \end{aligned} $$
(12.5.6)

Let us now examine the criterion in (12.5.4). In this case, we can obtain an asymptotic distribution of the criterion v for large n (2) or when n (2) → ∞ in the sense that n 1 → ∞ and n 2 → ∞. When n (2) → ∞, we have \(\bar {X}^{(1)}\to \mu ^{(1)}, \ \bar {X}^{(2)}\to \mu ^{(2)}\) and \(\frac {S}{n_{(2)}}\to \varSigma \), so that the criterion v in (12.5.4) becomes

$$\displaystyle \begin{aligned} u&=(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}X-\tfrac{1}{2}(\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}(\mu^{(1)}+\mu^{(2)})\\ &=[X-\tfrac{1}{2}(\mu^{(1)}+\mu^{(2)})]'\varSigma^{-1}(\mu^{(1)}-\mu^{(2)}){}, \end{aligned} $$
(12.5.7)

which is nothing but u as specified in (12.3.7) with the densities \(N_1(\frac {1}{2}\varDelta ^2,\varDelta ^2)\) in π 1 and \(N_1(-\frac {1}{2}\varDelta ^2,\varDelta ^2)\) in π 2. Hence, the following result:

Theorem 12.5.2

When n 1 →∞ and n 2 →∞, the criterion v provided in (12.5.4) becomes u as specified in (12.5.7) with the univariate normal densities \(N_1(\frac {1}{2}\varDelta ^2,\varDelta ^2)\) in π 1 and \(N_1(-\frac {1}{2}\varDelta ^2,\varDelta ^2)\) in π 2, where Δ 2 is Mahalanobis’ distance given in (12.3.8). We classify X, the observation vector at hand, into π 1 when X ∈ A 1 and into π 2 when X ∈ A 2 where A 1 : u ≥ k and A 2 : u < k with \(k=\ln \frac {C(1|2)\,q_2}{C(2|1)\,q_1}\), q 1 and q 2 being the prior probabilities of selecting the populations π 1 and π 2, respectively, and C(2|1) and C(1|2) denoting the costs or losses associated with misclassification.

In a practical situation, when n 1 and n 2 are large, we may replace Δ 2 in Theorem 12.5.2 by the corresponding sample value \(n_{(2)}(\bar {X}^{(1)}-\bar {X}^{(2)})'S^{-1}(\bar {X}^{(1)}-\bar {X}^{(2)})\) where S = S 1 + S 2 and n (2) = n 1 + n 2 − 2, and utilize the criterion u as specified in (12.5.7), with the parameters replaced by their estimates, to classify the given vector X into π 1 or π 2. It is assumed that q 1, q 2, C(2|1) and C(1|2) are available.
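For readers who wish to carry out this plug-in classification numerically, a minimal sketch in Python with numpy is given below; the function name, training arrays, prior probabilities and costs are illustrative assumptions and merely show the computation of the estimated criterion u and the threshold k of Theorem 12.5.2.

```python
import numpy as np

def plug_in_classify(X, sample1, sample2, q1=0.5, q2=0.5, c12=1.0, c21=1.0):
    """Plug-in version of the criterion u of (12.5.7): mu^(1), mu^(2) and Sigma
    are replaced by the sample means and the pooled estimate S/(n1 + n2 - 2);
    X is assigned to pi_1 when u >= k = ln(C(1|2) q2 / (C(2|1) q1))."""
    n1, n2 = sample1.shape[0], sample2.shape[0]           # observations are rows
    xbar1, xbar2 = sample1.mean(axis=0), sample2.mean(axis=0)
    S1 = (sample1 - xbar1).T @ (sample1 - xbar1)          # sample sum of products matrices
    S2 = (sample2 - xbar2).T @ (sample2 - xbar2)
    Sigma_hat = (S1 + S2) / (n1 + n2 - 2)                 # pooled unbiased estimate of Sigma
    u = (X - 0.5 * (xbar1 + xbar2)) @ np.linalg.solve(Sigma_hat, xbar1 - xbar2)
    k = np.log((c12 * q2) / (c21 * q1))
    return (1 if u >= k else 2), u

# hypothetical training samples and an observation to be classified
rng = np.random.default_rng(0)
s1 = rng.multivariate_normal([1.0, 0.0], np.eye(2), size=60)
s2 = rng.multivariate_normal([-1.0, 0.5], np.eye(2), size=70)
print(plug_in_classify(np.array([0.8, 0.1]), s1, s2))
```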

12.5.3. A new sample from π 1 or π 2

As in Examples 12.3.1 and 12.3.2, suppose that a simple random sample of size n 3 is available either from π 1 : N p(μ (1), Σ) or from π 2 : N p(μ (2), Σ), Σ > O. Letting the new sample be \(X_1^{(3)},\ldots ,X_{n_3}^{(3)}\) and its sample mean be \(\bar {X}^{(3)}=\frac {1}{n_3}\sum _{j=1}^{n_3}X_j^{(3)}\), the p × n 3 sample matrix, the p × n 3 matrix of sample means and the sample sum of products matrix are the following:

$$\displaystyle \begin{aligned} {\mathbf{X}}^{\mathbf{(3)}}&=[X_1^{(3)},\ldots,X_{n_3}^{(3)}],\ \bar{\mathbf{X}}^{\mathbf{(3)}}=[\bar{X}^{(3)},\bar{X}^{(3)},\ldots,\bar{X}^{(3)}],\\ S_3&=[{\mathbf{X}}^{\mathbf{(3)}}-\bar{\mathbf{X}}^{\mathbf{(3)}}][{\mathbf{X}}^{\mathbf{(3)}}-\bar{\mathbf{X}}^{\mathbf{(3)}}]'=(s_{ij}^{(3)}),\\ s_{ij}^{(3)}&=\sum_{k=1}^{n_3}(x_{ik}^{(3)}-\bar{x}_i^{(3)})(x_{jk}^{(3)}-\bar{x}_j^{(3)}).{} \end{aligned} $$
(12.5.8)

An unbiased estimate of Σ obtained from this third sample is \(\hat {\varSigma }=\frac {S_3}{n_3-1},\) since \(E[\hat {\varSigma }]=\varSigma \). A pooled estimate of Σ obtained from the three samples is

$$\displaystyle \begin{aligned} \frac{S_1+S_2+S_3}{n_1+n_2+n_3-3}\equiv\frac{S}{n_{(3)}},\ S=S_1+S_2+S_3,\ n_{(3)}=n_1+n_2+n_3-3.{} \end{aligned} $$
(12.5.9)

Then, the criterion corresponding to (12.3.4) changes to:

$$\displaystyle \begin{aligned} A_1\!:w\ge k \ \mathrm{and}\ A_2\!:w<k,~k=\ln\frac{C(1|2)\,q_2}{C(2|1)\,q_1},{} \end{aligned} $$
(12.5.10)

where

$$\displaystyle \begin{aligned} w=n_{(3)}[\bar{X}^{(3)}-\tfrac{1}{2}(\bar{X}^{(1)}+\bar{X}^{(2)})]'S^{-1}(\bar{X}^{(1)}-\bar{X}^{(2)}){} \end{aligned} $$
(12.5.11)

with S = S 1 + S 2 + S 3, n (3) = n 1 + n 2 + n 3 − 3 and \(\bar {X}^{(3)}\) being the sample average from the third sample, which either comes from π 1 : N p(μ (1), Σ) or π 2 : N p(μ (2), Σ), Σ > O. Thus, the classification rule is the following:

$$\displaystyle \begin{aligned} A_1\!:w\ge k\ \ \mathrm{and}\ ~A_2\!:w<k,~k=\ln\frac{C(1|2)\,q_2}{C(2|1)\,q_1}\,,{} \end{aligned} $$
(12.5.12)

w being as defined in (12.5.11). That is, classify the new sample into π 1 if w ≥ k and, into π 2 if w < k.

As was explained in Sect. 12.5.2, as n j → ∞, j = 1, 2, \(\bar {X}^{(i)}\to \mu ^{(i)},\ i=1,2,\) and although n 3 usually remains finite, as n 1 → ∞ and n 2 → ∞, we have n (3) → ∞ and \(\frac {S}{n_{(3)}}\to \varSigma \). Accordingly, the criterion w as given in (12.5.11) converges to w 1 for large values of n 1 and n 2, where

$$\displaystyle \begin{aligned} w_1=[\bar{X}^{(3)}-\tfrac{1}{2}(\mu^{(1)}+\mu^{(2)})]'\varSigma^{-1}(\mu^{(1)}-\mu^{(2)}).{} \end{aligned} $$
(12.5.13)

Compared to u as specified in (12.3.7), the only difference is that X associated with u is replaced by \(\bar {X}^{(3)}\) in w 1. Hence, the variance in u will be multiplied by \(\frac {1}{n_3}\), and the asymptotic distributions will be as follows:

$$\displaystyle \begin{aligned} w_1|\pi_1\sim N_1\Big(\frac{1}{2}\varDelta^2,\frac{1}{n_3}\varDelta^2\Big)\ \ \mathrm{and}\ \ w_1|\pi_2\sim N_1\Big(-\frac{1}{2}\varDelta^2,\frac{1}{n_3}\varDelta^2\Big),{} \end{aligned} $$
(12.5.14)

as n 1 → ∞ and n 2 → ∞.

Theorem 12.5.3

Consider two populations π 1 : N p(μ (1), Σ) and π 2 : N p(μ (2), Σ), Σ > O, and simple random samples of respective sizes n 1 and n 2 from these two populations. Suppose that a simple random sample of size n 3 is available, either from π 1 or π 2. For classifying the third sample into π 1 or π 2, the criterion to be utilized is w as given in (12.5.11). Then, the asymptotic distribution of w, when n i → ∞, i = 1, 2, is that of w 1 specified in (12.5.13) and the regions of classification are as given in (12.5.12).

In a practical situation, when the sample sizes n 1 and n 2 are large, one may replace Δ 2 by its sample analogue, and then use (12.5.14) to reach a decision. As it turns out, it proves quite difficult to derive the exact density of w.

Example 12.5.1

A certain milk collection and distribution center collects and sells the milk supplied by local farmers to the community, the balance, if any, being dispatched to a nearby city. In that locality, there are two types of cows. Some farmers only keep Jersey cows and others, only Holstein cows. Samples of the same quantities of milk are taken and the following characteristics are evaluated: x 1, the fat content, x 2, the glucose content, and x 3, the protein content. It is known that X′ = (x 1, x 2, x 3) is normally distributed as X ∼ N 3(μ (1), Σ), Σ > O, for Jersey cows, and X ∼ N 3(μ (2), Σ), Σ > O, for Holstein cows, with μ (1) ≠ μ (2), the covariance matrices Σ being assumed identical. These parameters, which are not known, are estimated on the basis of 100 milk samples from Jersey cows and 102 samples from Holstein cows, all the samples being of equal volume. The following are the summarized data with our standard notations, where S 1 and S 2 are the sample sums of products matrices:

Three farmers just brought in their supply of milk and (1): a sample denoted by X 1 is collected from the first farmer’s supply and evaluated; (2) a sample, X 2, is taken from a second farmer’s supply and evaluated; (3) a set of 5 random samples are collected from a third farmer’s supply, the sample average being \(\bar {X}\). The data is

Classify X 1, X 2 and the sample of size 5 as coming from either Jersey or Holstein cows.

Solution 12.5.1

The following preliminary calculations are needed:

Then,

$$\displaystyle \begin{aligned} (\bar{X}^{(1)}-\bar{X}^{(2)})'\Big(\frac{S}{200}\Big)^{-1}=[1,-1,0]\left[\begin{array}{rrr}6&3&-2\\ 3&2&-1\\ -2&-1&1\end{array}\right]=[3,1,-1], \end{aligned}$$
$$\displaystyle \begin{aligned} (\bar{X}^{(1)}-\bar{X}^{(2)})'\Big(\frac{S}{200}\Big)^{-1}X=[3,1,-1]X=3x_1+x_2-x_3\Rightarrow w=3x_1+x_2-x_3-4 \end{aligned}$$

where w is as given in (12.5.11). For answering (1), we substitute X 1 for X in w. That is, w at X 1 is 3(2) + (1) − (1) − 4 = 2 > 0. Hence, we assign X 1 to Jersey cows. For answering (2), we replace X in w by X 2, that is, 3(1) + (1) − (2) − 4 = −2 < 0. Thus, we assign X 2 to Holstein cows. For answering (3), we replace X in w by \(\bar {X}\). That is, 3(2) + (2) − (1) − 4 = 3 > 0. Accordingly, we classify this sample as coming from Jersey cows.
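The arithmetic of this solution is easily mechanized. In the following sketch, assuming Python with numpy, the coefficient vector [3, 1, −1] and the constant −4 are those obtained above, and the observation vectors are read off from the substitutions made in the solution.

```python
import numpy as np

# linear discriminant from Solution 12.5.1: w = 3*x1 + x2 - x3 - 4
coeff, const = np.array([3.0, 1.0, -1.0]), -4.0

observations = {"X1":   np.array([2.0, 1.0, 1.0]),   # part (1)
                "X2":   np.array([1.0, 1.0, 2.0]),   # part (2)
                "Xbar": np.array([2.0, 2.0, 1.0])}   # part (3), mean of 5 samples

for name, x in observations.items():
    w = coeff @ x + const
    label = "Jersey (pi_1)" if w >= 0 else "Holstein (pi_2)"
    print(name, w, label)    # expected values: 2, -2 and 3, as in the solution
```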

12.6. Maximum Likelihood Method of Classification

As before, let π 1 be the p-variate real normal population N p(μ (1), Σ), Σ > O, with the simple random sample \(X_1^{(1)},\ldots ,X_{n_1}^{(1)}\) of size n 1 drawn from that population, and π 2 : N p(μ (2), Σ), Σ > O, with the simple random sample \(X_1^{(2)},\ldots ,X_{n_2}^{(2)}\) of size n 2 so distributed. A p-vector X at hand is to be classified into π 1 or π 2. Let the sample means and the sample sums of products matrices be \(\bar {X}^{(1)},\ \bar {X}^{(2)},\ S_1\) and S 2. Then, the problem of classification of X into π 1 or π 2 can be stated in terms of testing a hypothesis of the following type: X and \(X_1^{(1)},\ldots ,X^{(1)}_{n_1}\) are from N p(μ (1), Σ) and \(X_1^{(2)},\ldots ,X_{n_2}^{(2)}\) are from π 2, which constitutes the null hypothesis, versus the alternative that X and \(X_1^{(2)},\ldots ,X_{n_2}^{(2)}\) are from N p(μ (2), Σ) and \(X_1^{(1)},\ldots ,X_{n_1}^{(1)}\) are from N p(μ (1), Σ). Let the likelihood functions under the null and alternative hypotheses be denoted as L 0 and L 1, respectively, where

$$\displaystyle \begin{aligned} L_0&=\Big\{\prod_{j=1}^{n_1}\frac{\mathrm{e}^{-\frac{1}{2}(X_j-\mu^{(1)})'\varSigma^{-1}(X_j-\mu^{(1)})}}{(2\pi)^{\frac{p}{2}}|\varSigma|{}^{\frac{1}{2}}}\Big\}\frac{\mathrm{e}^{-\frac{1}{2}(X-\mu^{(1)})'\varSigma^{-1}(X-\mu^{(1)})}}{(2\pi)^{\frac{p}{2}}|\varSigma|{}^{\frac{1}{2}}}\\ &\ \ \ \ \times\Big\{\prod_{j=1}^{n_2}\frac{\mathrm{e}^{-\frac{1}{2}(X_j-\mu^{(2)})'\varSigma^{-1}(X_j-\mu^{(2)})}}{(2\pi)^{\frac{p}{2}}|\varSigma|{}^{\frac{1}{2}}}\Big\},\end{aligned} $$
$$\displaystyle \begin{aligned} L_0 &=\frac{\mathrm{e}^{-\frac{1}{2}\rho_1}}{(2\pi)^{\frac{(n_1+n_2+1)p}{2}}|\varSigma|{}^{\frac{n_1+n_2+1}{2}}},\ \rho_1=\nu+(X-\mu^{(1)})'\varSigma^{-1}(X-\mu^{(1)}), \end{aligned} $$
(i)
$$\displaystyle \begin{aligned} L_1&=\frac{\mathrm{e}^{-\frac{1}{2}\rho_2}}{(2\pi)^{\frac{(n_1+n_2+1)p}{2}}|\varSigma|{}^{\frac{n_1+n_2+1}{2}}},\ \rho_2=\nu+(X-\mu^{(2)})'\varSigma^{-1}(X-\mu^{(2)}), \end{aligned} $$
(ii)

where

$$\displaystyle \begin{aligned} \nu&=\mathrm{tr}(\varSigma^{-1}S_1)+n_1(\bar{X}^{(1)}-\mu^{(1)})'\varSigma^{-1}(\bar{X}^{(1)}-\mu^{(1)}) \\ &\ \ \ \ +\mathrm{tr}(\varSigma^{-1}S_2)+n_2(\bar{X}^{(2)}-\mu^{(2)})'\varSigma^{-1}(\bar{X}^{(2)}-\mu^{(2)})\end{aligned} $$
(iii)

and S 1 and S 2 are the sample sums of products matrices from the samples \(X_1^{(1)},\ldots ,\) \(X_{n_1}^{(1)}\) and \(X_1^{(2)},\ldots ,X_{n_2}^{(2)}\), respectively. Referring to Chaps. 1 and 3 for vector/matrix derivatives and the maximum likelihood estimators (MLE’s) of the parameters of normal populations, and denoting the estimators/estimates with a hat, the MLE’s obtained from (i), that is, under L 0, are the following:

$$\displaystyle \begin{aligned} \hat{\mu}^{(1)}&=\frac{n_1\bar{X}^{(1)}+X}{n_1+1}, \ \hat{\mu}^{(2)}=\bar{X}^{(2)}, \ \hat{\varSigma}=\frac{S_1+S_2+S_3^{(1)}}{n_1+n_2+1}\equiv\hat{\varSigma}_1,\\ S_3^{(1)}&=(X-\hat{\mu}^{(1)})(X-\hat{\mu}^{(1)})'=\Big(X-\frac{n_1\bar{X}^{(1)}+X}{n_1+1}\Big)\Big(X-\frac{n_1\bar{X}^{(1)}+X}{n_1+1}\Big)'\\ &=\Big(\frac{n_1}{n_1+1}\Big)^2(X-\bar{X}^{(1)})(X-\bar{X}^{(1)})',{} \end{aligned} $$
(12.6.1)

observing that the scalar quantity

$$\displaystyle \begin{aligned} (X-\hat{\mu}^{(1)})'\varSigma^{-1}(X-\hat{\mu}^{(1)})=\mathrm{tr}(X-\hat{\mu}^{(1)})'\varSigma^{-1}(X-\hat{\mu}^{(1)})=\mathrm{tr}(\varSigma^{-1}S_3^{(1)}). \end{aligned}$$

By substituting the MLE’s in L 0, we obtain the maximum of L 0:

$$\displaystyle \begin{aligned} \max L_0&=\frac{\mathrm{e}^{-\frac{(n_1+n_2+1)p}{2}}}{(2\pi)^{\frac{(n_1+n_2+1)p}{2}}|\hat{\varSigma}_1|{}^{\frac{(n_1+n_2+1)}{2}}}\, ,\\ \hat{\varSigma}_1&=\frac{S_1+S_2+(\frac{n_1}{n_1+1})^2(X-\bar{X}^{(1)})(X-\bar{X}^{(1)})'}{n_1+n_2+1}.{} \end{aligned} $$
(12.6.2)

Under L 1, the MLE’s are

$$\displaystyle \begin{aligned} \hat{\mu}^{(2)}&=\frac{n_2\bar{X}^{(2)}+X}{n_2+1},\ \hat{\mu}^{(1)}=\bar{X}^{(1)},\ \hat{\varSigma}=\frac{S_1+S_2+S_3^{(2)}}{n_1+n_2+1}\equiv\hat{\varSigma}_2,\\ S_3^{(2)}&=\Big(\frac{n_2}{n_2+1}\Big)^2(X-\bar{X}^{(2)})(X-\bar{X}^{(2)})'.{} \end{aligned} $$
(12.6.3)

Thus,

$$\displaystyle \begin{aligned} \max L_1&=\frac{\mathrm{e}^{-\frac{(n_1+n_2+1)p}{2}}}{(2\pi)^{\frac{(n_1+n_2+1)p}{2}}|\hat{\varSigma}_2|{}^{\frac{n_1+n_2+1}{2}}},\\ \hat{\varSigma}_2&=\frac{1}{n_1+n_2+1}\Big[S_1+S_2+\Big(\frac{n_2}{n_2+1}\Big)^2(X-\bar{X}^{(2)})(X-\bar{X}^{(2)})'\Big].\end{aligned} $$

Hence,

$$\displaystyle \begin{aligned} \lambda_1&=\frac{\max L_0}{\max L_1}=\Big (\frac{|\hat{\varSigma}_2|}{|\hat{\varSigma}_1|}\Big)^{\frac{n_1+n_2+1}{2}}\Rightarrow\ \ \lambda_1^{\frac{2}{n_1+n_2+1}}=z_1=\frac{|\hat{\varSigma}_2|}{|\hat{\varSigma}_1|}, \ \ \mbox{so that}\\ z_1&=\frac{|S_1+S_2+(\frac{n_2}{n_2+1})^2(X-\bar{X}^{(2)}) (X-\bar{X}^{(2)})'|}{|S_1+S_2+(\frac{n_1}{n_1+1})^2(X-\bar{X}^{(1)})(X-\bar{X}^{(1)})'|}\ .{} \end{aligned} $$
(12.6.4)

If z 1 ≥ 1, then \(\max L_0\ge \max L_1\), which means that the likelihood of X coming from π 1 is greater than or equal to the likelihood of X originating from π 2. Hence, we may classify X into π 1 if z 1 ≥ 1 and classify X into π 2 if z 1 < 1. In other words,

$$\displaystyle \begin{aligned} A_1: z_1 \ge 1\ \ \mathrm{and}\ \ A_2: z_1<1. \end{aligned}$$
(iv)

If we let S = S 1 + S 2, then z 1 ≥ 1 ⇒

$$\displaystyle \begin{aligned} &|S+\Big(\frac{n_2}{n_2+1}\Big)^2(X-\bar{X}^{(2)})(X-\bar{X}^{(2)})'| \\ &\ge |S+\Big(\frac{n_1}{n_1+1}\Big)^2(X-\bar{X}^{(1)})(X-\bar{X}^{(1)})'|.\end{aligned} $$
(v)

We can re-express this last inequality in a more convenient form. Expanding the following partitioned determinant in two different ways, we have the following, where S is p × p and Y  is p × 1:

$$\displaystyle \begin{aligned} \left|\begin{array}{cc}S&-Y\\ Y'&1\end{array}\right|=|S|\,(1+Y'S^{-1}Y)=|S+YY'|, \end{aligned}$$
(vi)

observing that 1 + Y′S −1 Y  is a scalar quantity. Accordingly, z 1 ≥ 1 means that

$$\displaystyle \begin{aligned} 1+\Big(\frac{n_2}{n_2+1}\Big)^2(X-\bar{X}^{(2)})'S^{-1}(X-\bar{X}^{(2)})\ge 1+\Big(\frac{n_1}{n_1+1}\Big)^2(X-\bar{X}^{(1)})'S^{-1}(X-\bar{X}^{(1)}). \end{aligned}$$

That is,

$$\displaystyle \begin{aligned} z_2&=\Big(\frac{n_2}{n_2+1}\Big)^2(X-\bar{X}^{(2)})'S^{-1}(X-\bar{X}^{(2)})\\ &\quad -\Big(\frac{n_1}{n_1+1}\Big)^2(X-\bar{X}^{(1)})'S^{-1}(X-\bar{X}^{(1)})\ge 0\ \ \Rightarrow\\ z_3&=\Big(\frac{n_2}{n_2+1}\Big)^2(X-\bar{X}^{(2)})'\Big(\frac{S}{n_1+n_2-2}\Big)^{-1}(X-\bar{X}^{(2)})\\ &\quad -\Big(\frac{n_1}{n_1+1}\Big)^2(X-\bar{X}^{(1)})'\Big(\frac{S}{n_1+n_2-2}\Big)^{-1}(X-\bar{X}^{(1)})\ge 0.{}\end{aligned} $$
(12.6.5)

Hence, the regions of classification are the following:

$$\displaystyle \begin{aligned} A_1:z_3\ge 0\ \ \mathrm{and}\ \ A_2: z_3<0. \end{aligned}$$
(vii)

Thus, classify X into π 1 when z 3 ≥ 0 and X into π 2 when z 3 < 0. For large n 1 and n 2, some interesting results ensue. When n 1 → ∞ and n 2 → ∞, we have \(\frac {n_i}{n_i+1}\to 1,\ i=1,2,\ \bar {X}^{(i)}\to \mu ^{(i)},\ i=1,2,\) and \(\frac {S}{n_1+n_2-2}\to \varSigma \). Then, z 3 converges to z 4 where

$$\displaystyle \begin{aligned} z_4&=(X-\mu^{(2)})'\varSigma^{-1}(X-\mu^{(2)})-(X-\mu^{(1)})'\varSigma^{-1}(X-\mu^{(1)})\ge 0\\ &\Rightarrow 2[X-\tfrac{1}{2}(\mu^{(1)}+\mu^{(2)})]'\varSigma^{-1}(\mu^{(1)}-\mu^{(2)})\ge 0\ \Rightarrow \ u\ge 0 \end{aligned} $$
(viii)

where u is the same criterion u as that specified in (12.5.7). Hence, we have the following result:

Theorem 12.6.1

Let \(X_1^{(1)},\ldots ,X_{n_1}^{(1)}\) be a simple random sample of size n 1 from π 1 : N p(μ (1), Σ), Σ > O and \(X_1^{(2)},\ldots ,X_{n_2}^{(2)}\) be a simple random sample of size n 2 from the population π 2 : N p(μ (2), Σ), Σ > O. Letting X be a vector at hand to be classified into π 1 or π 2 , when n 1 →∞ and n 2 →∞, the likelihood ratio criterion for classification is the following: Classify X into π 1 if u ≥ 0 and, X into π 2 if u < 0 or equivalently, A 1 : u ≥ 0 and A 2 : u < 0 where \(u=[X-\frac {1}{2}(\mu ^{(1)}+\mu ^{(2)})]'\varSigma ^{-1}(\mu ^{(1)}-\mu ^{(2)})\) whose density is \(u\sim N_1(\frac {1}{2}\varDelta ^2,\varDelta ^2)\) when X is assigned to π 1 and \(u\sim N_1(-\frac {1}{2}\varDelta ^2,\varDelta ^2)\) when X is assigned to π 2 , with Δ 2 = (μ (1) − μ (2))′Σ −1(μ (1) − μ (2)) denoting Mahalanobis’ distance.

The likelihood ratio criterion for classification specified in (12.6.5) can also be given the following interpretation: For large values of n 1 and n 2, the criterion reduces to (X − μ (2))′Σ −1(X − μ (2)) − (X − μ (1))′Σ −1(X − μ (1)) ≥ 0 where (X − μ (2))′Σ −1(X − μ (2)) is the generalized distance between X and μ (2), and (X − μ (1))′Σ −1(X − μ (1)) is the generalized distance between X and μ (1); thus, when u > 0, the generalized distance between X and μ (2) is larger than the generalized distance between X and μ (1). That is, X is closer to μ (1) than to μ (2) and accordingly, we classify X into π 1, which is the case u > 0. Similarly, if X is closer to μ (2) than to μ (1), we assign X to π 2, which is the case u < 0. The case u = 0 is included in the first inequality only for convenience; since Pr{u = 0|π i} = 0, i = 1, 2, replacing u > 0 by u ≥ 0 is fully justified.
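A minimal computational sketch of the criterion z 3 of (12.6.5), assuming Python with numpy, is the following; the function name and the training arrays used in the illustration are hypothetical.

```python
import numpy as np

def z3_classify(X, sample1, sample2):
    """Likelihood ratio criterion z_3 of (12.6.5): classify X into pi_1 when
    z_3 >= 0 and into pi_2 otherwise; observations are the rows of the samples."""
    n1, n2 = sample1.shape[0], sample2.shape[0]
    xbar1, xbar2 = sample1.mean(axis=0), sample2.mean(axis=0)
    S1 = (sample1 - xbar1).T @ (sample1 - xbar1)
    S2 = (sample2 - xbar2).T @ (sample2 - xbar2)
    Sigma_hat = (S1 + S2) / (n1 + n2 - 2)                 # pooled estimate of Sigma
    d1, d2 = X - xbar1, X - xbar2
    q1 = (n1 / (n1 + 1))**2 * d1 @ np.linalg.solve(Sigma_hat, d1)
    q2 = (n2 / (n2 + 1))**2 * d2 @ np.linalg.solve(Sigma_hat, d2)
    z3 = q2 - q1
    return (1 if z3 >= 0 else 2), z3

# hypothetical usage
rng = np.random.default_rng(2)
s1 = rng.multivariate_normal([1.0, 0.0], np.eye(2), size=50)
s2 = rng.multivariate_normal([-1.0, 0.5], np.eye(2), size=60)
print(z3_classify(np.array([0.7, 0.1]), s1, s2))
```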

Note 12.6.1

The reader may refer to Example 12.3.3 for an illustration of the computations involved in connection with the probabilities of misclassification. For large values of n 1 and n 2, one has the z 4 of (viii) as an approximation to the u appearing in the same equation as well as the u of (12.5.7) or that of Example 12.3.3. In order to apply Theorem 12.6.1, one needs to know the parameters μ (1), μ (2) and Σ. When they are not available, one may substitute for them the corresponding estimates \(\bar {X}^{(1)},\ \bar {X}^{(2)}\) and \(\hat {\varSigma }=\frac {S_1+S_2}{n_1+n_2-2}\) when n 1 and n 2 are large. Then, the approximate probabilities of misclassification can be determined.

Example 12.6.1

Redo the problem considered in Example 12.5.1 by making use of the maximum likelihood procedure.

Solution 12.6.1

In order to answer the questions, we need to compute

$$\displaystyle \begin{aligned} z_3&=\Big(\frac{n_2}{n_2+1}\Big)^2(X-\bar{X}^{(2)})'\Big(\frac{S}{n_1+n_2-2}\Big)^{-1}(X-\bar{X}^{(2)})\\ &\ \ \ \ -\Big(\frac{n_1}{n_1+1}\Big)^2(X-\bar{X}^{(1)})'\Big(\frac{S}{n_1+n_2-2}\Big)^{-1}(X-\bar{X}^{(1)}).\end{aligned} $$

In this case, \(\frac {n_1}{n_1+1}=\frac {100}{101}\approx 1\) and \(\frac {n_2}{n_2+1}=\frac {102}{103}\approx 1\) and hence, the criterion z 3 is equivalent to the criterion w used in Example 12.5.1, so that the decisions arrived at in Example 12.5.1 will remain unchanged in this example. Since n 1 and n 2 are large, we have reasonably accurate approximations of the parameters as

$$\displaystyle \begin{aligned} \bar{X}^{(1)}\to \mu^{(1)},\ \bar{X}^{(2)}\to \mu^{(2)}\ \ \mathrm{and}\ \ \frac{S}{n_1+n_2-2}\to \varSigma, \end{aligned}$$

so that the probabilities of misclassification can be evaluated by using their estimates. The approximate distributions are then given by

$$\displaystyle \begin{aligned} w|\pi_1~\sim N_1(\tfrac{1}{2}\hat{\varDelta}^2,\hat{\varDelta}^2)\ \ \mathrm{and}\ \ w|\pi_2~\sim N_1(-\tfrac{1}{2}\hat{\varDelta}^2,\hat{\varDelta}^2) \end{aligned}$$

where \(\hat {\varDelta }^2=(\bar {X}^{(1)}-\bar {X}^{(2)})'(\frac {S}{n_1+n_2-2})^{-1}(\bar {X}^{(1)}-\bar {X}^{(2)})\). From the computations done in Example 12.5.1, we have \(\hat {\varDelta }^2=[3,1,-1]\,(\bar {X}^{(1)}-\bar {X}^{(2)})=[3,1,-1]\,[1,-1,0]'=2\).

As well, A 1 : w ≥ 0 and A 2 : w < 0. For the data pertaining to (1) of Example 12.5.1, we have w > 0 and X 1 is assigned to π 1. Observing that w → u of (12.5.7),

$$\displaystyle \begin{aligned} P(1|1,A)&=\mbox{Probability of arriving at a correct decision }\\ &=Pr\{u>0|\pi_1\} =\int_0^{\infty}\frac{1}{\sqrt{2}\sqrt{(2\pi)}}\mathrm{e}^{-\frac{1}{4}(u-1)^2}\mathrm{d}u\\ &=\int_{\frac{0-1}{\sqrt{2}}}^{\infty}\frac{1}{\sqrt{(2\pi)}}\mathrm{e}^{-\frac{1}{2}v^2}\mathrm{d}v\approx 0.76;\\ P(1|2,A)&=\mbox{Probability of misclassification }\\ &=Pr\{u>0|\pi_2\} =\int_0^{\infty}\frac{1}{\sqrt{2}\sqrt{(2\pi)}}\mathrm{e}^{-\frac{1}{4}(u+1)^2}\mathrm{d}u\\ & =\int_{\frac{1}{\sqrt{2}}}^{\infty}\frac{1}{\sqrt{(2\pi)}}\mathrm{e}^{-\frac{1}{2}v^2}\mathrm{d}v\approx 0.24.\end{aligned} $$

In Example 12.5.1, the observed vector provided for (2) is classified into π 2 since w < 0. Thus, the probability of making the right decision is P(2|2, A) = Pr{u < 0|π 2}≈ 0.76 and the probability of misclassification is P(2|1, A) = Pr{u < 0|π 1}≈ 0.24. Given the data related to (3) of Example 12.5.1, the only difference is that the distributions in π 1 and π 2 will be slightly different, the mean values remaining the same but the variance \(\hat {\varDelta }^2\) being replaced by \(\hat {\varDelta }^2/n\) where n = 5. The computations are similar to those provided for (1), the sample mean being assigned to π 1 in this case.
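These approximate probabilities are readily reproduced numerically. The following sketch, assuming Python, uses \(\hat{\varDelta}^2=2\) as obtained from Example 12.5.1 and also covers part (3), where the variance is \(\hat{\varDelta}^2/n\) with n = 5.

```python
from math import erf, sqrt

def Phi(x):                       # standard normal distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

delta2 = 2.0                      # hat{Delta}^2 = [3,1,-1].[1,-1,0]' = 2

# single observation: u | pi_1 ~ N(delta2/2, delta2), u | pi_2 ~ N(-delta2/2, delta2)
p_1_given_1 = 1.0 - Phi((0.0 - delta2 / 2) / sqrt(delta2))   # Pr{u > 0 | pi_1} ~ 0.76
p_1_given_2 = 1.0 - Phi((0.0 + delta2 / 2) / sqrt(delta2))   # Pr{u > 0 | pi_2} ~ 0.24

# part (3): the sample mean of n = 5 observations has variance delta2/5
n = 5
p_mean_correct = 1.0 - Phi((0.0 - delta2 / 2) / sqrt(delta2 / n))

print(round(p_1_given_1, 2), round(p_1_given_2, 2), round(p_mean_correct, 4))
```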

12.7. Classification Involving k Populations

Consider the p-variate populations π 1, …, π k and let X be a p-vector at hand to be classified into one of these k populations. Let q 1, …, q k be the prior probabilities of selecting these populations, q j > 0, j = 1, …, k, with q 1 + ⋯ + q k = 1. Let the cost of improperly classifying a p-vector belonging to π i into π j be C(j|i) for i ≠ j, so that C(i|i) = 0, i = 1, …, k. A decision rule A = (A 1, …, A k) determines subspaces A j ⊂ R p, j = 1, …, k, with A i ∩ A j = ϕ (the empty set) for all i ≠ j. Let the probability/density functions associated with the k populations be P j(X), j = 1, …, k, respectively. Let P(j|i, A) = Pr{X ∈ A j|π i : P i(X), A} denote the probability that an observation coming from the population π i, that is, originating from the probability/density function P i(X), is improperly assigned to π j or misclassified as coming from P j(X); the cost associated with this misclassification is C(j|i). Under the rule A = (A 1, …, A k), the probabilities of correctly classifying and misclassifying an observed vector are the following, assuming that the P j(X)’s, j = 1, …, k, are densities:

$$\displaystyle \begin{aligned} P(i|i,A)=\int_{A_i}P_i(X)\mathrm{d}X\ \ \mathrm{and}\ \ P(j|i,A)=\int_{A_j}P_i(X)\mathrm{d}X,\ i,j=1,\ldots,k, \end{aligned}$$
(i)

where P(i|i, A) is a probability of achieving a correct classification, that is, of assigning an observation X to π i when the population is actually π i, and P(j|i, A) is the probability of an observation X coming from π i being misclassified as originating from π j. Consider a p-vector X at hand. What is then the probability that this X came from P i(X), given that X is an observation vector from one of the populations π 1, …, π k? This is in fact a conditional statement involving

$$\displaystyle \begin{aligned} \frac{q_iP_i(X)}{q_1P_1(X)+q_2P_2(X)+\cdots+q_kP_k(X)}. \end{aligned}$$

Suppose that for specific i and j, the conditional probability

$$\displaystyle \begin{aligned} \frac{q_iP_i(X)}{q_1P_1(X)+\cdots+q_kP_k(X)}\ge \frac{q_jP_j(X)}{q_1P_1(X)+\cdots+q_kP_k(X)}. \end{aligned}$$
(ii)

This is tantamount to presuming that the likelihood of X originating from P i(X) is greater than or equal to that of X coming from P j(X). In this case, we would like to assign X to π i rather than to π j. If (ii) holds for all j = 1, …, k, j ≠ i, then we classify X into π i. Equation (ii) for j = 1, …, k, j ≠ i, implies that

$$\displaystyle \begin{aligned} q_iP_i(X)\ge q_jP_j(X)\Rightarrow \frac{P_i(X)}{P_j(X)}\ge \frac{q_j}{q_i},\ j=1,\ldots,k,\ j\ne i.{} \end{aligned} $$
(12.7.1)

Accordingly, we adopt (12.7.1) as a decision rule A = (A 1, …, A k). This decision rule corresponds to the following: When X ∈ A 1 ⊂ R p or X falls in A 1, then X is classified into π 1, when X ∈ A 2, then X is assigned to π 2, and so on. Now, let the costs of misclassification be taken into account under some decision rule B = (B 1, …, B k), B j ⊂ R p, j = 1, …, k, B i ∩ B j = O, i ≠ j, for all i and j. The expected cost of assigning X to π j when it actually belongs to π i is q i P i(X)C(j|i) ≡ E i(B), and the expected cost of assigning X to π i when it actually belongs to π j is E j(B) = q j P j(X)C(i|j). Thus, assigning X to π i entails the expected cost E j(B) whereas assigning X to π j entails the expected cost E i(B). If E j(B) < E i(B), then we favor P i(X) over P j(X) as it is always desirable to minimize the expected cost in any procedure or decision. If E j(B) < E i(B) for all j = 1, …, k, j ≠ i, then P i(X) or π i is preferred over all other populations to which X could be assigned. Note that

$$\displaystyle \begin{aligned} E_j(B)<E_i(B)\Rightarrow q_jP_j(X)C(i|j)<q_iP_i(X)C(j|i)\Rightarrow \frac{P_i(X)}{P_j(X)}>\frac{q_j\,C(i|j)}{q_i\,C(j|i)}, \end{aligned}$$
(iii)

for j = 1, …, k, j ≠ i, so that (iii), allowing for equality, corresponds to the following minimum expected cost classification rule: if

$$\displaystyle \begin{aligned} \frac{P_i(X)}{P_j(X)}\ge \frac{q_j\,C(i|j)}{q_i\,C(j|i)},\ j=1,\ldots,k,\ j\ne i,{} \end{aligned} $$
(12.7.2)

we classify X into π i or equivalently, X ∈ A i, which is the decision rule A = (A 1, …, A k). Thus, the decision rule B in (iii) is identical to A. Observing that (12.7.2) reduces to (12.7.1) when C(i|j) = C(j|i), the decision rule A = (A 1, …, A k) in (12.7.1) is seen to yield the maximum probability of assigning an observation X at hand to π i compared to the probability of assigning X to any other π j, j = 1, …, k, j ≠ i, when the costs of misclassification are equal. As well, it follows from (12.7.2) that the decision rule A = (A 1, …, A k) gives the minimum expected cost associated with assigning the observation X at hand to π i compared to assigning X to any other population π j, j = 1, …, k, j ≠ i.
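A direct implementation of the rule (12.7.2), assuming Python with numpy and scipy, may be sketched as follows; the function name, mean vectors, common covariance matrix, prior probabilities and costs in the usage lines are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_k_groups(X, means, Sigma, priors, cost):
    """Rule (12.7.2): assign X to pi_i if P_i(X)/P_j(X) >= q_j C(i|j) / (q_i C(j|i))
    for every j != i; cost[i][j] plays the role of C(i|j) (zero on the diagonal).
    Returns the population label 1,...,k, or None if no region contains X."""
    k = len(means)
    dens = [multivariate_normal(mean=m, cov=Sigma).pdf(X) for m in means]
    for i in range(k):
        if all(dens[i] / dens[j] >= priors[j] * cost[i][j] / (priors[i] * cost[j][i])
               for j in range(k) if j != i):
            return i + 1
    return None

# hypothetical example: three bivariate normal populations, equal priors and costs
means = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 2.0])]
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
priors = [1 / 3, 1 / 3, 1 / 3]
cost = [[0.0 if i == j else 1.0 for j in range(3)] for i in range(3)]
print(classify_k_groups(np.array([1.5, 0.2]), means, Sigma, priors, cost))
```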

12.7.1. Classification when the populations are real Gaussian

Let the populations be p-variate real normal, that is, π j ∼ N p(μ (j), Σ), Σ > O, j = 1, …, k, with different mean value vectors but the same covariance matrix Σ > O. Let the density of π j be denoted by P j(X) ≃ N p(μ (j), Σ), Σ > O. A vector X at hand is to be assigned to one of the π i’s, i = 1, …, k. In Sect. 12.3 or Example 12.3.3, the decision rule involves two populations. Letting the two populations be π i : P i(X) and π j : P j(X) for specific i and j, it was determined that the decision rule consists of classifying X into π i if \(\ln \frac {P_i(X)}{P_j(X)}\ge \ln \rho , \) \(\rho =\frac {q_jC(i|j)}{q_iC(j|i)},\) with ρ = 1 so that \(\ln \rho =0\) whenever C(i|j) = C(j|i) and q i = q j. When \(\ln \rho =0\), we have seen that the decision rule is to classify the p-vector X into π i or P i(X) if u ij(X) ≥ 0 and to assign X to P j(X) or π j if u ij(X) < 0, where

$$\displaystyle \begin{aligned} u_{ij}(X)&=(\mu^{(i)}-\mu^{(j)})'\varSigma^{-1}X-\tfrac{1}{2}(\mu^{(i)}-\mu^{(j)})'\varSigma^{-1}(\mu^{(i)}+\mu^{(j)}) \\ &=[X-\tfrac{1}{2}(\mu^{(i)}+\mu^{(j)})]'\varSigma^{-1}(\mu^{(i)}-\mu^{(j)}).\end{aligned} $$
(iv)

Now, on applying the result obtained in (iv) to (12.7.1) and (12.7.2), one arrives at the following decision rule:

$$\displaystyle \begin{aligned} A_i: u_{ij}(X)\ge 0 \mbox{ or }A_i:u_{ij}(X)\ge \ln k_{ij}, \ k_{ij}=\frac{q_j\,C(i|j)}{q_i\,C(j|i)},\ j=1,\ldots,k,\ j\ne i,{} \end{aligned} $$
(12.7.3)

with \(\ln k_{ij}=0\) occurring when q i = q j and C(i|j) = C(j|i).

Note 12.7.1

What will interchanging i and j in u ij(X) entail? Note that, as defined, u ij(X) involves the terms (μ (i) − μ (j)) = −(μ (j) − μ (i)) and (μ (i) + μ (j)), the latter being unaffected by the interchange of μ (i) and μ (j). Hence, for all i and j,

$$\displaystyle \begin{aligned} u_{ij}(X)=-u_{ji}(X),\ i\ne j.{} \end{aligned} $$
(12.7.4)

When the underlying population is X ∼ N p(μ (i), Σ), \(E[u_{ij}(X)|\pi _i]=\frac {1}{2}\varDelta _{ij}^2\), which implies that \(E[u_{ji}|\pi _i]=-\frac {1}{2}\varDelta _{ij}^2=-E[u_{ij}(X)|\pi _i]\) where \(\varDelta _{ij}^2=(\mu ^{(i)}-\mu ^{(j)})'\varSigma ^{-1}(\mu ^{(i)}-\mu ^{(j)})\).

Note 12.7.2

For computing the probabilities of correctly classifying and misclassifying an observed vector, certain assumptions regarding the distributions associated with the populations π j, j = 1, …, k, are needed, the normality assumption being the most convenient one.

Example 12.7.1

A certain milk collection and distribution center collects and sells the milk supplied by local farmers to the community, the balance, if any, being dispatched to a nearby city. In that locality, there are three dairy cattle breeds, namely, Jersey, Holstein and Guernsey, and each farmer only keeps one type of cows. Samples are taken and the following characteristics are evaluated in grams per liter: x 1, the fat content, x 2, the glucose content, and x 3, the protein content. It has been determined that X′ = (x 1, x 2, x 3) is normally distributed as X ∼ N 3(μ (1), Σ) for Jersey cows, X ∼ N 3(μ (2), Σ) for Holstein cows and X ∼ N 3(μ (3), Σ) for Guernsey cows, with a common covariance matrix Σ > O, where

(1): A farmer brought in his supply of milk from which one liter was collected. The three variables were evaluated, the result being \(X_0^{\prime }=(2,3,4)\). (2): Another one liter sample was taken from a second farmer’s supply and it was determined that the vector of the resulting measurements was \(X_1^{\prime }=(2,2,2)\). No prior probabilities or costs are involved. Which breed of dairy cattle is each of these farmers likely to own?

Solution 12.7.1

Our criterion is based on u ij(X) where

$$\displaystyle \begin{aligned} u_{ij}(X)=(\mu^{(i)}-\mu^{(j)})'\varSigma^{-1}X-\frac{1}{2}(\mu^{(i)}-\mu^{(j)})'\varSigma^{-1}(\mu^{(i)}+\mu^{(j)}). \end{aligned}$$

Let us evaluate the various quantities of interest:

$$\displaystyle \begin{aligned} (\mu^{(1)}-\mu^{(2)})'\varSigma^{-1}X&=(1,0,-1)\varSigma^{-1}X=(\tfrac{1}{3},-1,-2)X=\tfrac{1}{3}x_1-x_2-2x_3\\ (\mu^{(1)}-\mu^{(3)})'\varSigma^{-1}X&=(0,0,-2)\varSigma^{-1}X=(0,-2,-4)X=-2x_2-4x_3\\ (\mu^{(2)}-\mu^{(3)})'\varSigma^{-1}X&=(-1,0,-1)\varSigma^{-1}X=(-\tfrac{1}{3},-1,-2)X=-\tfrac{1}{3}x_1-x_2-2x_3;\end{aligned} $$

Hence,

$$\displaystyle \begin{aligned} \ \ \ u_{12}(X)=\tfrac{1}{3}x_1-x_2-2x_3+\tfrac{11}{2};~~ u_{13}(X)=-2x_2-4x_3+14; \end{aligned}$$
$$\displaystyle \begin{aligned} \ \ \ \ \ \ \ \ \ \ \ \ u_{21}(X)=-\tfrac{1}{3}x_1+x_2+2x_3-\tfrac{11}{2};~~ u_{23}(X)= -\tfrac{1}{3}x_1-x_2-2x_3+\tfrac{17}{2}; \end{aligned}$$
$$\displaystyle \begin{aligned} u_{31}(X)=2x_2+4x_3-14;~~ u_{32}(X)=\tfrac{1}{3}x_1+x_2+2x_3-\tfrac{17}{2}. \end{aligned}$$

In order to answer (1), we substitute X 0 for X and first evaluate u 12(X 0) and u 13(X 0) to determine whether they are ≥ 0. Since \(u_{12}(X_0)=\tfrac {1}{3}(2)-(3)-2(4)+\tfrac {11}{2}<0\), the condition is violated and hence we need not check whether u 13(X 0) ≥ 0. Thus, X 0 is not in A 1. Now, consider \(u_{21}(X_0)=-\tfrac {1}{3}(2)+3+2(4)-\tfrac {11}{2}>0\) and \(u_{23}(X_0)=-\tfrac {1}{3}(2)-(3)-2(4)+\tfrac {17}{2}<0\); again the condition is violated and we deduce that X 0 is not in A 2. Finally, we verify A 3: u 31(X 0) = 2(3) + 4(4) − 14 = 8 > 0 and \(u_{32}(X_0)=\tfrac {1}{3}(2)+(3)+2(4)-\tfrac {17}{2}>0\). Thus, X 0 ∈ A 3, that is, we conclude that the sample milk came from Guernsey cows.

For answering (2), we substitute X 1 for X in u ij(X). Noting that \(u_{12}(X_1)=\tfrac {1}{3}(2)-(2)-2(2)+\tfrac {11}{2}>0\) and u 13(X 1) = −2(2) − 4(2) + 14 > 0, we can surmise that X 1 ∈ A 1, that is, the sample milk came from Jersey cows. Let us verify A 2 and A 3 to ascertain that no mistake has been made in the calculations. Since u 21(X 1) < 0, X 1 is not in A 2, and since u 31(X 1) < 0, X 1 is not in A 3. This completes the computations.
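Since the six discriminants u ij(X) obtained above are simple linear functions, the verification of the regions A 1, A 2 and A 3 can be automated; a short sketch in Python with numpy, using the coefficients and constants computed in this solution, is given below (the function name is illustrative).

```python
import numpy as np

# u_ij(X) = coeff . X + const, as computed in Solution 12.7.1
u = {(1, 2): (np.array([ 1/3, -1.0, -2.0]),  11/2),
     (1, 3): (np.array([ 0.0, -2.0, -4.0]),  14.0),
     (2, 1): (np.array([-1/3,  1.0,  2.0]), -11/2),
     (2, 3): (np.array([-1/3, -1.0, -2.0]),  17/2),
     (3, 1): (np.array([ 0.0,  2.0,  4.0]), -14.0),
     (3, 2): (np.array([ 1/3,  1.0,  2.0]), -17/2)}

def assign(X):
    """Return the label i for which u_ij(X) >= 0 for every j != i (rule (12.7.3)
    with equal priors and costs), or None if no region contains X."""
    for i in (1, 2, 3):
        if all(coeff @ X + const >= 0
               for (a, _), (coeff, const) in u.items() if a == i):
            return i
    return None

print(assign(np.array([2.0, 3.0, 4.0])))   # X_0: region A_3 (Guernsey)
print(assign(np.array([2.0, 2.0, 2.0])))   # X_1: region A_1 (Jersey)
```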

12.7.2. Some distributional aspects

For computing the probabilities of correctly classifying and misclassifying an observation, we require the distributions of our criterion u ij(X). Let the populations be normally distributed, that is, π j ∼ N p(μ (j), Σ), Σ > O, with the same covariance matrix Σ for all k populations, j = 1, …, k. Then, the probability of achieving a correct classification when X is assigned to π i is the following under the decision rule A = (A 1, …, A k):

$$\displaystyle \begin{aligned} P(i|i,A)=\int_{A_i}P_i(X)\mathrm{d}X{} \end{aligned} $$
(12.7.5)

where dX = dx 1 ∧… ∧dx p and the integral is actually a multiple integral. But A i is defined by the inequalities u i1(X) ≥ 0, u i2(X) ≥ 0, …, u ik(X) ≥ 0, where u ii(X) is excluded. This is the case when no prior probabilities and costs are involved or when the prior probabilities are equal and the cost functions are identical. Otherwise, the region is \(\{A_i: u_{ij}(X)\ge \ln k_{ij},\ k_{ij}=\tfrac {q_jC(i|j)}{q_iC(j|i)},\ j=1,\ldots ,k,\ j\ne i\}\). Integrating (12.7.5) is challenging as the region is determined by k − 1 inequalities.

When the parameters μ (j), j = 1, …, k, and Σ are known, we can evaluate the joint distributions of u ij(X), j = 1, …, k, j ≠ i, under the normality assumption for π j, j = 1, …, k. Let us examine the distributions of u ij(X) for normally distributed π i : P i(X), i = 1, …, k. In this instance, E[X]|π i = μ (i), and under π i,

$$\displaystyle \begin{aligned} E[u_{ij}(X)]|\pi_i&=(\mu^{(i)}-\mu^{(j)})'\varSigma^{-1}\mu^{(i)}-\tfrac{1}{2}(\mu^{(i)}-\mu^{(j)})'\varSigma^{-1}(\mu^{(i)}+\mu^{(j)})\\ &=\tfrac{1}{2}(\mu^{(i)}-\mu^{(j)})'\varSigma^{-1}(\mu^{(i)}-\mu^{(j)})=\tfrac{1}{2}\varDelta_{ij}^2;\\ \mathrm{Var}(u_{ij}(X))|\pi_i&=\mathrm{Var}[(\mu^{(i)}-\mu^{(j)})'\varSigma^{-1}X]=\varDelta_{ij}^2.\end{aligned} $$

Since u ij(X) is a linear function of the vector normal variable X, it is normal and the distribution of u ij(X)|π i is

$$\displaystyle \begin{aligned} u_{ij}(X)\sim N_1(\tfrac{1}{2}\varDelta_{ij}^2,\varDelta_{ij}^2), \ j=1,\ldots,k,\ j\ne i.{} \end{aligned} $$
(12.7.6)

This normality holds for each j, j = 1, …, k, j ≠ i, and for a fixed i. Then, we can evaluate the joint density of u i1(X), u i2(X), …, u ik(X), excluding u ii(X), and we can evaluate P(i|i, A) from this joint density. Observe that for j = 1, …, k, j ≠ i, the u ij(X)’s are linear functions of the same vector normal variable X and hence, they have a joint normal distribution. In that case, the mean value vector is a (k − 1)-vector, denoted by μ (ii), whose elements are \(\tfrac {1}{2}\varDelta _{ij}^2,\ j=1,\ldots ,k,\ j\ne i,\) for a fixed i, or equivalently,

$$\displaystyle \begin{aligned} \mu_{(ii)}^{\prime}=[\tfrac{1}{2}\varDelta_{i1}^2,\ldots,\tfrac{1}{2}\varDelta_{ik}^2]=E[U_{ii}^{\prime}]\ \ \mathrm{with} \ \ U_{ii}^{\prime}=[u_{i1}(X),\ldots,u_{ik}(X)], \end{aligned}$$

excluding the elements u ii(X) and \(\varDelta _{ii}^2=0\). The subscript ii in U ii indicates the region A i and the original population P i(X). The covariance matrix of U ii, denoted by Σ ii, will be a (k − 1) × (k − 1) matrix of the form Σ ii = [Cov(u ir, u it)] = (c rt), c rt = Cov(u ir(X), u it(X)). The subscript ii in Σ ii indicates the region A i and the original population P i(X). Observe that for two linear functions t 1 = C′X = c 1 x 1 + ⋯ + c p x p and t 2 = B′X = b 1 x 1 + ⋯ + b p x p, having a common covariance matrix Cov(X) = Σ, we have Var(t 1) = C′ΣC, Var(t 2) = B′ΣB and Cov(t 1, t 2) = C′ΣB = B′ΣC. Therefore,

$$\displaystyle \begin{aligned} c_{rt}=(\mu^{(i)}-\mu^{(r)})'\varSigma^{-1}(\mu^{(i)}-\mu^{(t)}),\ i\ne r,t;\ \varSigma_{ii}=(c_{rt}). \end{aligned}$$

Let the vector U ii be such that \(U_{ii}^{\prime }=(u_{i1}(X),\ldots ,u_{ik}(X))\), excluding u ii(X). Thus, for a specific i,

$$\displaystyle \begin{aligned} U_{ii}\sim N_{k-1}(\mu_{(ii)},\varSigma_{ii}),\ \varSigma_{ii}>O, \end{aligned}$$

and its density function, denoted by g ii(U ii), is

$$\displaystyle \begin{aligned} g_{ii}(U_{ii})=\frac{1}{(2\pi)^{\tfrac{k-1}{2}}|\varSigma_{ii}|{}^{\tfrac{1}{2}}}\mathrm{e}^{{-\frac{1}{2}}(U_{ii}-\mu_{(ii)})'\varSigma_{ii}^{-1}(U_{ii}-\mu_{(ii)})}. \end{aligned}$$

Then,

$$\displaystyle \begin{aligned} P(i|i,A)&=\int_{u_{ij}(X)\ge 0,\ j=1,\ldots,k,\ j\ne i}g_{ii}(U_{ii})\mathrm{d}U_{ii}\\ &=\int_{u_{i1}(X)=0}^{\infty}\cdots\int_{u_{ik}(X)=0}^{\infty}g_{ii}(U_{ii})\mathrm{d}u_{i1}(X)\wedge...\wedge\mathrm{d}u_{ik}(X), {}\end{aligned} $$
(12.7.7)

the differential du ii being absent from dU ii, which is also the case for u ii(X) ≥ 0 in the integral. If prior probabilities and cost functions are involved, then replace u ij(X) ≥ 0 in the integral (12.7.7) by \(u_{ij}(X)\ge \ln k_{ij}, \ k_{ij}=\frac {q_jC(i|j)}{q_iC(j|i)}\). Thus, the problem reduces to determining the joint density g ii(U ii) and then evaluating the multiple integrals appearing in (12.7.7). In order to compute the probability specified in (12.7.7), we standardize the normal density by letting \(V_{ii}=\varSigma _{ii}^{-\frac {1}{2}}(U_{ii}-\mu _{(ii)})\) where V ii ∼ N k−1(O, I), and with the help of this standard normal, we may compute this probability through V ii. Note that (12.7.7) holds for each i, i = 1, …, k, and thus, the probabilities of achieving a correct classification, P(i|i, A) for i = 1, …, k, are available from (12.7.7).

For computing probabilities of misclassification of the type P(i|j, A), we can proceed as follows: In this context, the basic population is \(\pi _j:P_j(X)\simeq N_p(\mu ^{(j)},\varSigma )\), under which \(u_{ij}(X)\sim N_1(-\frac {1}{2}\varDelta _{ij}^2,\varDelta _{ij}^2)\), the region of integration being A i : {u i1(X) ≥ 0, …, u ik(X) ≥ 0}, excluding the element u ii(X) ≥ 0. Consider the vector U ij corresponding to the vector U ii. In U ij, i stands for the region A i and j, for the original population P j(X). The elements of U ij are the same as those of U ii, that is, \(U_{ij}^{\prime }=(u_{i1}(X),\ldots ,u_{ik}(X))\), excluding u ii(X). We then proceed as before and compute the mean value vector μ (ij) and the covariance matrix Σ ij of U ij in the original population P j(X). Since the u im(X)’s, m = 1, …, k, m ≠ i, are linear functions of X with fixed coefficient vectors, their variances and covariances remain those obtained earlier, that is, Σ ij = Σ ii; however, the mean values E[u im(X)|π j] will be different since they depend on μ (j). Thus, U ij ∼ N k−1(μ (ij), Σ ij), and on standardizing, one has V ij ∼ N k−1(O, I), so that the required probability P(i|j, A) can be computed from the elements of V ij. Note that when the prior probabilities and costs are equal,

$$\displaystyle \begin{aligned} P(i|j,A)&=\int_{u_{i1}(X)\ge 0,\ldots,u_{ik}(X)\ge 0}g_{ij}(U_{ij})\,\mathrm{d}u_{i1}(X)\wedge\ldots\wedge\mathrm{d}u_{ik}(X)\\ &=\int_{u_{i1}(X)=0}^{\infty}\cdots\int_{u_{ik}(X)=0}^{\infty}g_{ij}(U_{ij})\,\mathrm{d}U_{ij},{}\end{aligned} $$
(12.7.8)

excluding u ii(X) in the integral as well as the differential du ii(X). Thus, dU ij = du i1(X) ∧… ∧du ik(X), excluding du ii(X).
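When the multiple integrals in (12.7.7) and (12.7.8) are inconvenient to evaluate analytically, they may be approximated by simulation. The following Monte Carlo sketch, assuming Python with numpy, estimates P(i|j, A) for equal prior probabilities and costs; the function name and the parameter values in the usage lines are hypothetical.

```python
import numpy as np

def classification_prob(i, j, means, Sigma, n_draws=200_000, seed=0):
    """Monte Carlo approximation of P(i|j, A) = Pr{u_{i1}(X) >= 0, ..., u_{ik}(X) >= 0}
    (u_{ii} excluded) when X comes from pi_j ~ N_p(mu^{(j)}, Sigma); this is the
    integral (12.7.7) when j = i and (12.7.8) when j != i."""
    rng = np.random.default_rng(seed)
    k, Sinv = len(means), np.linalg.inv(Sigma)
    X = rng.multivariate_normal(means[j - 1], Sigma, size=n_draws)   # draws from pi_j
    inside = np.ones(n_draws, dtype=bool)
    for m in range(1, k + 1):
        if m == i:
            continue
        a = Sinv @ (means[i - 1] - means[m - 1])          # coefficient vector of u_{im}
        c = -0.5 * (means[i - 1] - means[m - 1]) @ Sinv @ (means[i - 1] + means[m - 1])
        inside &= (X @ a + c >= 0)                        # u_{im}(X) >= 0
    return inside.mean()

# hypothetical parameters: three bivariate populations with a common Sigma
means = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 2.0])]
Sigma = np.eye(2)
print(classification_prob(1, 1, means, Sigma))   # P(1|1, A): correct classification
print(classification_prob(1, 2, means, Sigma))   # P(1|2, A): misclassification
```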

Example 12.7.2

Given the data provided in Example 12.7.1, what is the probability of correctly assigning X to π 1? That is, compute the probability P(1|1, A).

Solution 12.7.2

Observe that the joint density of u 12(X) and u 13(X) is that of a bivariate normal distribution since u 12(X) and u 13(X) are linear functions of the same vector X where X has a multivariate normal distribution. In order to compute the joint bivariate normal density, we need E[u 1j(X)], Var(u 1j(X)), j = 2, 3 and Cov(u 12(X), u 13(X)). The following quantities are evaluated from the data given in Example 12.7.1:

Hence, the covariance matrix of U 11, denoted by Σ 11, is the following:

where

The bivariate normal density of U 11 is the following:

(12.7.9)

with Σ 11 and \(\varSigma _{11}^{-1}=B'B\) as previously specified. Letting Y = B(U 11 − E[U 11]), Y ∼ N 2(O, I). Note that

Then,

and we have

which yields \(u_{12}(X)=\frac {7}{6}+\frac {1}{\sqrt {3}}\,y_1+\sqrt {2}\,y_2\) and \(u_{13}(X)=4+2\sqrt {2}\,y_2\). The intersection of the two lines corresponding to u 12(X) = 0 and u 13(X) = 0 is the point \((y_1,y_2)=(\sqrt {3}(\frac {5}{6}),-\sqrt {2})\). Thus, u 12(X) ≥ 0 and u 13(X) ≥ 0 give \(y_2\ge -\frac {4}{2\sqrt {2}}=-\sqrt {2}\) and \(\frac {7}{6}+\frac {1}{\sqrt {3}}\,y_1+\sqrt {2}\,y_2\ge 0\). We can express the resulting probability as ρ 1 − ρ 2 where

$$\displaystyle \begin{aligned} \rho_1=\int_{y_2=-\sqrt{2}}^{\infty}\int_{y_1=-\infty}^{\infty}\tfrac{1}{2\pi}\mathrm{e}^{-\frac{1}{2}(y_1^2+y_2^2)}\mathrm{d}y_1\wedge\mathrm{d}y_2=1-\varPhi(-\sqrt{2}),{} \end{aligned} $$
(12.7.10)

which is explicitly available, where Φ(⋅) denotes the distribution function of a standard normal variable, and

$$\displaystyle \begin{aligned} \rho_2&=\int_{y_1=-\infty}^{\sqrt{3}(\tfrac{5}{6})}\int_{y_2=-\sqrt{2}}^{-\tfrac{1}{\sqrt{2}}(\tfrac{7}{6}+\tfrac{1}{\sqrt{3}}y_1)}\tfrac{1}{2\pi}\mathrm{e}^{-\tfrac{1}{2}(y_1^2+y_2^2)}\mathrm{d}y_1\wedge\mathrm{d}y_2\\ &=\int_{y_1=-\infty}^{\sqrt{3}(\tfrac{5}{6})}\tfrac{1}{\sqrt{(2\pi)}}\mathrm{e}^{-\tfrac{1}{2}y_1^2}[\varPhi(-\tfrac{1}{\sqrt{2}}(\tfrac{7}{6}+\tfrac{1}{\sqrt{3}}y_1))-\varPhi(-\sqrt{2})]\mathrm{d}y_1\\ &=\int_{y_1=-\infty}^{\sqrt{3}(\tfrac{5}{6})}\tfrac{1}{\sqrt{(2\pi)}}\mathrm{e}^{-\tfrac{1}{2}y_1^2}\varPhi(-\tfrac{1}{\sqrt{2}}(\tfrac{7}{6}+\tfrac{1}{\sqrt{3}}y_1))\mathrm{d}y_1-\varPhi(\sqrt{3}(\tfrac{5}{6}))\varPhi(-\sqrt{2}).{} \end{aligned} $$
(12.7.11)

Therefore, the required probability is

$$\displaystyle \begin{aligned} \rho_1-\rho_2&=1-\varPhi(-\sqrt{2})+\varPhi(-\sqrt{2})\varPhi(\sqrt{3}(\tfrac{5}{6}))\\ &\ \ \ \ -\int_{y_1=-\infty}^{\sqrt{3}(\tfrac{5}{6})}\tfrac{1}{\sqrt{(2\pi)}}\mathrm{e}^{-\tfrac{1}{2}y_1^2}\,\varPhi(-\tfrac{1}{\sqrt{2}}(\tfrac{7}{6}+\tfrac{1}{\sqrt{3}}y_1))\, \mathrm{d}y_1.{} \end{aligned} $$
(12.7.12)

Note that all quantities, except the integral, are explicitly available from standard normal tables. The integral part can be read from a bivariate normal table. If a bivariate normal table is used, then one can approximate the required probability from (12.7.9). Alternatively, once evaluated numerically, the integral is found to be equal to 0.2182 which, subtracted from 0.9941, yields a probability of 0.7759 for P(1|1, A).
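The numerical values quoted above can be verified by evaluating (12.7.12) directly; a short sketch using Python with scipy is the following.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# P(1|1, A) of (12.7.12): 1 - Phi(-sqrt(2)) + Phi(-sqrt(2)) Phi(sqrt(3)*5/6) - integral
a = np.sqrt(3) * (5 / 6)          # upper limit of the y_1 integral

def integrand(y1):
    return norm.pdf(y1) * norm.cdf(-(1 / np.sqrt(2)) * (7 / 6 + y1 / np.sqrt(3)))

integral, _ = quad(integrand, -np.inf, a)
p11 = 1 - norm.cdf(-np.sqrt(2)) + norm.cdf(-np.sqrt(2)) * norm.cdf(a) - integral
print(round(integral, 4), round(p11, 4))   # approximately 0.2182 and 0.7759
```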

12.7.3. Classification when the population parameters are unknown

When training samples are available from the populations π i, i = 1, …, k, we can estimate the parameters and proceed with the classification. Let \(X_j^{(i)},j=1,\ldots ,n_i,\) be a simple random sample of size n i from the i-th population π i. Then, the sample average is \(\bar {X}^{(i)}=\tfrac {1}{n_i}\sum _{j=1}^{n_i}X_j^{(i)}\), and with our usual notations, the sample matrix, the matrix of sample means and sample sum of products matrix are the following:

$$\displaystyle \begin{aligned} {\mathbf{X}}^{\mathbf{(i)}}&=[X_1^{(i)},\ldots,X_{n_i}^{(i)}],\ \bar{\mathbf{X}}^{\mathbf{(i)}}=[\bar{X}^{(i)},\ldots,\bar{X}^{(i)}],\\ S_i&=[{\mathbf{X}}^{\mathbf{(i)}}-\bar{\mathbf{X}}^{\mathbf{(i)}}][{\mathbf{X}}^{\mathbf{(i)}} - \bar{\mathbf{X}}^{\mathbf{(i)}}]',i=1,\ldots,k,\end{aligned} $$

where

Note that X (i) and \(\bar {\mathbf {X}}^{\mathbf {(i)}}\) are p × n i matrices and \(X_j^{(i)}\) is a p × 1 vector for each j = 1, …, n i, and i = 1, …, k. Let the population mean value vectors and the common covariance matrix be μ (1), …, μ (k), and Σ > O, respectively. Then, the unbiased estimators for these parameters are the following, identifying the estimators/estimates by a hat: \(\hat {\mu }^{(i)}=\bar {X}^{(i)},\ i=1,\ldots ,k, \) and \(\hat {\varSigma }=\frac {S}{n_1+\cdots +n_k-k},\ S=S_1+\cdots +S_k\). On replacing the population parameters by their unbiased estimators, the classification criteria u ij(X), j = 1, …, k, j ≠ i, become the following: Classify an observation vector X into π i if \(\hat {u}_{ij}(X)\ge \ln k_{ij}, \ k_{ij}=\frac {q_jC(i|j)}{q_iC(j|i)},\ j=1,\ldots ,k,\ j\ne i,\) or \(\hat {u}_{ij}(X)\ge 0,\ j=1,\ldots ,k,\ j\ne i\), if q 1 = ⋯ = q k and the C(i|j)’s are all equal, where

$$\displaystyle \begin{aligned} \hat{u}_{ij}(X)=(\bar{X}^{(i)}-\bar{X}^{(j)})'\hat{\varSigma}^{-1}X -\tfrac{1}{2}(\bar{X}^{(i)}-\bar{X}^{(j)})'\hat{\varSigma}^{-1}(\bar{X}^{(i)}+\bar{X}^{(j)}){} \end{aligned} $$
(12.7.13)

for j = 1, …, k, j ≠ i. Unfortunately, the exact distribution of \(\hat {u}_{ij}(X)\) is difficult to obtain even when the populations π i’s have p-variate normal distributions. However, when \(n_j\to \infty , \ \bar {X}^{(j)}\to \mu ^{(j)},\ j=1,\ldots ,k,\) and when n j → ∞, j = 1, …, k, \(\hat {\varSigma }\to \varSigma \). Then, asymptotically, that is, when n j → ∞, j = 1, …, k, \(\hat {u}_{ij}(X)\to u_{ij}(X),\) so that the theory discussed in the previous sections is applicable. As well, the classification probabilities can then be evaluated as illustrated in Example 12.7.2.

12.8. The Maximum Likelihood Method when the Population Covariances Are Equal

Consider k real normal populations π i : P i(X) ≃ N p(μ (i), Σ), Σ > O, i = 1, …, k, having the same covariance matrix but different mean value vectors μ (i), i = 1, …, k. A p-vector X at hand is to be classified into one of these populations π j, j = 1, …, k. Consider a simple random sample \(X_1^{(i)},X_2^{(i)},\ldots ,X_{n_i}^{(i)}\) of sizes n i from π i for i = 1, …, k. Employing our usual notations, the sample means, sample matrices, matrices of sample means and the sample sum of products matrices are as follows:

(12.8.1)

Then, the unbiased estimators of the population parameters, denoted with a hat, are

$$\displaystyle \begin{aligned} \hat{\mu}^{(i)}=\bar{X}^{(i)},\ i=1,\ldots,k,\ \ \mathrm{and}\ \ \hat{\varSigma}=\frac{S}{n_1+n_2+\cdots+n_k-k}.{} \end{aligned} $$
(12.8.2)

The null hypothesis can be taken as \(X_1^{(i)},\ldots ,X_{n_i}^{(i)}\) and X originating from π i and \(X_1^{(j)},\ldots ,X_{n_j}^{(j)}\) coming from π j, j = 1, …, k, j ≠ i, the alternative hypothesis being: X and \(X_1^{(j)},\ldots ,X_{n_j}^{(j)}\) coming from π j for j = 1, …, k, j ≠ i, and \(X_1^{(i)},\ldots ,X_{n_i}^{(i)}\) originating from π i. On proceeding as in Sect. 12.6, when the prior probabilities are equal and the cost functions are identical, the criterion for classification of the observed vector X to π i for a specific i is

$$\displaystyle \begin{aligned} A_i:&\Big(\frac{n_j}{n_j+1}\Big)^2(X-\bar{X}^{(j)})'\Big(\frac{S}{n_{(k)}}\Big)^{-1}(X-\bar{X}^{(j)})\\ &-\Big(\frac{n_i}{n_i+1}\Big)^2(X-\bar{X}^{(i)})'\Big(\frac{S}{n_{(k)}}\Big)^{-1}(X-\bar{X}^{(i)})\ge 0{} \end{aligned} $$
(12.8.3)

for j = 1, …, k, j ≠ i, where the decision rule is A = (A 1, …, A k), S = S (1) + ⋯ + S (k) and n (k) = n 1 + n 2 + ⋯ + n k − k. Note that (12.8.3) holds for each i, i = 1, …, k, and hence, A 1, …, A k are available from (12.8.3). Thus, the vector X at hand is classified into A i, that is, assigned to the population π i, if the inequalities in (12.8.3) are satisfied. This statement holds for each i, i = 1, …, k. The exact distribution of the criterion in (12.8.3) is difficult to establish but the probabilities of classification can be computed from the asymptotic theory discussed in Sect. 12.7 by observing the following:

When \(n_i\to \infty ,~ \bar {X}^{(i)}\to \mu ^{(i)},\ i=1,\ldots ,k,\) and when \(n_1\to \infty ,\ldots ,n_k\to \infty ,~ \hat {\varSigma }\to \varSigma \). Thus, asymptotically, when n i → ∞, i = 1, …, k, the criterion specified in (12.8.3) reduces to the criterion (12.7.3) of Sect. 12.7. Accordingly, when n i → ∞ or for very large n i’s, i = 1, …, k, one may utilize (12.7.3) for computing the probabilities of classification, as illustrated in Examples 12.7.1 and 12.7.2.

12.9. Maximum Likelihood Method and Unequal Covariance Matrices

The likelihood procedure can also provide a classification rule when the normal population covariance matrices are different. For example, let π 1 : P 1(X) ≃ N p(μ (1), Σ 1), Σ 1 > O, and π 2 : P 2(X) ≃ N p(μ (2), Σ 2), Σ 2 > O, where μ (1) ≠ μ (2) and Σ 1 ≠ Σ 2. Let a simple random sample \(X_1^{(1)},\ldots ,X_{n_1}^{(1)}\) of size n 1 from π 1 and a simple random sample \(X_1^{(2)},\ldots ,X_{n_2}^{(2)}\) of size n 2 from π 2 be available. Let \(\bar {X}^{(1)}\) and \(\bar {X}^{(2)}\) be the sample averages and S 1 and S 2 be the sample sum of products matrices, respectively. In classification problems, there is an additional vector X which comes from π 1 under the null hypothesis and from π 2 under the alternative. Then, the maximum likelihood estimators, denoted by a hat, will be the following:

$$\displaystyle \begin{aligned} \hat{\mu}^{(1)}=\bar{X}^{(1)}, \ \hat{\mu}^{(2)}=\bar{X}^{(2)}, \ \hat{\varSigma}_1=\frac{S_1}{n_1}\ \ \mathrm{and}\ \ \hat{\varSigma}_2=\frac{S_2}{n_2}, \end{aligned}$$
(i)

respectively, when no additional vector is involved. However, these estimators will change in the presence of the additional vector X, where X is the vector at hand to be assigned to π 1 or π 2. When X originates from π 1 or π 2, μ (1) and μ (2) are respectively estimated as follows:

$$\displaystyle \begin{aligned} \hat{\mu}_{*}^{(1)}=\frac{n_1\bar{X}_1+X}{n_1+1}\ \ \mathrm{and}\ \ \hat{\mu}_{*}^{(2)}=\frac{n_2\bar{X}_2+X}{n_2+1}, \end{aligned}$$
(ii)

and when X comes from π 1 or π 2, Σ 1 and Σ 2 are estimated by

$$\displaystyle \begin{aligned} \hat{\varSigma}_{1*}=\frac{S_1+S_3^{(1)}}{n_1+1}\ \ \mathrm{and}\ \ \hat{\varSigma}_{2*}=\frac{S_2+S_3^{(2)}}{n_2+1} \end{aligned}$$
(iii)

where

$$\displaystyle \begin{aligned} S_3^{(1)}&=(X-\hat{\mu}_{*}^{(1)})(X-\hat{\mu}_{*}^{(1)})'=\Big(\frac{n_1}{n_1+1}\Big)^2(X-\bar{X}_1)(X-\bar{X}_1)' \\ S_3^{(2)}&=(X-\hat{\mu}_{*}^{(2)})(X-\hat{\mu}_{*}^{(2)})'=\Big(\frac{n_2}{n_2+1}\Big)^2(X-\bar{X}_2)(X-\bar{X}_2)',\end{aligned} $$
(iv)

referring to the derivations provided in Sect. 12.6 when discussing maximum likelihood procedures. Thus, the null hypothesis can be X and \(X_1^{(1)},\ldots ,X_{n_1}^{(1)}\) are from π 1 and \(X_1^{(2)},\ldots ,X_{n_2}^{(2)}\) are from π 2, versus the alternative: X and \(X_1^{(2)},\ldots ,X_{n_2}^{(2)}\) being from π 2 and \(X_1^{(1)},\ldots ,X_{n_1}^{(1)}\), from π 1. Let L 0 and L 1 denote the likelihood functions under the null and alternative hypotheses, respectively. Observe that under the null hypothesis, Σ 1 is estimated by \(\hat {\varSigma }_{1*}\) of (iii) and Σ 2 is estimated by \(\hat {\varSigma }_2\) of (i), so that the likelihood ratio criterion λ is given by

$$\displaystyle \begin{aligned} \lambda=\frac{\max L_0}{\max L_1}=\frac{|\hat{\varSigma}_{2*}|{}^{\frac{n_2+1}{2}}|\hat{\varSigma}_1|{}^{\frac{n_1}{2}}} {|\hat{\varSigma}_{1*}|{}^{\frac{n_1+1}{2}}|\hat{\varSigma}_2|{}^{\frac{n_2}{2}}}.{} \end{aligned} $$
(12.9.1)

The determinants in (12.9.1) can be represented as follows, referring to the simplifications discussed in Sect. 12.6:

$$\displaystyle \begin{aligned} \lambda= \frac{(n_1+1)^{\frac{p(n_1+1)}{2}}}{(n_2+1)^{\frac{p(n_2+1)}{2}}} \frac{|S_2|{}^{\frac{n_2+1}{2}}}{|S_1|{}^{\frac{n_1+1}{2}}} \frac{[1+(\frac{n_2}{n_2+1})^2(X-\bar{X}_2)'S_2^{-1}(X-\bar{X}_2)]^{\frac{n_2+1}{2}}|\hat{\varSigma}_1|{}^{\frac{n_1}{2}}} {[1+(\frac{n_1}{n_1+1})^2(X-\bar{X}_1)'S_1^{-1}(X-\bar{X}_1)]^{\frac{n_1+1}{2}}|\hat{\varSigma}_2|{}^{\frac{n_2}{2}}}.{} \end{aligned} $$
(12.9.2)

The classification rule then consists of assigning the observed vector X to π 1 if λ ≥ 1 and, to π 2 if λ < 1. We could have expressed the criterion in terms of \(\lambda _1=\lambda ^{\frac {2}{n}}\) if n 1 = n 2 = n, which would have simplified the expressions appearing in (12.9.2).
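A computational sketch of the criterion (12.9.1)–(12.9.2), assuming Python with numpy, is given below; it works on the logarithmic scale to avoid overflow, takes the observations as the rows of the sample arrays, and requires n 1 > p and n 2 > p so that S 1 and S 2 are nonsingular. The function name and the usage lines are hypothetical.

```python
import numpy as np

def lambda_classify(X, sample1, sample2):
    """Evaluate log(lambda) for the likelihood ratio of (12.9.1)-(12.9.2) with
    unequal covariance matrices; assign X to pi_1 when lambda >= 1, else to pi_2."""
    n1, n2 = sample1.shape[0], sample2.shape[0]
    p = sample1.shape[1]
    xbar1, xbar2 = sample1.mean(axis=0), sample2.mean(axis=0)
    S1 = (sample1 - xbar1).T @ (sample1 - xbar1)
    S2 = (sample2 - xbar2).T @ (sample2 - xbar2)
    c1 = (n1 / (n1 + 1))**2 * (X - xbar1) @ np.linalg.solve(S1, X - xbar1)
    c2 = (n2 / (n2 + 1))**2 * (X - xbar2) @ np.linalg.solve(S2, X - xbar2)
    log_lam = (0.5 * p * (n1 + 1) * np.log(n1 + 1) - 0.5 * p * (n2 + 1) * np.log(n2 + 1)
               + 0.5 * (n2 + 1) * np.linalg.slogdet(S2)[1]
               - 0.5 * (n1 + 1) * np.linalg.slogdet(S1)[1]
               + 0.5 * (n2 + 1) * np.log1p(c2) - 0.5 * (n1 + 1) * np.log1p(c1)
               + 0.5 * n1 * np.linalg.slogdet(S1 / n1)[1]    # |hat{Sigma}_1|^{n1/2}
               - 0.5 * n2 * np.linalg.slogdet(S2 / n2)[1])   # |hat{Sigma}_2|^{n2/2}
    return (1 if log_lam >= 0 else 2), log_lam

# hypothetical usage
rng = np.random.default_rng(1)
s1 = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=40)
s2 = rng.multivariate_normal([2.0, 1.0], [[2.0, 0.5], [0.5, 1.0]], size=50)
print(lambda_classify(np.array([0.3, -0.2]), s1, s2))
```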