Bayesian prediction under a class of multivariate distributions

In this paper the prediction problem is studied under members of a class $\Im^{*}$ of multivariate distributions, constructed by AL-Hussaini and Ateya (Stat Pap 46:321–338, 2005; J Egypt Math Soc 14(1):45–54, 2006). Particular attention is given to the bivariate compound Rayleigh distribution, a member of this class, as an illustrative example.


Introduction
This section introduces a class $\Im$ of continuous distributions and its multivariate version $\Im^{*}$, describes the generation of a multivariate sample from $\Im^{*}$, and formulates the one- and two-sample prediction problems.
Suppose that a class $\Im$ of distribution functions is of the form
$$\Im = \Bigl\{F:\; F(x;\theta,\delta,\eta) = 1 - e^{-\theta\delta\lambda_{\eta}(x)},\; a < x < b\Bigr\},\tag{1.1}$$
where $a$ and $b$ are non-negative real numbers such that $a$ may assume the value zero and $b$ the value infinity, $\lambda_{\eta}(x)$ is a continuous, monotone increasing, and differentiable function of $x$ such that $\lambda_{\eta}(x)\to 0$ as $x\to a^{+}$ and $\lambda_{\eta}(x)\to\infty$ as $x\to b^{-}$, $\eta$ is a parameter (possibly a vector), and $(\theta,\delta,\eta)$ belongs to a parameter space $\Theta$. This class covers some important distributions such as the Weibull, exponential, Rayleigh, compound Weibull, compound exponential (Lomax), compound Rayleigh, Pareto, power function, beta, Gompertz and compound Gompertz distributions, among others. The failure rate and survival functions corresponding to $F\in\Im$ are, respectively, $\theta\delta\lambda'_{\eta}(x)$ and $e^{-\theta\delta\lambda_{\eta}(x)}$, so that the probability density function (pdf) is given, for $0\le a < x < b \le \infty$, by
$$f(x;\theta,\delta,\eta) = \theta\delta\,\lambda'_{\eta}(x)\,e^{-\theta\delta\lambda_{\eta}(x)}.\tag{1.2}$$
The class $\Im$ was used by AL-Hussaini and Osman [10], AL-Hussaini [4], Ahmad [1,2], Ahmad and Fawzy [3], AL-Hussaini and Ahmad [5,6], and Jafar et al. [13].
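To make the construction concrete, the following is a minimal sketch of a member of the class, assuming the Rayleigh case $\lambda_{\eta}(x) = x^{2}$ (the exponential and Weibull members arise from $\lambda_{\eta}(x) = x$ and $\lambda_{\eta}(x) = x^{c}$); the function names are illustrative and not taken from the source.

```python
import numpy as np

# A member of the class (1.1)-(1.2): the cdf is F(x) = 1 - exp(-theta*delta*lam(x)),
# the pdf is theta*delta*lam'(x)*exp(-theta*delta*lam(x)), and the failure
# rate is theta*delta*lam'(x).
def make_member(lam, dlam, theta=1.0, delta=1.0):
    cdf = lambda x: 1.0 - np.exp(-theta * delta * lam(x))
    pdf = lambda x: theta * delta * dlam(x) * np.exp(-theta * delta * lam(x))
    failure_rate = lambda x: theta * delta * dlam(x)
    return pdf, cdf, failure_rate

# Rayleigh member: lam(x) = x^2, lam'(x) = 2x.
pdf, cdf, failure_rate = make_member(lambda x: x**2, lambda x: 2.0 * x)
x = np.linspace(0.1, 3.0, 5)
print(cdf(x), pdf(x))
```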

A class of multivariate distributions
AL-Hussaini and Ateya [7,8] constructed a class of multivariate distributions by compounding members of the class $\Im$ with the gamma distribution. The resulting joint distribution functions $F^{*}$ form a class $\Im^{*}$, whose joint pdf $f_{\mathbf X}(\mathbf x)$ is given in (1.3) below. It was assumed that $\theta$ is a positive random variable following the Gamma$(\alpha,\beta)$ distribution with pdf
$$g(\theta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,\theta^{\alpha-1}e^{-\beta\theta},\qquad \theta>0\quad(\alpha,\beta>0).$$
The pdf $f_{\mathbf X}(\mathbf x)$ may then be obtained by writing
$$f_{\mathbf X}(\mathbf x) = \int_{0}^{\infty}\Bigl[\prod_{i=1}^{k} f(x_{i}\mid\theta)\Bigr]g(\theta)\,d\theta,\tag{1.3}$$
where $f(\cdot\mid\theta)$ is the pdf (1.2) of a member of $\Im$. Maximum likelihood and Bayes estimates of the parameters of members of the class $\Im^{*}$ were obtained by AL-Hussaini and Ateya [7,8], in particular when the underlying population distribution is bivariate compound Weibull or bivariate compound Gompertz.
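As an illustration of how a multivariate sample from $\Im^{*}$ can be generated, the following sketch draws $\theta$ from the gamma mixing distribution and then draws the $k$ components conditionally independently, assuming the Rayleigh kernel $\lambda_{\eta}(x)=x^{2}$ (so that with $k=2$ the output is bivariate compound Rayleigh); all names and parameter values are illustrative.

```python
import numpy as np

# Compounding step: theta ~ Gamma(alpha, rate beta), then each component is
# drawn from F(x | theta) = 1 - exp(-theta*delta*x**2) by inversion,
# x = sqrt(-log(u) / (theta * delta)) with u ~ Uniform(0, 1).
rng = np.random.default_rng(0)

def rvs_compound(alpha, beta, delta, k, size, rng):
    theta = rng.gamma(alpha, 1.0 / beta, size=size)  # rate beta <=> scale 1/beta
    u = rng.uniform(size=(size, k))
    return np.sqrt(-np.log(u) / (theta[:, None] * delta))

sample = rvs_compound(alpha=2.0, beta=1.5, delta=1.0, k=2, size=5, rng=rng)
print(sample)  # each row is one bivariate compound Rayleigh observation
```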

One-sample prediction
Let $x_{1} < x_{2} < \cdots < x_{r}$ be the informative type II censored sample, representing the first $r$ ordered lifetimes of a random sample of size $n$ drawn from a population with pdf $f_{X}(x)$, cumulative distribution function (cdf) $F_{X}(x)$ and reliability function (rf) $R(x)$. In the one-sample scheme, the Bayesian prediction intervals (BPIs) for the remaining $(n-r)$ unobserved future lifetimes are sought based on the first $r$ observed ordered lifetimes.
For the remaining $(n-r)$ components, let $y_{s} = x_{r+s}$, $1 \le s \le n-r$, denote the future lifetime of the $s$th component to fail. The conditional density function of $y_{s}$, given that the first $r$ components have already failed, is
$$g_{1}(y_{s}\mid\theta) = s\binom{n-r}{s}\,\frac{\bigl[F(y_{s}\mid\theta)-F(x_{r}\mid\theta)\bigr]^{s-1}\bigl[R(y_{s}\mid\theta)\bigr]^{n-r-s}\,f(y_{s}\mid\theta)}{\bigl[R(x_{r}\mid\theta)\bigr]^{n-r}},\qquad y_{s} > x_{r},$$
where $\theta$ is the vector of parameters. The predictive density function is given by
$$f^{*}(y_{s}\mid\mathbf x) = \int_{\Theta} g_{1}(y_{s}\mid\theta)\,\pi^{*}(\theta\mid\mathbf x)\,d\theta,$$
where $\pi^{*}(\theta\mid\mathbf x)$ is the posterior density function of $\theta$ given $\mathbf x = (x_{1},\ldots,x_{r})$.
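A minimal numerical sketch of the one-sample scheme follows, assuming an exponential member of the class, $F(x\mid\theta)=1-e^{-\theta x}$, with a conjugate Gamma prior on $\theta$ so that the predictive survival of the first future failure is available in closed form; the prior values, data, and function names are illustrative.

```python
import numpy as np
from scipy import optimize, stats

def predictive_survival(t, x, n, a0=2.0, b0=1.0):
    """P(Y_1 > t | data) for t > x[-1], averaged over the Gamma posterior."""
    r, xr = len(x), x[-1]
    a_post = a0 + r                          # posterior shape
    b_post = b0 + np.sum(x) + (n - r) * xr   # posterior rate
    # Given theta, Y_1 - xr ~ Exp((n - r)*theta) by lack of memory, so
    # P(Y_1 > t | theta) = exp(-(n - r)*theta*(t - xr)); the gamma mixture
    # integral has the closed form below.
    return (b_post / (b_post + (n - r) * (t - xr))) ** a_post

def one_sample_bpi(x, n, tau=0.05):
    """Equal-tail (1 - tau) BPI for the next failure after x[-1]."""
    xr = x[-1]
    L = optimize.brentq(lambda t: predictive_survival(t, x, n) - (1 - tau / 2), xr, xr + 1e3)
    U = optimize.brentq(lambda t: predictive_survival(t, x, n) - tau / 2, xr, xr + 1e3)
    return L, U

x_all = np.sort(stats.expon(scale=1.0).rvs(size=10, random_state=0))
x = x_all[:7]                                # first r = 7 of n = 10 ordered lifetimes
print(one_sample_bpi(x, n=10))
```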

Two-sample prediction
Let $x_{1} < x_{2} < \cdots < x_{r}$ and $z_{1} < z_{2} < \cdots < z_{m}$ represent an informative (type II censored) sample from a random sample of size $n$ and a future ordered sample of size $m$, respectively. It is assumed that the two samples are independent and drawn from a population with pdf $f_{X}(x)$, cdf $F_{X}(x)$ and rf $R(x)$.
Our aim is to obtain the BPIs for $z_{s}$, $s = 1, 2, \ldots, m$. The conditional density function of $z_{s}$, given the vector of parameters $\theta$, is
$$g_{2}(z_{s}\mid\theta) = s\binom{m}{s}\bigl[F(z_{s}\mid\theta)\bigr]^{s-1}\bigl[R(z_{s}\mid\theta)\bigr]^{m-s}\,f(z_{s}\mid\theta).\tag{1.9}$$
The predictive density function is given by
$$f^{*}(z_{s}\mid\mathbf x) = \int_{\Theta} g_{2}(z_{s}\mid\theta)\,\pi^{*}(\theta\mid\mathbf x)\,d\theta.\tag{1.10}$$
A $(1-\tau)$ BPI $(L, U)$ for $z_{s}$ is then obtained by solving, numerically, the two nonlinear equations
$$P(Z_{s} > L\mid\mathbf x) = \int_{L}^{\infty} f^{*}(z_{s}\mid\mathbf x)\,dz_{s} = 1-\frac{\tau}{2},\tag{1.11}$$
$$P(Z_{s} > U\mid\mathbf x) = \int_{U}^{\infty} f^{*}(z_{s}\mid\mathbf x)\,dz_{s} = \frac{\tau}{2}.\tag{1.12}$$
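The following sketch illustrates solving (1.11) and (1.12) numerically, again assuming an exponential member of the class with a conjugate Gamma prior; here the conditional survival of the $s$th future order statistic is averaged over the posterior by quadrature. Names and prior values are illustrative.

```python
import numpy as np
from scipy import integrate, optimize, stats, special

def post(x, n, a0=2.0, b0=1.0):
    """Gamma posterior parameters for theta under type II censoring."""
    r, xr = len(x), x[-1]
    return a0 + r, b0 + np.sum(x) + (n - r) * xr

def predictive_survival(t, s, m, a_post, b_post):
    """P(Z_s > t | data): average P(Z_s > t | theta) over the posterior."""
    def integrand(th):
        F = 1.0 - np.exp(-th * t)
        # P(at most s-1 of the m future units fail before t)
        surv = sum(special.comb(m, j) * F**j * (1 - F)**(m - j) for j in range(s))
        return surv * stats.gamma.pdf(th, a_post, scale=1.0 / b_post)
    return integrate.quad(integrand, 0, np.inf)[0]

def two_sample_bpi(x, n, s, m, tau=0.05):
    a_post, b_post = post(x, n)
    f = lambda t, p: predictive_survival(t, s, m, a_post, b_post) - p
    L = optimize.brentq(f, 1e-9, 1e3, args=(1 - tau / 2,))  # solves (1.11)
    U = optimize.brentq(f, 1e-9, 1e3, args=(tau / 2,))      # solves (1.12)
    return L, U

x = np.sort(stats.expon(scale=1.0).rvs(size=10, random_state=1))[:7]
print(two_sample_bpi(x, n=10, s=1, m=5))  # BPI for the first future failure
```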

Bayesian prediction intervals for future bivariate observations
The main goal of this section is to study the one-sample and two-sample prediction problems in the case of bivariate informative observations. While ordering a set of univariate random variables is a clear and straightforward matter, no equally natural ordering exists for a set of random vectors.
Barnett [11] classified the approaches to ordering multivariate data into four principles: marginal, reduced (aggregate), partial and conditional (sequential) ordering. A detailed discussion of these principles, with illustrative examples, is given in Barnett's paper.
In our paper, we wish to predict bivariate random vectors. The first components of the predicted vectors are based on the ordered first components of the informative sample, exactly as in the univariate case. To predict the second components, we compute the norm of each vector in the informative sample, order these norms, and then predict the future norms as in the univariate case. The relation between the components of a vector and its norm then yields the second components of the predicted vectors: the second component of a predicted vector is obtained from the value of its first component and the norm of the vector, as sketched below. Ateya [9] used this point of view to obtain the BPIs of future observations from the bivariate truncated generalized Cauchy distribution.
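A small sketch of this norm-based recovery follows, assuming the Euclidean norm $z = \sqrt{x^{2}+y^{2}}$; the data and variable names are illustrative stand-ins.

```python
import numpy as np

sample = np.array([[1.2, 0.7], [0.4, 1.5], [2.1, 0.3]])   # toy bivariate data
x = np.sort(sample[:, 0])                                  # ordered first components
z = np.sort(np.linalg.norm(sample, axis=1))                # ordered norms

# Once a future first component x_s and a future norm z_s have been
# predicted, the second component follows from y_s = sqrt(z_s^2 - x_s^2),
# provided z_s >= x_s.
x_s, z_s = 1.0, 1.6                                        # stand-ins for predicted values
y_s = np.sqrt(z_s**2 - x_s**2)
print(y_s)
```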

One-sample prediction
Let $(x_{1}, y_{1}), \ldots, (x_{r}, y_{r})$ be the first $r$ bivariate informative observations from a random sample of size $n$ of bivariate observations. Suppose that the first components of these informative vectors are ordered, that is, $x_{1} < x_{2} < \cdots < x_{r}$, and that their norms are given by $z_{1}, z_{2}, \ldots, z_{r}$.

Two-sample prediction in the case of the bivariate compound Rayleigh (BVCR) distribution
In this case, we apply the steps of the preceding section as follows.

Step 1 Substituting from (3.4) and (3.6) in (1.9), and then using (3.8) and (3.9), we can write $g_{2}(z^{*}_{s}\mid c,\alpha)\,\pi^{*}(c,\alpha\mid z_{1:r},\ldots,z_{r:r})$ as a finite sum involving terms of the form $A^{**}B^{*}_{i,j,s,m}\,c^{\,n+r+c_{1}-l_{1}+k-j+1}$, where $A$ is a normalizing constant. It then follows that the predictive density function of $Z^{*}_{s}$ is given by the corresponding normalized expression.

Step 2 Using the pdf (3.2), its cdf and the same prior as in (3.8), the predictive density function of $X^{*}_{s}$ is obtained in the same way, where $A_{1}$ is a normalizing constant. To obtain a $(1-\tau)$ BPI for $X^{*}_{s}$, say $(L_{2s}, U_{2s})$, we solve numerically the two nonlinear equations $P(X^{*}_{s} > L_{2s}\mid \text{data}) = 1-\tau/2$ and $P(X^{*}_{s} > U_{2s}\mid \text{data}) = \tau/2$.

Step 3 From Steps 1 and 2, the second component of each predicted vector is recovered from its predicted first component and its predicted norm.
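To illustrate the numerical root finding in Step 2, the following sketch assumes the compound Rayleigh marginal survival $R(x\mid c,\alpha) = \bigl(c/(c+x^{2})\bigr)^{\alpha}$ (the standard form obtained by compounding a Rayleigh kernel with a gamma mixing distribution) and represents the posterior over $(c,\alpha)$ by a small set of stand-in draws; all names and values are illustrative, not the closed-form expressions of the source.

```python
import numpy as np
from scipy import optimize
from scipy.special import comb

def surv_order_stat(t, s, m, c, alpha):
    """P(X*_s > t | c, alpha) for the s-th order statistic of m future draws."""
    F = 1.0 - (c / (c + t**2))**alpha
    return sum(comb(m, j) * F**j * (1.0 - F)**(m - j) for j in range(s))

def predictive_survival(t, s, m, post_draws):
    """Average the conditional survival over (approximate) posterior draws."""
    return np.mean([surv_order_stat(t, s, m, c, a) for c, a in post_draws])

def bpi(s, m, post_draws, tau=0.05):
    """Equal-tail (1 - tau) BPI (L_2s, U_2s) for X*_s via root finding."""
    f = lambda t, p: predictive_survival(t, s, m, post_draws) - p
    L = optimize.brentq(f, 1e-9, 1e3, args=(1 - tau / 2,))
    U = optimize.brentq(f, 1e-9, 1e3, args=(tau / 2,))
    return L, U

post_draws = [(1.5, 2.0), (1.8, 2.2), (1.2, 1.9)]  # stand-in posterior draws of (c, alpha)
print(bpi(s=1, m=5, post_draws=post_draws))
```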

Numerical example
In this section we follow the steps of the preceding sections to compute BPIs for future observations from the BVCR distribution, and the resulting lower and upper bounds are reported in tables.