1 Introduction

An auxiliary variable may be defined as a variable whose information is known before collecting data. Auxiliary information is beneficial for finding more reliable and efficient estimators. Without spending more money on the survey, such a variable can provide the surveyor with extra information about the variable under study. The correlation between the auxiliary and study variables may be negative or positive. Auxiliary information has been widely used for estimating the finite population mean [1, 2, 4, 6, 12, 23].

Most of the time, auxiliary information is not readily available for every population unit. Two-phase sampling is often used to estimate the population parameters of the auxiliary variables from the first-phase sample [22]. Two-phase sampling is usually adopted when the collection of data on the variables of interest is very costly; it is a less time-consuming, more accurate and cost-effective approach for estimating the population mean. Estimation strategies for the current population mean in two-occasion successive sampling under a two-phase set-up have been suggested [9, 10, 25].

Information on an auxiliary variable may be available on the first (second) occasion. Auxiliary information on the first (second) occasion has been used for estimating the current population mean in two-occasion successive sampling [3, 16]. Further, generalized classes of estimators for the finite population mean in the presence of non-response have been suggested [5, 11, 15, 26].

Non-response occurs when those who respond and those who do not respond differ significantly from each other. Its main causes include the refusal of some people to take part in the survey, poor administration of the survey, failure of some participants to submit the survey due to forgetfulness, failure of the survey to reach all the targeted participants in the sample, and the greater tendency of some participants to answer compared with others. In the first case, some people decline to participate in the survey. This may happen because the researcher asks the participant for information that may cause embarrassment, or the information concerns activities that are not legal.

In the second case, poor administration of the survey may result in non-response. For instance, if the researcher uses a small survey for young adults or a smartphone survey for old adults, both cases will probably result in non-response from the intended population. Thirdly, some participants forget to submit the survey after completing it. The fourth reason is that not all members of the sample may receive the survey (questionnaire); for example, the survey email may end up in the spam folder of a cell phone or laptop. A further reason is that some people are more inclined to give answers than others. For instance, people who read all the time are more likely and more interested to answer questions about reading than those who do not read or read little. In the historical context, some researchers have observed, and even experienced, that members of the population with lower income are less likely to respond to surveys [7].

Similarly, another researcher has noted that unmarried males are another group who are less likely to respond. Non-response, which results from the contrast between those who respond and those who do not, introduces bias into the statistics and can make the results invalid. When non-response occurs, most surveys fail to obtain information on one or more study variables. Ahmad et al. [1], Sharma [13], Sharma and Singh [14], Singh and Khalid [17], Singh and Bandyopadhyay [18], Singh et al. [20], Singh et al. [21], Singh and Joarder [24] and Zaman and Kadilar [27] proposed estimators of population parameters under random non-response.

Non-response cannot be eliminated completely in practice, but it can be mitigated by making greater efforts to obtain the information. Successive (rotation) sampling is more susceptible to non-response because of its repetitive nature. Motivated by the above work, and considering the importance of addressing random non-response, some efficient estimators of the population mean in two-occasion successive sampling under a two-phase set-up are proposed.

2 Material and Methods

Consider a finite population of \(N\) distinct and identifiable units that is sampled on two occasions. The character under consideration is denoted by \(x\left( y \right)\) on the first (second) occasion, respectively. Random non-response is assumed to occur in the study variable on both occasions. Further, we introduce a stable auxiliary variable \(z\) with unknown population mean which is positively correlated with the study variable \(x\left( y \right)\) on the first (second) occasion. To estimate the population mean of the auxiliary variable \(z\) on the first occasion, a preliminary random sample \(S_{{n^{\prime} }}\) of size \(n^{\prime}\) units is drawn without replacement. From this first-phase (preliminary) sample, a second-phase sample of size \(n\) is drawn by simple random sampling and the information on the study variable \(x\) is collected. From the second-phase sample, a sub-sample of size \(m\) is retained (matched) at random from the responding units of the first occasion for use on the second (current) occasion; it is assumed that these matched units respond again on the second (current) occasion. On the second occasion, a preliminary (first-phase) sample of size \(u^{\prime}\) is drawn from the non-sampled units of the population, again to estimate the population mean of the auxiliary variable \(z\), and the information on the auxiliary variable is collected by simple random sampling without replacement. To gather the information on the study variable \(y\), a second-phase sample of size \(u = \left( {n - m} \right) = n\mu \left( {u < u^{\prime} } \right)\) is drawn from this first-phase sample by simple random sampling without replacement.

The notations which are used in the methodology are as follows:

Let \(\overline{x}_{m}\) and \(\overline{y}_{m}\) be the sample means of the matched portion on the first and second occasions, \(\overline{x}_{u}\) and \(\overline{y}_{u}\) be the sample means of the unmatched portion on the first and second occasions, \(\overline{x}^{*}_{m}\) and \(\overline{y}^{*}_{m}\) be the Hansen and Hurwitz [8] estimators for the matched portion on the first and second occasions, and \(\overline{x}^{*}_{u}\) and \(\overline{y}^{*}_{u}\) be the Hansen and Hurwitz [8] estimators for the unmatched portion on the first and second occasions.

Here, \(\overline{x}_{m} = \frac{1}{m}\sum\limits_{i = 1}^{m} {x_{i} }\), \(\overline{y}_{m} = \frac{1}{m}\sum\limits_{i = 1}^{m} {y_{i} }\), \(\overline{x}_{u} = \frac{1}{u}\sum\limits_{i = 1}^{u} {x_{i} }\), \(\overline{y}_{u} = \frac{1}{u}\sum\limits_{i = 1}^{u} {y_{i} }\), \(\overline{x}^{*}_{m} = \frac{{m_{1} \overline{x}_{{m_{1} }} + m_{2} \overline{x}_{{m_{{h_{2} }} }} }}{m}\), \(\overline{y}^{*}_{m} = \frac{{m_{1} \overline{y}_{{m_{1} }} + m_{2} \overline{y}_{{m_{{h_{2} }} }} }}{m}\), \(\overline{x}^{*}_{u} = \frac{{u_{1} \overline{x}_{{u_{1} }} + u_{2} \overline{x}_{{u_{{h_{2} }} }} }}{u}\) and \(\overline{y}^{*}_{u} = \frac{{u_{1} \overline{y}_{{u_{1} }} + u_{2} \overline{y}_{{u_{{h_{2} }} }} }}{u}\).
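
For illustration, the following sketch (a hypothetical numerical example, not taken from the paper) evaluates a Hansen and Hurwitz [8] type mean of the form given above, weighting the respondent mean and the mean of a sub-sample drawn from the non-respondents:

```python
import numpy as np

def hansen_hurwitz_mean(resp_values, nonresp_sub_values, n_resp, n_nonresp):
    # Weighted combination of the respondent mean and the mean of a sub-sample
    # of non-respondents, mirroring xbar*_m = (m1*xbar_m1 + m2*xbar_m2)/m.
    m = n_resp + n_nonresp
    return (n_resp * np.mean(resp_values) + n_nonresp * np.mean(nonresp_sub_values)) / m

# hypothetical matched-sample values on the first occasion
x_resp = np.array([12.0, 15.5, 14.2, 13.8])   # m1 = 4 responding units
x_sub = np.array([11.0, 16.3])                # sub-sample taken from the m2 = 3 non-respondents
xbar_star_m = hansen_hurwitz_mean(x_resp, x_sub, n_resp=4, n_nonresp=3)
```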

Since the estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\) (defined in Section 2.2) are modified exponential-type estimators, they are biased for the population mean \(\overline{Y}.\) The expressions for the bias and the mean square error of the suggested estimators are derived up to the first order of approximation under large-sample assumptions, using the following transformations:

Let \(\overline{y}_{u}^{*} = \overline{Y} \left( {1 + e_{1} } \right),\)\(\overline{z}_{u}^{*} = \overline{Z}\left( {1 + e_{2} } \right),\)\(\overline{z}_{{u^{\prime} }} = \overline{Z}\left( {1 + e_{3} } \right),\)\(\overline{y}_{m} = \overline{Y}\left( {1 + e_{4} } \right),\)\(\overline{x}_{n}^{*} = \overline{X}\left( {1 + e_{5} } \right),\)\(\overline{x}_{m} = \overline{X}\left( {1 + e_{6} } \right),\)\(\overline{z}_{{n^{\prime} }} = \overline{Z}\left( {1 + e_{7} } \right),\)\(\overline{z}_{n} = \overline{Z}\left( {1 + e_{8} } \right),\)\(\overline{y}_{u} = \overline{Y}\left( {1 + e_{9} } \right),\)\(\overline{z}_{u} = \overline{Z}\left( {1 + e_{10} } \right)\) and \(\overline{x}_{n} = \overline{X}\left( {1 + e_{11} } \right).\)

Such that \(E\left( {e_{k} } \right) = 0\) and \(\left| {e_{k} } \right| < 1\) for \((k = 1,2, \ldots ,11)\).

Thus, we have the following expectations:

We assume \(E\left( {e_{1}^{2} } \right) = f_{2}^{*} C_{y}^{2} ,\)\(E\left( {e_{2}^{2} } \right) = f_{2}^{*} C_{z}^{2} ,\)\(E\left( {e_{3}^{2} } \right) = f_{2}{\prime} C_{z}^{2} ,\)\(E\left( {e_{4}^{2} } \right) = f_{1} C_{y}^{2} ,\)\(E\left( {e_{5}^{2} } \right) = f_{1}^{*} C_{x}^{2} ,\)\(E\left( {e_{6}^{2} } \right) = f_{1} C_{x}^{2} ,\)\(E\left( {e_{7}^{2} } \right) = f^{\prime} C_{z}^{2} ,\)\(E\left( {e_{8}^{2} } \right) = fC_{z}^{2} ,\)\(E\left( {e_{9}^{2} } \right) = f_{2} C_{y}^{2} ,\)\(E\left( {e_{10}^{2} } \right) = f_{2} C_{z}^{2} ,\)\(E\left( {e_{11}^{2} } \right) = f_{2}{\prime} C_{x}^{2} ,\)\(E\left( {e_{1} e_{2} } \right) = f_{2}^{*} \rho_{yz} C_{y} C_{z} ,\)\(E\left( {e_{2} e_{3} } \right) = f_{2}{\prime} C_{z}^{2} ,\)\(E\left( {e_{1} e_{3} } \right) = f_{2}{\prime} \rho_{yz} C_{y} C_{z} ,\)\(E\left( {e_{1} e_{9} } \right) = f_{2} C_{y}^{2} ,\)\(E\left( {e_{2} e_{9} } \right) = f_{2} \rho_{yz} C_{y} C_{z} ,\)\(E\left( {e_{3} e_{9} } \right) = f_{2}{\prime} \rho_{yz} C_{y} C_{z} ,\)\(E\left( {e_{6} e_{7} } \right) = f^{\prime} \rho_{xz} C_{x} C_{z} ,\)\(E\left( {e_{5} e_{6} } \right) = f_{1}^{*} C_{x}^{2} ,\)\(E\left( {e_{5} e_{4} } \right) = f_{1}^{*} \rho_{yx} C_{y} C_{x} ,\)\(E\left( {e_{4} e_{6} } \right) = f_{1} \rho_{yx} C_{y} C_{x} ,\)\(E\left( {e_{7} e_{8} } \right) = f^{\prime} \rho_{yx} C_{y} C_{x} ,\)\(E\left( {e_{7} e_{4} } \right) = f^{\prime} \rho_{yz} C_{y} C_{z} ,\)\(E\left( {e_{8} e_{5} } \right) = f\rho_{xz} C_{x} C_{z} ,\) \(E\left( {e_{8} e_{4} } \right) = f\rho_{yz} C_{y} C_{z} ,\)\(E\left( {e_{6} e_{8} } \right) = f\rho_{xz} C_{x} C_{z} ,\)\(E\left( {e_{3} e_{10} } \right) = f_{2}{\prime} C_{z}^{2} ,\)\(E\left( {e_{3} e_{9} } \right) = f_{2}{\prime} \rho_{yz} C_{y} C_{z} ,\)\(E\left( {e_{7} e_{5} } \right) = f^{\prime} \rho_{xz} C_{x} C_{z} ,\)\(E\left( {e_{10} e_{9} } \right) = f_{2} \rho_{yz} C_{y} C_{z} ,\)\(E\left( {e_{11} e_{7} } \right) = f^{\prime} \rho_{xz} C_{x} C_{z} ,\)\(E\left( {e_{11} e_{6} } \right) = fC_{x}^{2} ,\)\(E\left( {e_{11} e_{8} } \right) = f\rho_{xz} C_{x} C_{z} ,\)\(E\left( {e_{11} e_{4} } \right) = f\rho_{yx} C_{y} C_{x} ,\)\(\, f = \left( {\frac{1}{n} - \frac{1}{N}} \right),\)\(f^{\prime} = \left( {\frac{1}{{n^{\prime} }} - \frac{1}{N}} \right),\)\(f_{1} = \, \left( {\frac{1}{m} - \frac{1}{N}} \right),\)\(f_{1}^{*} = \left( {\frac{1}{{nq_{1} + 2p_{1} }} - \frac{1}{N}} \right),\)\(f_{2} = \left( {\frac{1}{u} - \frac{1}{N}} \right),\)\(f_{2}{\prime} = \left( {\frac{1}{{u^{\prime} }} - \frac{1}{N}} \right)\) and \(f_{2}^{*} = \left( {\frac{1}{{uq_{2} + 2p_{2} }} - \frac{1}{N}} \right)\).

where \(p_{i}\) is the probability of random non-response and \(q_{i} \, = \, 1 - p_{i}\).

Note: If \(p_{i} = 0\), which indicates no non-response, the above expected values coincide with the usual results.
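
The correction factors defined above can be evaluated directly from the design quantities; a minimal sketch follows (the sample sizes and non-response probabilities are hypothetical). Setting \(p_{1} = p_{2} = 0\) reproduces \(f\) and \(f_{2}\), in line with the note.

```python
def correction_factors(N, n, n_prime, m, u, u_prime, p1, p2):
    # Finite-population correction terms f, f', f1, f1*, f2, f2', f2*
    # as defined in Section 2, with q_i = 1 - p_i.
    q1, q2 = 1 - p1, 1 - p2
    return {
        "f":        1/n - 1/N,
        "f_prime":  1/n_prime - 1/N,
        "f1":       1/m - 1/N,
        "f1_star":  1/(n*q1 + 2*p1) - 1/N,
        "f2":       1/u - 1/N,
        "f2_prime": 1/u_prime - 1/N,
        "f2_star":  1/(u*q2 + 2*p2) - 1/N,
    }

# hypothetical design: with p1 = p2 = 0 the starred terms equal f and f2
factors = correction_factors(N=500, n=100, n_prime=200, m=40, u=60, u_prime=120, p1=0.1, p2=0.2)
```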

2.1 The Existing Estimator

Following the existing work of Singh and Khalid [19], a fresh sample \(S_{u}\) of size \(u\) is drawn on the current occasion for the estimation of the population mean \(\overline{Y}\). The estimator \(T_{u}\) of the population mean \(\overline{Y}\) of the study variable \(y\) is formulated as

$$T_{u} = \overline{y}_{u}^{*} \exp \left( {\frac{{\overline{z}_{{u^{\prime} }} - \overline{z}_{u}^{*} }}{{\overline{z}_{{u^{\prime} }} + \overline{z}_{u}^{*} }}} \right)$$
(1)

The estimator defined in Eq. (1) can also be expressed in functional form as

$$T_{u} = g\left( {\overline{y}_{u}^{*} ,\overline{z}_{u}^{*} , \overline{z}_{{u^{\prime} }} } \right)$$
(2)

Since the estimator \(T_{u}\) is a modified exponential-type estimator, it is biased for the population mean \(\overline{Y}\). The bias and the \(MSE\) of the estimator are obtained up to the first order of approximation.

The bias and mean square error (\(MSE\)) of the estimator \(T_{u}\) are given as

$$\begin{gathered} Bias\left( {T_{u} } \right) = E\left( {T_{u} - \overline{Y}} \right) = E\left[ {g_{1} \overline{Y}e_{1} } \right. + g_{2} \overline{Z}e_{2} + g_{3} \overline{Z}e_{3} + g_{11} \left( {\overline{Y}} \right)^{2} e_{1}^{2} + g_{22} \left( {\overline{Z}} \right)^{2} e_{2}^{2} \hfill \\ \, \;\;\;\;\;{ + }g_{33} (\overline{Z})^{2} e_{3}^{2} + g_{12} \overline{Y}\overline{Z}e_{1} e_{2} - g_{13} \overline{Y}\overline{Z}e_{1} e_{3} + \left. {g_{23} \left( {\overline{Z}} \right)^{2} e_{2} e_{3} } \right] \hfill \\ \end{gathered}$$
(3)

Squaring the first-order terms of the expansion and applying expectation, we get the expression

$$\begin{gathered} MSE(T_{u} ) = E\left( {T_{u} - \overline{Y}} \right)^{2} = E\left( {g_{1} \overline{Y}e_{1} + g_{2} \overline{Z}e_{2} + g_{3} \overline{Z}e_{3} } \right)^{2} \hfill \\ MSE(T_{u} ) = \overline{Y}^{2} \left[ {f_{2}^{*} \left( {C_{y}^{2} + \frac{{C_{z}^{2} }}{4} - \rho_{yz} C_{y} C_{z} } \right) + f_{2}{\prime} \left( { - \frac{{C_{z}^{2} }}{4} + \rho_{yz} C_{y} C_{z} } \right)} \right]. \hfill \\ \end{gathered}$$
(4)
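
As a numerical aid, a brief sketch of Eq. (1) and of the first-order MSE in Eq. (4); the function and argument names are illustrative only.

```python
import numpy as np

def T_u(ybar_u_star, zbar_u_star, zbar_u_prime):
    # Existing exponential-type estimator of Eq. (1)
    return ybar_u_star * np.exp((zbar_u_prime - zbar_u_star) / (zbar_u_prime + zbar_u_star))

def mse_T_u(Ybar, Cy2, Cz2, rho_yz, f2_star, f2_prime):
    # First-order MSE of T_u from Eq. (4)
    Cy, Cz = np.sqrt(Cy2), np.sqrt(Cz2)
    return Ybar**2 * (f2_star * (Cy2 + Cz2/4 - rho_yz*Cy*Cz)
                      + f2_prime * (-Cz2/4 + rho_yz*Cy*Cz))
```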

2.2 The Proposed Estimator

Motivated by the work of Singh and Khalid [19] and Zahid and Shabbir [26], to estimate the population mean \(\overline{Y}\) of the study variable \(y\) on the current (second) occasion, we consider the estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\), which are based on the matched sample \(S_{m}\) of size \(m\) common to both occasions. Since random non-response is observed in the study variable when the population mean of the auxiliary variable \(z\) is unknown, the estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\) of the current population mean of the study variable \(y\) are formulated as

$$T_{mj}^{*} = \overline{y}_{m} \left( {\frac{{\overline{x}_{n}^{*} }}{{\overline{x}_{m} }}} \right)^{{\alpha_{1} }} \exp \left\{ {\left( {1 - \alpha_{2} } \right)\left( {\frac{{\overline{z}_{{n^{\prime} }} - \overline{z}_{n} }}{{\overline{z}_{{n^{\prime} }} + \overline{z}_{n} }}} \right)} \right\}$$
(5)

and

$$T_{mj}^{**} = \overline{y}_{m} \left( {\frac{{\overline{z}_{{n^{\prime} }} }}{{\overline{z}_{n} }}} \right)^{{\alpha_{3} }} \exp \left\{ {\left( {1 - \alpha_{4} } \right)\left( {\frac{{\overline{x}_{n}^{*} - \, \overline{x}_{m} }}{{\overline{x}_{n}^{*} + \, \overline{x}_{m} }}} \right)} \right\}$$
(6)

Here, \(\alpha_{1} ,\)\(\alpha_{2} ,\)\(\alpha_{3}\) and \(\alpha_{4}\) are scalar quantities; for different choices of these scalars, we obtain different classes of the proposed estimators.
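
A direct transcription of Eqs. (5) and (6) into code may help when reproducing the estimators; the sketch below assumes the required sample means and scalar choices are supplied by the user.

```python
import numpy as np

def T_star_mj(ybar_m, xbar_n_star, xbar_m, zbar_n_prime, zbar_n, alpha1, alpha2):
    # Proposed estimator of Eq. (5)
    return (ybar_m * (xbar_n_star / xbar_m)**alpha1
            * np.exp((1 - alpha2) * (zbar_n_prime - zbar_n) / (zbar_n_prime + zbar_n)))

def T_2star_mj(ybar_m, xbar_n_star, xbar_m, zbar_n_prime, zbar_n, alpha3, alpha4):
    # Proposed estimator of Eq. (6)
    return (ybar_m * (zbar_n_prime / zbar_n)**alpha3
            * np.exp((1 - alpha4) * (xbar_n_star - xbar_m) / (xbar_n_star + xbar_m)))
```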

The estimators defined in Eqs. (5) and (6) can also be written in functional form as follows

$$T_{mj}^{*} = k\left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)$$

and

$$T_{mj}^{**} = k^{*} \left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)$$

Now, considering convex linear combinations of the estimators \(T_{u} ,\) \(T_{mj}^{*}\) and \(T_{mj}^{**}\), we finally obtain the following estimators of the current population mean \(\overline{Y}\):

$$T_{1} = \varphi_{1} T_{u} + (1 - \varphi_{1} )T_{mj}^{*}$$
(7)

and

$$T_{2} = \varphi_{2} T_{u} + (1 - \varphi_{2} )T_{mj}^{**}$$
(8)

where \(\varphi_{i}\) \((0 < \varphi_{i} < 1)\), \(i = 1, 2\), are unknown constants to be determined by minimizing the variances/\(MSE^{\prime}s\) of the estimators \(T_{i}\), \(i = 1, 2\).
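
In code, the combinations in Eqs. (7) and (8) are simply weighted averages; a one-line sketch (with \(\varphi\) supplied by the user and later replaced by its optimum value from Section 3):

```python
def combined_estimator(T_u_value, T_m_value, phi):
    # Convex combination of Eqs. (7)-(8); 0 < phi < 1
    return phi * T_u_value + (1 - phi) * T_m_value
```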

Tables 1 and 2 represent the different classes of the suggested estimators.

Table 1 Different classes of the proposed estimator \(T_{mj}^{*}\)
Table 2 Different classes of the proposed estimator \(T_{mj}^{**}\)

Using Taylor series expansion, we expand the functional forms of the estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\) defined in Eqs. (5) and (6), respectively, and using the above transformations we get the following expressions

$$\begin{gathered} T_{mj}^{*} = \overline{Y} + \frac{{\partial T_{mj}^{*} }}{{\partial \overline{y}_{m} }}\left( {\overline{y}_{m} - \overline{Y}} \right) + \frac{{\partial T_{mj}^{*} }}{{\partial \overline{x}_{n}^{*} }}\left( {\overline{x}_{n}^{*} - \overline{X}} \right) + \frac{{\partial T_{mj}^{*} }}{{\partial \overline{x}_{m} }}\left( {\overline{x}_{m} - \overline{X}} \right) + \frac{{\partial T_{mj}^{*} }}{{\partial \overline{z}_{n} }}\left( {\overline{z}_{n} - \overline{Z}} \right) \hfill \\ { + }\frac{{\partial T_{mj}^{*} }}{{\partial \overline{z}_{{n^{\prime} }} }}\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) + \frac{1}{2}\left\{ {\frac{{\partial^{2} T_{mj}^{*} }}{{\partial^{2} \overline{y}_{m} }}\left( {\overline{y}_{m} - \overline{Y}} \right)^{2} + } \right.\frac{{\partial^{2} T_{mj}^{*} }}{{\partial^{2} \overline{x}_{n}^{*} }}\left( {\overline{x}_{n}^{*} - \overline{X}} \right)^{2} + \frac{{\partial^{2} T_{mj}^{*} }}{{\partial^{2} \overline{x}_{m} }}\left( {\overline{x}_{m} - \overline{X}} \right)^{2} \hfill \\ { + }\frac{{\partial^{2} T_{mj}^{*} }}{{\partial^{2} \overline{z}_{n} }}\left( {\overline{z}_{n} - \overline{Z}} \right)^{2} + \frac{{\partial^{2} T_{mj}^{*} }}{{\partial^{2} \overline{z}_{{n^{\prime} }} }}\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right)^{2} + 2\frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{y}_{m} \partial \overline{x}_{n}^{*} }}\left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{x}_{n}^{*} - \overline{X}} \right) \hfill \\ { + }2\frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{y}_{m} \partial \overline{x}_{m} }}\left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{x}_{m} - \overline{X}} \right) + 2\frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{y}_{m} \partial \overline{z}_{n} }}\left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{z}_{n} - \overline{Z}} \right) \hfill \\ { + }2\frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{y}_{m} \partial \overline{z}_{{n^{\prime} }} }}\left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) + 2\frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{x}_{n}^{*} \partial \overline{x}_{m} }}\left( {\overline{x}_{n}^{*} - \overline{X}} \right)\left( {\overline{x}_{m} - \overline{X}} \right) \hfill \\ { + }2\frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{x}_{n}^{*} \partial \overline{z}_{n} }}\left( {\overline{x}_{n}^{*} - \overline{X}} \right)\left( {\overline{z}_{n} - \overline{Z}} \right) + 2\frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{x}_{n}^{*} \partial \overline{z}_{{n^{\prime} }} }}\left( {\overline{x}_{n}^{*} - \overline{X}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) \hfill \\ { + }2\frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{z}_{n} \partial \overline{z}_{{n^{\prime} }} }}\left( {\overline{z}_{n} - \overline{X}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) + \left. \ldots \right\}{ + }... \hfill \\ \end{gathered}$$

where \(k_{1} = \frac{{\partial T_{mj}^{*} }}{{\partial \overline{y}_{m} }} = 1,\) \(k_{2} = \frac{{\partial T_{mj}^{*} }}{{\partial \overline{x}_{n}^{*} }} = \frac{{\alpha_{1} \overline{Y}}}{{\overline{X}}},\) \(k_{3} = \frac{{\partial T_{mj}^{*} }}{{\partial \overline{x}_{m} }} = - \frac{{\alpha_{1} \overline{Y}}}{{\overline{X}}},\) \(k_{4} = \frac{{\partial T_{mj}^{*} }}{{\partial \overline{z}_{n} }} = \frac{{ - \overline{Y}\left( {1 - \alpha_{2} } \right)}}{{2\overline{Z}}},\) \(k_{5} = \frac{{\partial T_{mj}^{*} }}{{\partial \overline{z}_{{n^{\prime} }} }} = \frac{{\overline{Y}\left( {1 - \alpha_{2} } \right)}}{{2\overline{Z}}},\) \(k_{11} = \frac{{\partial^{2} T_{mj}^{*} }}{{2\partial^{2} \overline{y}_{m} }} = 0,\) \(k_{22} = \frac{{\partial^{2} T_{mj}^{*} }}{{2\partial^{2} \overline{x}_{n}^{*} }} = \frac{{\alpha_{1} \overline{Y}\left( {\alpha_{1} - 1} \right)}}{{\overline{X}^{2} }},\) \(k_{33} = \frac{{\partial^{2} T_{mj}^{*} }}{{2\partial^{2} \overline{x}_{m} }} = \frac{{\alpha_{1} \overline{Y}\left( {\alpha_{1} + 1} \right)}}{{\overline{X}^{2} }},\) \(k_{44} = \frac{{\partial^{2} T_{mj}^{*} }}{{2\partial^{2} \overline{z}_{n} }} = \overline{Y}(1 - \alpha_{2} )\left[ {\frac{{1 + \overline{Z}\left( {2 - \alpha_{2} } \right)}}{{4\overline{Z}^{3} }}} \right],\) \(k_{55} = \frac{{\partial^{2} T_{mj}^{*} }}{{2\partial^{2} \overline{z}_{{n^{\prime} }} }} = - \frac{{\overline{Y}(1 - \alpha_{2}^{2} )}}{{4\overline{Z}^{2} }},\) \(k_{12} = \frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{y}_{m} \partial \overline{x}_{n}^{*} }} = \frac{{\alpha_{1} }}{{\overline{X}}},\) \(k_{13} = \frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{y}_{m} \partial \overline{x}_{m} }} = - \frac{{\alpha_{1} }}{{\overline{X}}},\) \(k_{14} = \frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{y}_{m} \partial \overline{z}_{n} }} = \frac{{\alpha_{2} - 1}}{{2\overline{Z}}},\) \(k_{15} = \frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{y}_{m} \partial \overline{z}_{{n^{\prime} }} }} = \frac{{1 - \alpha_{2} }}{{2\overline{Z}}},\) \(k_{23} = \frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{x}_{n}^{*} \partial \overline{x}_{m} }} = - \frac{{\overline{Y}\alpha_{1}^{2} }}{{\overline{X}^{2} }},\) \(k_{24} = \frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{x}_{n}^{*} \partial \overline{z}_{n} }} = \frac{{ - \alpha_{1} \overline{Y}(1 - \alpha_{2} )}}{{2\overline{X}\overline{Z}}},\) \(k_{25} = \frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{x}_{n}^{*} \partial \overline{z}_{{n^{\prime} }} }} = \frac{{\alpha_{1} \overline{Y}(1 - \alpha_{2} )}}{{2\overline{X}\overline{Z}}},\) \(k_{34} = \frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{x}_{m} \partial \overline{z}_{n} }} = \frac{{\alpha_{1} \overline{Y}\left( {1 - \alpha_{2} } \right)}}{{2\overline{X}\overline{Z}}},\) \(k_{35} = \frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{x}_{m} \partial \overline{z}_{{n^{\prime} }} }} = \frac{{ - \alpha_{1} \overline{Y} \left( {1 - \alpha_{2} } \right)}}{{2\overline{X}\overline{Z}}}\) and \(k_{45} = \frac{{\partial^{2} T_{mj}^{*} }}{{\partial \overline{z}_{n} \partial \overline{z}_{{n^{\prime} }} }} = - \frac{{\overline{Y}\left( {\alpha_{2}^{2} - 3\alpha_{2} + 2} \right)}}{{4\overline{Z}^{2} }}.\) The following are the expressions of the bias and \(MSE\) of the estimator \(T_{mj}^{*}\)

$$\begin{gathered} Bias\left( {T_{mj}^{*} } \right) = E\left[ {k_{1} } \right.\overline{Y}e_{4} + k_{2} \overline{X}e_{5} + k_{3} \overline{Y}e_{6} + k_{4} \overline{Z}e_{8} + k_{5} \overline{Z}e_{7} + k_{11} \overline{Y}^{2} e_{4}^{2} + k_{22} \overline{X}^{2} e_{5}^{2} + k_{33} \overline{X}^{2} e_{6}^{2} \hfill \\ \, \;\;\;\; \, + k_{44} \overline{Z}^{2} e_{8}^{2} + k_{55} \overline{Z}^{2} e_{7}^{2} + k_{12} e_{4} e_{5} \overline{Y}\overline{Z} + k_{13} e_{4} e_{6} \overline{Y}\overline{X} + k_{14} e_{4} e_{8} \overline{Y}\overline{Z} + k_{15} e_{4} e_{7} \overline{Y}\overline{Z} \hfill \\ \, \;\;\;\;\;\; \, + k_{23} e_{5} e_{5} \overline{X}\overline{X} + k_{24} e_{8} e_{5} \overline{Z}\overline{X} + k_{25} e_{7} e_{5} \overline{Z}\overline{X} + k_{34} e_{6} e_{8} \overline{Z}\overline{X} + k_{35} e_{6} e_{7} \overline{Z}\overline{X} + k_{45} e_{7} e_{8} \left. {\overline{Z}\overline{Z}} \right] \hfill \\ \end{gathered}$$
(9)

and

$$MSE\left( {T_{mj}^{*} } \right) = E\left[ {k_{1} } \right.\overline{Y}e_{4} + k_{2} \overline{X}e_{5} + k_{3} \overline{X}e_{6} + k_{4} \overline{Z}e_{8} \left. { + k_{5} \overline{Z}e_{7} } \right]^{2}$$
(10)

Applying expectation and substituting the values of \(k_{1} ,k_{2} , \ldots ,k_{45}\) in Eqs. (9) and (10), we get the Bias and mean square error of the estimator \(T_{mj}^{*}\) as

$$\begin{gathered} Bias\left( {T_{mj}^{*} } \right) = f_{1} \left\{ {\alpha_{1} \overline{Y}\left( {\alpha_{1} - 1} \right)C_{x}^{2} } \right. + \frac{{\alpha_{1} }}{{\overline{X}}}\rho_{yx} C_{y} C_{x} \overline{Y}\overline{Z} - \left. {\overline{Y}\alpha_{1}^{2} C_{x}^{2} } \right\} - f_{1} \left\{ {\alpha_{1} \overline{Y}\left( {\alpha_{1} + 1} \right)C_{x}^{2} { + }\alpha_{1} \rho_{yx} C_{y} C_{x} \overline{Y}} \right\} \hfill \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + f\left\{ {\frac{1}{{4\overline{Z}}}\overline{Y}(1 - \alpha_{2} )} \right.\left( {1 + \overline{Z}\left( {2 - \alpha_{2} } \right)} \right)C_{z}^{2} - \frac{1}{2}\left( {\alpha_{2} - 1} \right)\rho_{yz} C_{y} C_{z} \overline{Y} \hfill \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \frac{1}{2}\alpha_{1} \overline{Y}(1 - \alpha_{2} )\rho_{xz} C_{x} C_{z} + \left. {\frac{1}{2}\alpha_{1} \overline{Y}\left( {1 - \alpha_{2} } \right)\rho_{xz} C_{x} C_{z} } \right\} - f^{\prime} \left\{ {\frac{1}{4}\overline{Y}(} \right.1 - \alpha_{2}^{2} )C_{z}^{2} \hfill \\ \, + \frac{1}{2}(1 - \alpha_{2} )\rho_{yz} C_{y} C_{z} \overline{Y} - \frac{1}{2}\alpha_{1} \overline{Y}(1 - \alpha_{2} )\rho_{xz} C_{x} C_{z} + \frac{1}{2}\alpha_{1} \overline{Y} \left( {1 - \alpha_{2} } \right)\rho_{xz} C_{x} C_{z} \hfill \\ \, - \left. {\frac{1}{4}\overline{Y}\left( {\alpha_{2}^{2} - 3\alpha_{2} + 2} \right)\rho_{yz} C_{y} C_{z} } \right\} \hfill \\ \end{gathered}$$
(11)

and

$$\begin{gathered} MSE\left( {T_{mj}^{*} } \right) = \overline{Y}^{2} \left[ {f_{1} \left\{ {C_{y}^{2} } \right. + \alpha_{1}^{2} C_{x}^{2} - \left. {2\alpha_{1} \rho_{yx} C_{y} C_{x} } \right\} + } \right.f_{1}^{*} \left\{ {\alpha_{1}^{2} C_{x}^{2} } \right. - 2\alpha_{1}^{2} C_{x}^{2} \left. { + 2\alpha_{1} \rho_{yx} C_{y} C_{x} } \right\} \hfill \\ \, + f\left\{ {\frac{{\alpha_{2}^{2} + 1 - 2\alpha_{2} }}{4}C_{x}^{2} } \right. + \alpha_{1} \left( {\alpha_{2} - 1} \right)\rho_{xz} C_{x} C_{z} - \alpha_{1} \left( {\alpha_{2} - 1} \right)\rho_{xz} C_{x} C_{z} \hfill \\ \, + \left. {\left( {\alpha_{2} - 1} \right)\rho_{yz} C_{y} C_{z} } \right\} + f^{\prime} \left\{ {\frac{{1 + \alpha_{2}^{2} - 2\alpha_{2} }}{{4\overline{Y}}}C_{z}^{2} } \right. - \frac{1}{2}\left( {\alpha_{2}^{2} - 2\alpha_{2} + 1} \right)\rho_{yx} C_{y} C_{x} \hfill \\ \, + \alpha_{1} \left( {1 - \alpha_{2} } \right)\rho_{xz} C_{x} C_{z} - \alpha_{1} \left( {1 - \alpha_{2} } \right)\rho_{xz} C_{x} C_{z} \left. {\left. { + \left( {1 - \alpha_{2} } \right)\rho_{yz} C_{y} C_{z} } \right\}} \right] \hfill \\ \end{gathered}$$
(12)

The function \(k\left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)\) is based on statistics \(\left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)\) and satisfies the following regularity conditions:

  i. The function \(k\left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)\) is continuous and bounded in \(R^{5}\).

  ii. The first-order partial derivatives of \(k\left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)\) exist and are continuous and bounded in \(R^{5}\).

  iii. \(k\left( {\overline{Y},{\overline{\text{X}}},{\overline{\text{X}}}, \, \overline{Z}, \, \overline{Z}} \right) = \overline{Y}\) and \(k_{1} \left( {\overline{Y},{\overline{\text{X}}},{\overline{\text{X}}}, \, \overline{Z}, \, \overline{Z}} \right) = 1\), where \(k_{1} \left( {\overline{Y},{\overline{\text{X}}},{\overline{\text{X}}}, \, \overline{Z}, \, \overline{Z}} \right)\) is the first partial derivative of \(k\) with respect to \(\overline{y}_{m}\).

  iv. The function \(k\left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)\) assumes values in a closed convex subset of the five-dimensional real space \(R^{5}\) containing the point \(\left( {\overline{Y},{\overline{\text{X}}},{\overline{\text{X}}}, \, \overline{Z}, \, \overline{Z}} \right)\).

Using the Taylor series expansions, we have

$$\begin{gathered} T_{mj}^{**} = \overline{Y} + \frac{{\partial T_{mj}^{**} }}{{\partial \overline{y}_{m} }}\left( {\overline{y}_{m} - \overline{Y}} \right) + \frac{{\partial T_{mj}^{**} }}{{\partial \overline{x}_{n}^{*} }}\left( {\overline{x}_{n}^{*} - \overline{X}} \right) + \frac{{\partial T_{mj}^{**} }}{{\partial \overline{x}_{m} }}\left( {\overline{x}_{m} - \overline{X}} \right) + \frac{{\partial T_{mj}^{**} }}{{\partial \overline{z}_{n} }}\left( {\overline{z}_{n} - \overline{Z}} \right) \hfill \\ \, + \frac{{\partial T_{mj}^{**} }}{{\partial \overline{z}_{{n^{\prime} }} }}\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) + \frac{1}{2}\left\{ {\frac{{\partial^{2} T_{mj}^{**} }}{{\partial^{2} \overline{y}_{m} }}\left( {\overline{y}_{m} - \overline{Y}} \right)^{2} + } \right.\frac{{\partial^{2} T_{mj}^{**} }}{{\partial^{2} \overline{x}_{n}^{*} }}\left( {\overline{x}_{n}^{*} - \overline{X}} \right)^{2} + \frac{{\partial^{2} T_{mj}^{**} }}{{\partial^{2} \overline{x}_{m} }}\left( {\overline{x}_{m} - \overline{X}} \right)^{2} \hfill \\ \, + \frac{{\partial^{2} T_{mj}^{**} }}{{\partial^{2} \overline{z}_{n} }}\left( {\overline{z}_{n} - \overline{Z}} \right)^{2} + \frac{{\partial^{2} T_{mj}^{**} }}{{\partial^{2} \overline{z}_{{n^{\prime} }} }}\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right)^{2} + 2\frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{y}_{m} \partial \overline{x}_{n}^{*} }}\left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{x}_{n}^{*} - \overline{X}} \right) \hfill \\ \, + 2\frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{y}_{m} \partial \overline{x}_{m} }}\left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{x}_{m} - \overline{X}} \right) + 2\frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{y}_{m} \partial \overline{z}_{n} }}\left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{z}_{n} - \overline{Z}} \right) \hfill \\ \,\;\; + 2\frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{y}_{m} \partial \overline{z}_{{n^{\prime} }} }}\left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) + 2\frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{x}_{n}^{*} \partial \overline{x}_{m} }}\left( {\overline{x}_{n}^{*} - \overline{X}} \right)\left( {\overline{x}_{m} - \overline{X}} \right) \hfill \\\;\; { + }2\frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{x}_{n}^{*} \partial \overline{z}_{n} }}\left( {\overline{x}_{n}^{*} - \overline{X}} \right)\left( {\overline{z}_{n} - \overline{Z}} \right) + 2\frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{x}_{n}^{*} \partial \overline{z}_{{n^{\prime} }} }}\left( {\overline{x}_{n}^{*} - \overline{X}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) \hfill \\ { + }2\frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{x}_{m} \partial \overline{z}_{n} }}\left( {\overline{x}_{m} - \overline{X}} \right)\left( {\overline{z}_{n} - \overline{Z}} \right) + 2\frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{x}_{m} \partial \overline{z}_{{n^{\prime} }} }}\left( {\overline{x}_{m} - \overline{X}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) \hfill \\ \,\;\; + \left. {2\frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{z}_{n} \partial \overline{z}_{{n^{\prime} }} }}\left( {\overline{z}_{n} - \overline{X}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) + \ldots } \right\} \hfill \\ \end{gathered}$$

where \(\begin{gathered} T_{mj}^{**} = \overline{Y} + k_{1}^{*} \left( {\overline{y}_{m} - \overline{Y}} \right) + k_{2}^{*} \left( {\overline{x}_{n}^{*} - \overline{X}} \right) + k_{3}^{*} \left( {\overline{x}_{m} - \overline{X}} \right) + k_{4}^{*} \left( {\overline{z}_{n} - \overline{Z}} \right) + k_{5}^{*} \left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) \hfill \\ \, + k_{11}^{*} \left( {\overline{y}_{m} - \overline{Y}} \right)^{2} + k_{22}^{*} \left( {\overline{x}_{n}^{*} - \overline{X}} \right)^{2} + k_{33}^{*} \left( {\overline{x}_{m} - \overline{X}} \right)^{2} + k_{44}^{*} \left( {\overline{z}_{n} - \overline{Z}} \right)^{2} + k_{55}^{*} \left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right)^{2} \hfill \\ \, + k_{12}^{*} \left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{x}_{n}^{*} - \overline{X}} \right) + k_{13}^{*} \left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{x}_{m} - \overline{X}} \right) + k_{14}^{*} \left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{z}_{n} - \overline{Z}} \right) \hfill \\ \, + k_{15}^{*} \left( {\overline{y}_{m} - \overline{Y}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) + k_{23}^{*} \left( {\overline{x}_{n}^{*} - \overline{X}} \right)\left( {\overline{x}_{m} - \overline{X}} \right) + k_{24}^{*} \left( {\overline{x}_{n}^{*} - \overline{X}} \right)\left( {\overline{z}_{n} - \overline{Z}} \right) \hfill \\ \, + k_{25}^{*} \left( {\overline{x}_{n}^{*} - \overline{X}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) + k_{34}^{*} \left( {\overline{x}_{m} - \overline{X}} \right)\left( {\overline{z}_{n} - \overline{Z}} \right) + k_{35}^{*} \left( {\overline{x}_{m} - \overline{X}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) \hfill \\ \,\;\; k_{45}^{*} \left( {\overline{z}_{n} - \overline{X}} \right)\left( {\overline{z}_{{n^{\prime} }} - \overline{Z}} \right) + \ldots \hfill \\ \end{gathered}\) and \(k_{1}^{*} = \frac{{\partial T_{mj}^{**} }}{{\partial \overline{y}_{m} }} = 1,\)\(k_{2}^{*} = \frac{{\partial T_{mj}^{**} }}{{\partial \overline{x}_{n}^{*} }} = \frac{{\overline{Y}\left( {1 - \alpha_{4} } \right)}}{{2 \, \overline{X}}},\)\(k_{3}^{*} = \frac{{\partial T_{mj}^{**} }}{{\partial \overline{x}_{m} }} = \frac{{ - \overline{Y}\left( {1 - \alpha_{4} } \right)}}{{2 \, \overline{X}}},\)\(k_{4}^{*} = \frac{{\partial T_{mj}^{**} }}{{\partial \overline{z}_{n} }} = \frac{{ - \alpha_{3} \overline{Y}}}{{\overline{Z}}},\)\(k_{5}^{*} = \frac{{\partial T_{mj}^{**} }}{{\partial \overline{z}_{{n^{\prime} }} }} = \frac{{\alpha_{3} \overline{Y}}}{{\overline{Z}}} \, ,\)\(k_{11}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{2\partial^{2} \overline{y}_{m} }} = 0,\)\(k_{22}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{2\partial^{2} \overline{x}_{n}^{*} }} = - \frac{{\overline{Y}\left( {\alpha_{4}^{2} + 3\alpha_{4} - 3} \right)}}{{8 \, \overline{X}^{2} }},\)\(k_{33}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{2\partial^{2} \overline{x}_{m} }} = \frac{{\overline{Y}\left( {\alpha_{4}^{2} - \, 4\alpha_{4} + 3 \, } \right)}}{{4\overline{X}^{2} }},\)\(k_{44}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{2\partial^{2} \overline{z}_{n} }} = \frac{{\overline{Y}\alpha_{3} \left( {\alpha_{3} + 1} \right)}}{{\overline{Z}^{2} }},\)\(k_{55}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{2\partial^{2} \overline{z}_{{n^{\prime} }} }} = \frac{{\overline{Y}\alpha_{3} \left( {\alpha_{3} - 1} \right)}}{{\overline{Z}^{2} }},\)\(k_{12}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{\partial 
\overline{y}_{m} \partial \overline{x}_{n}^{*} }} = \frac{{1 - \, \alpha_{4} }}{{2 \, \overline{X}}},\)\(k_{13}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{y}_{m} \partial \overline{x}_{m} }} = \frac{{\alpha_{4} - 1}}{{2 \, \overline{X}}},\)\(k_{14}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{y}_{m} \partial \overline{z}_{n} }} = \frac{{ - \alpha_{3} }}{{\overline{Z}}},\)\(k_{15}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{y}_{m} \partial \overline{z}_{{n^{\prime} }} }} = \frac{{\alpha_{3} }}{{\overline{Z}}},\)\(k_{23}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{x}_{n}^{*} \partial \overline{x}_{m} }} = \frac{{\overline{Y}\left( {1 - \alpha_{4} } \right)^{2} }}{{4\overline{X}^{2} }},\)\(k_{24}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{x}_{n}^{*} \partial \overline{z}_{n} }} = \frac{{\overline{Y}\alpha_{3} \left( {\alpha_{4} - 1} \right)}}{{2\overline{X}\overline{Z}}},\)\(k_{25}^{*} = \frac{{\partial^{2} T_{mj}^{**} \,}}{{\partial \overline{x}_{n}^{*} \partial \overline{z}_{{n^{\prime} }} }} = \frac{{\overline{Y}\alpha_{3} \left( {1 - \alpha_{4} } \right)}}{{2\overline{X}\overline{Z}}},\)\(k_{34}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{x}_{m} \partial \overline{z}_{n} }} = \frac{{\overline{Y}\alpha_{3} \left( {1 - \alpha_{4} } \right)}}{{2 \, \overline{X}\overline{Z}}},\)\(k_{35}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{x}_{m} \partial \overline{z}_{{n^{\prime} }} }} = \frac{{\overline{Y}\alpha_{3} \left( {\alpha_{4} - 1} \right)}}{{2\overline{X}\overline{Z}}},\)\(k_{45}^{*} = \frac{{\partial^{2} T_{mj}^{**} }}{{\partial \overline{z}_{n} \partial \overline{z}_{{n^{\prime} }} }} = \frac{{ - \overline{Y}\,\alpha_{3}^{2} }}{{\overline{Z}^{2} }}.\)
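
The first-order partial derivatives listed above can be double-checked symbolically; a minimal sketch for the estimator of Eq. (6) is given below (the symbol names are arbitrary, and the same approach verifies the derivatives of Eq. (5)).

```python
import sympy as sp

ybar_m, xbar_ns, xbar_m, zbar_n, zbar_np, a3, a4 = sp.symbols(
    'ybar_m xbar_n_star xbar_m zbar_n zbar_nprime alpha3 alpha4', positive=True)
Yb, Xb, Zb = sp.symbols('Ybar Xbar Zbar', positive=True)

# estimator of Eq. (6)
T = ybar_m * (zbar_np / zbar_n)**a3 * sp.exp((1 - a4) * (xbar_ns - xbar_m) / (xbar_ns + xbar_m))

# evaluate the first-order partial derivatives at the point (Ybar, Xbar, Xbar, Zbar, Zbar)
point = {ybar_m: Yb, xbar_ns: Xb, xbar_m: Xb, zbar_n: Zb, zbar_np: Zb}
k2_star = sp.simplify(sp.diff(T, xbar_ns).subs(point))  # expected: (1 - alpha4)*Ybar/(2*Xbar)
k4_star = sp.simplify(sp.diff(T, zbar_n).subs(point))   # expected: -alpha3*Ybar/Zbar
```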

The following are the expressions of the bias and \(MSE\) of the estimator \(T_{mj}^{**}\):

$$\begin{gathered} B\left( {T_{mj}^{**} } \right) = E\left[ {k_{1} } \right.^{*} \overline{Y}e_{4} + k_{2}^{*} \overline{X}e_{5} + k_{3}^{*} \overline{Y}e_{6} + k_{4}^{*} \overline{Z}e_{8} + k_{5}^{*} \overline{Z}e_{7} + k_{11}^{*} \overline{Y}^{2} e_{4}^{2} + k_{22}^{*} \overline{X}^{2} e_{5}^{2} + \hfill \\ \, k_{33}^{*} \overline{X}^{2} e_{6}^{2} + k_{44}^{*} \overline{Z}^{2} e_{8}^{2} + k_{55}^{*} \overline{Z}^{2} e_{7}^{2} + k_{12}^{*} e_{4} e_{5} \overline{Y}\overline{Z} + k_{13}^{*} e_{4} e_{6} \overline{Y}\overline{X} + k_{14}^{*} e_{4} e_{8} \overline{Y}\overline{Z} + \hfill \\ \, k_{15}^{*} e_{4} e_{7} \overline{Y}\overline{Z} + k_{23}^{*} e_{5} e_{5} \overline{X}\overline{X} + k_{24}^{*} e_{8} e_{5} \overline{Z}\overline{X} + k_{25}^{*} e_{7} e_{5} \overline{Z}\overline{X} + k_{34}^{*} e_{6} e_{8} \overline{Z}\overline{X} + \hfill \\ \, k_{35}^{*} e_{6} e_{7} \overline{Z}\overline{X} + k_{45}^{*} e_{7} e_{8} \left. {\overline{Z}\overline{Z}} \right] \hfill \\ \end{gathered}$$
(13)

and

$$MSE\left( {T_{mj}^{**} } \right) = E\left[ {k_{1} } \right.^{*} \overline{Y}e_{4} + k_{2}^{*} \overline{X}e_{5} + k_{3}^{*} \overline{X}e_{6} + k_{4}^{*} \overline{Z}e_{8} \left. { + k_{5}^{*} \overline{Z}e_{7} } \right]^{2}$$
(14)

Applying expectation and substituting the values of \(k_{1}^{*} ,k_{2}^{*} , \ldots ,k_{45}^{*}\) in Eqs. (13) and (14), we get the bias and mean square error of the estimator \(T_{mj}^{**}\) as

$$\begin{gathered} Bias\left( {T_{mj}^{**} } \right) = f_{1}^{*} \left\{ {\frac{ - 1}{8}\overline{Y}} \right.\left( {\alpha_{4}^{2} + 3\alpha_{4} - 3} \right)C_{x}^{2} + \left( {\frac{{1 - \, \alpha_{4} }}{{2 \, \overline{X}}}} \right)\rho_{yx} C_{y} C_{x} \overline{Y}\overline{Z} + \left. {\frac{1}{4}\overline{Y}\left( {1 - \alpha_{4} } \right)^{2} C_{x}^{2} } \right\} \hfill \\ { + }f\left\{ {\frac{1}{4}\overline{Y}} \right.\left( {\alpha_{4}^{2} - \, 4\alpha_{4} + 3 \, } \right)C_{x}^{2} + \left. {\left( {\frac{{\alpha_{4} - 1}}{2}} \right)\rho_{yx} C_{y} C_{x} \overline{Y}_{1} } \right\} - f\left\{ {\overline{Y}\alpha_{3} \left( {\alpha_{3} + 1} \right)C_{z}^{2} } \right. \hfill \\ \, + \alpha_{3} \rho_{yz} C_{y} C_{z} \overline{Y} + \frac{1}{2}\overline{Y}\alpha_{3} \left( {\alpha_{4} - 1} \right)\rho_{xz} C_{x} C_{z} + \left. {\frac{1}{2}\overline{Y}\alpha_{3} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} } \right\} \hfill \\ \, + f^{\prime} \left\{ {\frac{1}{2}\overline{Y}\alpha_{3} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} + } \right.\overline{Y}\alpha_{3} \left( {\alpha_{3} - 1} \right)C_{z}^{2} + \alpha_{3} \rho_{yz} C_{y} C_{z} \overline{Y} \hfill \\ \, + \frac{1}{2}\overline{Y}\alpha_{3} \left( {\alpha_{4} - 1} \right)\rho_{xz} C_{x} C_{z} - \left. {\overline{Y}\,\alpha_{3}^{2} \rho_{yz} C_{y} C_{z} } \right\} \hfill \\ \end{gathered}$$
(15)

and

$$\begin{gathered} MSE\left( {T_{mj}^{**} } \right) = \overline{Y}^{2} \left[ {f_{1} } \right.\left\{ {C_{y}^{2} } \right. + \frac{{\left( {1 + \alpha_{4}^{2} - 2\alpha_{4} } \right)}}{{4 }}C_{x}^{2} - \left. {\left( {1 - \alpha_{4} } \right)\rho_{yx} C_{y} C_{x} } \right\} \hfill \\ \, + f_{1}^{*} \left\{ {\frac{{\left( {1 + \alpha_{4}^{2} - 2\alpha_{4} } \right)}}{{4 }}C_{x}^{2} } \right. - \frac{{\left( {1 - 2\alpha_{4} + \alpha_{4}^{2} } \right)}}{2 \, }C_{x}^{2} + \left. {\left( {1 - \alpha_{4} } \right)\rho_{yx} C_{y} C_{x} } \right\} \hfill \\ { + }f\left\{ {\alpha_{3}^{2} C_{x}^{2} } \right. - \alpha_{3} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} + \alpha_{3} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} - \left. {2\alpha_{3} \rho_{yz} C_{y} C_{z} } \right\} \hfill \\ \, + f^{\prime} \left\{ {\alpha_{3}^{2} C_{z}^{2} } \right. - 2\alpha_{3}^{2} \rho_{yx} C_{y} C_{x} + \alpha_{3} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} - \alpha_{3} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} \left. { + \left. {2\alpha_{3} \rho_{yz} C_{y} C_{z} } \right\}} \right] \hfill \\ \end{gathered}$$
(16)

The function \(k^{*} \left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)\) is based on statistics \(\left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)\) and satisfies the following regularity conditions:

  i. The function \(k^{*} \left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)\) is continuous and bounded in \(R^{5}\).

  ii. The first-order partial derivatives of \(k^{*} \left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)\) exist and are continuous and bounded in \(R^{5}\).

  iii. \(k^{*} \left( {\overline{Y},{\overline{\text{X}}},{\overline{\text{X}}}, \, \overline{Z}, \, \overline{Z}} \right) = \overline{Y}\) and \(k_{1}^{*} \left( {\overline{Y},{\overline{\text{X}}},{\overline{\text{X}}}, \, \overline{Z}, \, \overline{Z}} \right) = 1\), where \(k_{1}^{*} \left( {\overline{Y},{\overline{\text{X}}},{\overline{\text{X}}}, \, \overline{Z}, \, \overline{Z}} \right)\) is the first partial derivative of \(k^{*}\) with respect to \(\overline{y}_{m}\).

  iv. The function \(k^{*} \left( {\overline{y}_{m} , \, \overline{x}_{n}^{*} , \, \overline{x}_{m} , \, \overline{z}_{n} , \, \overline{z}_{{n^{\prime} }} } \right)\) assumes values in a closed convex subset of the five-dimensional real space \(R^{5}\) containing the point \(\left( {\overline{Y},{\overline{\text{X}}},{\overline{\text{X}}}, \, \overline{Z}, \, \overline{Z}} \right)\).

Theorem 1:

The biases of the estimators \(T_{1}\) and \(T_{2}\) up to the first order of approximation are obtained as

$$Bias(T_{1} ) = \Phi_{1} B(T_{u} ) + (1 - \Phi_{1} )B(T_{mj}^{*} )$$
(17)

and

$$Bias(T_{2} ) = \Phi_{2} B(T_{u} ) + (1 - \Phi_{2} )B(T_{mj}^{**} )$$
(18)

where

$$\begin{gathered} Bias\left( {T_{u} } \right) = E\left( {T_{u} - \overline{Y}} \right) = E\left[ {g_{1} \overline{Y}e_{1} } \right. + g_{2} \overline{Z}e_{2} + g_{3} \overline{Z}e_{3} + g_{11} \left( {\overline{Y}} \right)^{2} e_{1}^{2} + g_{22} \left( {\overline{Z}} \right)^{2} e_{2}^{2} + \hfill \\ \, g_{33} (\overline{Z})^{2} e_{3}^{2} + g_{12} \overline{Y}\overline{Z}e_{1} e_{2} - g_{13} \overline{Y}\overline{Z}e_{1} e_{3} + \left. {g_{23} \left( {\overline{Z}} \right)^{2} e_{2} e_{3} } \right] \hfill \\ \end{gathered}$$
$$\begin{gathered} Bias\left( {T_{mj}^{*} } \right) = f_{1} \left\{ {\alpha_{1} \overline{Y}\left( {\alpha_{1} - 1} \right)^{*} C_{x}^{2} } \right. + \left( {\frac{{\alpha_{1} }}{{\overline{X}}}} \right)\rho_{yx} C_{y} C_{x} \overline{Y}\overline{Z} - \left. {\overline{Y}\alpha_{1}^{2} C_{x}^{2} } \right\} - f_{1} \left\{ {\alpha_{1} \overline{Y}} \right.\left( {\alpha_{1} + 1} \right)C_{x}^{2} \hfill \\ \left. {{ + }\alpha_{1} \rho_{yx} C_{y} C_{x} \overline{Y}} \right\} + f\left\{ {\frac{1}{{4\overline{Z}}}\overline{Y}(1 - \alpha_{2} )} \right.\left( {1 + \overline{Z}\left( {2 - \alpha_{2} } \right)} \right)C_{z}^{2} \hfill \\ \, - \frac{1}{2}\left( {\alpha_{2} - 1} \right)\rho_{yz} C_{y} C_{z} \overline{Y} + \frac{1}{2}\alpha_{1} \overline{Y}(1 - \alpha_{2} )\rho_{xz} C_{x} C_{z} + \left. {\frac{1}{2}\alpha_{1} \overline{Y}\left( {1 - \alpha_{2} } \right)\rho_{xz} C_{x} C_{z} } \right\} \hfill \\ \, - f^{\prime} \left\{ {\frac{1}{4}\overline{Y}(} \right.1 - \alpha_{2}^{2} )C_{z}^{2} + \frac{1}{2}(1 - \alpha_{2} )\rho_{yz} C_{y} C_{z} \overline{Y} - \frac{1}{2}\alpha_{1} \overline{Y}(1 - \alpha_{2} )\rho_{xz} C_{x} C_{z} \hfill \\ \, + \frac{1}{2}\alpha_{1} \overline{Y} \left( {1 - \alpha_{2} } \right)\rho_{xz} C_{x} C_{z} - \left. {\frac{1}{4}\overline{Y}\left( {\alpha_{2}^{2} - 3\alpha_{2} + 2} \right)\rho_{yz} C_{y} C_{z} } \right\} \hfill \\ \end{gathered}$$
(19)
$$\begin{gathered} B\left( {T_{mj}^{**} } \right) = f_{1}^{*} \left\{ {\frac{ - 1}{8}\overline{Y}\left( {\alpha_{4}^{2} + 3\alpha_{4} - 3} \right)C_{x}^{2} } \right. + \left( {\frac{{1 - \, \alpha_{4} }}{{2 \, \overline{X}}}} \right)\rho_{yx} C_{y} C_{x} \overline{Y}\overline{Z} + \left. {\frac{1}{4}\overline{Y}\left( {1 - \alpha_{4} } \right)^{2} C_{x}^{2} } \right\} \hfill \\ { + }f_{1} \left\{ {\frac{1}{4}\overline{Y}\left( {\alpha_{4}^{2} - \, 4\alpha_{4} + 3 \, } \right)C_{x}^{2} + \left. {\left( {\frac{{\alpha_{4} - 1}}{2}} \right)\rho_{yx} C_{y} C_{x} \overline{Y}} \right\}} \right. - f\left\{ {\overline{Y}\alpha_{3} \left( {\alpha_{3} + 1} \right)C_{z}^{2} } \right. \hfill \\ \, + \alpha_{3} \rho_{yz} C_{y} C_{z} \overline{Y} + \frac{1}{2}\overline{Y}\alpha_{3} \left( {\alpha_{4} - 1} \right)\rho_{xz} C_{x} C_{z} + \left. {\frac{1}{2}\overline{Y}\alpha_{3} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} } \right\} \hfill \\ \, + f^{\prime} \left\{ {\frac{1}{2}\overline{Y}\alpha_{3} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} + \overline{Y}\alpha_{3} \left( {\alpha_{3} - 1} \right)C_{z}^{2} + \alpha_{3} \rho_{yz} C_{y} C_{z} \overline{Y}} \right\} \hfill \\ \end{gathered}$$
(20)

Theorem 2:

The \(MSE^{\prime}s\) of the estimators \(T_{1}\) and \(T_{2}\) up to the first order of approximation are obtained as

$$MSE(T_{1} ) = \Phi_{1}^{2} MSE(T_{u} ) + (1 - \Phi_{1} )^{2} MSE(T_{mj}^{*} )$$
(21)

and

$$MSE(T_{2} ) = \Phi_{2}^{2} MSE(T_{u} ) + (1 - \Phi_{2} )^{2} MSE(T_{mj}^{**} )$$
(22)

where

$$MSE(T_{u} ) = \overline{Y}^{2} \left[ {f_{2}^{*} \left( {C_{y}^{2} + \frac{{C_{z}^{2} }}{4} - \rho_{yz} C_{y} C_{z} } \right) + f_{2}{\prime} \left( { - \frac{{C_{z}^{2} }}{4} + \rho_{yz} C_{y} C_{z} } \right)} \right]$$
(23)
$$\begin{gathered} MSE\left( {T_{mj}^{*} } \right) = \overline{Y}^{2} \left[ {f_{1} \left( {C_{y}^{2} } \right. + \alpha_{1}^{2} C_{x}^{2} - \left. {2\alpha_{1} \rho_{yx} C_{y} C_{x} } \right) + } \right.f_{1}^{*} \left( {\alpha_{1}^{2} C_{x}^{2} } \right. - 2\alpha_{1}^{2} C_{x}^{2} \left. { + 2\alpha_{1} \rho_{yx} C_{y} C_{x} } \right) \hfill \\ \, + f\left( {\frac{{\left( {\alpha_{2}^{2} + 1 - 2\alpha_{2} } \right)}}{4}C_{x}^{2} } \right. + \alpha_{1} \left( {\alpha_{2} - 1} \right)\rho_{xz} C_{x} C_{z} - \alpha_{1} \left( {\alpha_{2} - 1} \right)\rho_{xz} C_{x} C_{z} \hfill \\ \, + \left. {\left( {\alpha_{2} - 1} \right)\rho_{yz} C_{y} C_{z} } \right) + f^{\prime} \left( {\frac{{\left( {1 + \alpha_{2}^{2} - 2\alpha_{2} } \right)}}{{4\overline{Y}}}C_{z}^{2} } \right. - \frac{1}{2}\left( {\alpha_{2}^{2} - 2\alpha_{2} + 1} \right)\rho_{yx} C_{y} C_{x} \hfill \\ \, + \alpha_{1} \left( {1 - \alpha_{2} } \right)\rho_{xz} C_{x} C_{z} - \alpha_{1} \left( {1 - \alpha_{2} } \right)\rho_{xz} C_{x} C_{z} \left. {\left. { + \left( {1 - \alpha_{2} } \right)\rho_{yz} C_{y} C_{z} } \right)} \right] \hfill \\ \end{gathered}$$
(24)
$$\begin{gathered} MSE\left( {T_{mj}^{**} } \right) = f_{1} \left\{ {\overline{Y}^{2} C_{y}^{2} } \right. + \frac{{\overline{Y}^{2} \left( {1 + \alpha_{4}^{2} - 2\alpha_{4} } \right)}}{{4 }}C_{x}^{2} - \left. {\overline{Y}^{2} \left( {1 - \alpha_{4} } \right)\rho_{yx} C_{y} C_{x} } \right\} + \hfill \\ \, f_{1}^{*} \left\{ {\frac{{\overline{Y}^{2} \left( {1 + \alpha_{4}^{2} - 2\alpha_{4} } \right)}}{{4 }}C_{x}^{2} } \right. - \frac{{\overline{Y}^{2} \left( {1 - 2\alpha_{4} + \alpha_{4}^{2} } \right)}}{2 \, }C_{x}^{2} \hfill \\ \, + \left. {\overline{Y}^{2} \left( {1 - \alpha_{4} } \right)\rho_{yx} C_{y} C_{x} } \right\} + f\left\{ {\alpha_{3}^{2} \overline{Y}^{2} C_{x}^{2} } \right. - \alpha_{3} \overline{Y}^{2} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} \hfill \\ \, + \alpha_{3} \overline{Y}^{2} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} - \left. {2\alpha_{3} \overline{Y}^{2} \rho_{yz} C_{y} C_{z} } \right\} + f^{\prime} \left\{ {\alpha_{3}^{2} \overline{Y}^{2} C_{z}^{2} } \right. \hfill \\ \, - 2\alpha_{3}^{2} \overline{Y}^{2} \rho_{yx} C_{y} C_{x} + \alpha_{3} \overline{Y}^{2} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} - \alpha_{3} \overline{Y}^{2} \left( {1 - \alpha_{4} } \right)\rho_{xz} C_{x} C_{z} + \left. {2\alpha_{3} \overline{Y}^{2} \rho_{yz} C_{y} C_{z} } \right\} \hfill \\ \hfill \\ \hfill \\ \end{gathered}$$
(25)

Since the estimator \(T_{u}\) and the estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\) are based on two non-overlapping samples of sizes \(u\) and \(m\), respectively, the covariance-type terms are of order \(N^{ - 1}\) and are ignored for large population sizes, i.e. \(C\left( {T_{u} ,T_{mj}^{*} } \right) = C\left( {T_{u} ,T_{mj}^{**} } \right) = 0\).

3 The Minimum Mean Square Errors of the Estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\)

The mean square errors of the estimators \(T_{1}\) and \(T_{2}\) derived in Eqs. (21) and (22) are functions of the unknown constants \(\Phi_{1}\) and \(\Phi_{2}\). Minimizing them with respect to \(\Phi_{1}\) and \(\Phi_{2}\), the optimum values of \(\Phi_{1}\) and \(\Phi_{2}\), say \(\mathop {\Phi_{1} }\nolimits_{opt}\) and \(\mathop {\Phi_{2} }\nolimits_{opt}\), are obtained as

$$\mathop {\Phi_{1} }\nolimits_{opt} = \frac{{MSE\left( {T_{mj}^{*} } \right)}}{{MSE\left( {T_{mj}^{*} } \right) + MSE(T_{u} )}}$$
(26)

and

$$\mathop {\Phi_{2} }\nolimits_{opt} = \frac{{MSE\left( {T_{mj}^{**} } \right)}}{{MSE\left( {T_{mj}^{**} } \right) + MSE(T_{u} )}}$$
(27)

Substituting the values of \(\mathop {\Phi_{1} }\nolimits_{opt}\) and \(\mathop {\Phi_{2} }\nolimits_{opt}\) from Eqs. (26) and (27) yields the optimum (minimum) mean square errors of the estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\):

$$MSE\left( {T_{mj}^{*} } \right)_{opt} = \frac{{MSE(T_{mj}^{*} )*MSE(T_{u} )}}{{MSE(T_{mj}^{*} ) + MSE(T_{u} )}}$$
(28)

and

$$MSE\left( {T_{mj}^{**} } \right)_{opt} = \frac{{MSE(T_{mj}^{**} )*MSE(T_{u} )}}{{MSE(T_{mj}^{**} ) + MSE(T_{u} )}}$$
(29)
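
In code, Eqs. (26)–(29) reduce to the following sketch, where mse_Tm stands for the MSE of either \(T_{mj}^{*}\) or \(T_{mj}^{**}\) (the function name is illustrative):

```python
def phi_opt_and_min_mse(mse_Tm, mse_Tu):
    # Optimum weight (Eqs. 26-27) and the resulting minimum MSE (Eqs. 28-29)
    phi_opt = mse_Tm / (mse_Tm + mse_Tu)
    mse_min = mse_Tm * mse_Tu / (mse_Tm + mse_Tu)
    return phi_opt, mse_min
```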

4 Efficiency Comparisons

The \(PRE^{\prime}s\) of the suggested estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\) are calculated with respect to the estimator \(\tau\), which is defined for the complete-response scenario on both occasions, to assess their effectiveness under random non-response:

$$\tau = \psi \tau_{u} + \left( {1 - \psi } \right)\tau_{m}$$
(30)

where \(\tau_{u} = \overline{y}_{u}\), \(\tau_{m} = \overline{y}_{m} \left( {\frac{{\overline{x}_{n} }}{{\overline{x}_{m} }}} \right)\) and \(\psi \left( {0 \le \psi \le 1} \right)\) is an unknown constant to be determined by minimizing the \(MSE\) of the estimator \(\tau\).

The minimum \(MSE\) of the estimator \(\tau\) up to the first order of approximation is obtained as

$$MSE\left( \tau \right)_{Min} = \frac{{V\left( {\tau_{u} } \right).MSE\left( {\tau_{m} } \right)}}{{V\left( {\tau_{u} } \right) + MSE\left( {\tau_{m} } \right)}}$$

where

\(V\left( {\tau_{u} } \right) = f_{2} C_{y}^{2} \overline{Y}^{2}\) and \(MSE\left( {\tau_{m} } \right) = \overline{Y}^{2} \left\{ {\left( {C_{y}^{2} + C_{x}^{2} - 2\rho_{yx} C_{y} C_{x} } \right) - f\left( {C_{x}^{2} - 2\rho_{yx} C_{y} C_{x} } \right)} \right\}\).

Thus, the expressions of percent relative efficiencies \(E^{*}\) and \(E^{**}\) of the estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\) with respect to estimator \(\tau\) under the optimality conditions are given as.

\(PRE(E^{*} ) = \frac{{MSE\left( \tau \right)_{Min} }}{{MSE(T_{mj}^{*} )_{opt} }} \times 100\) and \(PRE(E^{**} ) = \frac{{MSE\left( \tau \right)_{Min} }}{{MSE(T_{mj}^{**} )_{opt} }} \times 100\).
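
A short sketch of how \(MSE\left( \tau \right)_{Min}\) and the percent relative efficiencies can be computed from the expressions above (illustrative function names; inputs follow the notation of Section 2):

```python
def min_mse_tau(Ybar, Cy2, Cx2, rho_yx, f, f2):
    # V(tau_u), MSE(tau_m) and the minimum MSE of tau as given above
    Cy, Cx = Cy2**0.5, Cx2**0.5
    V_tau_u = f2 * Cy2 * Ybar**2
    MSE_tau_m = Ybar**2 * ((Cy2 + Cx2 - 2*rho_yx*Cy*Cx) - f*(Cx2 - 2*rho_yx*Cy*Cx))
    return V_tau_u * MSE_tau_m / (V_tau_u + MSE_tau_m)

def pre(mse_tau_min, mse_T_opt):
    # Percent relative efficiency of a proposed estimator with respect to tau
    return 100.0 * mse_tau_min / mse_T_opt
```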

To demonstrate the efficacy of the suggested classes of estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\), we use two real and two simulated datasets.

Population 1 [Source: Sukhatme and Sukhatme (1970, page-185)].

Here, \(y\) and \(x\) are the area under wheat in 1937 and 1936. \(z\) represents the total cultivated area in 1931.

\(N = 34,\) \(\overline{Y} = 201.4118\) acres, \(\overline{X} = 218041\) acres, \(\overline{Z} = 765\) acres, \(\rho_{yx} = 0.929929,\)\(\rho_{xz} = 0.8307,\)\(\rho_{yz} = 0.8992,\)\(C_{x}^{2} = 0.5895,\)\(C_{y}^{2} = 0.57078,\)\(C_{z}^{2} = 0.6191,\)\(\, \beta_{x} = 0.197.\)

Population 2 [Source: Murthy (1967, page-399)].

Here, \(y\) and \(x\) are the area under wheat in 1964 and 1963. \(z\) represents the total cultivated area in 1961.

$$\begin{gathered} N = 34, \, \overline{Y} = 199.44 \, acres, \, \overline{X} = 208.89041 \, acres, \, \overline{Z} = 764.59 \, acres, \, \rho_{yx} = 0.9801, \hfill \\ \rho_{xz} = 0.9097, \, \rho_{yz} = 0.9043, \, C_{x}^{2} = 0.5191, \, C_{y}^{2} = 0.5673, \, C_{z}^{2} = 0.3527, \, \beta_{x} = 0.394. \hfill \\ \end{gathered}$$

5 Simulation Study

Using the statistical computing software \(R\), we carried out simulation studies relevant to our theoretical results. For this purpose, two data sets (Population 3 and Population 4) were generated from the normal (Gaussian) distribution with given parameters for the study and auxiliary variables; the population parameters for the generated datasets are given below.
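
Although the reported simulations were carried out in \(R\), comparable populations can be sketched as follows; the standard deviations are back-calculated from the stated means and squared coefficients of variation, and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(123)  # arbitrary seed

def simulate_population(N, means, cv2, rho_yx, rho_xz, rho_yz):
    # Trivariate normal population (y, x, z); sd = mean * sqrt(C^2)
    mu = np.asarray(means, dtype=float)
    sd = mu * np.sqrt(np.asarray(cv2, dtype=float))
    R = np.array([[1.0,    rho_yx, rho_yz],
                  [rho_yx, 1.0,    rho_xz],
                  [rho_yz, rho_xz, 1.0]])
    cov = np.outer(sd, sd) * R
    return rng.multivariate_normal(mu, cov, size=N)  # rows are (y_i, x_i, z_i)

# parameters of Population 3 as listed below
pop3 = simulate_population(N=40, means=[205.89, 186.55, 569.9],
                           cv2=[0.553997, 0.497998, 0.255996],
                           rho_yx=0.877, rho_xz=0.76, rho_yz=0.867)
```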

Population 3

\(N = 40, \, \overline{Y} = 205.89 \, acres, \, \overline{X} = 186.55 \, acres, \, \overline{Z} = 569.9 \, acres, \, \rho_{yx} = 0.877,\) \(\rho_{xz} = 0.76, \, \rho_{yz} = 0.867, \, C_{x}^{2} = 0.497998, \, C_{y}^{2} = 0.553997, \, C_{z}^{2} = 0.255996, \, \beta_{x} = 0.683.\)

Population 4

\(N = 50, \, \overline{Y} = 284.73 \, acres, \, \overline{X} = 298.54 \, acres, \, \overline{Z} = 700.98 \, acres, \, \rho_{yx} = 0.91,\)\(\rho_{xz} = 0.855, \, \rho_{yz} = 0.91, \, C_{x}^{2} = 0.63246, \, C_{y}^{2} = 0.64, \, C_{z}^{2} = 0.386996, \, \beta_{x} = 0.519.\)

6 Results

7 Interpretations of Empirical Results

The following interpretations may be drawn from Tables 3–26:

  (1) From Tables 3–26 it is evident that:

  (a) For fixed values of \(p_{2}\), the percent relative efficiencies \(E^{*}\) and \(E^{{**}}\) increase as the sample sizes \(n^{\prime} , \, n, \, m\) and \(u^{\prime}\) decrease. This trend is very useful in terms of increased precision of the estimates at reduced survey cost.

  (b) For fixed values of \(n^{\prime} , \, n, \, m\) and \(u^{\prime}\), the percent relative efficiencies \(E^{*}\) and \(E^{{**}}\) decrease as the non-response probability \(p_{2}\) increases. These relations demonstrate that the probability of non-response on the current occasion plays an important role in the effectiveness of the estimation procedure.

Table 3 Percentage relative efficiencies of the estimator \(T_{m1}^{*}\) Population 1
Table 4 Percentage relative efficiencies of the estimator \(T_{m1}^{*}\) Population 2
Table 5 Percentage relative efficiencies of the estimator \(T_{m1}^{*}\) Population 3
Table 6 Percentage relative efficiencies of the estimator \(T_{m1}^{*}\) Population 4
Table 7 Percentage relative efficiencies of the estimator \(T_{m2}^{*}\) Population 1
Table 8 Percentage relative efficiencies of the estimator \(T_{m2}^{*}\) Population 2
Table 9 Percentage relative efficiencies of the estimator \(T_{m2}^{*}\) Population 3
Table 10 Percentage relative efficiencies of the estimator \(T_{m2}^{*}\) Population 4
Table 11 Percentage relative efficiencies of the estimator \(T_{m3}^{*}\) Population 1
Table 12 Percentage relative efficiencies of the estimator \(T_{m3}^{*}\) Population 2
Table 13 Percentage relative efficiencies of the estimator \(T_{m3}^{*}\) Population 3
Table 14 Percentage relative efficiencies of the estimator \(T_{m3}^{*}\) Population 4
Table 15 Percentage relative efficiencies of the estimator \(T_{m1}^{**}\) Population 1
Table 16 Percentage relative efficiencies of the estimator \(T_{m1}^{**}\) Population 2
Table 17 Percentage relative efficiencies of the estimator \(T_{m1}^{**}\) Population 3
Table 18 Percentage relative efficiencies of the estimator \(T_{m1}^{**}\) Population 4
Table 19 Percentage relative efficiencies of the estimator \(T_{m2}^{**}\) Population 1
Table 20 Percentage relative efficiencies of the estimator \(T_{m2}^{**}\) Population 2
Table 21 Percentage relative efficiencies of the estimator \(T_{m2}^{**}\) Population 3
Table 22 Percentage relative efficiencies of the estimator \(T_{m2}^{**}\) Population 4
Table 23 Percentage relative efficiencies of the estimator \(T_{m3}^{**}\) Population 1
Table 24 Percentage relative efficiencies of the estimator \(T_{m3}^{**}\) Population 2
Table 25 Percentage relative efficiencies of the estimator \(T_{m3}^{**}\) Population 3
Table 26 Percentage relative efficiencies of the estimator \(T_{m3}^{**}\) Population 4

8 Conclusions

From the above discussion, we may conclude that the proposed classes of estimators \(T_{mj}^{*}\) and \(T_{mj}^{**}\) contribute significantly to dealing with different realistic situations of random non-response while estimating the population mean on the current (second) occasion in successive sampling under a two-phase set-up. The proposed classes of estimators remain highly rewarding in comparison with the estimator \(\tau\) even as non-response increases on either occasion. Hence, the proposed classes of estimators may be recommended to survey practitioners for practical application.