Introduction

In the sampling literature it is well established that the efficiency of an estimator of a population parameter of interest can be increased by using auxiliary information on an auxiliary variable x that is highly correlated with the study variable y. Many authors in the survey sampling literature have suggested estimators based on such auxiliary information. However, in many situations of practical importance, instead of an auxiliary variable x there exists an attribute (say, \({\upphi }\)) which is highly correlated with the study variable y. In these situations, by taking advantage of the point bi-serial correlation (see [1]) between the study variable y and the auxiliary attribute \({\upphi }\), efficient estimators of the population parameter of interest can be constructed. Several authors, including [2–10], have paid attention to improved estimation using an auxiliary attribute.

Let a sample of size n be drawn by SRSWOR from a population of size N. Further, let \(\hbox {y}_\mathrm{i}\) and \({\upphi }_\mathrm{i}\) denote the observations on the variable y and the attribute \({\upphi }\), respectively, for the ith unit (i = 1, 2, 3, ..., N). The attribute \({\upphi }\) takes only the two values 0 and 1: \({\upphi }_\mathrm{i} =1\) if the ith unit of the population possesses the attribute and \({\upphi }_\mathrm{i} =0\) otherwise. The variance of \(\hbox {s}_\mathrm{y}^2\), the usual unbiased estimator of \(\hbox {S}_\mathrm{y}^2\), is given by

$$\begin{aligned} \hbox {V}\,(\hbox {S}_{\mathrm{y}}^{2} )=\frac{\hbox {S}_\mathrm{y}^4}{\hbox {n}}({\uplambda }_{40} -1) \end{aligned}$$
(1)

where

$$\begin{aligned} {\uplambda }_{\mathrm{rq}} =\frac{{\upmu }_{\mathrm{rq}} }{{\upmu }_{20}^{\mathrm{r}/2} {\upmu }_{02}^{\mathrm{q}/2}},\quad {\upmu }_{\mathrm{rq}} =\frac{\sum \nolimits _{\mathrm{i}=1}^\mathrm{N} (\hbox {y}_\mathrm{i} -\overline{\hbox {Y}} )^{\mathrm{r}}({\upphi }_\mathrm{i} -\hbox {P})^{\mathrm{q}}}{\hbox {N}-1} \end{aligned}$$
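For concreteness, the standardized moments \({\uplambda }_{\mathrm{rq}}\) can be computed directly from population data. The following sketch assumes NumPy arrays `y` and `phi` holding the full population values, with `phi` coded 0/1; it simply implements the definitions above.

```python
import numpy as np

def lambda_rq(y, phi, r, q):
    """Standardized bivariate moment lambda_{rq} = mu_{rq} / (mu_20^{r/2} mu_02^{q/2}),
    with mu_{rq} = sum_i (y_i - Ybar)^r (phi_i - P)^q / (N - 1)."""
    y = np.asarray(y, dtype=float)
    phi = np.asarray(phi, dtype=float)
    N = len(y)
    dy = y - y.mean()        # deviations from the population mean Ybar
    dp = phi - phi.mean()    # deviations from the proportion P
    mu = lambda r_, q_: np.sum(dy**r_ * dp**q_) / (N - 1)
    return mu(r, q) / (mu(2, 0)**(r / 2) * mu(0, 2)**(q / 2))
```

By construction \({\uplambda }_{20} = {\uplambda }_{02} = 1\), which gives a quick sanity check on any implementation.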

In this paper a family of estimators is proposed for the population variance \(\hbox {S}_\mathrm{y}^2\) when the auxiliary information is available in the form of an attribute. For the main results we confine ourselves to the SRSWOR sampling scheme, ignoring the finite population correction.

Estimators in Literature

In order to estimate the population variance of the study variable y, assuming knowledge of the population proportion P, [11] proposed the following estimators.

$$\begin{aligned}&\displaystyle \hbox {t}_1 = \hbox {s}_\mathrm{y}^2 \frac{\hbox {S}_{\upphi }^2}{\hbox {s}_{\upphi }^2 }&\end{aligned}$$
(2)
$$\begin{aligned}&\displaystyle \hbox {t}_2 = \hbox {s}_\mathrm{y}^2 +\hbox {b}_{\upphi } (\hbox {S}_{\upphi }^2 -\hbox {s}_{\upphi }^2 )&\end{aligned}$$
(3)
$$\begin{aligned}&\displaystyle \hbox {t}_3 = \hbox {s}_\mathrm{y}^2 \exp \left[ {\frac{\hbox {S}_{\upphi }^2 -\hbox {s}_{\upphi }^2 }{\hbox {S}_{\upphi }^2 +\hbox {s}_{\upphi }^2}} \right]&\end{aligned}$$
(4)
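As a quick illustration of (2)–(4), the three estimators can be evaluated from sample summaries. The numbers below are hypothetical; in practice \(\hbox {s}_\mathrm{y}^2\) and \(\hbox {s}_{\upphi }^2\) come from the sample, while \(\hbox {S}_{\upphi }^2\) and \(\hbox {b}_{\upphi }\) are assumed known.

```python
import math

# Hypothetical sample and population summaries (for illustration only)
s2_y, s2_phi = 4.5, 0.20   # sample variances of y and of the attribute
S2_phi = 0.25              # known population variance of the attribute
b_phi = 1.1                # regression coefficient (assumed known)

t1 = s2_y * S2_phi / s2_phi                                  # ratio, Eq. (2)
t2 = s2_y + b_phi * (S2_phi - s2_phi)                        # regression, Eq. (3)
t3 = s2_y * math.exp((S2_phi - s2_phi) / (S2_phi + s2_phi))  # exp ratio, Eq. (4)
```

Since the sample attribute variance here underestimates \(\hbox {S}_{\upphi }^2\), all three estimators adjust \(\hbox {s}_\mathrm{y}^2\) upward, the ratio form most aggressively.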

The MSE expression of the estimator \(\hbox {t}_1\) and variance of \(\hbox {t}_2\) are given, respectively, by

$$\begin{aligned}&\displaystyle \hbox {MSE}(\hbox {t}_1)=\frac{\hbox {S}_\mathrm{y}^4 \left[ {({\uplambda }_{40} -1)+({\uplambda }_{04} -1)-2({\uplambda }_{22} -1)} \right] }{\hbox {n}}&\end{aligned}$$
(5)
$$\begin{aligned}&\displaystyle \hbox {V}(\hbox {t}_2 )=\frac{1}{\hbox {n}}\left[ {\hbox {S}_\mathrm{y}^4 ({\uplambda }_{40} -1)+\hbox {b}_{\upphi }^2 \hbox {S}_{\upphi }^4 ({\uplambda }_{04} -1)-\hbox {2b}_{\upphi } \hbox {S}_\mathrm{y}^2 \hbox {S}_{\upphi }^2 ({\uplambda }_{22} -1)} \right]&\end{aligned}$$
(6)

On differentiating (6) with respect to \(\hbox {b}_{\upphi } \) and equating to zero we obtain

$$\begin{aligned} \hbox {b}_{\upphi } =\frac{\hbox {S}_\mathrm{y}^2 ({\uplambda }_{22} -1)}{\hbox {S}_{\upphi } ^2 ({\uplambda }_{04} -1)} \end{aligned}$$
(7)

Substituting the optimum value of \(\hbox {b}_{\upphi }\) in (6), we get the optimum variance of estimator \(\hbox {t}_2\), as

$$\begin{aligned} \hbox {V}(\hbox {t}_2 )_{\min } =\frac{\hbox {S}_\mathrm{y}^4 }{\hbox {n}}\left[ {({\uplambda }_{40} -1)-\frac{({\uplambda }_{22} -1)^{2}}{({\uplambda } _{04} -1)}} \right] \end{aligned}$$
(8)

The MSE expression of the estimator \(\hbox {t}_3\) is given by

$$\begin{aligned} \hbox {MSE}(\hbox {t}_3)=\frac{\hbox {S}_\mathrm{y}^4 }{\hbox {n}}\left[ {({\uplambda }_{40} -1)+\frac{({\uplambda }_{04} -1)}{4}-({\uplambda }_{22} -1)} \right] \end{aligned}$$
(9)
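Numerically comparing (5), (8) and (9) for an illustrative (hypothetical) set of moment values shows the familiar ordering: since (5) and (9) are the MSE (14)-type quadratic evaluated at fixed weights while (8) is its minimum, the regression optimum can never be worse.

```python
# First-order MSEs (5), (8), (9) for illustrative (hypothetical) moment values
S4_y, n = 16.0, 30                 # S_y^4 and sample size (assumed)
l40, l04, l22 = 3.8, 6.1, 4.0      # lambda_40, lambda_04, lambda_22 (assumed)

mse_t1 = S4_y / n * ((l40 - 1) + (l04 - 1) - 2 * (l22 - 1))      # Eq. (5)
var_t2_min = S4_y / n * ((l40 - 1) - (l22 - 1)**2 / (l04 - 1))   # Eq. (8)
mse_t3 = S4_y / n * ((l40 - 1) + (l04 - 1) / 4 - (l22 - 1))      # Eq. (9)
```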

Following [12], [13] proposed the following variance estimators using known values of some population parameters:

$$\begin{aligned}&\displaystyle \hbox {t}_{\mathrm{KC1}} = \hbox {s}_\mathrm{y}^2 \left( {\frac{\hbox {S}_{\upphi }^2 +\hbox {C}_\mathrm{p}}{\hbox {s}_{\upphi }^2 +\hbox {C}_\mathrm{p}}} \right)&\end{aligned}$$
(10)
$$\begin{aligned}&\displaystyle \hbox {t}_{\mathrm{KC2}} = \hbox {s}_\mathrm{y}^2 \left( {\frac{\hbox {S}_{\upphi }^2 +{\upbeta }_{2{\upphi }}}{\hbox {s}_{\upphi }^2 +{\upbeta }_{2{\upphi }}}} \right)&\end{aligned}$$
(11)
$$\begin{aligned}&\displaystyle \hbox {t}_{\mathrm{KC3}} = \hbox {s}_\mathrm{y}^2 \left( {\frac{\hbox {S}_{\upphi }^2 {\upbeta }_{2{\upphi } } +\hbox {C}_\mathrm{p} }{\hbox {s}_{\upphi }^2 {\upbeta }_{2{\upphi } } +\hbox {C}_\mathrm{p}}} \right)&\end{aligned}$$
(12)
$$\begin{aligned}&\displaystyle \hbox {t}_{\mathrm{KC4}} = \hbox {s}_\mathrm{y}^2 \left( {\frac{\hbox {S}_{\upphi }^2 \hbox {C}_\mathrm{p} +{\upbeta }_{2{\upphi } } }{\hbox {s}_{\upphi }^2 \hbox {C}_\mathrm{p} +{\upbeta }_{2{\upphi }} }} \right)&\end{aligned}$$
(13)

where \(\hbox {s}_\mathrm{y}^2\) and \(\hbox {s}_{\upphi }^2 \) are unbiased estimators of the population variances \(\hbox {S}_\mathrm{y}^2\) and \(\hbox {S}_{\upphi }^2\), respectively.

To obtain the bias and MSE, we write

\(\hbox {s}_\mathrm{y}^2 =\hbox {S}_\mathrm{y}^2 \left( {1+\hbox {e}_0 } \right) \) and \(\hbox {s}_{\upphi }^2 =\hbox {S}_{\upphi }^2 \left( {1+\hbox {e}_1} \right) ,\)

such that \(\hbox {E}\left( {\hbox {e}_0} \right) =\hbox {E}\left( {\hbox {e}_1} \right) =0\),

\(\hbox {E}\left( {\hbox {e}_0^2 } \right) =\frac{\left( {{\uplambda } _{40} -1} \right) }{\hbox {n}},\, \hbox {E}\left( {\hbox {e}_1^2 } \right) =\frac{\left( {{\uplambda }_{04} -1} \right) }{\hbox {n}}\), \(\hbox {E}\left( {\hbox {e}_0 \hbox {e}_1 } \right) =\frac{\left( {{\uplambda }_{22} -1} \right) }{\hbox {n}}\)

and \({\upkappa }_{\mathrm{pb}} ={\uprho }_{\mathrm{pb}} \frac{\hbox {C}_\mathrm{y}}{\hbox {C}_\mathrm{p}}.\)
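The moment results above can be checked by simulation. The sketch below draws repeated SRSWOR samples from a synthetic population (uniform values, chosen arbitrarily) and compares the Monte Carlo mean of \(\hbox {e}_0^2\) with the first-order value \(({\uplambda }_{40} -1)/\hbox {n}\); the population is large relative to n, so the ignored finite population correction is negligible.

```python
import numpy as np

rng = np.random.default_rng(7)
N, n, reps = 20_000, 100, 3000
y = rng.uniform(0.0, 1.0, size=N)          # synthetic population (assumed)

S2y = y.var(ddof=1)                        # population variance S_y^2
mu2 = np.sum((y - y.mean())**2) / (N - 1)
mu4 = np.sum((y - y.mean())**4) / (N - 1)
lam40 = mu4 / mu2**2                       # lambda_40 of this population

e0_sq = np.empty(reps)
for i in range(reps):
    # SRSWOR sample variance s_y^2, then e0 = (s_y^2 - S_y^2)/S_y^2
    s2 = rng.choice(y, size=n, replace=False).var(ddof=1)
    e0_sq[i] = ((s2 - S2y) / S2y)**2

ratio = e0_sq.mean() / ((lam40 - 1) / n)   # should be close to 1
```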

The MSE expressions of \(\hbox {t}_{\mathrm{KC}_\mathrm{i}}\) (i = 1, 2, 3, 4), to the first order of approximation, are given by

$$\begin{aligned} \hbox {MSE}(\hbox {t}_{\mathrm{KC}_\mathrm{i}} )=\frac{\hbox {S}_\mathrm{y}^4 }{\hbox {n}}\left[ {({\uplambda }_{40} -1)+\hbox {w}_\mathrm{i}^2 ({\uplambda }_{04} -1)-\hbox {2w}_\mathrm{i} ({\uplambda }_{22} -1)} \right] ,\, \left( {\hbox {i}=1,2,3,4} \right) \end{aligned}$$
(14)

where

$$\begin{aligned} \hbox {w}_1 =\frac{\hbox {S}_{\upphi }^2}{\hbox {S}_{\upphi }^2 +\hbox {C}_\mathrm{p}}, \quad \hbox {w}_2 =\frac{\hbox {S}_{\upphi }^2 }{\hbox {S}_{\upphi }^2 +{\upbeta }_{2{\upphi }}}, \quad \hbox {w}_3 =\frac{\hbox {S}_{\upphi }^2 {\upbeta }_{2{\upphi }}}{\hbox {S}_{\upphi }^2 {\upbeta }_{2{\upphi } } +\hbox {C}_\mathrm{p}}, \quad \hbox {w}_4 =\frac{\hbox {S}_{\upphi }^2 \hbox {C}_\mathrm{p}}{\hbox {S}_{\upphi }^2 \hbox {C}_\mathrm{p} +{\upbeta }_{2{\upphi }}} \end{aligned}$$
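A sketch of the weights above and the resulting MSEs from (14), for illustrative (hypothetical) parameter values. Since each \(\hbox {t}_{\mathrm{KC}_\mathrm{i}}\) corresponds to one fixed weight \(\hbox {w}_\mathrm{i}\), none of them can beat the regression optimum (8), which minimizes the same quadratic over all weights.

```python
# Weights w_i and first-order MSEs of t_KCi, Eq. (14);
# all parameter values below are hypothetical.
S2_phi, Cp, beta2 = 0.25, 0.9, 1.1   # assumed attribute parameters
S4_y, n = 16.0, 30
l40, l04, l22 = 3.8, 6.1, 4.0        # assumed standardized moments

w = [S2_phi / (S2_phi + Cp),
     S2_phi / (S2_phi + beta2),
     S2_phi * beta2 / (S2_phi * beta2 + Cp),
     S2_phi * Cp / (S2_phi * Cp + beta2)]

mse_kc = [S4_y / n * ((l40 - 1) + wi**2 * (l04 - 1) - 2 * wi * (l22 - 1))
          for wi in w]
```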

Following [5], Singh and Malik proposed the following variance estimator.

$$\begin{aligned} \hbox {t}_\mathrm{S} =\frac{\hbox {s}_\mathrm{y}^2 +\hbox {b}_{\upphi } (\hbox {S}_{\upphi }^2 -\hbox {s}_{\upphi }^2 )}{(\hbox {n}_1 \hbox {s}_{\upphi }^2 +\hbox {n}_2 )}(\hbox {n}_1 \hbox {S}_{\upphi }^2 +\hbox {n}_2 ) \end{aligned}$$
(15)

where \(\hbox {n}_1,\,\hbox {n}_2\) are either real numbers or the functions of the known parameters of attribute such as \(\hbox {C}_\mathrm{p},{\uprho }_{\mathrm{pb}},\, {\upbeta } _{2{\upphi }}\) and \({\upkappa }_{\mathrm{pb}}\).

The MSE expression of \(\hbox {t}_\mathrm{s}\) to the first order of approximation is given by

$$\begin{aligned} \hbox {MSE}(\hbox {t}_\mathrm{s})= & {} \frac{1}{\hbox {n}}\left[ {\hbox {S}_\mathrm{y}^4 ({\uplambda }_{40} -1)+({\uplambda } _{04} -1)\left\{ {\hbox {b}_{\upphi }^2 \hbox {S}_{\upphi }^4 +\hbox {A}_1^2 \hbox {S}_\mathrm{y}^4 +\hbox {2A}_1 \hbox {b}_{\upphi } \hbox {S}_\mathrm{y}^2 \hbox {S}_{\upphi }^2} \right\} } \right. \nonumber \\&\left. -\hbox {2S}_\mathrm{y}^2 ({\uplambda }_{22} -1)\left\{ {\hbox {b}_{\upphi } \hbox {S}_{\upphi }^2 +\hbox {A}_1 \hbox {S}_\mathrm{y}^2} \right\} \right] \end{aligned}$$
(16)

where \(\hbox {A}_1 =\frac{\hbox {n}_1 \hbox {S}_{\upphi }^2}{\hbox {n}_1 \hbox {S}_{\upphi }^2 +\hbox {n}_2}.\)

The minimum MSE of \(\hbox {t}_\mathrm{s}\) is observed at \(\hbox {n}_1 ={\uprho }_{\mathrm{pb}}\) and \(\hbox {n}_2 ={\upbeta }_{2{\upphi }}\).
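Equation (16) is a quadratic in the combination \(\hbox {b}_{\upphi } \hbox {S}_{\upphi }^2 +\hbox {A}_1 \hbox {S}_\mathrm{y}^2\), so for any choice of \(\hbox {n}_1 ,\hbox {n}_2\) it cannot fall below the regression optimum (8). A numerical sketch, with \(\hbox {b}_{\upphi }\) at its optimum (7) and hypothetical values for the remaining parameters:

```python
# First-order MSE (16) of t_S at n1 = rho_pb, n2 = beta_2phi;
# all parameter values below are hypothetical.
S2_y, S2_phi, n = 4.0, 0.25, 30
l40, l04, l22 = 3.8, 6.1, 4.0
rho_pb, beta2 = 0.7, 1.1                                 # assumed n1, n2

b_phi = S2_y * (l22 - 1) / (S2_phi * (l04 - 1))          # Eq. (7)
A1 = rho_pb * S2_phi / (rho_pb * S2_phi + beta2)         # A_1
u = b_phi * S2_phi + A1 * S2_y                           # common factor in (16)
mse_ts = (S2_y**2 * (l40 - 1) + (l04 - 1) * u**2
          - 2 * S2_y * (l22 - 1) * u) / n
```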

Following [5], Singh and Malik proposed another improved ratio type estimator \(\hbox {t}_{\mathrm{rs}}\) for the population variance as

$$\begin{aligned} \hbox {t}_{\mathrm{rs}} =\hbox {s}_\mathrm{y}^2 \frac{({\upeta } \hbox {S}_{\upphi }^2 -\hbox {v})}{\left[ {{\upalpha } ({\upeta } \hbox {s}_{\upphi }^2 -\hbox {v})+(1-{\upalpha } )({\upeta } \hbox {S}_{\upphi }^2 -\hbox {v})} \right] } \end{aligned}$$
(17)

where \({\upeta } ,\hbox {v}\) are either real numbers or the functions of the known parameters of attributes such as \(\hbox {C}_\mathrm{p} ,{\upbeta }_{2{\upphi } } ,{\uprho }_{\mathrm{pb}} \) and \({\upkappa }_{\mathrm{pb}}\).

Up to the first order approximation, the minimum MSE of \(\hbox {t}_{\mathrm{rs}}\) is given by,

$$\begin{aligned} \hbox {MSE}_{\min } (\hbox {t}_{\mathrm{rs}})=\frac{\hbox {S}_\mathrm{y}^4 }{\hbox {n}}\left\{ {({\uplambda }_{40} -1)+\hbox {A}_2^2 {\upalpha }_0^{2}({\uplambda }_{04} -1)-\hbox {2A}_2 {\upalpha }_0 ({\uplambda }_{22} -1)} \right\} \end{aligned}$$
(18)

where \({\upalpha }_0 =\frac{({\uplambda }_{22} -1)}{\hbox {A}_2 ({\uplambda }_{04} -1)}\) and \(\hbox {A}_2 =\frac{{\upeta } \hbox {S}_{\upphi }^2 }{({\upeta } \hbox {S}_{\upphi }^2 -\hbox {v})}.\)
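Substituting \({\upalpha }_0\) into (18) in fact reduces the minimum MSE of \(\hbox {t}_{\mathrm{rs}}\) to the regression optimum (8), since the \(\hbox {A}_2\) factors cancel. The following sketch, with hypothetical parameter values, verifies this numerically.

```python
# Minimum first-order MSE of t_rs, Eq. (18), at alpha = alpha_0;
# all parameter values below are hypothetical.
S2_y, S2_phi, n = 4.0, 0.25, 30
l40, l04, l22 = 3.8, 6.1, 4.0
eta, v = 1.0, 0.9                        # assumed eta and v

A2 = eta * S2_phi / (eta * S2_phi - v)
alpha0 = (l22 - 1) / (A2 * (l04 - 1))
mse_min_trs = S2_y**2 / n * ((l40 - 1) + A2**2 * alpha0**2 * (l04 - 1)
                             - 2 * A2 * alpha0 * (l22 - 1))
```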

Another improved class of estimators, suggested by [13], is as follows:

$$\begin{aligned} \hbox {t}_\mathrm{n} =\hbox {s}_\mathrm{y}^2 \left[ {\hbox {m}_1 +\hbox {m}_2 (\hbox {S}_{\upphi }^2 -\hbox {s}_{\upphi }^2 )} \right] \exp \left( {{\upgamma } \frac{\left[ {{\updelta } \hbox {S}_{\upphi }^2 +{\upmu }} \right] -\left[ {{\updelta } \hbox {s}_{\upphi }^2 +{\upmu }} \right] }{\left[ {{\updelta } \hbox {S}_{\upphi }^2 +{\upmu }} \right] +\left[ {{\updelta } \hbox {s}_{\upphi }^2 +{\upmu }} \right] }} \right) \end{aligned}$$
(19)

where \({\upgamma }\) and \({\upmu }\) are either real numbers or functions of known parameters of the auxiliary attribute \({\upphi } \) such as \(\hbox {C}_\mathrm{p} ,{\upbeta }_{2{\upphi } } ,{\uprho }_{\mathrm{pb}}\) and \({\upkappa }_{\mathrm{pb}}\). The scalar \({\upgamma }\) takes the values \(-\)1 and +1 for ratio and product type estimators, respectively.

The minimum MSE of the estimator \(\hbox {t}_\mathrm{n}\) up to the first order of approximation, attained at the optimum values of \(\hbox {m}_1\) and \(\hbox {m}_2\) given below, is

$$\begin{aligned} \hbox {MSE}(\hbox {t}_\mathrm{n} )=\hbox {S}_\mathrm{y}^4 \left[ {1+\hbox {m}_1^2 \hbox {R}_1 +\hbox {m}_2^2 \hbox {R}_2 +\hbox {2m}_1 \hbox {m}_2 \hbox {R}_3 -\hbox {2m}_1 \hbox {R}_4 -\hbox {2m}_2 \hbox {R}_5} \right] \end{aligned}$$
(20)

where,

$$\begin{aligned}&\hbox {R}_1 = 1+\frac{1}{\hbox {n}}\left[ {({\uplambda }_{40} -1)+{\upgamma }^{2}{\uptheta }^{2}({\uplambda }_{04} -1)+2{\upgamma } \left( {1+\frac{{\upgamma } }{2}} \right) {\uptheta }^{2}({\uplambda } _{04} -1)-4{\upgamma } {\uptheta } ({\uplambda }_{22} -1)} \right] \\&\hbox {R}_2 = \frac{1}{\hbox {n}}\hbox {S}_{\upphi }^4 ({\uplambda }_{04} -1) \\&\hbox {R}_3 = \frac{1}{\hbox {n}}\hbox {S}_{\upphi }^2 \left[ {2({\uplambda }_{22} -1)+2{\upgamma } {\uptheta } ({\uplambda }_{04} -1)} \right] \\&\hbox {R}_4 = 1+\frac{1}{\hbox {n}}\left[ {{\upgamma } \left( {1+\frac{{\upgamma } }{2}} \right) {\uptheta }^{2}({\uplambda }_{04} -1)-{\upgamma } {\uptheta } ({\uplambda }_{22} -1)} \right] \\&\hbox {R}_5= \frac{1}{\hbox {n}}\hbox {S}_{\upphi }^2 \left[ {{\upgamma } {\uptheta } ({\uplambda }_{04} -1)-({\uplambda }_{22} -1)} \right] \\&{\uptheta } = \frac{{\updelta } \hbox {S}_{\upphi }^2 }{2({\updelta } \hbox {S}_{\upphi }^2 +{\upmu })} \\&\hbox {m}_1 = \frac{(\hbox {R}_2 \hbox {R}_4 -\hbox {R}_3 \hbox {R}_5 )}{(\hbox {R}_1 \hbox {R}_2 -\hbox {R}_3^2 )} \\&\hbox {m}_2 = \frac{(\hbox {R}_1 \hbox {R}_5 -\hbox {R}_3 \hbox {R}_4 )}{(\hbox {R}_1 \hbox {R}_2 -\hbox {R}_3^2 )} \end{aligned}$$
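The optimum \(\hbox {m}_1 ,\hbox {m}_2\) are the solutions of the normal equations of (20), so at the optimum the minimum of (20) reduces to \(\hbox {S}_\mathrm{y}^4 (1-\hbox {m}_1 \hbox {R}_4 -\hbox {m}_2 \hbox {R}_5 )\). A numerical sketch with hypothetical parameter values (the \({\uplambda }\) index in \(\hbox {R}_2\) and the power of \(\hbox {S}_{\upphi }^2\) in \(\hbox {R}_3\) follow the corrected expressions):

```python
# Optimum m1, m2 and the first-order MSE of t_n, Eq. (20);
# all parameter values below are hypothetical.
S2_y, S2_phi, n = 4.0, 0.25, 30
l40, l04, l22 = 3.8, 6.1, 4.0
gamma, delta, mu_c = -1.0, 1.0, 0.9      # ratio-type choice (assumed)

theta = delta * S2_phi / (2 * (delta * S2_phi + mu_c))
R1 = 1 + ((l40 - 1) + gamma**2 * theta**2 * (l04 - 1)
          + 2 * gamma * (1 + gamma / 2) * theta**2 * (l04 - 1)
          - 4 * gamma * theta * (l22 - 1)) / n
R2 = S2_phi**2 * (l04 - 1) / n
R3 = S2_phi * (2 * (l22 - 1) + 2 * gamma * theta * (l04 - 1)) / n
R4 = 1 + (gamma * (1 + gamma / 2) * theta**2 * (l04 - 1)
          - gamma * theta * (l22 - 1)) / n
R5 = S2_phi * (gamma * theta * (l04 - 1) - (l22 - 1)) / n

m1 = (R2 * R4 - R3 * R5) / (R1 * R2 - R3**2)
m2 = (R1 * R5 - R3 * R4) / (R1 * R2 - R3**2)
mse_tn = S2_y**2 * (1 + m1**2 * R1 + m2**2 * R2 + 2 * m1 * m2 * R3
                    - 2 * m1 * R4 - 2 * m2 * R5)
```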

The Suggested Class of Estimators

Motivated by [14], we propose a generalized class of estimators \(\hbox {t}_\mathrm{M}\) for estimating the population variance \(\hbox {S}_\mathrm{y}^2 \), as

$$\begin{aligned} \hbox {t}_\mathrm{M} =\left[ {{\upomega }_1 \hbox {s}_\mathrm{y}^2 +{\upomega } _2 \left( {\frac{\hbox {s}_{\upphi }^2 }{\hbox {S}_{\upphi }^2 }} \right) ^{{\upgamma } }} \right] \exp \left[ {\frac{{\upeta } \left( {\hbox {S}_{\upphi }^2 -\hbox {s}_{\upphi }^2 } \right) }{{\upeta } \left( {\hbox {S}_{\upphi }^2 +\hbox {s}_{\upphi }^2 } \right) +2{\uplambda } }} \right] \end{aligned}$$
(21)

where \({\upomega }_1\) and \({\upomega }_2\,({\upomega }_1 + {\upomega }_2 \ne 1)\) are suitable constants to be determined such that the MSE of \(\hbox {t}_\mathrm{M}\) is minimum, and \({\upeta } ,\, {\upgamma }\) and \({\uplambda } \) are either real numbers or functions of the known parameters associated with the auxiliary attribute (see [15]).

The scalars \({\upgamma } \) and \({\upeta } \) are chosen in such a way that, for the particular values +1 and \(-\)1, the class generates ratio type and product type variance estimators.

A set of new estimators generated from (21) using suitable values of \({\upomega }_1 ,{\upomega }_2 ,{\upgamma } ,{\upeta }\) and \({\uplambda }\) are listed in Table 1.

Table 1 Set of estimators generated from the class of estimators \(\hbox {t}_\mathrm{M}\)

Expanding Eq. (21) in terms of e’s up to the first order of approximation, we have,

$$\begin{aligned} \hbox {t}_{\mathrm{M}} -\hbox {S}_\mathrm{y}^2= & {} \hbox {S}_\mathrm{y}^2 \left( {{\upomega }_1 -1} \right) +{\upomega }_1 \hbox {S}_\mathrm{y}^2 \hbox {e}_0 +\left( {{\upomega }_2 +{\upomega }_2 {\upgamma } \hbox {e}_1 +{\upomega }_2 \frac{{\upgamma } ({\upgamma } -1)}{2}\hbox {e}_1^2} \right) -\frac{1}{2}{\upomega }_1 \hbox {S}_\mathrm{y}^2 \hbox {ve}_1 \nonumber \\&-\frac{1}{2}{\upomega }_1 \hbox {S}_\mathrm{y}^2 \hbox {ve}_0 \hbox {e}_1 -\frac{1}{2}{\upomega }_2 \hbox {ve}_1 -\frac{1}{2}{\upomega }_2 \hbox {v}{\upgamma } \hbox {e}_1^2 +\frac{3}{8}{\upomega }_1 \hbox {S}_\mathrm{y}^2 \hbox {v}^{2}\hbox {e}_1^2 +\frac{3}{8}{\upomega }_2 \hbox {v}^{2}\hbox {e}_1^2 \end{aligned}$$
(22)

where, \(\hbox {e}_0 =\frac{\hbox {s}_\mathrm{y}^2 -\hbox {S}_\mathrm{y}^2}{\hbox {S}_\mathrm{y}^2 }, \hbox {e}_1 =\frac{\hbox {s}_{\upphi }^2 -\hbox {S}_{\upphi }^2 }{\hbox {S}_{\upphi }^2}\) and \(\hbox {v}=\frac{{\upeta } \hbox {S}_{\upphi }^2 }{{\upeta } \hbox {S}_{\upphi }^2 +{\uplambda }}.\)

To obtain the bias and MSE of the estimator \(\hbox {t}_{\mathrm{M}}\) to the first degree of approximation, note that \(\hbox {E}(\hbox {e}_0 )=\hbox {E}(\hbox {e}_1)=0\),

\(\hbox {E}(\hbox {e}_0^2 )=\frac{{\uplambda }_{40} -1}{\hbox {n}}, \quad \hbox {E}(\hbox {e}_1^2 )=\frac{{\uplambda }_{04} -1}{\hbox {n}}\)    and    \(\hbox {E}(\hbox {e}_0 \hbox {e}_1)=\frac{{\uplambda } _{22} -1}{\hbox {n}}\)

Taking expectation both sides of Eq. (22), we get the bias expression of estimator \(\hbox {t}_\mathrm{M}\) as

$$\begin{aligned} \hbox {Bias}(\hbox {t}_\mathrm{M} )= & {} -\hbox {S}_\mathrm{y}^{2} +{\upomega }_1 \hbox {S}_\mathrm{y}^{2} \left[ {1-\frac{1}{2}\hbox {v}\frac{({\uplambda }_{22}-1)}{\hbox {n}}+\frac{3}{8}\hbox {v}^{2}\frac{({\uplambda }_{04} -1)}{\hbox {n}}} \right] \nonumber \\&+ \,{\upomega }_2 \left[ {1+\left\{ {\frac{1}{2}{\upgamma } ({\upgamma } -1)-\frac{1}{2}\hbox {v}{\upgamma } +\frac{3}{8}\hbox {v}^{2}} \right\} \frac{({\uplambda }_{04} -1)}{\hbox {n}}} \right] \end{aligned}$$
(23)

Squaring both sides of Eq. (22) and taking expectation, we get the MSE expression of the estimator \(\hbox {t}_\mathrm{M} \) as

$$\begin{aligned} \hbox {MSE}\left( {\hbox {t}_\mathrm{M} } \right) =\left[ {\hbox {S}_\mathrm{y}^{4} +{\upomega }_1^2 \hbox {S}_\mathrm{y}^4 \hbox {A}+{\upomega }_2^2 \hbox {B}+{\upomega }_1 \hbox {S}_\mathrm{y}^4 \hbox {D}+{\upomega }_2 \hbox {S}_\mathrm{y}^2 \hbox {G}+{\upomega }_1 {\upomega }_2 \hbox {S}_\mathrm{y}^2 \hbox {F}} \right] \end{aligned}$$
(24)

where

$$\begin{aligned} \hbox {A}= & {} \left[ {1+\frac{({\uplambda }_{40} -1)}{\hbox {n}}+\hbox {v}^{2}\frac{({\uplambda }_{04} -1)}{\hbox {n}}-\hbox {2v}\frac{({\uplambda }_{22} -1)}{\hbox {n}}} \right] \\ \hbox {B}= & {} \left[ {1+\left\{ {\hbox {v}^{2}+{\upgamma }^{2}-\hbox {2v}{\upgamma } +{\upgamma } ({\upgamma } -1)} \right\} \frac{({\uplambda }_{04} -1)}{\hbox {n}}} \right] \\ \hbox {D}= & {} \left[ {-2-\frac{3}{4}\hbox {v}^{2}\left\{ {\frac{({\uplambda }_{04} -1)}{\hbox {n}}} \right\} +\hbox {v}\left\{ {\frac{({\uplambda }_{22} -1)}{\hbox {n}}} \right\} } \right] \\ \hbox {G}= & {} \left[ {-2+\left\{ {\hbox {v}{\upgamma } -\frac{3}{4}\hbox {v}^{2}-{\upgamma } ({\upgamma } -1)} \right\} \frac{({\uplambda }_{04} -1)}{\hbox {n}}} \right] \\ \hbox {F}= & {} \left[ {2+\left\{ {\hbox {2v}^{2}-\hbox {2v}{\upgamma } +{\upgamma } ({\upgamma } -1)} \right\} \frac{({\uplambda }_{04} -1)}{\hbox {n}}+2({\upgamma } -\hbox {v})\frac{({\uplambda }_{22} -1)}{\hbox {n}}} \right] \end{aligned}$$

Partially differentiating Eq. (24) with respect to \({\upomega }_1\) and \({\upomega }_2\) and equating to zero, we get the optimum values of \({\upomega }_1\) and \({\upomega }_2\) as

$$\begin{aligned} {\upomega }_1 \hbox {(opt)}= & {} \left\{ {\frac{\hbox {GF}-\hbox {2BD}}{\hbox {4BA}-\hbox {F}^{2}}} \right\} \\ {\upomega }_2 (\hbox {opt})= & {} \hbox {S}_\mathrm{y}^2 \left\{ {\frac{\hbox {DF}-\hbox {2GA}}{\hbox {4BA}-\hbox {F}^{2}}} \right\} \end{aligned}$$

Substituting the optimal values of \({\upomega }_i\) (i = 1,2) we obtain the minimum MSE associated with \(\hbox {t}_\mathrm{M} \),

$$\begin{aligned} \hbox {MSE}_{{\min }} (\hbox {t}_\mathrm{M} )=\hbox {S}_\mathrm{y}^4 \left[ {1-\frac{\hbox {BD}^{2}-\hbox {DFG}+\hbox {AG}^{2}}{(\hbox {4AB}-\hbox {F}^{2})}} \right] \end{aligned}$$
(25)
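The minimum (25) can be verified numerically: substituting the solutions of the normal equations of (24) back into (24) reproduces it. In the sketch below (all parameter values hypothetical), \({\upomega }_2\) is taken as \(\hbox {S}_\mathrm{y}^2 (\hbox {DF}-2\hbox {GA})/(4\hbox {BA}-\hbox {F}^{2})\), the exact solution of those normal equations.

```python
# Numerical sketch of Eqs. (24)-(25) for one member of the class t_M
# (gamma = 0, eta = 1, lambda = 0.9; all values hypothetical).
S2_y, S2_phi, n = 4.0, 0.25, 30
l40, l04, l22 = 3.8, 6.1, 4.0
gamma, eta, lam = 0.0, 1.0, 0.9
v = eta * S2_phi / (eta * S2_phi + lam)

A = 1 + (l40 - 1) / n + v**2 * (l04 - 1) / n - 2 * v * (l22 - 1) / n
B = 1 + (v**2 + gamma**2 - 2 * v * gamma + gamma * (gamma - 1)) * (l04 - 1) / n
D = -2 - 0.75 * v**2 * (l04 - 1) / n + v * (l22 - 1) / n
G = -2 + (v * gamma - 0.75 * v**2 - gamma * (gamma - 1)) * (l04 - 1) / n
F = (2 + (2 * v**2 - 2 * v * gamma + gamma * (gamma - 1)) * (l04 - 1) / n
     + 2 * (gamma - v) * (l22 - 1) / n)

# Solutions of the normal equations of (24); omega2 carries a factor S_y^2
w1 = (G * F - 2 * B * D) / (4 * B * A - F**2)
w2 = S2_y * (D * F - 2 * G * A) / (4 * B * A - F**2)

# Minimum MSE, Eq. (25)
mse_min = S2_y**2 * (1 - (B * D**2 - D * F * G + A * G**2) / (4 * A * B - F**2))
```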

Efficiency Comparisons

We compare the efficiency of the proposed estimator \(\hbox {t}_\mathrm{M}\), under the optimum condition, with the usual unbiased estimator, the ratio estimator, the exponential ratio estimator, and the regression estimator for variance estimation using an auxiliary attribute:

$$\begin{aligned} \hbox {V}(\hbox {S}_\mathrm{y}^2 )-\hbox {MSE}(\hbox {t}_\mathrm{M})= & {} \frac{\hbox {S}_\mathrm{y}^4 }{\hbox {n}}\left[ {({\uplambda }_{40} -1)} \right] \nonumber \\&-\left[ {\hbox {S}_\mathrm{y}^{4} \left( {1+{\upomega }_1^2 \hbox {A}+{\upomega }_1 \hbox {D}} \right) +\hbox {S}_\mathrm{y}^2 \left( {{\upomega }_2 \hbox {G}+{\upomega }_1 {\upomega }_2 \hbox {F}} \right) +{\upomega }_2^2 \hbox {B}} \right] \ge 0 \end{aligned}$$
(26)
$$\begin{aligned} \hbox {MSE}(\hbox {t}_1 )-\hbox {MSE}(\hbox {t}_\mathrm{M})= & {} \frac{\hbox {S}_\mathrm{y}^4 \left[ {({\uplambda }_{40} +{\uplambda }_{04} -2{\uplambda }_{22} )} \right] }{\hbox {n}} \nonumber \\&-\left[ {\hbox {S}_\mathrm{y}^{4} \left( {1+{\upomega }_1^2 \hbox {A}+{\upomega }_1 \hbox {D}} \right) +\hbox {S}_\mathrm{y}^2 \left( {{\upomega }_2 \hbox {G}+{\upomega }_1 {\upomega }_2 \hbox {F}} \right) +{\upomega }_2^2 \hbox {B}} \right] \ge 0 \end{aligned}$$
(27)
$$\begin{aligned} \hbox {MSE}(\hbox {t}_2 )_{\min } -\hbox {MSE}(\hbox {t}_\mathrm{M})= & {} \frac{\hbox {S}_\mathrm{y}^4 }{\hbox {n}}\left[ ({\uplambda }_{40} -1)-\frac{({\uplambda }_{22} -1)^{2}}{({\uplambda }_{04} -1)} \right] \nonumber \\&-\left[ {\hbox {S}_\mathrm{y}^{4} \left( {1+{\upomega }_1^2 \hbox {A}+{\upomega }_1 \hbox {D}} \right) +\hbox {S}_\mathrm{y}^2 \left( {{\upomega }_2 \hbox {G}+{\upomega }_1 {\upomega }_2 \hbox {F}} \right) +{\upomega }_2^2 \hbox {B}} \right] \ge 0 \end{aligned}$$
(28)
$$\begin{aligned} \hbox {MSE}(\hbox {t}_3 )-\hbox {MSE}(\hbox {t}_\mathrm{M})= & {} \frac{\hbox {S}_\mathrm{y}^4 }{\hbox {n}}\left[ {{\uplambda }_{40} -{\uplambda }_{22} +\frac{({\uplambda }_{04} -1)}{4}} \right] \nonumber \\&-\left[ {\hbox {S}_\mathrm{y}^{4} \left( {1+{\upomega } _1^2 \hbox {A}+{\upomega }_1 \hbox {D}} \right) +\hbox {S}_\mathrm{y}^2 \left( {{\upomega }_2 \hbox {G}+{\upomega }_1 {\upomega }_2 \hbox {F}} \right) +{\upomega }_2^2 \hbox {B}} \right] \ge 0 \end{aligned}$$
(29)
$$\begin{aligned} \hbox {MSE}(\hbox {t}_{\mathrm{KC}_\mathrm{i}} )-\hbox {MSE}(\hbox {t}_\mathrm{M})= & {} \frac{\hbox {S}_\mathrm{y}^4 }{\hbox {n}}\left[ {({\uplambda }_{40} -1)+\hbox {w}_\mathrm{i}^2 ({\uplambda }_{04} -1)-\hbox {2w}_\mathrm{i} ({\uplambda }_{22} -1)} \right] \nonumber \\&-\left[ {\hbox {S}_\mathrm{y}^{4} \left( {1+{\upomega }_1^2 \hbox {A}+{\upomega }_1 \hbox {D}} \right) +\hbox {S}_\mathrm{y}^2 \left( {{\upomega }_2 \hbox {G}+{\upomega }_1 {\upomega }_2 \hbox {F}} \right) +{\upomega }_2^2 \hbox {B}} \right] \ge 0 \end{aligned}$$
(30)
$$\begin{aligned} \hbox {MSE}(\hbox {t}_\mathrm{S})-\hbox {MSE}(\hbox {t}_\mathrm{M})= & {} \frac{1}{\hbox {n}}\left[ \hbox {S}_\mathrm{y}^4 ({\uplambda }_{40} -1)+({\uplambda }_{04} -1)\left\{ {\hbox {b}_{\upphi }^2 \hbox {S}_{\upphi }^4 +\hbox {A}_1^2 \hbox {S}_\mathrm{y}^4 +\hbox {2A}_1 \hbox {b}_{\upphi } \hbox {S}_\mathrm{y}^2 \hbox {S}_{\upphi }^2} \right\} \right. \nonumber \\&\left. -\hbox {2S}_\mathrm{y}^2 ({\uplambda }_{22} -1)\left\{ {\hbox {b}_{\upphi } \hbox {S}_{\upphi }^2 +\hbox {A}_1 \hbox {S}_\mathrm{y}^2 } \right\} \right] \nonumber \\&-\left[ {\hbox {S}_\mathrm{y}^{4} \left( {1+{\upomega }_1^2 \hbox {A}+{\upomega }_1 \hbox {D}} \right) +\hbox {S}_\mathrm{y}^2 \left( {{\upomega } _2 \hbox {G}+{\upomega }_1 {\upomega }_2 \hbox {F}} \right) +{\upomega }_2^2 \hbox {B}} \right] \ge 0 \end{aligned}$$
(31)
$$\begin{aligned} \hbox {MSE}(\hbox {t}_{\mathrm{rs}})-\hbox {MSE}(\hbox {t}_\mathrm{M})= & {} \frac{\hbox {S}_\mathrm{y}^4}{\hbox {n}}\left\{ {({\uplambda }_{40} -1)+\hbox {A}_2^2 {\upalpha }_0^{2}({\uplambda }_{04} -1)-\hbox {2A}_2 {\upalpha }_0 ({\uplambda }_{22} -1)} \right\} \nonumber \\&-\left[ {\hbox {S}_\mathrm{y}^{4} \left( {1+{\upomega }_1^2 \hbox {A}+{\upomega }_1 \hbox {D}} \right) +\hbox {S}_\mathrm{y}^2 \left( {{\upomega }_2 \hbox {G}+{\upomega }_1 {\upomega }_2 \hbox {F}} \right) +{\upomega }_2^2 \hbox {B}} \right] \ge 0 \end{aligned}$$
(32)
$$\begin{aligned} \hbox {MSE}(\hbox {t}_\mathrm{n})-\hbox {MSE}(\hbox {t}_\mathrm{M})= & {} \hbox {S}_\mathrm{y}^4 \left[ {1+\hbox {m}_1^2 \hbox {R}_1 +\hbox {m}_2^2 \hbox {R}_2 +\hbox {2m}_1 \hbox {m}_2 \hbox {R}_3 -\hbox {2m}_1 \hbox {R}_4 -\hbox {2m}_2 \hbox {R}_5} \right] \nonumber \\&-\left[ {\hbox {S}_{\mathrm{y}}^{4} \left( {1+{\upomega }_1^2 \hbox {A}+{\upomega }_{1} \hbox {D}} \right) +\hbox {S}_\mathrm{y}^2 \left( {{\upomega }_2 \hbox {G}+{\upomega }_1 {\upomega }_2 \hbox {F}} \right) +{\upomega }_2^2 \hbox {B}} \right] \ge 0\nonumber \\ \end{aligned}$$
(33)

From Eqs. (26) to (33) we conclude that, under the aforesaid conditions, the proposed estimator \(\hbox {t}_\mathrm{M}\) performs better than the other existing estimators discussed in this paper for the same scenario.

Empirical Study

In this section we compare the performance of the different estimators considered in this paper using two population data sets. The descriptions of the population data sets are as follows.

Population I [Source: [16], p. 256].

  • y = Number of villages in the circle.

  • \({\upphi } =\) An indicator of whether a circle consists of more than five villages.

  • N = 89, n = 23, \(\hbox {S}_\mathrm{y}^2 =4.074, \hbox {S}_{\upphi }^2 =0.11, \hbox {C}_\mathrm{y} =0.601, \hbox {C}_\mathrm{p} =2.678, {\uprho }_{\mathrm{pb}} =0.766\), \({\upbeta }_{2{\upphi }} =6.162, {\uplambda }_{22} =3.996, {\uplambda }_{40} =3.811\), \({\uplambda }_{04} =6.162\).

Population II [Source: [17], p. 203].

  • y = Household size in each household of the village.

  • \({\upphi } =\) An indicator of whether a household has size greater than five.

  • N=35, n=15, \(\hbox {S}_\mathrm{y}^2 =4.232, \hbox {S}_{\upphi }^2 =0.252, \hbox {C}_\mathrm{y} =0.346\), \(\hbox {C}_\mathrm{p} =0.879, {\uprho }_{\mathrm{pb}} =0.773,\, {\upbeta }_{2{\upphi } } =1.052, {\uplambda }_{22} =0.952,\, {\uplambda }_{40} =4.977, {\uplambda }_{04} =1.052\).
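As a cross-check on Table 2, the PRE of the regression-type optimum (8) over the unbiased estimator depends only on the standardized moments, so it can be computed for both populations from the values listed above. A minimal sketch:

```python
# PRE (in %) of the regression-type optimum (8) over the unbiased
# estimator, from the moments of the two populations listed above.
def pre_regression(l40, l04, l22):
    return 100 * (l40 - 1) / ((l40 - 1) - (l22 - 1)**2 / (l04 - 1))

pre1 = pre_regression(3.811, 6.162, 3.996)   # Population I
pre2 = pre_regression(4.977, 1.052, 0.952)   # Population II
```

The strong point bi-serial correlation in Population I yields a large gain, whereas in Population II (\({\uplambda }_{22}\) close to 1) the auxiliary attribute contributes little at this level.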

Table 2 exhibits the PREs of the proposed estimators, including different members of the proposed class, along with the PREs of the existing estimators with respect to \(\hbox {S}_\mathrm{y}^{2}\) for the two real population data sets. The estimators \(\hbox {t}_{\mathrm{M}_\mathrm{i}}\) (i = 1, 2, ..., 10) are obtained from the class of estimators \(\hbox {t}_\mathrm{M}\) by taking different values of \({\upeta }\) and \({\upgamma }\), and their percent relative efficiencies are shown in the table. The highest PRE is obtained for \({\upgamma } =0,\, {\upeta } =1\) and \({\uplambda } =\hbox {C}_\mathrm{p}\). It has also been observed that the suggested class of estimators \(\hbox {t}_\mathrm{M}\) under the optimum condition is more efficient than the usual unbiased estimator, the ratio estimator, the regression estimator, the estimators of [11, 13], and the other estimators discussed in this paper. Hence, for the observed choice of parameters, the proposed estimator \(\hbox {t}_\mathrm{M}\) is the best among all the estimators considered in this paper.

Table 2 PRE’s of various estimators w.r.t. \(\hbox {S}_\mathrm{y}^2\)

Conclusion

In this article we have suggested a generalized class of estimators for the population variance of the study variable y when information is available on an auxiliary attribute in simple random sampling without replacement (SRSWOR). Some known estimators of the population variance, such as the usual unbiased estimator and the ratio and exponential ratio type estimators, are found to be members of the proposed generalized class, and some new members are also generated from it. We have determined the biases and mean square errors of the proposed class of estimators up to the first order of approximation. The proposed generalized class is advantageous in the sense that the properties of the estimators that are members of the class can be easily obtained from the properties of the class itself; the study thus unifies the properties of several estimators of the population variance that use information on an auxiliary attribute. In theoretical and empirical efficiency comparisons it has been shown that almost all the members of the proposed generalized class are more efficient than the usual unbiased estimator, the ratio, exponential ratio, and regression estimators, the estimators due to [11, 13], and all other estimators considered here that use information on an auxiliary attribute. In particular, the estimator \(\hbox {t}_{\mathrm{M7}}\) is the best among all the members of the generalized class in the sense of having the least mean square error.