1 Introduction

Let us consider the following linear model

$$\begin{aligned} \left\{ \begin{array}{l} y=X\beta +\epsilon \\ E(\epsilon )=0\\ Cov(\epsilon )=\sigma ^2I_n\\ \end{array} \right. \end{aligned}$$
(1.1)

where y is an \(n\times 1\) vector of observations, X is an \(n\times p\) known matrix of rank p, \(\beta \) is a \(p\times 1\) vector of unknown parameters, and \(\epsilon \) is an \(n\times 1\) vector of disturbances with expectation \(E(\epsilon )=0\) and variance-covariance matrix \(Cov(\epsilon )=\sigma ^{2}I_{n}\).

Suppose that \(\tilde{\beta }\) is any estimator of \(\beta \). The quadratic loss function that measures the goodness of fit of the model is defined as

$$\begin{aligned} (y-X\tilde{\beta })'(y-X\tilde{\beta }) \end{aligned}$$
(1.2)

while the precision of estimation is usually measured by the squared error loss function

$$\begin{aligned} (\tilde{\beta }-\beta )'(\tilde{\beta }-\beta ) \end{aligned}$$
(1.3)

or by the weighted squared error loss function

$$\begin{aligned} (\tilde{\beta }-\beta )'X'X(\tilde{\beta }-\beta ) \end{aligned}$$
(1.4)

Both the goodness of fit and the precision of estimation are important, and it may be desirable to employ the two criteria simultaneously in practice. Zellner (1994) considered the goodness of fit and the precision of estimation together, and proposed the following balanced loss function:

$$\begin{aligned} w(y-X\tilde{\beta })'(y-X\tilde{\beta })+(1-w)(\tilde{\beta }-\beta )'X'X(\tilde{\beta }-\beta ) \end{aligned}$$
(1.5)

where w is a scalar between 0 and 1. When \(w=0\), the loss function (1.5) reduces to the precision of estimation, and when \(w=1\) it reduces to the goodness of fit of the model.

Furthermore, using the idea of simultaneous prediction of the actual and average values of the study variable, Shalabh (1995) introduced the following loss function

$$\begin{aligned}&w^2(y-X\tilde{\beta })'(y-X\tilde{\beta })+(1-w)^2(\tilde{\beta }-\beta )'X'X(\tilde{\beta }-\beta )\nonumber \\&\quad +\,2w(1-w)(X\tilde{\beta }-y)'X(\tilde{\beta }-\beta ) \end{aligned}$$
(1.6)

where w is a scalar between 0 and 1. It is easy to see that (1.6) is an extension of (1.5) which also takes into account the covariability between the goodness of fit and the precision of estimation.
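Indeed, writing \(wy+(1-w)X\beta -X\tilde{\beta }=w(y-X\tilde{\beta })-(1-w)X(\tilde{\beta }-\beta )\) and expanding the square (a verification added here for clarity) shows that (1.6) is exactly the squared error incurred when \(X\tilde{\beta }\) is used to predict the composite target \(wy+(1-w)X\beta \), which mixes the actual and the average values of the study variable:

$$\begin{aligned}&\big (wy+(1-w)X\beta -X\tilde{\beta }\big )'\big (wy+(1-w)X\beta -X\tilde{\beta }\big )\\&\quad =w^2(y-X\tilde{\beta })'(y-X\tilde{\beta })+(1-w)^2(\tilde{\beta }-\beta )'X'X(\tilde{\beta }-\beta )\\&\qquad +\,2w(1-w)(X\tilde{\beta }-y)'X(\tilde{\beta }-\beta ) \end{aligned}$$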

The balanced loss function has received considerable attention in the literature. Rodrigues and Zellner (1994) applied the balanced loss function to the estimation of mean time to failure. Gruber (2004) considered the empirical Bayes and approximate minimum mean square error estimators under a general balanced loss function. Zhu et al. (2010) derived the best linear unbiased estimator under the balanced loss function. Hu et al. (2010) obtained the optimal estimator under the balanced loss function and also studied the relative efficiency of the optimal estimator with respect to the ordinary least squares estimator.

Although the weighted balanced loss function is popular and important, the best linear unbiased estimator under this loss function has not yet been studied in the singular linear model. It is natural to ask what form the optimal estimator takes under the weighted balanced loss function, and whether the optimal estimator in the singular linear model under this loss is the same as the usual best linear unbiased estimator derived under the squared error loss function or the balanced loss function. In this paper we address these questions: we derive the optimal estimator under the weighted balanced loss function, and we also present some relative efficiencies.

The rest of this paper is organized as follows. In Sect. 2, we present the singular linear model and the weighted balanced loss function. In Sect. 3 we derive the best linear unbiased estimator and discuss some relative efficiencies. A numerical example is presented in Sect. 4 to illustrate the lower and upper bounds of these relative efficiencies, and some concluding remarks are given in Sect. 5.

2 Linear model and loss function

In this paper we discuss the following singular linear model

$$\begin{aligned} \left\{ \begin{array}{l} y=X\beta +\epsilon \\ E(\epsilon )=0\\ Cov(\epsilon )=\sigma ^2V\\ \end{array} \right. \end{aligned}$$
(2.1)

where y is an \(n\times 1\) observable random vector, \(\epsilon \) is an \(n\times 1\) random error vector, X is an \(n\times p\) known matrix with \(rank(X)=p\), \(\sigma ^2\) is a known constant, V is an \(n\times n\) known nonnegative definite matrix and \(\beta \) is a \(p\times 1\) vector of unknown parameters. Many statisticians have proposed ways to obtain the best linear unbiased estimator of the regression coefficients of this model. Rao (1985) gave a simple way to obtain the best linear unbiased estimator, and Fan and Wu (2014) discussed the best linear unbiased estimator under the weighted balanced loss function when V is a positive definite matrix.

Combining the unified theory of least squares with Shalabh's (1995) idea of weighted balanced loss, we present the following weighted balanced loss function:

$$\begin{aligned} W(\tilde{\beta },\beta )= & {} w^2(y-X\tilde{\beta })'T^{+}(y-X\tilde{\beta })+(1-w)^2(\tilde{\beta }-\beta )'S(\tilde{\beta }-\beta )\nonumber \\&+\,2w(1-w)(X\tilde{\beta }-y)'T^{+}X(\tilde{\beta }-\beta ) \end{aligned}$$
(2.2)

where w is a scalar between 0 and 1, \(T=V+XUX'\), U and S are positive definite matrices, and \(\tilde{\beta }\) is any estimator of \(\beta \). The corresponding risk function is denoted by

$$\begin{aligned} R(\tilde{\beta },\beta )=E\left\{ W(\tilde{\beta },\beta )\right\} \end{aligned}$$
(2.3)
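For readers who wish to evaluate the loss (2.2) numerically, the following sketch (an illustration added here, not part of the original development) computes it for given quantities; it uses the Moore–Penrose inverse from numpy as one admissible choice of \(T^{+}\), and all names are illustrative only.

```python
import numpy as np

def weighted_balanced_loss(y, X, beta_tilde, beta, w, U, S, V):
    """Weighted balanced loss (2.2) with T = V + X U X' and T^+ a Moore-Penrose inverse."""
    T_plus = np.linalg.pinv(V + X @ U @ X.T)
    r = y - X @ beta_tilde          # residual y - X*beta_tilde
    d = beta_tilde - beta           # estimation error beta_tilde - beta
    return (w**2 * r @ T_plus @ r
            + (1 - w)**2 * d @ S @ d
            + 2 * w * (1 - w) * (-r) @ T_plus @ X @ d)   # (X*beta_tilde - y) = -r
```

The risk (2.3) is then the expectation of this quantity over \(\epsilon \).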

3 Main results

In this section we present the main results of this paper. First, we discuss the best linear unbiased estimator under the weighted balanced loss function in the singular linear model.

3.1 The best linear unbiased estimator

Write \(\mathfrak {I}_1=\{Ly; \ L \in R^{p\times n}, \ L \ is \ a \ constant\ matrix\ and \ LX=I_p\}\). The optimal estimator in \(\mathfrak {I}_1\) is obtained when V is a nonnegative definite matrix.

First, we give a definition and list five lemmas which are needed in the following discussion.

Definition 3.1

If \(Ly\in \mathfrak {I}_1\) makes

$$\begin{aligned} R(Ly,\beta )= & {} E\left\{ W(Ly,\beta )\right\} \nonumber \\= & {} E\{w^2(y-XLy)'T^{+}(y-XLy)+(1-w)^2(Ly-\beta )'S(Ly-\beta )\nonumber \\&+\,2w(1-w)(XLy-y)'T^{+}X(Ly-\beta )\} \end{aligned}$$
(3.1)

attain its minimum over \(\mathfrak {I}_1\), then Ly is called the best linear unbiased estimator of \(\beta \).

Lemma 3.1

Let X be an \(n\times p\) matrix and L a \(p\times n\) matrix; then \(\frac{\partial trL'X'XL}{\partial L}=2X'XL\).

Proof

See Wang (1987). \(\square \)

Lemma 3.2

Let A be an \(n\times n\) positive definite matrix with ordered eigenvalues \(\lambda _1\ge \cdots \ge \lambda _n>0\), and let P be an \(n\times k\) matrix with \(P'P=I_k\), \(n>k\). Denote \(l=min(k,n-k)\). Then

$$\begin{aligned} 1\le \frac{tr(P'AP)}{tr(P'A^{-1}P)^{-1}}\le \left[ \frac{\sum _{i=1}^l(\lambda _i+\lambda _{n-i+1})}{2\sum _{i=1}^l(\sqrt{\lambda _i\lambda _{n-i+1}})}\right] ^2 \end{aligned}$$

Proof

See Yang (1988). \(\square \)

Lemma 3.3

Let A be an \(n\times n\) positive definite matrix with ordered eigenvalues \(\lambda _1\ge \cdots \ge \lambda _n>0\), and let P be an \(n\times k\) matrix with \(P'P=I_k\), \(n>k\). Denote \(l=min(k,n-k)\). Then

$$\begin{aligned} tr\left( P'AP-(P'A^{-1}P)^{-1}\right) \le \sum _{i=1}^l\left( \sqrt{\lambda _i}-\sqrt{\lambda _{n-i+1}}\right) ^2 \end{aligned}$$

Proof

See Rao (1985). \(\square \)

Lemma 3.4

Let \(T=V+XUX'\) and \(rank(T)=rank(V\vdots X)\), then

(1) \(\mathfrak {R}(V\vdots X)=\mathfrak {R}(T)\);

(2) \((y-X\beta )'T^-(y-X\beta )\), \(X'T^-X\) and \(X'T^-y\) are independent of the choice of \(T^-\);

(3) \(X'T^-T=X'\), \(X(X'T^-X)^-X'T^-X=X\).

Proof

See Wang (1987) and Wang et al. (2006). \(\square \)

Lemma 3.5

Let \(rank(X)=p\), \(T=V+XUX'\) and \(rank(T)=rank(V\vdots X)\), then \(X'T^-X\) is an invertible matrix.

Proof

See Hu et al. (2010). \(\square \)

In the following theorem we will derive the optimal estimator of \(\beta \) in \(\mathfrak {I}_1\) under the weighted balanced loss function.

Theorem 3.1

For the model (2.1), when V is a nonnegative definite matrix, if \(L=(X'T^{-}X)^{-1}X'T^{-}\), then Ly is the best linear unbiased estimator of \(\beta \) under the weighted balanced loss function.

Proof

Suppose that \(Ly\in \mathfrak {I}_1\). Then, according to the weighted balanced loss function (2.2) and using Lemmas 3.4 and 3.5, its risk function is given by

$$\begin{aligned} R(Ly,\beta )= & {} E\{W(Ly,\beta )\}\nonumber \\= & {} E\left\{ w^2(y-XLy)'T^{-}(y-XLy)+(1-w)^2(Ly-\beta )'S(Ly-\beta ) \right. \nonumber \\&\left. +\,2w(1-w)(XLy-y)'T^{-}X(Ly-\beta )\right\} \nonumber \\= & {} \sigma ^2\left[ w^2tr(I_n-XL)'T^{-}(I_n-XL)V+(1-w)^2trL'SLV\right. \nonumber \\&\left. +\,2w(1-w)tr(XL-I_n)'T^{-} XLV\right] \nonumber \\&+\,w^2\beta 'X'(I_n-XL)'T^{-}(I_n-XL)X\beta +(1-w)^2\beta '(LX-I_p)'S\nonumber \\&\times (LX-I_p)\beta +2w(1-w)\beta 'X'(XL-I_n)'T^{-}X(LX-I_p)\beta \nonumber \\= & {} \sigma ^2w^2tr(T^-T)-2\sigma ^2pw+(2-w)w \sigma ^2 tr(X'T^-XU)\nonumber \\&+\,w(2-w)trL'X'T^-XLV+(1-w)^2trL'SLV\nonumber \\= & {} \sigma ^2w^2tr(T^-T)-2\sigma ^2pw+(2-w)w \sigma ^2 tr(X'T^-XU)+\sigma ^2trL'MLV\nonumber \\ \end{aligned}$$
(3.2)

where \(M=w(2-w)X'T^-X+(1-w)^2S\). Hence, finding the Ly that is the best linear unbiased estimator of \(\beta \) is equivalent to solving

$$\begin{aligned} \left\{ \begin{array}{l} min \ tr(L'MLV)\\ s. t. \ LX=I_p\\ \end{array} \right. \end{aligned}$$
(3.3)

We use the method of Lagrange multipliers. Let

$$\begin{aligned} F(L,\lambda )=tr(L'MLV)-2tr[\lambda '(LX-I_p)] \end{aligned}$$
(3.4)

where \(\lambda \) is a \(p\times p\) matrix of Lagrange multipliers. Differentiating with respect to L and \(\lambda \) (Lemma 3.1) gives

$$\begin{aligned} ML(T-XUX')-\lambda X'=0 \end{aligned}$$
(3.5)
$$\begin{aligned} LX-I_p=0 \end{aligned}$$
(3.6)

Solving these equations, we get

$$\begin{aligned} L=(X'T^{-}X)^{-1}X'T^{-}+G(I_n-TT^-) \end{aligned}$$
(3.7)

where G is an arbitrary \(p\times n\) matrix.

Now we prove that the risk of \((X'T^{-}X)^{-1}X'T^{-}y\) is equal to the risk of \(Ly=[(X'T^{-}X)^{-1}X'T^{-}+G(I_n-TT^-)]y\).

Since \((I_n-TT^-)V=(I_n-TT^-)(T-XUX')=O\) and

$$\begin{aligned}&R\left( \big [(X'T^{-}X)^{-1}X'T^{-}+G(I_n-TT^-)\big ]y,\beta \right) \nonumber \\&\quad =\sigma ^2w^2tr(T^-T)-2\sigma ^2pw+(2-w)w \sigma ^2 tr(X'T^-XU)+\sigma ^2trL'MLV\nonumber \\&\quad =\sigma ^2w^2tr(T^-T)-2\sigma ^2pw+(2-w)w\sigma ^2 tr(X'T^-XU)\nonumber \\&\qquad + \, \sigma ^2trT^-X(X'T^{-}X)^{-1}M(X'T^{-}X)^{-1}X'T^-V\nonumber \\&\qquad + \, 2\sigma ^2trT^-X(X'T^{-}X)^{-1}MG(I_n-TT^-)V\nonumber \\&\qquad + \,\sigma ^2tr[G(I_n-TT^-)]'M[G (I_n-TT^-)]V \end{aligned}$$
(3.8)

So we have \(R\left( [(X'T^{-}X)^{-1}X'T^{-}+G(I_n-TT^-)]y,\beta \right) =R((X'T^{-}X)^{-1}X'T^{-}y, \beta )\).

Next we prove that \((X'T^{-}X)^{-1}X'T^{-}y\) reaches the minimum risk in \(\mathfrak {I}_1\).

Suppose that \(\tilde{L}y\) is any estimator of \(\beta \) in \(\mathfrak {I}_1\). By \(\tilde{L}X=I_p\), we can write \(\tilde{L}=(X'T^{-}X)^{-1}X'T^{-}+\mu N\), where \(\mu \) is an arbitrary \(p\times n\) matrix and \(N=I_n-X(X'T^{-}X)^{-1}X'T^{-}\). Then we obtain

$$\begin{aligned}&R\left( \big [(X'T^{-}X)^{-1}X'T^{-}+\mu N \big ]y,\beta \right) \nonumber \\&\quad =\sigma ^2w^2tr(T^-T)-2\sigma ^2pw+(2-w)w \sigma ^2 tr(X'T^-XU)\nonumber \\&\qquad +\,\sigma ^2tr[(X'T^{-}X)^{-1}X'T^{-}+\mu N]'M[(X'T^{-}X)^{-1}X'T^{-}+\mu N]V\nonumber \\&\quad =R((X'T^{-}X)^{-1}X'T^{-}y,\beta )+\sigma ^2tr(\mu N)'M(\mu N)V\nonumber \\&\quad =R((X'T^{-}X)^{-1}X'T^{-}y,\beta )+\sigma ^2tr(\mu NV^{1/2} )'M(\mu NV^{1/2} )\nonumber \\&\quad \ge R((X'T^{-}X)^{-1}X'T^{-}y,\beta ) \end{aligned}$$
(3.9)

and the equality holds if and only if \(\tilde{L}=(X'T^{-}X)^{-1}X'T^{-}\). \(\square \)
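As a computational remark (an illustrative sketch added here, not part of the proof), the estimator of Theorem 3.1 can be formed directly with a generalized inverse of T; by Lemma 3.4(2) the result does not depend on which generalized inverse is chosen, so the Moore–Penrose inverse is a convenient choice.

```python
import numpy as np

def blue_T(y, X, V, U):
    """beta_hat_T = (X'T^-X)^{-1} X'T^- y with T = V + X U X' (Theorem 3.1)."""
    T_minus = np.linalg.pinv(V + X @ U @ X.T)   # one admissible generalized inverse of T
    XtTm = X.T @ T_minus
    return np.linalg.solve(XtTm @ X, XtTm @ y)  # solves (X'T^-X) beta = X'T^- y
```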

Corollary 3.1

For the model (2.1), if \(\mathfrak {R}(X)\subset \mathfrak {R}(V)\) and \(L=(X'V^{-}X)^{-1}X'V^{-}\), then Ly is the best linear unbiased estimator of \(\beta \) under the weighted balanced loss function.

Remark 3.1

Under the weighted balanced loss function, the best linear unbiased estimator of \(\beta \) in \(\mathfrak {I}_1\) is \(\hat{\beta }=(X'T^{-}X)^{-1}X'T^{-}y\), which is the same as the estimator obtained by Zhu et al. (2010) and Hu et al. (2010) under the balanced loss function.

Remark 3.2

Under the weighted balanced loss function, if V is a positive definite matrix, then the best linear unbiased estimator of \(\beta \) in \(\mathfrak {I}_1\) is \(\hat{\beta }=(X'V^{-1}X)^{-1}X'V^{-1}y\), which is the same as the estimator obtained by Fan and Wu (2014) under the weighted balanced loss function.

3.2 Relative efficiencies of the best linear unbiased estimator

In Sect. 3.1, we derived the best linear unbiased estimator \(\hat{\beta }_T=(X'T^{-}X)^{-1}X'T^{-}y\) under the weighted balanced loss function in the singular linear regression model. However, the covariance matrix is usually unknown in practice, and in that case the ordinary least squares estimator \(\hat{\beta }_{OLS}=(X'X)^{-1}X'y\) is used in place of the best linear unbiased estimator. This substitution leads to some loss, which has been discussed in many papers, such as Rao (1985), Yang and Wang (2009), Yang and Wu (2011), Liu (2000), Liu et al. (2009), Wang and Yang (2012) and Wu (2014). In this subsection, we define two relative efficiencies to measure this loss under the weighted balanced loss risk function.

Now we define two relative efficiencies:

$$\begin{aligned} e_1(\hat{\beta }_T|\hat{\beta }_{OLS})=R(\hat{\beta }_{OLS},\beta )-R(\hat{\beta }_T,\beta ) \end{aligned}$$
(3.10)
$$\begin{aligned} e_2(\hat{\beta }_T|\hat{\beta }_{OLS})=\frac{R(\hat{\beta }_T,\beta )}{R(\hat{\beta }_{OLS},\beta )} \end{aligned}$$
(3.11)

where \(R(\tilde{\beta },\beta )\) is defined in (2.3), \(\hat{\beta }_T=(X'T^{-}X)^{-1}X'T^{-}y\) and \(\hat{\beta }_{OLS}=(X'X)^{-1}X'y\).

Based on the definition of the weighted balanced loss risk function, we obtain

$$\begin{aligned} \begin{aligned}&R(\hat{\beta }_{OLS},\beta )=E\{W((X'X)^{-1}X'y,\beta )\}\\&\quad =E\left\{ w^2(y-X(X'X)^{-1}X'y)'T^{-}(y-X(X'X)^{-1}X'y)\right. \\&\qquad +\,(1-w)^2((X'X)^{-1}X'y-\beta )' S((X'X)^{-1}X' y-\beta ) \\&\left. \qquad + \, 2w(1-w)(X (X'X)^{-1}X'y-y)'T^{-}X((X'X)^{-1}X' y-\beta )\right\} \\&\quad =\sigma ^2w^2tr(T^-T)-2\sigma ^2pw-(1-w)^2\sigma ^2trSU\\&\qquad +\,\sigma ^2trM(X'X)^{-1}(X'TX)(X'X)^{-1} \end{aligned} \end{aligned}$$
(3.12)

and

$$\begin{aligned}&R(\hat{\beta }_{T},\beta )=E\{W((X'T^{-}X)^{-1}X'T^{-}y,\beta )\}\nonumber \\&\quad =\sigma ^2w^2tr(T^-T)-2\sigma ^2pw-(1-w)^2\sigma ^2trSU+\sigma ^2trM(X'T^{-}X)^{-1} \end{aligned}$$
(3.13)
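The closed forms (3.12) and (3.13) are easy to evaluate numerically. The following sketch (added for illustration only; the function and variable names are not from the original) computes \(e_1\) and \(e_2\), again using the Moore–Penrose inverse as the choice of \(T^{-}\).

```python
import numpy as np

def relative_efficiencies(X, V, U, S, w, sigma2):
    """e1 and e2 of (3.10)-(3.11) computed via the closed-form risks (3.12)-(3.13)."""
    p = X.shape[1]
    T = V + X @ U @ X.T
    T_minus = np.linalg.pinv(T)
    M = w * (2 - w) * X.T @ T_minus @ X + (1 - w)**2 * S
    common = sigma2 * (w**2 * np.trace(T_minus @ T) - 2 * p * w
                       - (1 - w)**2 * np.trace(S @ U))
    XtX_inv = np.linalg.inv(X.T @ X)
    risk_ols = common + sigma2 * np.trace(M @ XtX_inv @ X.T @ T @ X @ XtX_inv)
    risk_T = common + sigma2 * np.trace(M @ np.linalg.inv(X.T @ T_minus @ X))
    return risk_ols - risk_T, risk_T / risk_ols   # e1, e2
```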

Now we present the bounds of the two relative efficiencies.

Theorem 3.2

For the model (2.1), when V is a nonnegative definite matrix, let \(T=Q'\Lambda Q\), where Q is a \(k\times n\) matrix with \(QQ'=I_k\) and \(\Lambda =diag(\lambda _1,\ldots ,\lambda _k)\) with \(\lambda _1\ge \cdots \ge \lambda _k>0\). Let \(c_1\ge \cdots \ge c_p>0=c_{p+1}= \cdots =c_k\) be the ordered eigenvalues of \(QXM^{-1}X'Q'\). Then we have

$$\begin{aligned} e_1(\hat{\beta }_T|\hat{\beta }_{OLS})\le \frac{\sigma ^2\sum _{i=1}^m(\sqrt{\lambda _i}-\sqrt{\lambda _{k-i+1}})^2}{c_p} \end{aligned}$$

where \(rank(X)=p\), \(rank(T)=k\) and \(k>p\), \(m=min(p,k-p)\).

Proof

Since \(\mathfrak {R}(X)\subset \mathfrak {R}(T)\), QX is a \(k\times p\) matrix with \(rank(QX)=p\). Denote \(QX=\gamma \); then \(rank(\gamma )=p\) and

$$\begin{aligned} e_1(\hat{\beta }_T|\hat{\beta }_{OLS})= & {} R(\hat{\beta }_{OLS},\beta )-R(\hat{\beta }_T,\beta )\nonumber \\= & {} \left\{ \sigma ^2w^2tr(T^-T)-2\sigma ^2pw-(1-w)^2\sigma ^2trSU \right. \nonumber \\&\left. +\,\sigma ^2trM(X'X)^{-1}(X'TX)(X'X)^{-1}\right\} -\left\{ \sigma ^2w^2tr(T^-T) \right. \nonumber \\&\left. -\,2\sigma ^2pw-(1-w)^2\sigma ^2trSU+\sigma ^2trM(X'T^{-}X)^{-1}\right\} \nonumber \\= & {} \sigma ^2tr M(X'X)^{-1}X'TX (X'X)^{-1}-\sigma ^2trM(X'T^{-}X)^{-1}\nonumber \\= & {} \sigma ^2tr M(\gamma '\gamma )^{-1}\gamma '\Lambda \gamma (\gamma '\gamma )^{-1}-\sigma ^2trM(\gamma '\Lambda ^{-1}\gamma )^{-1}\nonumber \\= & {} \sigma ^2tr \big [(\rho '\rho )^{-1}\rho '\Lambda \rho (\rho '\rho )^{-1}-(\rho '\Lambda ^{-1}\rho )^{-1}\big ] \end{aligned}$$
(3.14)

where \(\rho =\gamma M^{-1/2}\) and \(rank(\rho )=p\). Performing a singular value decomposition of \(\rho \), we obtain \(\rho =Q_1\Gamma ^{1/2} Q_2'\), where \(Q_1\) is a \(k\times p\) matrix with \(Q_1'Q_1=I_p\), \(Q_2\) is a \(p\times p\) orthogonal matrix and \(\Gamma =diag(c_1,\ldots ,c_p)\). Then, by Lemma 3.3 and equation (3.14), we obtain

$$\begin{aligned} e_1(\hat{\beta }_T|\hat{\beta }_{OLS})= & {} \sigma ^2tr\Gamma ^{-1}\left[ Q_1'\Lambda Q_1-(Q_1'\Lambda ^{-1}Q_1)^{-1}\right] \nonumber \\\le & {} \frac{\sigma ^2}{c_p}tr \left[ Q_1'\Lambda Q_1-(Q_1'\Lambda ^{-1}Q_1)^{-1}\right] \nonumber \\\le & {} \frac{\sigma ^2\sum _{i=1}^m \left( \sqrt{\lambda _i}-\sqrt{\lambda _{k-i+1}}\right) ^2}{c_p} \end{aligned}$$
(3.15)

This completes the proof of Theorem 3.2. We now present another theorem of this paper. \(\square \)

Theorem 3.3

For the model (2.1), when V is a nonnegative definite matrix, let \(T=Q'\Lambda Q\), where Q is a \(k\times n\) matrix with \(QQ'=I_k\) and \(\Lambda =diag(\lambda _1, \ldots ,\lambda _k)\) with \(\lambda _1\ge \cdots \ge \lambda _k>0\). Let \(c_1\ge \cdots \ge c_p>0=c_{p+1}= \cdots =c_k\) be the ordered eigenvalues of \(QXM^{-1}X'Q'\). If \(w^2tr(T^-T)-2pw-(1-w)^2trSU=0\), then we have

$$\begin{aligned} e_2\big (\hat{\beta }_T|\hat{\beta }_{OLS}\big ) \ge \frac{c_p}{c_1}\left[ \frac{2\sum _{i=1}^m(\sqrt{\lambda _i\lambda _{k-i+1}})}{\sum _{i=1}^m(\lambda _i+\lambda _{k-i+1})}\right] ^2 \end{aligned}$$

where \(rank(X)=p\), \(rank(T)=k\), \(m=min(p,k-p)\) and \(k>p\).

Proof

By the proof of Theorem 3.2 and Lemma 3.2 we have

$$\begin{aligned} e_2\big (\hat{\beta }_T|\hat{\beta }_{OLS}\big )= & {} \frac{R(\hat{\beta }_T,\beta )}{R(\hat{\beta }_{OLS},\beta )}\nonumber \\= & {} \frac{tr M(X'T^{-}X)^{-1}}{tr M(X'X)^{-1}X'TX (X'X)^{-1}}\nonumber \\= & {} \frac{tr(\rho '\Lambda ^{-1}\rho )^{-1}}{tr (\rho '\rho )^{-1}\rho '\Lambda \rho (\rho '\rho )^{-1}}\nonumber \\= & {} \frac{tr \Gamma ^{-1}(Q_1'\Lambda ^{-1}Q_1)^{-1} }{tr \Gamma ^{-1}Q_1'\Lambda Q_1}\nonumber \\\ge & {} \frac{c_p}{c_1}\left[ \frac{2\sum _{i=1}^m(\sqrt{\lambda _i\lambda _{k-i+1}})}{\sum _{i=1}^m(\lambda _i+\lambda _{k-i+1})}\right] ^2 \end{aligned}$$
(3.16)

The proof of Theorem 3.3 is completed. \(\square \)

Theorem 3.4

For the model (2.1), when V is a nonnegative definite matrix, let \(T=Q'\Lambda Q\), where Q is a \(k\times n\) matrix with \(QQ'=I_k\) and \(\Lambda =diag(\lambda _1,\ldots ,\lambda _k)\) with \(\lambda _1\ge \cdots \ge \lambda _k>0\). Let \(c_1\ge \cdots \ge c_p>0=c_{p+1}=\cdots =c_k\) be the ordered eigenvalues of \(QXM^{-1}X'Q'\). If \(w^2tr(T^-T)-2pw-(1-w)^2trSU\ne 0\) and \(X'T^-X\le U^{-1}\), then

$$\begin{aligned} e_2\big (\hat{\beta }_T|\hat{\beta }_{OLS}\big )\ge 1-\frac{\sum _{i=1}^m \left( \sqrt{\lambda _i}-\sqrt{\lambda _{k-i+1}}\right) ^2}{w^2(k-p)c_p} \end{aligned}$$

where \(rank(X)=p\), \(rank(T)=k\) and \(k>p\), \(m=min(p,k-p)\).

Proof

By the proof of Theorem 3.2 (dividing (3.15) by \(\sigma ^2\), since \(\sigma ^2\) cancels in the ratio of risks), we have

$$\begin{aligned}&e_2\big (\hat{\beta }_T|\hat{\beta }_{OLS}\big )\nonumber \\= & {} \frac{R(\hat{\beta }_T,\beta )}{R\big (\hat{\beta }_{OLS},\beta \big )}\nonumber \\= & {} \frac{w^2tr(T^-T)-2pw-(1-w)^2trSU+trM(X'T^{-}X)^{-1}}{w^2tr(T^-T)-2pw-(1-w)^2trSU+trM(X'X)^{-1}(X'TX)(X'X)^{-1}}\nonumber \\= & {} 1-\frac{trM(X'X)^{-1}(X'TX)(X'X)^{-1}-trM(X'T^{-}X)^{-1}}{w^2tr(T^-T)-2pw-(1-w)^2trSU+trM(X'X)^{-1}(X'TX)(X'X)^{-1}}\nonumber \\\ge & {} 1-\frac{\frac{\sum _{i=1}^m(\sqrt{\lambda _i}-\sqrt{\lambda _{k-i+1}})^2}{c_p}}{w^2tr(T^-T)-2pw-(1-w)^2trSU+trM(X'T^{-}X)^{-1}}\nonumber \\= & {} 1-\frac{\sum _{i=1}^m(\sqrt{\lambda _i}-\sqrt{\lambda _{k-i+1}})^2}{\left( w^2k-2pw+w(2-w)p+(1-w)^2trS[(X'T^{-}X)^{-1}-U]\right) c_p}\nonumber \\\ge & {} 1-\frac{\sum _{i=1}^m(\sqrt{\lambda _i}-\sqrt{\lambda _{k-i+1}})^2}{w^2(k-p)c_p} \end{aligned}$$
(3.17)

\(\square \)

Fig. 1 The estimated \(e_1\) and \(e_2\) versus w

4 Numerical example

In this section we use a numerical example to illustrate the theoretical results presented in Sect. 3. The data, which has been studied by many authors such as Chang and Yang (2012), is given as follows.

$$\begin{aligned} X=\left( \begin{array}{llll} 1.9&{}\quad 2.2&{}\quad 1.9&{}\quad 3.7\\ 1.8&{}\quad 2.2&{}\quad 2.0&{}\quad 3.8\\ 1.8&{}\quad 2.4&{}\quad 2.1&{}\quad 3.6\\ 1.8&{}\quad 2.4&{}\quad 2.2&{}\quad 3.8\\ 2.0&{}\quad 2.5&{}\quad 2.3&{}\quad 3.8\\ 2.1&{}\quad 2.6&{}\quad 2.4&{}\quad 3.7\\ 2.1&{}\quad 2.6&{}\quad 2.6&{}\quad 3.8\\ 2.2&{}\quad 2.6&{}\quad 2.6&{}\quad 4.0\\ 2.3&{}\quad 2.8&{}\quad 2.8&{}\quad 3.7\\ 2.3&{}\quad 2.7&{}\quad 2.8&{}\quad 3.8\\ \end{array} \right) , y=\left( \begin{array}{l} 2.3\\ 2.2\\ 2.2\\ 2.3\\ 2.4\\ 2.5\\ 2.6\\ 2.6\\ 2.7\\ 2.7\\ \end{array} \right) \end{aligned}$$
(4.1)

In this example we take \(\sigma ^2=0.05\), \(U_1=I\), \(U_2=2I\) and

$$\begin{aligned} V=\left( \begin{array}{llllllllll} 1&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 1&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ \end{array} \right) \end{aligned}$$
(4.2)

It is easy to verify that V is a singular nonnegative definite matrix. Then we can compute the estimated \(e_1\) and \(e_2\), which are shown in Figs. 1 and 2.
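For instance, the computation can be organized along the following lines (a sketch only: it reuses the illustrative function relative_efficiencies from Sect. 3.2, and takes \(S=I_4\) purely for illustration because S is not specified explicitly in this section).

```python
import numpy as np

X = np.array([[1.9, 2.2, 1.9, 3.7], [1.8, 2.2, 2.0, 3.8], [1.8, 2.4, 2.1, 3.6],
              [1.8, 2.4, 2.2, 3.8], [2.0, 2.5, 2.3, 3.8], [2.1, 2.6, 2.4, 3.7],
              [2.1, 2.6, 2.6, 3.8], [2.2, 2.6, 2.6, 4.0], [2.3, 2.8, 2.8, 3.7],
              [2.3, 2.7, 2.8, 3.8]])
V = np.zeros((10, 10)); V[0, 0] = V[1, 1] = 1.0   # the singular covariance matrix (4.2)
sigma2, S = 0.05, np.eye(4)                        # S = I_4 assumed for illustration
for U in (np.eye(4), 2 * np.eye(4)):               # U_1 = I and U_2 = 2I
    for w in np.linspace(0.05, 0.95, 10):
        e1, e2 = relative_efficiencies(X, V, U, S, w, sigma2)
        print(f"w={w:.2f}  e1={e1:.4f}  e2={e2:.4f}")
```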

From the figures we find that \(e_1\) and \(e_2\) are both smaller than 1.

Fig. 2 The estimated \(e_1\) and \(e_2\) versus w

5 Conclusion

In this paper, we derive the best linear unbiased estimator when the covariance matrix is a nonnegative definite matrix, and we find that the best linear unbiased estimator under the weighted balanced loss function in the singular linear model is the same as the best linear unbiased estimator obtained by Rao (1985) under the squared error loss function and by Hu et al. (2010) under the balanced loss function. Furthermore, some relative efficiencies based on the weighted balanced loss function are defined, and lower and upper bounds for these relative efficiencies are derived.