
Hypothesis test on response mean with inequality constraints under data missing when covariables are present


Abstract

This paper addresses the problem of hypothesis testing on the response mean under various inequality constraints in the presence of covariates when response data are missing at random. The hypotheses considered include a single point, two points, a set of inequalities, and a two-sided set of inequalities on the response mean. The test statistics are constructed from the weighted-corrected empirical likelihood function of the response mean, based on the weighted-corrected imputation approach for the response variable. We investigate the limiting distributions and asymptotic powers of the proposed empirical likelihood ratio test statistics with auxiliary information. The results show that the test statistic with auxiliary information is more efficient than the one without auxiliary information. A simulation study is undertaken to investigate the finite-sample performance of the proposed method.



Acknowledgments

The authors would like to thank the anonymous referees for their valuable comments and suggestions, which led to the improvement of the paper. This research was supported by the National Natural Science Foundation of China (11271286, 11226218, 11401006) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (20120072110007).

Author information

Correspondence to Han-Ying Liang.

Appendix: Proofs of main results

For convenience and simplicity, let \(Z\) denote a standard normal random variable and let \(C\) denote a positive constant whose value may vary at each occurrence.

Lemma 5.1

The empirical log-likelihood function \(\hat{l}_{AI}(\theta )\) is concave in \(\theta \).

Proof

Note that \(\psi '_i(\theta )=(\mathbf{0}^T_q,-1)^T\), where \(\mathbf{0}_q\) is the \(q\times 1\) null vector and “\('\)” denotes the derivative with respect to \(\theta \). Then

$$\begin{aligned} \hat{l}'_{AI}(\theta )=-\sum _{i=1}^n\frac{\eta '^T(\theta )\psi _i(\theta )+\eta ^T(\theta )\psi '_i(\theta )}{1+\eta ^T(\theta )\psi _i(\theta )}=-\sum _{i=1}^n\frac{\eta '^T(\theta )\psi _i(\theta )-\eta _{q+1}(\theta )}{1+\eta ^T(\theta )\psi _i(\theta )}. \end{aligned}$$

Furthermore, from (2.5) and (2.6), we have

$$\begin{aligned} \hat{l}'_{AI}(\theta )=\sum _{i=1}^n\frac{\eta _{q+1}(\theta )}{1+\eta ^T(\theta )\psi _i(\theta )}=\eta _{q+1}(\theta )\sum _{i=1}^nn\tilde{p}_i=n\eta _{q+1}(\theta ). \end{aligned}$$

Differentiating Eq. (2.6) with respect to \(\theta \), we deduce that

$$\begin{aligned} \sum _{i=1}^n\frac{\psi '_i(\theta )-\psi _i(\theta )\eta '^T(\theta )\psi _i(\theta )}{[1+\eta ^T(\theta )\psi _i(\theta )]^2}=0. \end{aligned}$$

Then, it follows that

$$\begin{aligned} \left( \mathbf{0}^T_q,-\sum _{i=1}^n\tilde{p}_i^2 \right) ^T= & {} \eta '^T(\theta )\sum _{i=1}^n\tilde{p}_i^2\psi _i^T(\theta )\psi _i(\theta )\\= & {} \eta '^T(\theta )\sum _{i=1}^n\tilde{p}_i^2 \left[ A^T(\mathbf {X}_i)A(\mathbf {X}_i)+(\hat{Y}_i-\theta )^2\right] . \end{aligned}$$

Thus, we have \(\eta '_{q+1}(\theta )<0\), and hence \(\hat{l}''_{AI}(\theta )=n\eta '_{q+1}(\theta )<0\), which completes the proof of Lemma 5.1. \(\square \)
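Remark (numerical illustration). The concavity in Lemma 5.1 is easy to check numerically. The following minimal sketch (illustrative only, not the paper's procedure) treats a synthetic sample as the imputed responses \(\hat{Y}_i\) in the case without auxiliary information (\(q=0\)), solves the one-dimensional analogue of Eq. (2.6) for the Lagrange multiplier, and verifies that \(\sum _{i=1}^n\log \{1+\eta (\theta )(\hat{Y}_i-\theta )\}\) has nonnegative second differences on a grid, i.e. that \(\hat{l}_{AI}(\theta )\) is concave; the data-generating choices are assumptions made for the demonstration.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=1.0, size=200)  # stand-in for the imputed responses \hat{Y}_i

def el_logratio(theta, y):
    """sum_i log(1 + lam*(y_i - theta)), with lam solving the q = 0 analogue of Eq. (2.6):
    sum_i (y_i - theta) / (1 + lam*(y_i - theta)) = 0.  Requires min(y) < theta < max(y)."""
    z = y - theta
    eps = 1e-10
    lo, hi = -1.0 / z.max() + eps, -1.0 / z.min() - eps  # keeps every 1 + lam*z_i > 0
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return np.sum(np.log1p(lam * z))

# Second differences on a grid: nonnegative <=> l_E convex <=> l_AI = -l_E concave.
grid = np.linspace(0.6, 1.4, 41)
vals = np.array([el_logratio(t, y) for t in grid])
print("min second difference:", (vals[:-2] - 2 * vals[1:-1] + vals[2:]).min())
```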

Set \( \Gamma _{AI}=\bigg (\begin{array}{cc} \Gamma _{1} &{} \Gamma _{2}\\ \Gamma _{2}^T &{} \Gamma _A \end{array}\bigg ) \) with \(\Gamma _1=E\{A(\mathbf {X})A^T(\mathbf {X})\},\) \(\Gamma _2=E\{A(\mathbf {X})(m(\mathbf {X})-\theta )\}\), \(\Gamma _A=E\{\sigma ^2(\mathbf {X})/p(\mathbf {X})\}+\text{ Var }(m(\mathbf {X}))\) and \(\sigma ^2(\mathbf {x})=\text{ Var }(Y|\mathbf {X}=\mathbf {x}).\)

Lemma 5.2

Suppose that conditions (C1)–(C8) hold. If \(\theta \) is the true mean of response \(Y\), then \( \frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\theta )\mathop {\rightarrow }\limits ^{d}N(0,\Gamma _{AI}). \)

Proof

The proof of Lemma 5.2 can be found in the proof of Theorem 2 of Xue (2009). \(\square \)

Lemma 5.3

Suppose that conditions (C1)–(C8) hold. If \(\theta \) is the true mean of response \(Y\), then

$$\begin{aligned} \sqrt{n}(\hat{\theta }_{ME}-\theta )\mathop {\rightarrow }\limits ^{d}N(0,\Gamma ) \quad \text{ and } \quad \hat{\theta }_{ME}=\hat{\theta }_n+O_p(n^{-1/2}), \end{aligned}$$

where \(\hat{\theta }_{ME}=\arg \max _{\theta }\hat{l}_{AI}(\theta )\) is called the maximum EL estimator, \(\hat{\theta }_n=\frac{1}{n}\sum _{i=1}^n\hat{Y}_i\) and \(\Gamma \) is defined in Theorem 3.2.

Proof

Firstly, we prove \(\sqrt{n}(\hat{\theta }_{ME}-\theta )\mathop {\rightarrow }\limits ^{d}N(0,\Gamma ) \) holds. Let \(\tilde{\theta }=\hat{\theta }_{ME}\) and \(\tilde{\eta }=\tilde{\eta }(\hat{\theta }_{ME})=(\tilde{\eta }^T_1,\tilde{\eta }_2)^T\), where \(\tilde{\eta }_1\) is a \(q\)-dimensional column vector. Note that \(\tilde{\theta }\) and \(\tilde{\eta }\) satisfy the following three equations: \(Q_{kn}(\theta ,\eta )=0\) for \(k=1,2\) and \(3\), where \( Q_{1n}(\theta ,\eta )=n^{-1}\sum _{i=1}^n(\hat{Y}_i-\theta )/\{1+\eta ^T\psi _i(\theta )\}, \) \( Q_{2n}(\theta ,\eta )=n^{-1}\sum _{i=1}^nA(\mathbf {X}_i)/\{1+\eta ^T\psi _i(\theta )\} \) and \( Q_{3n}(\theta ,\eta )=n^{-1}\sum _{i=1}^n\eta _2/\{1+\eta ^T\psi _i(\theta )\}. \) Then by expanding \(Q_{kn}(\tilde{\theta },\tilde{\eta })=0\) at \((\theta ,0)\) for \(k=1,2\) and \(3\), we derive that

$$\begin{aligned} 0=Q_{kn}(\tilde{\theta },\tilde{\eta })&=Q_{kn}(\theta ,0)+\frac{\partial Q_{kn}(\theta ,0)}{\partial \theta }(\tilde{\theta }-\theta ) +\,\frac{\partial Q_{kn}(\theta ,0)}{\partial \eta _1^T}\tilde{\eta }_1+\frac{\partial Q_{kn}(\theta ,0)}{\partial \eta _2}\tilde{\eta }_2\\&\quad +o_p(\epsilon _n), \end{aligned}$$

where \(\epsilon _n=|\tilde{\theta }-\theta |+\Vert \tilde{\eta }_1\Vert +|\tilde{\eta }_2|\). Further we find

$$\begin{aligned} \left( \begin{array}{c} \tilde{\eta }_1\\ \tilde{\eta }_2\\ \tilde{\theta }-\theta \end{array}\right) =H_n^{-1}\left( \begin{array}{c} -Q_{1n}(\theta ,0)+o_p(\epsilon _n)\\ -Q_{2n}(\theta ,0)+o_p(\epsilon _n)\\ o_p(\epsilon _n) \end{array}\right) , \end{aligned}$$

where

$$\begin{aligned} H_n=&\left( \begin{array}{ccc} \frac{\partial Q_{1n}(\theta ,\eta )}{\partial \eta ^T_1} &{} \frac{\partial Q_{1n}(\theta ,\eta )}{\partial \eta _2} &{} \frac{\partial Q_{1n}(\theta ,\eta )}{\partial \theta }\\ \frac{\partial Q_{2n}(\theta ,\eta )}{\partial \eta ^T_1} &{} \frac{\partial Q_{2n}(\theta ,\eta )}{\partial \eta _2} &{} \frac{\partial Q_{2n}(\theta ,\eta )}{\partial \theta }\\ \frac{\partial Q_{3n}(\theta ,\eta )}{\partial \eta ^T_1} &{} \frac{\partial Q_{3n}(\theta ,\eta )}{\partial \eta _2} &{} \frac{\partial Q_{3n}(\theta ,\eta )}{\partial \theta }\\ \end{array}\right) _{(\theta ,\eta )=(\theta ,0)}\\ =&\left( \begin{array}{ccc} -\frac{1}{n}\sum _{i=1}^n (\hat{Y}_i-\theta )A^T(\mathbf {X}_i) &{} -\frac{1}{n}\sum _{i=1}^n (\hat{Y}_i-\theta )^2 &{} -1\\ -\frac{1}{n}\sum _{i=1}^n A(\mathbf {X}_i)A^T(\mathbf {X}_i) &{} -\frac{1}{n}\sum _{i=1}^n (\hat{Y}_i-\theta )A(\mathbf {X}_i) &{} 0\\ \mathbf{0}_{1\times q} &{} -1 &{} 0 \end{array}\right) . \end{aligned}$$

Lemma 4 in Xue (2009) gives \(\frac{1}{n}\sum _{i=1}^n (\hat{Y}_i-\theta )^2\mathop {\rightarrow }\limits ^{p}\Gamma _A\). Applying the Law of Large Numbers we have \(\frac{1}{n}\sum _{i=1}^n A(\mathbf {X}_i)A^T(\mathbf {X}_i)\mathop {\rightarrow }\limits ^{p}\Gamma _1\), and from Lemmas 1 and 2 in Xue (2009), one can derive

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n(\hat{Y}_i-\theta )A(\mathbf {X}_i)&=\frac{1}{n}\sum _{i=1}^n\Bigg \{\frac{\delta _i}{p(\mathbf {X}_i)}(Y_i-m(\mathbf {X}_i))A(\mathbf {X}_i)\Bigg \}\\&\quad +\frac{1}{n}\sum _{i=1}^n(m(\mathbf {X}_i)-\theta )A(\mathbf {X}_i)+o_p(1)\\&\qquad \mathop {\rightarrow }\limits ^{p}E\{A(\mathbf {X})(m(\mathbf {X})-\theta )\}=\Gamma _2. \end{aligned}$$

Thus \( H_n\mathop {\rightarrow }\limits ^{p}\left( \begin{array}{ccc} -\Gamma ^T_2 &{} -\Gamma _A &{} -1\\ -\Gamma _1 &{} -\Gamma _2 &{} 0\\ 0 &{} -1 &{} 0 \end{array}\right) . \) Therefore, we obtain that

$$\begin{aligned} \sqrt{n}(\hat{\theta }_{ME}-\theta )&=\sqrt{n}Q_{1n}(\theta ,0)-\sqrt{n}\Gamma _2^T\Gamma _1^{-1}Q_{2n}(\theta ,0)+o_p(1) \nonumber \\&=(1,-\Gamma _2^T\Gamma _1^{-1})(\sqrt{n}Q_{1n}(\theta ,0),\sqrt{n}Q_{2n}(\theta ,0))^T+o_p(1). \end{aligned}$$
(5.1)

Note that Lemma 5.2 implies that

$$\begin{aligned} \left( \begin{array}{c} \sqrt{n}Q_{1n}(\theta ,0)\\ \sqrt{n}Q_{2n}(\theta ,0) \end{array}\right) = \left( \begin{array}{c} \frac{1}{\sqrt{n}}\sum _{i=1}^n(\hat{Y}_i-\theta )\\ \frac{1}{\sqrt{n}}\sum _{i=1}^nA(\mathbf {X}_i) \end{array}\right) \mathop {\rightarrow }\limits ^{d}N\left( \bigg (\begin{array}{c} 0\\ 0 \end{array}\bigg ), \bigg (\begin{array}{cc} \Gamma _A &{} \Gamma _2^T\\ \Gamma _2 &{} \Gamma _1 \end{array}\bigg )\right) , \end{aligned}$$

which together with (5.1) yields \(\sqrt{n}(\hat{\theta }_{ME}-\theta )\mathop {\rightarrow }\limits ^{d}N(0,\Gamma ).\)

Lemma 3 in Xue (2009) implies \(\sqrt{n}(\hat{\theta }_n-\theta )\mathop {\rightarrow }\limits ^{d}N(0,\Gamma _A)\), which, together with \(\sqrt{n}(\hat{\theta }_{ME}-\theta )\mathop {\rightarrow }\limits ^{d}N(0,\Gamma )\), leads to \(\hat{\theta }_{ME}=\hat{\theta }_n+O_p(n^{-1/2}).\) \(\square \)
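Remark (numerical illustration). As a quick check of the normal limits in Lemma 5.3, the sketch below simulates a toy missing-at-random model and compares the Monte Carlo variance of \(\sqrt{n}(\hat{\theta }_n-\theta )\) with \(\Gamma _A=E\{\sigma ^2(\mathbf {X})/p(\mathbf {X})\}+\text{ Var }(m(\mathbf {X}))\). For brevity it plugs the true \(p\) and \(m\) into an augmented inverse-probability-weighting form of \(\hat{Y}_i\) (our assumed shape for the weighted-corrected imputation; the paper uses kernel estimators instead), so it is an idealized check rather than the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (illustrative assumptions): X ~ U(0,1), m(x) = 1 + x,
# Y = m(X) + N(0, 0.25), missingness probability p(x) = 0.6 + 0.3*x.
m = lambda x: 1.0 + x
p = lambda x: 0.6 + 0.3 * x
sigma2 = 0.25
theta = 1.5                      # true mean: E(Y) = E(m(X)) = 1.5

def theta_hat(n):
    x = rng.uniform(size=n)
    y = m(x) + rng.normal(scale=np.sqrt(sigma2), size=n)
    delta = rng.binomial(1, p(x))                    # delta_i = 1 iff Y_i is observed
    # assumed AIPW form of \hat{Y}_i, with the true p and m plugged in:
    y_hat = delta * y / p(x) + (1.0 - delta / p(x)) * m(x)
    return y_hat.mean()

n, reps = 400, 4000
draws = np.sqrt(n) * (np.array([theta_hat(n) for _ in range(reps)]) - theta)

# Gamma_A = E{sigma^2(X)/p(X)} + Var(m(X)); both terms are explicit in this model:
gamma_A = sigma2 * (np.log(0.9) - np.log(0.6)) / 0.3 + 1.0 / 12.0
print("MC variance:", round(draws.var(), 4), "  Gamma_A:", round(gamma_A, 4))
```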

Lemma 5.4

Assume that \(\theta ^*\) is the true mean of response \(Y\). Under the null hypothesis \(H_3\) and conditions (C1)–(C8), if \(E\Vert A(\mathbf {X})\Vert ^3<\infty \), \(\sup _{\mathbf {x}}E(|Y|^3|\mathbf {X}=\mathbf {x})<\infty \) and \(\Gamma _A>0\), then \( \frac{\max \{\widehat{\mathcal {L}}_{AI}(\theta _1),\widehat{\mathcal {L}}_{AI}(\theta _2)\}}{\widehat{\mathcal {L}}_{AI}(\theta ^*)} \mathop {\rightarrow }\limits ^{p}1. \)

Proof

We prove only the case where the true mean \(\theta ^*\) is \(\theta _1\); the proof for \(\theta ^*=\theta _2\) is similar. For any \(\theta \), denote \(\hat{l}_E(\theta )=-\log [n^n\widehat{\mathcal {L}}_{AI}(\theta )]=\sum _{i=1}^n\log \{1+\eta ^T(\theta )\psi _i(\theta )\}\) and \(\bar{\theta }=\theta _1+n^{-1/3}\).

Firstly, we establish \(\eta (\bar{\theta })=O_p(n^{-1/3}).\) Let \(\eta (\bar{\theta })=\rho u,\) where \(\rho \ge 0,u\in R^{q+1}\) and \(\Vert u\Vert =1\). Set

$$\begin{aligned} \bar{\psi }(\theta )=\frac{1}{n}\sum _{i=1}^n\psi _{i}(\theta ), \; \psi _m(\theta )=\max _{1\le i\le n}\Vert \psi _{i}(\theta )\Vert ,\quad S=\frac{1}{n}\sum _{i=1}^n\psi _{i}(\theta _1)\psi ^T_{i}(\theta _1). \end{aligned}$$

Denote by \(\mathrm{mineig}(S)\) the smallest eigenvalue of \(S\) and by \(0_q\) the \(q\times 1\) null vector. From Lemma 5.2 and (2.6), we have

$$\begin{aligned} 0=&\, \frac{1}{n}\sum _{i=1}^n\frac{u^T \psi _i(\bar{\theta })}{1+\rho u^T \psi _i(\bar{\theta })}=\frac{1}{n}\sum _{i=1}^nu^T \psi _i(\bar{\theta })-\rho \frac{1}{n}\sum _{i=1}^n\frac{(u^T \psi _i(\bar{\theta }))^2}{1+\rho u^T \psi _i(\bar{\theta })}\\ \le&\, u^T\bar{\psi }(\bar{\theta })-\frac{\rho }{1+\rho \psi _m(\bar{\theta })}\frac{1}{n}\sum _{i=1}^n(u^T \psi _i(\bar{\theta }))^2\\ =&\, u^T\big [\bar{\psi }(\theta _1)+(0^T_{q},n^{-1/3})^T\big ] -\frac{\rho }{1+\rho \psi _m(\bar{\theta })}\frac{1}{n}\sum _{i=1}^nu^T \big [\psi _i(\theta _1)+(0^T_q,n^{-1/3})^T\big ]\\&\,\times \big [\psi _i^T(\theta _1) +(0^T_q,n^{-1/3})\big ]u\\ \le&~u^T\bar{\psi }(\theta _1)+Cn^{-1/3}-\frac{\rho }{1+\rho \psi _m(\bar{\theta })}\big \{\mathrm{mineig}(S)+O_p(n^{-1/2})\big \}, \end{aligned}$$

which gives

$$\begin{aligned} \rho \Big [\mathrm{mineig}(S)\!+\!O_p(n^{-1/2})\!-\!u^T\bar{\psi }(\theta _1)\psi _m(\bar{\theta })\!-\!Cn^{-1/3}\psi _m(\bar{\theta })\Big ]\!<\!|u^T\bar{\psi }(\theta _1)|\!+\!Cn^{-1/3}. \end{aligned}$$

Lemma 3 in Xue (2009) implies that

$$\begin{aligned} \psi _i(\theta ) =\left( \begin{array}{c} A(\mathbf {X}_i)\\ \frac{\delta _i}{p(\mathbf {X}_i)}(Y_i-m(\mathbf {X}_i))+m(\mathbf {X}_i)-\theta \end{array}\right) \{1+o_p(1)\}:=\psi ^*_i(\theta )\{1+o_p(1)\}. \end{aligned}$$

Note that \(\{\psi _i^*(\bar{\theta }),1\le i\le n\}\) is i.i.d., and that \(E\Vert A(\mathbf {X})\Vert ^3<\infty \) and \(\sup _{\mathbf {x}}E(|Y|^3|\mathbf {X}=\mathbf {x})<\infty \) imply \(E\Vert \psi ^*_i(\bar{\theta })\Vert ^3<\infty \). Then from the proof of Lemma 3 in Owen (1990), one can derive

$$\begin{aligned} \psi _m(\bar{\theta })=\max _{1\le i\le n}\Vert \psi ^*_i(\bar{\theta })\{1+o_p(1)\}\Vert =o_p(n^{1/3}). \end{aligned}$$

Then from \(|u^T\bar{\psi }(\theta _1)|=O_p(n^{-1/2})\) and Lemma 5.2, we have

$$\begin{aligned} \rho [\mathrm{mineig}(S)+o_p(1)]=O_p(n^{-1/3}). \end{aligned}$$

Since \(\Gamma _{AI}\) is a positive definite matrix and \(S\mathop {\rightarrow }\limits ^{p}\Gamma _{AI}\), we have \(\mathrm{mineig}(S)\ge C+o_p(1)\) for some constant \(C>0\). Therefore \( \rho =O_p(n^{-1/3}) \) and \(\eta (\bar{\theta })=O_p(n^{-1/3}).\)

From (2.6), it follows that

$$\begin{aligned} 0=&\frac{1}{n}\sum _{i=1}^n\frac{\psi _i(\bar{\theta })}{1+\eta ^T(\bar{\theta }) \psi _i(\bar{\theta })} \nonumber \\ =&\frac{1}{n}\sum _{i=1}^n\psi _i(\bar{\theta })-\frac{1}{n}\sum _{i=1}^n\psi _i(\bar{\theta })\psi _i^T(\bar{\theta })\eta (\bar{\theta }) +\frac{1}{n}\sum _{i=1}^n\frac{\psi _i(\bar{\theta })[\eta ^T(\bar{\theta }) \psi _i(\bar{\theta })]^2}{1+\eta ^T(\bar{\theta }) \psi _i(\bar{\theta })}. \end{aligned}$$
(5.2)

In view of \(\eta (\bar{\theta })=O_p(n^{-1/3})\) and \(\psi _m(\bar{\theta })=o_p(n^{1/3})\), one can conclude that

$$\begin{aligned} \bigg \Vert \frac{1}{n}\sum _{i=1}^n\frac{\psi _i(\bar{\theta })[\eta ^T(\bar{\theta }) \psi _i(\bar{\theta })]^2}{1+\eta ^T(\bar{\theta }) \psi _i(\bar{\theta })}\bigg \Vert \le \Vert \eta (\bar{\theta })\Vert ^2\max _{1\le i\le n}\Vert \psi _i(\bar{\theta })\Vert \frac{1}{n}\sum _{i=1}^n\Vert \psi _i(\bar{\theta })\Vert ^2 \!=\!o_p(n^{-1/3}). \end{aligned}$$

This together with (5.2), yields

$$\begin{aligned} \eta (\bar{\theta })=\Big [\sum _{i=1}^n\psi _i(\bar{\theta })\psi ^T_i(\bar{\theta })\Big ]^{-1}\sum _{i=1}^n\psi _i(\bar{\theta }) +o_p(n^{-1/3}). \end{aligned}$$
(5.3)

By Taylor expansion, using (5.3) and law of the iterated logarithm for \(\{\psi _i^*(\theta _1),1\le i\le n\}\), we obtain

$$\begin{aligned} \hat{l}_E(\bar{\theta })=&~\sum _{i=1}^n\log \{1+\eta ^T(\bar{\theta })\psi _i(\bar{\theta })\} =\sum _{i=1}^n\log \{1+\eta ^T(\bar{\theta })\psi ^*_i(\bar{\theta })\}\{1+o_p(1)\}\\ =&~\sum _{i=1}^n \eta ^T(\bar{\theta })\psi ^*_i(\bar{\theta })\{1+o_p(1)\} -\frac{1}{2}\sum _{i=1}^n[\eta ^T(\bar{\theta })\psi ^*_i(\bar{\theta })]^2\{1+o_p(1)\}+o_p(n^{1/3})\\ =&~\frac{n}{2}\left[ \frac{1}{n}\sum _{i=1}^n\psi ^*_i(\bar{\theta })\right] ^T \left[ \frac{1}{n}\sum _{i=1}^n\psi ^*_i(\bar{\theta })\psi ^{*T}_i(\bar{\theta })\right] ^{-1} \left[ \frac{1}{n}\sum _{i=1}^n\psi ^*_i(\bar{\theta })\right] \{1+o_p(1)\}+o_p(n^{1/3})\\ =&~\frac{n}{2}\left[ \frac{1}{n}\sum _{i=1}^n\psi ^*_i(\theta _1) +\frac{1}{n}\sum _{i=1}^n\frac{\partial \psi ^*_i(\theta _1)}{\partial \theta }n^{-1/3}\right] ^T \left[ \frac{1}{n}\sum _{i=1}^n\psi ^*_i(\bar{\theta })\psi ^{*T}_i(\bar{\theta })\right] ^{-1}\\&~\times \left[ \frac{1}{n}\sum _{i=1}^n\psi ^*_i(\theta _1) +\frac{1}{n}\sum _{i=1}^n\frac{\partial \psi ^*_i(\theta _1)}{\partial \theta }n^{-1/3}\right] \{1+o_p(1)\}+o_p(n^{1/3})\\ =&~\frac{n}{2}\left[ O_{a.s.}\left( n^{-1/2}(\log \log n)^{1/2}\right) +n^{-1/3}\cdot (0,-1)\right] \left[ E\psi ^*_1(\theta _1)\psi ^{*T}_1(\theta _1)\right] ^{-1}\\&~\times \left[ O_{a.s.}\left( n^{-1/2}(\log \log n)^{1/2}\right) +n^{-1/3}\cdot (0,-1)^T\right] \{1+o_p(1)\}+o_p(n^{1/3})\\ \ge&~Cn^{1/3}, ~\text{ in } \text{ probability }. \end{aligned}$$

Similarly to the proof of \(\eta (\bar{\theta })=O_p(n^{-1/3})\), it can be shown, for the true mean \(\theta _1\), that \(\eta (\theta _1)=O_p(n^{-1/2})\). Then

$$\begin{aligned} \hat{l}_E(\theta _1)=&~\frac{n}{2}\left[ \frac{1}{n}\sum _{i=1}^n\psi ^*_i(\theta _1)\right] ^T \left[ \frac{1}{n}\sum _{i=1}^n\psi ^*_i(\theta _1)\psi ^{*T}_i(\theta _1)\right] ^{-1} \left[ \frac{1}{n}\sum _{i=1}^n\psi ^*_i(\theta _1)\right] \\&\{1+o_p(1)\} +O_p(1)\\ =&~O_p(\log \log n). \end{aligned}$$

Then, since \(\hat{l}_E(\theta )=-\hat{l}_{AI}(\theta )\) is convex by Lemma 5.1, it follows that

$$\begin{aligned} \hat{l}_E(\theta _2)\ge \hat{l}_E(\bar{\theta })\ge Cn^{1/3}>\hat{l}_E(\theta _1)=O_p(\log \log n), ~ \text{ in } \text{ probability }. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{\widehat{\mathcal {L}}_{AI}(\theta _2)}{\widehat{\mathcal {L}}_{AI}(\theta _1)} =\frac{n^{-n}\exp \{-\hat{l}_E(\theta _2)\}}{n^{-n}\exp \{-\hat{l}_E(\theta _1)\}} =\exp \{-[\hat{l}_E(\theta _2)-\hat{l}_E(\theta _1)]\}\mathop {\rightarrow }\limits ^{p}0. \end{aligned}$$

Hence, Lemma 5.4 holds for \(\theta ^*=\theta _1\), which completes the proof of the lemma. \(\square \)
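Remark (numerical illustration). Lemma 5.4 says that the empirical likelihood at the wrong endpoint is exponentially negligible next to the likelihood at the true mean. A minimal sketch (again the \(q=0\) case with illustrative choices, reusing the one-dimensional multiplier solver from the remark after Lemma 5.1) shows \(\hat{l}_E(\theta _2)\) growing with \(n\) while \(\hat{l}_E(\theta _1)\) stays small, so that \(\widehat{\mathcal {L}}_{AI}(\theta _2)/\widehat{\mathcal {L}}_{AI}(\theta _1)=\exp \{-[\hat{l}_E(\theta _2)-\hat{l}_E(\theta _1)]\}\rightarrow 0\).

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)

def el_logratio(theta, y):
    """l_E(theta) for q = 0: the multiplier solves sum (y_i-theta)/(1+lam*(y_i-theta)) = 0."""
    z = y - theta
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)),
                 -1.0 / z.max() + 1e-10, -1.0 / z.min() - 1e-10)
    return np.sum(np.log1p(lam * z))

theta1, theta2 = 1.0, 1.3        # theta1 is the true mean, as in Lemma 5.4
for n in (100, 400, 1600):
    y = rng.normal(theta1, 1.0, n)
    l1, l2 = el_logratio(theta1, y), el_logratio(theta2, y)
    print(n, round(l1, 2), round(l2, 2), "L(theta2)/L(theta1) =", np.exp(l1 - l2))
```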

Proof of Theorem 3.1

Note that \(\widehat{\mathcal {L}}_{AI}(\theta )\) has a unique maximizer \(\hat{\theta }_{ME}\) on \(\mathbb {R}\) by Lemma 5.1. This together with Lemma 5.3 implies that

$$\begin{aligned} \sup _{\theta \in \Omega _1}\widehat{\mathcal {L}}_{AI}(\theta )=\sup _{\theta \ge \theta _0}\widehat{\mathcal {L}}_{AI}(\theta ) =\left\{ \begin{array}{lll} &{}\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME}),&{}\quad \text{ if }~~\hat{\theta }_{ME}>\theta _0;\\ &{}\widehat{\mathcal {L}}_{AI}(\theta _0),&{} \quad \text{ if }~~\hat{\theta }_{ME}\le \theta _0. \end{array} \right. \end{aligned}$$
(5.4)

Hence, we derive that

$$\begin{aligned} T_{01}=&\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _0)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\hat{\theta }_{ME}>\theta _0) +\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _0)}{\widehat{\mathcal {L}}_{AI}(\theta _0)}\right) I(\hat{\theta }_{ME}\le \theta _0)\\ =&\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _0)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\hat{\theta }_{ME}>\theta _0)\\ :=&~T_nI(\hat{\theta }_{ME}>\theta _0). \end{aligned}$$

From Lemma 5.3 and \(\sum _{i=1}^n\tilde{p}_i(\hat{\theta }_{ME})(\hat{Y}_i-\hat{\theta }_{ME})=0\), we have \( \sum _{i=1}^n\tilde{p}_i(\hat{\theta }_{ME})\left( \hat{Y}_i-\hat{\theta }_n+O_p(n^{-1/2})\right) =0. \) Then \(\tilde{p}_i(\hat{\theta }_{ME})=n^{-1}+O_p(n^{-3/2})\), which implies

$$\begin{aligned} \widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})=n^{-n}+O_p(n^{-3n/2}). \end{aligned}$$

Using arguments similar to those for \(\hat{l}_E(\bar{\theta })\) in the proof of Lemma 5.4 (see also the proofs of Theorem 2 in Owen (1991) and Theorem 2 in Xue (2009)), under the null hypothesis \(H_0\), we have

$$\begin{aligned} T_n&=-2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _0)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})} =-2\log \Big (\prod ^n_{i=1}n\tilde{p}_i(\theta _0)\Big )+o_p(1)\nonumber \\&=2\sum _{i=1}^n\log \left( 1+\eta ^T\psi _i(\theta _0)\right) +o_p(1)\nonumber \\&=\left( \frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\theta _0)\right) ^T\Gamma ^{-1}_{n,AI} \left( \frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\theta _0)\right) +o_p(1), \end{aligned}$$

where \( \Gamma _{n,AI}=\left( \begin{array}{cc} \Gamma _{n1}&{}\Gamma _{n2}\\ \Gamma _{n2}^T&{}\Gamma _{n} \end{array}\right) \) with \(\Gamma _n=\frac{1}{n}\sum _{i=1}^n(\hat{Y}_i-\theta _0)^2,\) \(\Gamma _{n1}=\frac{1}{n}\sum _{i=1}^nA(\mathbf {X}_i)A^T(\mathbf {X}_i)\) and \(\Gamma _{n2}=\frac{1}{n}\sum _{i=1}^nA(\mathbf {X}_i)(\hat{Y}_i-\theta _0)\).

Note that \(\Gamma _{n,AI}\mathop {\rightarrow }\limits ^{p}\Gamma _{AI}\). Then, using Lemma 5.2, we find, for any \(t>0\),

$$\begin{aligned} P\{T_{01}>t\}&=P\{T_n>t,\hat{\theta }_{ME}>\theta _0\}\\&=P\left\{ \left( \frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\theta _0)\right) ^T \Gamma ^{-1}_{n,AI} \left( \frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\theta _0)\right) +o_p(1)>t,\sqrt{n}(\hat{\theta }_{ME}-\theta _0)\Gamma ^{-1/2}>0\right\} \\&\rightarrow P\left\{ \chi ^2_q+Z^2>t,Z>0\right\} =P\left\{ Z^2>t-\chi ^2_q,Z>0\right\} \\&=\frac{1}{2}P\left\{ Z^2>t-\chi ^2_q\right\} =\frac{1}{2}P\left\{ \chi ^2_{q+1}>t\right\} . \end{aligned}$$

Hence \(P\{T_{01}\le t\}=1-\frac{1}{2}P\{\chi ^2_{q+1}>t\}=\frac{1}{2}+\frac{1}{2}P\{\chi ^2_{q+1}\le t\}\), which leads to \(T_{01}\mathop {\rightarrow }\limits ^{d}\frac{1}{2}\chi _0^2+\frac{1}{2}\chi _{q+1}^2\), and the proof of Theorem 3.1 is completed. \(\square \)
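Remark (numerical illustration). In practice, the level-\(\alpha \) critical value \(c_\alpha \) of the limiting mixture \(\frac{1}{2}\chi _0^2+\frac{1}{2}\chi _{q+1}^2\) solves \(\frac{1}{2}P\{\chi ^2_{q+1}>c_\alpha \}=\alpha \); that is, \(c_\alpha \) is the \((1-2\alpha )\)-quantile of \(\chi ^2_{q+1}\) whenever \(\alpha <1/2\). A short sketch using SciPy:

```python
from scipy.stats import chi2

def c_alpha(alpha, q):
    """Critical value of the (1/2)chi2_0 + (1/2)chi2_{q+1} mixture in Theorem 3.1."""
    assert 0.0 < alpha < 0.5
    return chi2.ppf(1.0 - 2.0 * alpha, df=q + 1)

for q in (1, 2, 3):
    print(q, round(c_alpha(0.05, q), 3))   # q = 1 gives chi2_2.ppf(0.90) = 4.605
```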

Proof of Theorem 3.2

It is obvious that \( \frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\theta ^*)\mathop {\rightarrow }\limits ^{d}N(0,\Gamma _{AI})\) still holds for the true mean \(\theta ^*=\theta _0+n^{-1/2}\Gamma ^{1/2}\tau \). Then by Lemma 5.2, it can be shown that

$$\begin{aligned} P&\{T_{01}>c_\alpha |\theta ^*\}=P\{T_n>c_\alpha ,\hat{\theta }_{ME}>\theta _0|\theta ^*\}\\&=P\left\{ \left( \frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\theta _0)\right) ^T \Gamma ^{-1}_{n,AI} \left( \frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\theta _0)\right) \right. \\&\quad \left. +\,o_p(1)>c_\alpha ,\sqrt{n}(\hat{\theta }_{ME}-\theta _0)\Gamma ^{-1/2}>0\right\} \\&\rightarrow P\{\chi ^2_{q}+(Z+\tau )^2>c_\alpha ,Z+\tau >0\}\\&=\int _0^{c_\alpha }P\left\{ x+(Z+\tau )^2>c_\alpha ,Z+\tau >0\right\} p_{\chi }(x)dx\\&\quad +\int _{c_\alpha }^\infty P\left\{ Z+\tau >0\right\} p_{\chi }(x)dx\\&=\int _0^{c_\alpha }P\left\{ Z>(c_\alpha -x)^{1/2}-\tau \right\} p_{\chi }(x)dx+\Phi (\tau )(1-F_\chi (c_\alpha ))\\&=\int _0^{c_\alpha }\Phi \left( \tau -(c_\alpha -x)^{1/2}\right) p_{\chi }(x)dx+\Phi (\tau )(1-F_\chi (c_\alpha )), \end{aligned}$$

which completes the proof of Theorem 3.2. \(\square \)
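Remark (numerical illustration). The limiting power in Theorem 3.2 can be evaluated by numerical integration; here \(p_\chi \) and \(F_\chi \) are read as the \(\chi ^2_q\) density and distribution function, as in the proof. The sketch below (illustrative choices of \(\alpha \) and \(q\)) also checks that \(\tau =0\) recovers the nominal level.

```python
import numpy as np
from scipy.stats import chi2, norm
from scipy.integrate import quad

def asymptotic_power(tau, alpha, q):
    """int_0^c Phi(tau - sqrt(c-x)) p_chi(x) dx + Phi(tau)(1 - F_chi(c)), c = c_alpha."""
    c = chi2.ppf(1.0 - 2.0 * alpha, df=q + 1)     # mixture critical value (Theorem 3.1)
    integral, _ = quad(lambda x: norm.cdf(tau - np.sqrt(c - x)) * chi2.pdf(x, df=q),
                       0.0, c)
    return integral + norm.cdf(tau) * chi2.sf(c, df=q)

print(asymptotic_power(0.0, 0.05, q=2))           # ~0.05: tau = 0 gives the level
for tau in (1.0, 2.0, 3.0):
    print(tau, round(asymptotic_power(tau, 0.05, q=2), 4))
```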

Proof of Theorem 3.3

From (5.4), we have

$$\begin{aligned} T_{12}=&\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\hat{\theta }_{ME}>\theta _0) +\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _0)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\hat{\theta }_{ME}\le \theta _0)\\ =&\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _0)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\hat{\theta }_{ME}\le \theta _0) :=T_nI(\hat{\theta }_{ME}\le \theta _0). \end{aligned}$$

By using the same method as in the proof of Theorem 3.1, one can derive that

$$\begin{aligned} T_{12}\mathop {\rightarrow }\limits ^{d}\frac{1}{2}\chi ^2_0+\frac{1}{2}\chi ^2_{q+1} ~~\text{ for }~~\theta ^*=\theta _0. \end{aligned}$$

Hence \(\lim _{n\rightarrow \infty }P\{T_{12}>c_\alpha |\theta ^*=\theta _0\}=\alpha \). Moreover, for any fixed true mean \(\theta ^*>\theta _0\), by \(\sqrt{n}(\hat{\theta }_{ME}-\theta ^*)\Gamma ^{-1/2}\mathop {\rightarrow }\limits ^{d}N(0,1),\) we have

$$\begin{aligned} P\{T_{12}>c_\alpha |\theta ^*\}&\le P\{\hat{\theta }_{ME}\le \theta _0|\theta ^*\} \le P\left\{ \sqrt{n}(\hat{\theta }_{ME}-\theta ^*)\Gamma ^{-1/2}\right. \\&\left. \le \sqrt{n}(\theta _0-\theta ^*)\Gamma ^{-1/2}|\theta ^*\right\} \\&\rightarrow P\{Z\le -\infty \}=0. \end{aligned}$$

Thus the proof of Theorem 3.3 is completed. \(\square \)

Proof of Theorem 3.4

Assume that the true mean \(\theta ^*\) is \(\theta _1\). From Lemma 5.1, we have

$$\begin{aligned} \sup _{\theta _1\le \theta \le \theta _2}\widehat{\mathcal {L}}_{AI}(\theta ) =\,&\widehat{\mathcal {L}}_{AI}(\theta _1)I(\hat{\theta }_{ME}<\theta _1) +\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})I(\theta _1\le \hat{\theta }_{ME}\le \theta _2) \nonumber \\&+\widehat{\mathcal {L}}_{AI}(\theta _2)I(\hat{\theta }_{ME}>\theta _2). \end{aligned}$$
(5.5)

Denote \(\mathcal {L}^*=\max \{\widehat{\mathcal {L}}_{AI}(\theta _1),\widehat{\mathcal {L}}_{AI}(\theta _2)\}\). Then, by Lemma 5.4 and (5.5), we derive

$$\begin{aligned} T_{34}=&\left( -2\log \frac{\mathcal {L}^*}{\widehat{\mathcal {L}}_{AI}(\theta _1)}\right) I(\hat{\theta }_{ME}\le \theta _1) +\left( -2\log \frac{\mathcal {L}^*}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\theta _1\le \hat{\theta }_{ME}\le \theta _2)\\&+\left( -2\log \frac{\mathcal {L}^*}{\widehat{\mathcal {L}}_{AI}(\theta _2)}\right) I(\hat{\theta }_{ME}>\theta _2)\\ =&\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _1)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\theta _1\le \hat{\theta }_{ME}\le \theta _2)\{1+o_p(1)\}\\&+\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _1)}{\widehat{\mathcal {L}}_{AI}(\theta _2)}\right) I(\hat{\theta }_{ME}>\theta _2)\{1+o_p(1)\}+o_p(1)\\ =&\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _1)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\hat{\theta }_{ME}\ge \theta _1)\{1+o_p(1)\}\\&+\left( 2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _1)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})} -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _1)}{\widehat{\mathcal {L}}_{AI}(\theta _2)}\right) I(\hat{\theta }_{ME}>\theta _2)\{1+o_p(1)\}+o_p(1). \end{aligned}$$

By Lemma 5.3, we have

$$\begin{aligned} P(\hat{\theta }_{ME}>\theta _2) =P\left( \sqrt{n}(\hat{\theta }_{ME}-\theta _1)\Gamma ^{-1/2}>\sqrt{n}(\theta _2-\theta _1)\Gamma ^{-1/2}\right) \rightarrow P(Z>+\infty )=0. \end{aligned}$$

Hence, from the proof for Theorem 3.1, it follows that

$$\begin{aligned} T_{34}=\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _1)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\hat{\theta }_{ME}\ge \theta _1)\{1+o_p(1)\}+o_p(1) \mathop {\rightarrow }\limits ^{d}\frac{1}{2}\chi _0^2+\frac{1}{2}\chi _{q+1}^2, \end{aligned}$$

which gives the conclusion for the case \(\theta ^*=\theta _1\). The result for \(\theta ^*=\theta _2\) follows by a similar argument. \(\square \)

Proof of Theorem 3.5

From (5.5), we write

$$\begin{aligned} T_{24}=\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _1)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\hat{\theta }_{ME}<\theta _1) +\left( -2\log \frac{\widehat{\mathcal {L}}_{AI}(\theta _2)}{\widehat{\mathcal {L}}_{AI}(\hat{\theta }_{ME})}\right) I(\hat{\theta }_{ME}>\theta _2). \end{aligned}$$

For \(\theta _1<\theta ^*<\theta _2\), we have \(\sqrt{n}(\theta _1-\theta ^*)\Gamma ^{-1/2}\rightarrow -\infty \) and \(\sqrt{n}(\theta _2-\theta ^*)\Gamma ^{-1/2}\rightarrow +\infty \), and hence from Lemma 5.3, it follows that

$$\begin{aligned} P\{T_{24}>c_\alpha |\theta ^*\}\le&~P\left\{ \hat{\theta }_{ME}<\theta _1|\theta ^*\right\} +P\left\{ \hat{\theta }_{ME}>\theta _2|\theta ^*\right\} \\ \le&P\left\{ \sqrt{n}(\hat{\theta }_{ME}-\theta ^*)\Gamma ^{-1/2}<\sqrt{n}(\theta _1-\theta ^*)\Gamma ^{-1/2}|\theta ^*\right\} \\&~+P\left\{ \sqrt{n}(\hat{\theta }_{ME}-\theta ^*)\Gamma ^{-1/2}>\sqrt{n}(\theta _2-\theta ^*)\Gamma ^{-1/2}|\theta ^*\right\} \\ \rightarrow&~ P(Z<-\infty )+P(Z>+\infty )= 0. \end{aligned}$$

Note that \(P(\hat{\theta }_{ME}>\theta _2)\rightarrow 0\) if \(\theta ^*=\theta _1\), and \(P(\hat{\theta }_{ME}<\theta _1)\rightarrow 0\) if \(\theta ^*=\theta _2\). Following the proof of Theorem 3.1, one can conclude that

$$\begin{aligned} T_{24}\mathop {\rightarrow }\limits ^{d}\frac{1}{2}\chi _0^2+\frac{1}{2}\chi _{q+1}^2\quad \text{ for }\quad \theta ^*=\theta _1~ \text{ and }~\theta _2. \end{aligned}$$

Thus \( \lim _{n\rightarrow \infty }P\{T_{24}>c_\alpha |\theta ^*=\theta _1~\text{ or }~\theta _2\}=\alpha . \)

Hence, the proof of Theorem 3.5 is completed. \(\square \)


Cite this article

Xu, HX., Fan, GL. & Liang, HY. Hypothesis test on response mean with inequality constraints under data missing when covariables are present. Stat Papers 58, 53–75 (2017). https://doi.org/10.1007/s00362-015-0687-x
