Fiducial inference in the classical errors-in-variables model

Abstract

For the slope parameter of the classical errors-in-variables model, existing interval estimators of finite length have confidence level equal to zero because of the Gleser–Hwang effect. When the reliability ratio is low and the sample size is small, the effect is so severe that existing confidence intervals become very liberal in coverage and unacceptably long. In this paper, we obtain two new fiducial intervals for the slope. The first is based on a fiducial generalized pivotal quantity, and we prove that it has correct asymptotic coverage. The second is based on the method of the generalized fiducial distribution. We also construct both fiducial intervals for the other parameters of interest of the classical errors-in-variables model and extend them to a hybrid model. We then compare the two fiducial intervals with the existing intervals in terms of empirical coverage and average length. Simulation results show that the two proposed fiducial intervals have better frequentist performance. Finally, we provide a real data example to illustrate our approaches.

References

  • Adcock RJ (1877) Note on the method of least squares. Analyst 4:183–184

  • Adcock RJ (1878) A problem in least squares. Analyst 5:53–54

  • Buonaccorsi JP (2010) Measurement error: models, methods, and applications. Chapman and Hall/CRC Press, London

  • Carroll RJ, Ruppert D, Stefanski LA, Crainiceanu CM (2006) Measurement error in nonlinear models: a modern perspective. Chapman and Hall/CRC Press, London

  • Casella G, Berger RL (2002) Statistical inference. Duxbury, Pacific Grove

  • Cheng CL, Ness JWV (1999) Statistical regression with measurement error. Arnold, London

  • Cox DR, Reid N (1987) Parameter orthogonality and approximate conditional inference. J R Stat Soc Ser B 49:1–39

  • Creasy MA (1956) Confidence limits for the gradient in the linear functional relationship. J R Stat Soc Ser B 18:65–69

  • Dunn G (2004) Statistical evaluation of measurement errors: design and analysis of reliability studies. Arnold, London

  • Durrett R (2010) Probability: theory and examples. Cambridge University Press, Cambridge

  • Francq BG, Govaerts BB (2014) Measurement methods comparison with errors-in-variables regressions. From horizontal to vertical OLS regression, review and new perspectives. Chemometr Intell Lab Syst 134:123–139

  • Fuller WA (1987) Measurement error models. Wiley, New York

  • Gelman A, Carlin JB, Stern HS, Rubin DB (2014) Bayesian data analysis. Taylor & Francis, Boca Raton

  • Gillard J (2010) An overview of linear structural models in errors in variables regression. Revstat Stat J 8:57–80

  • Gillard J, Iles T (2006) Variance covariance matrices for linear regression with errors in both variables. Cardiff University School of Mathematics Technical Report

  • Gleser LJ (1987) Confidence intervals for the slope in a linear errors-in-variables regression model. In: Gupta A (ed) Advances in multivariate statistical analysis, vol 5, pp 85–109

  • Gleser LJ, Hwang JT (1987) The nonexistence of \(100(1-\alpha )\)% confidence sets of finite expected diameter in errors-in-variables and related models. Ann Stat 15:1351–1362

  • Hannig J, Iyer H, Patterson P (2006) Fiducial generalized confidence intervals. J Am Stat Assoc 101:254–269

  • Hannig J (2009) On generalized fiducial inference. Stat Sin 19:491–544

  • Hannig J (2013) Generalized fiducial inference via discretization. Stat Sin 23:489–514

  • Kendall MG, Stuart A (1979) The advanced theory of statistics. Hafner, New York

  • Li X, Xu X, Li G (2007) A fiducial argument for generalized p-value. Sci China Ser A 50:957–966

  • Patefield W (1981) Confidence intervals for the slope of a linear functional relationship. Commun Stat Theory Methods 10:1759–1764

  • Reiersøl O (1950) Identifiability of a linear relation between variables which are subject to error. Econometrica 18:375–389

  • Schneeweiss H (1982) Note on Creasy's confidence limits for the gradient in the linear functional relationship. J Multivar Anal 12:155–158

  • Stefanski LA (2000) Measurement error models. J Am Stat Assoc 95:1353–1358

  • Strike PW (2014) Statistical methods in laboratory medicine. Butterworth-Heinemann, Oxford

  • Thompson JR, Carter RL (2007) An overview of normal theory structural measurement error models. Int Stat Rev 75:183–198

  • Tsai JR (2010) Generalized confidence interval for the slope in linear measurement error model. J Stat Comput Simul 80:927–936

  • Tsai JR (2013) Interval estimation for fitting straight line when both variables are subject to error. Comput Stat 28:219–240

  • Tsai JR, Liao CT (2011) Method comparison on confidence interval construction for the slope in a linear measurement model with heteroscedastic errors. J Chemom 25:506–513

  • Tsui KW, Weerahandi S (1989) Generalized p-values in significance testing of hypotheses in the presence of nuisance parameters. J Am Stat Assoc 84:602–607

  • Wandler DV, Hannig J (2011) Fiducial inference on the largest mean of a multivariate normal distribution. J Multivar Anal 102:87–104

  • Wang G (2004) Some bayesian methods in the estimation of parameters in the measurement error models and crossover trial. Ph.D. thesis, University of Cincinnati

  • Wang G, Sivaganesan S (2013) Objective priors for parameters in a normal linear regression with measurement error. Commun Stat Theory Methods 42:2694–2713

  • Weerahandi S (1993) Generalized confidence intervals. J Am Stat Assoc 88:899–905

  • Williams EJ (1959) Regression analysis. Wiley, New York

  • Williams E (1973) Tests of correlation in multivariate analysis. Bull Int Stat Inst Proc 45:218–232 39th Session

  • Wong MY (1989) Likelihood estimation of a simple linear regression model when both variables have error. Biometrika 76:141–148

  • Xu X, Li G (2006) Fiducial inference in the pivotal family of distributions. Sci China Ser A 49:410–432

Acknowledgments

The authors are very grateful to the two anonymous referees for their valuable comments and suggestions, which led to a significant improvement of this paper. They also thank the Editor for encouraging comments. This study was supported by the National Natural Science Foundation of China (No. 11471035).

Corresponding author

Correspondence to Xingzhong Xu.

Appendix 1: The derivation of FGPQ for \(\beta _1\)

Since the structural equation is \((n-1)\mathbf{T }=\mathbf{KWK }^T\), it turns out that

$$\begin{aligned} \left\{ \begin{aligned} (n-1)S_{xx}&=\frac{\sigma _{xx}\sigma _{yy}-\sigma _{xy}^2}{\sigma _{yy}}E_{11}+2\sqrt{\frac{\sigma _{xx}\sigma _{yy}-\sigma _{xy}^2}{\sigma _{yy}}}\frac{\sigma _{xy}}{\sqrt{\sigma _{yy}}}E_{12}+\frac{\sigma _{xy}^2}{\sigma _{yy}}E_{22}\\ (n-1)S_{xy}&=\sqrt{\sigma _{xx}\sigma _{yy}-\sigma _{xy}^2}E_{12}+\sigma _{xy}E_{22}\\ (n-1)S_{yy}&=\sigma _{yy}E_{22} \end{aligned} \right. . \end{aligned}$$

Let \(\varvec{\eta }\triangleq (\sigma _{xx},\sigma _{xy},\sigma _{yy})^T\), \(\mathbf{S }\triangleq (S_{xx},S_{xy},S_{yy})^T\) and \(\mathbf{E }\triangleq (E_{11},E_{12},E_{22})^T\). On the one hand, we can obtain \(\mathbf{E }\) from the above structural equation, i.e.,

$$\begin{aligned} \left\{ \begin{aligned} E_{11}&=\frac{\sigma _{yy}(n-1)S_{xx}}{\sigma _{xx}\sigma _{yy}-\sigma _{xy}^2}-2\frac{\sigma _{xy}(n-1)S_{xy}}{\sigma _{xx}\sigma _{yy}-\sigma _{xy}^2}+\frac{\sigma _{xy}^2(n-1)S_{yy}}{\sigma _{yy}(\sigma _{xx}\sigma _{yy}-\sigma _{xy}^2)}\\ E_{12}&=\frac{(n-1)S_{xy}}{\sqrt{\sigma _{xx}\sigma _{yy}-\sigma _{xy}^2}}-\frac{\sigma _{xy}(n-1)S_{yy}}{\sigma _{yy}\sqrt{\sigma _{xx}\sigma _{yy}-\sigma _{xy}^2}}\\ E_{22}&=\frac{(n-1)S_{yy}}{\sigma _{yy}} \end{aligned} \right. . \end{aligned}$$
(15)

On the other hand, we can derive \(\varvec{\eta }\) from (15), that is

$$\begin{aligned} \left\{ \begin{aligned} \sigma _{xx}&=(n-1)\left[ \frac{E_{22}(S_{xx}S_{yy}-S_{xy}^2)}{S_{yy}(E_{11}E_{22}-E_{12}^2)}+\frac{1}{S_{yy}E_{22}}(S_{xy}-\frac{\sqrt{S_{xx}S_{yy}-S_{xy}^2}}{\sqrt{E_{11}E_{22}-E_{12}^2}}E_{12})^2\right] \\ \sigma _{xy}&=\frac{n-1}{E_{22}}\left( S_{xy}-\frac{\sqrt{S_{xx}S_{yy}-S_{xy}^2}}{\sqrt{E_{11}E_{22}-E_{12}^2}}E_{12}\right) \\ \sigma _{yy}&=\frac{(n-1)S_{yy}}{E_{22}} \end{aligned} \right. . \end{aligned}$$
(16)
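
To make the algebra above easy to check, the following is a minimal numerical sketch (ours, not part of the original paper) verifying that (16) inverts (15): starting from an arbitrary positive-definite \(\varvec{\eta }\) and arbitrary sample moments \(\mathbf{S }\), computing \(\mathbf{E }\) via (15) and substituting it into (16) recovers \(\varvec{\eta }\). All variable names are ours.

```python
import numpy as np

n = 20
# Arbitrary "true" covariance parameters (eta) and sample second moments (S);
# both only need to form positive definite matrices for the check.
sig_xx, sig_xy, sig_yy = 2.0, 0.7, 1.5
S_xx, S_xy, S_yy = 1.8, 0.5, 1.2

D_sig = sig_xx * sig_yy - sig_xy ** 2          # sigma_xx*sigma_yy - sigma_xy^2
# Equation (15): E as a function of (S, eta)
E11 = (n - 1) * (sig_yy * S_xx - 2 * sig_xy * S_xy + sig_xy ** 2 * S_yy / sig_yy) / D_sig
E12 = (n - 1) * (S_xy - sig_xy * S_yy / sig_yy) / np.sqrt(D_sig)
E22 = (n - 1) * S_yy / sig_yy

# Equation (16): eta as a function of (S, E)
D_S = S_xx * S_yy - S_xy ** 2
D_E = E11 * E22 - E12 ** 2
t = S_xy - np.sqrt(D_S / D_E) * E12
sig_xx_rec = (n - 1) * (E22 * D_S / (S_yy * D_E) + t ** 2 / (S_yy * E22))
sig_xy_rec = (n - 1) * t / E22
sig_yy_rec = (n - 1) * S_yy / E22

print(np.allclose([sig_xx_rec, sig_xy_rec, sig_yy_rec], [sig_xx, sig_xy, sig_yy]))  # True
```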

Since \(\lambda =1\), it follows from (2) that

$$\begin{aligned} \left( \begin{matrix} \sigma _{xx}&{}\quad \sigma _{xy}\\ \sigma _{yx}&{}\quad \sigma _{yy}\\ \end{matrix} \right) = \left( \begin{matrix} \sigma ^{2}+\sigma _{\epsilon }^{2}&{}\quad \beta _1\sigma ^{2}\\ \beta _1\sigma ^{2}&{}\quad \beta _1^{2}\sigma ^{2}+\sigma _{\epsilon }^{2}\\ \end{matrix} \right) . \end{aligned}$$

Thus,

$$\begin{aligned} \beta _1= \left\{ \begin{aligned}&\frac{\sigma _{yy}-\sigma _{xx}+\sqrt{(\sigma _{yy}- \sigma _{xx})^2+4 \sigma _{xy}^2}}{2\sigma _{xy}}~~&\sigma _{xy}\ne 0\\&0&\sigma _{xy}=0 \end{aligned} \right. . \end{aligned}$$
(17)

Combining (15), (16) and (17) with the structural method, we obtain the FGPQ of \(\beta _1\). Let \((E_{11}^*,E_{12}^*,E_{22}^*)\) be an independent copy of \((E_{11},E_{12},E_{22})\). Replacing \((E_{11},E_{12},E_{22})\) in (16) by \((E_{11}^*,E_{12}^*,E_{22}^*)\), we obtain \(\sigma _{xx}^*,\sigma _{xy}^*\) and \(\sigma _{yy}^*\). Since \(P(\sigma _{xy}^*=0)=0\), the case \(\beta _1=0\) in (17) need not be considered.
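
For concreteness, here is a rough Monte Carlo sketch (ours, not the paper's code) of how the resulting FGPQ can be sampled to produce a fiducial interval for \(\beta _1\): generate independent copies \((E_{11}^*,E_{12}^*,E_{22}^*)\), map them through (16) to \((\sigma _{xx}^*,\sigma _{xy}^*,\sigma _{yy}^*)\), apply (17), and take quantiles. We assume here that \((E_{11},E_{12},E_{22})\) can be generated as the entries of a \(2\times 2\) standard Wishart matrix with \(n-1\) degrees of freedom, which is our reading of the structural equation \((n-1)\mathbf{T }=\mathbf{KWK }^T\); the exact distributional specification of \(\mathbf{E }\) is given in the main text and is not reproduced in this appendix.

```python
import numpy as np
from scipy.stats import wishart

def fgpq_interval_beta1(S_xx, S_xy, S_yy, n, alpha=0.05, n_draws=10000, random_state=None):
    """Equal-tailed fiducial interval for beta_1 obtained by sampling the FGPQ."""
    # Independent copies (E11*, E12*, E22*), generated here as the entries of a
    # standard 2x2 Wishart matrix with n-1 degrees of freedom (our assumption).
    W = wishart.rvs(df=n - 1, scale=np.eye(2), size=n_draws, random_state=random_state)
    E11, E12, E22 = W[:, 0, 0], W[:, 0, 1], W[:, 1, 1]

    # Equation (16) with (E11*, E12*, E22*) in place of (E11, E12, E22).
    D_S = S_xx * S_yy - S_xy ** 2
    D_E = E11 * E22 - E12 ** 2
    t = S_xy - np.sqrt(D_S / D_E) * E12
    sig_xx = (n - 1) * (E22 * D_S / (S_yy * D_E) + t ** 2 / (S_yy * E22))
    sig_xy = (n - 1) * t / E22
    sig_yy = (n - 1) * S_yy / E22

    # Equation (17); the event sigma_xy* = 0 has probability zero.
    beta1 = (sig_yy - sig_xx + np.sqrt((sig_yy - sig_xx) ** 2 + 4 * sig_xy ** 2)) / (2 * sig_xy)
    return np.quantile(beta1, [alpha / 2, 1 - alpha / 2])

# Example with made-up sample moments:
print(fgpq_interval_beta1(S_xx=1.8, S_xy=0.9, S_yy=1.5, n=30, random_state=0))
```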

Appendix 2: Proof of the lemma and theorem

Let \(\overset{P}{\longrightarrow }\), \(\overset{a.e}{\longrightarrow }\), \(\overset{W}{\longrightarrow }\) and \(\overset{L}{\longrightarrow }\) denote, respectively, convergence in probability of a sequence of random variables, convergence almost everywhere, weak convergence of a sequence of distribution functions and convergence in law.

Lemma 1

Let \(\{A_n,n=1,\ldots \}\) be a sequence of real random vectors with \(A_n\overset{L}{\rightarrow }A\sim N(\mathbf{0 },\varvec{\varOmega })\). Let \(Z_n=f(A_n)\) be a scalar function of \(A_n\), where f is a continuous function. Let \(A_n^*\) be an independent copy of \(A_n\), \( Z_n^*=f(A_n^*) \), and suppose \(\epsilon _n(A_n^*,A_n)\overset{P}{\longrightarrow }0\). Then

$$\begin{aligned} P(Z_n^*\le Z_n+\epsilon _n(A_n^*,A_n)|A_n) \overset{L}{\longrightarrow } U(0, 1), \end{aligned}$$

where \(U(0, 1)\) denotes the uniform distribution on the interval (0, 1).

Proof

Because \(A_n\overset{L}{\rightarrow }A\sim N(\mathbf{0 },\varvec{\varOmega })\), there exist \(F_n\) and F such that \(A_n\sim F_n\) and \( A\sim F\) with \(F_n \overset{W}{\longrightarrow } F\), where F is the distribution of \(N(\mathbf{0 },\varvec{\varOmega })\). Let \( A_n^* \) and \( A^* \) be independent copies of \( A_n \) and A, respectively. Then \((A_n^*,A_n)\overset{L}{\longrightarrow }(A^*,A)\), where \( (A_n^*,A_n)\sim F_n\times F_n \) and \( (A^*,A)\sim F\times F \). By the Skorokhod representation theorem, there exist \(A',~{A^*}',~A_n'\) and \({A_n^*}' \) such that \( ({A_n^*}',A_n')\sim F_n\times F_n\) and \(({A^*}',A')\sim F\times F \) with \(({A_n^*}',A_n') \overset{a.e}{\longrightarrow }({A^*}',A') \). Let \(\mathscr {F}_n\triangleq \sigma (A_1', A_2', \ldots , A_n')\), i.e., the smallest \(\sigma \)-field with respect to which \(A'_1, A'_2, \ldots , A'_n\) are measurable, and let \(Z=f(A),Z^*=f(A^*),Z'=f(A'),{Z^*}'=f({A^*}'),Z_n'=f(A_n'),{Z_n^*}'=f({A_n^*}') \). Then

$$\begin{aligned}&P\left( Z_n^*\le Z_n+\epsilon _n(A_n^*,A_n)|A_n\right) \\&\quad \overset{d}{=}P\left( {Z_n^*}'\le {Z_n}'+\epsilon _n({A_n^*}',A_n')|A_n'\right) \\&\quad =P\left( {Z_n^*}'\le {Z_n'}+\epsilon _n({A_n^*}',A_n')|\mathscr {F}_n\right) \\&\quad =P\left( {Z^*}'+{Z_n^*}'-{Z^*}'\le Z'+{Z_n}'-Z'+\epsilon _n({A_n^*}',A_n')|\mathscr {F}_n\right) \\&\quad =P\left( {Z^*}'\le Z'+\epsilon _n'({A_n^*}',A_n')|\mathscr {F}_n\right) \\&\quad =E\left( I({Z^*}'\le Z'+\epsilon _n'({A_n^*}',A_n'))|\mathscr {F}_n\right) \\&\quad \overset{P}{\longrightarrow } E\left( I({Z^*}'\le Z')|\mathscr {F}_\infty \right) \\&\quad =P\left( {Z^*}'\le Z'|\mathscr {F}_\infty \right) \sim U(0, 1), \end{aligned}$$

where \(\overset{d}{=}\) denotes equality in distribution and \(\epsilon _n'({A_n^*}',A_n')=\epsilon _n({A_n^*}',A_n')+Z_n'-Z'-({Z_n^*}'-{Z^*}')\overset{P}{\longrightarrow } 0\). The second-to-last step follows from a convergence-in-probability version of Theorem 5.5.9 in Durrett (2010) with \(h_n=I({Z^*}'\le Z'+\epsilon _n'({A_n^*}',A_n'))\), \(h=I({Z^*}'\le Z')\) and \(g=1\). It remains to prove that \( h_n\overset{P}{\longrightarrow } h \).

Let \( W={Z^*}'-Z' \); then \( h_n=I(W\le \epsilon _n') \) and \( h=I(W\le 0) \). Because W is a continuous random variable, for any fixed \( b>0 \) there exists \( c>0 \) such that \( P\{|W|\le c\}<b \). For every \( \delta >0 \),

$$\begin{aligned} \begin{aligned}&P\{|h_n-h|\ge \delta \}\\&\quad =P\left\{ |I\left( W\le \epsilon _n'\right) -I(W\le 0)|\ge \delta \right\} \\&\quad \le P\left\{ |I\left( W\le \epsilon _n'\right) -I(W\le 0)|\ge \delta \cap |\epsilon _n'|<c\cap |W|>c\right\} \\&\qquad + P\left\{ |I\left( W\le \epsilon _n'\right) -I(W\le 0)|\ge \delta \cap |\epsilon _n'|\ge c\right\} \\&\qquad + P\{|I\left( W\le \epsilon _n'\right) -I(W\le 0)|\ge \delta \cap |W|\le c\}\\&\quad \le P\{|I\left( W\le \epsilon _n'\right) -I(W\le 0)|\ge \delta \cap |\epsilon _n'|<c\cap |W|>c\}+ P\{|\epsilon _n'|\ge c\}+b\\&\quad =P\{|I(W\le \epsilon _n')-I(W\le 0)|\ge \delta \cap |\epsilon _n'|<c\cap W>c\}\\&\qquad +P\{|I(W\le \epsilon _n')-I(W\le 0)|\ge \delta \cap |\epsilon _n'|<c\cap W<-c\}+ P\{|\epsilon _n'|\ge c\}+b\\&\quad =P\{|\epsilon _n'|\ge c\}+b\xrightarrow []{n\rightarrow \infty }b. \end{aligned} \end{aligned}$$

The last equality holds because the indicators \(I(W\le \epsilon _n')\) and \(I(W\le 0)\) coincide on \(\{|\epsilon _n'|<c,\,W>c\}\) and on \(\{|\epsilon _n'|<c,\,W<-c\}\). Letting \( b\rightarrow 0 \) completes the proof.
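
Lemma 1 can also be checked empirically. The following is a small simulation sketch (ours; the choices of \(A_n\), f and \(\epsilon _n\) are arbitrary illustrations, not taken from the paper) in which \(A_n=\sqrt{n}(\bar{X}_n-1)\) for i.i.d. Exp(1) data, f is continuous and \(\epsilon _n\rightarrow 0\). The conditional probabilities \(P(Z_n^*\le Z_n+\epsilon _n\mid A_n)\), estimated by an inner Monte Carlo loop, should be approximately uniform on (0, 1) across replications.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_rep, n_inner = 200, 500, 2000
f = lambda a: a + 0.5 * a ** 2          # any continuous scalar function

def draw_A(size):
    """A_n = sqrt(n) * (mean of n Exp(1) variables - 1); asymptotically N(0, 1)."""
    x = rng.exponential(1.0, size=(size, n))
    return np.sqrt(n) * (x.mean(axis=1) - 1.0)

u = np.empty(n_rep)
for r in range(n_rep):
    A_n = draw_A(1)[0]
    Z_n = f(A_n)
    A_star = draw_A(n_inner)            # independent copies of A_n
    eps = 1.0 / n                       # deterministic epsilon_n -> 0
    # Inner Monte Carlo estimate of P(Z_n* <= Z_n + eps_n | A_n)
    u[r] = np.mean(f(A_star) <= Z_n + eps)

# If Lemma 1 holds, u should look like a U(0,1) sample.
print(np.quantile(u, [0.1, 0.25, 0.5, 0.75, 0.9]))  # roughly 0.1, 0.25, 0.5, 0.75, 0.9
```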

Lemma 2

(Fuller 1987, Theorem 1.C.1) Let \(\{\mathbf{Z}_t\}\) be a sequence of independent identically distributed p-dimensional random vectors with mean \(\varvec{\mu }_{\mathbf{Z }}\), covariance matrix \(\varvec{\varSigma }_{\mathbf{ZZ }}\), and finite fourth moments. Let \(\bar{\mathbf{Z }}=n^{-1}\sum _{t=1}^n \mathbf{Z }_t\) be the sample mean and \(m_{\mathbf{ZZ }}=(n-1)^{-1}\sum _{t=1}^n (\mathbf{Z }_t-\bar{\mathbf{Z }})(\mathbf{Z }_t-\bar{\mathbf{Z }})^T\) the sample covariance matrix, and define

$$\begin{aligned} \begin{aligned} vech~\mathbf{m }_{\mathbf{ZZ }}&=(m_{ZZ11},m_{ZZ21},\ldots ,m_{ZZp1},m_{ZZ22},\ldots ,m_{ZZpp})^T,\\ vech~\varvec{\varSigma }_{\mathbf{ZZ }}&=(\sigma _{ZZ11},\sigma _{ZZ21},\ldots ,\sigma _{ZZp1},\sigma _{ZZ22},\ldots ,\sigma _{ZZpp})^T,\\ \mathbf{a }_t&=\left( \left( \mathbf{Z }_t-\varvec{\mu }_{\mathbf{Z }}\right) ^T,\left[ vech\left\{ \left( \mathbf{Z }_t-\varvec{\mu }_{\mathbf{Z }}\right) \left( \mathbf{Z }_t-\varvec{\mu }_{\mathbf{Z }}\right) ^T-\varvec{\varSigma }_{\mathbf{ZZ }}\right\} \right] ^T\right) ^T, \end{aligned} \end{aligned}$$

and \(\varvec{\varOmega }=E\{\mathbf{a}_t \mathbf{a}_t^T\}\), where vech denotes half-vectorization. Then

$$\begin{aligned} n^{1/2}\left[ (\bar{\mathbf{Z }}-\varvec{\mu }_{\mathbf{Z }})^T,(vech~\mathbf{m }_{\mathbf{ZZ }}-vech~\varvec{\varSigma }_{\mathbf{ZZ }})^T\right] ^T\overset{L}{\longrightarrow }N(\mathbf{0 },\varvec{\varOmega }). \end{aligned}$$
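
As a quick empirical check of Lemma 2 in the bivariate case used below (p = 2, so \(vech~\mathbf{m }_{\mathbf{ZZ }}=(m_{ZZ11},m_{ZZ21},m_{ZZ22})^T\)), the following sketch (ours; the bivariate normal distribution and its parameters are arbitrary choices) simulates \(\sqrt{n}\,(vech~\mathbf{m }_{\mathbf{ZZ }}-vech~\varvec{\varSigma }_{\mathbf{ZZ }})\) and compares its Monte Carlo covariance with the corresponding \(3\times 3\) block of \(\varvec{\varOmega }=E\{\mathbf{a}_t \mathbf{a}_t^T\}\), estimated directly from its definition.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.7], [0.7, 1.5]])
n, n_rep = 400, 4000

def vech2(M):
    """Half-vectorization of a symmetric 2x2 matrix: (M11, M21, M22)."""
    return np.array([M[0, 0], M[1, 0], M[1, 1]])

# Monte Carlo distribution of sqrt(n) * (vech m_ZZ - vech Sigma_ZZ)
devs = np.empty((n_rep, 3))
for r in range(n_rep):
    Z = rng.multivariate_normal(mu, Sigma, size=n)
    m = np.cov(Z, rowvar=False)                  # divisor n-1, as in Lemma 2
    devs[r] = np.sqrt(n) * (vech2(m) - vech2(Sigma))

# vech block of Omega, estimated from its definition E{a_t a_t^T}
# with a_t built from vech{(Z_t - mu)(Z_t - mu)^T - Sigma}.
Z = rng.multivariate_normal(mu, Sigma, size=100000)
a = np.array([vech2(np.outer(z - mu, z - mu) - Sigma) for z in Z])
Omega_block = a.T @ a / len(a)

print(np.round(np.cov(devs, rowvar=False), 2))
print(np.round(Omega_block, 2))   # the two matrices should agree up to Monte Carlo error
```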

Proof of Theorem

Because \(P_{\beta _1}(\hat{\beta }_{1_{\alpha /2}}\le \beta _1 \le \hat{\beta }_{1_{1-\alpha /2}})=P_{\beta _1}(\beta _1 \le \hat{\beta }_{1_{1-\alpha /2}})-P_{\beta _1}(\beta _1<\hat{\beta }_{1_{\alpha /2}})\), the assertion follows if we prove that, as \(n\rightarrow \infty \),

$$\begin{aligned} P_{\beta _1}(\beta _1 \le \hat{\beta }_{1_\gamma })\rightarrow \gamma , \end{aligned}$$
(18)

Let \(F(r,\mathbf{s })\) be the distribution function of \(\mathscr {R}_{\beta _1}\) conditional on \(\mathbf{S }=\mathbf{s }\). Applying \(F(\cdot ,\mathbf{s })\) to both sides of the inequality in (18) gives

$$\begin{aligned} P_{\beta _1}\left( F(\beta _1,\mathbf{s }) \le F(\hat{\beta }_{1_\gamma },\mathbf{s })\right)&=P_{\beta _1}\left( F(\beta _1,\mathbf{s })\le \gamma \right) \\&=P_{\beta _1}\left( P(\mathscr {R}_{\beta _1}\le \beta _1|\mathbf{S }=\mathbf{s })\le \gamma \right) . \end{aligned}$$

It remains to show that \(P_{\beta _1}(P(\mathscr {R}_{\beta _1}\le \beta _1|\mathbf{S })\le \gamma )\rightarrow \gamma \), i.e.,

$$\begin{aligned} P(\mathscr {R}_{\beta _1}\le \beta _1|\mathbf{S })\overset{L}{\longrightarrow } U(0,1). \end{aligned}$$

In particular, by (7), the identity \(\big [c+\sqrt{c^2+4a^2}\big ]/(2a)=2a/\big [\sqrt{c^2+4a^2}-c\big ]\) with \(c=\mathscr {R}_{yy}-\mathscr {R}_{xx}\) and \(a=\mathscr {R}_{xy}\), and the fact that \(\sqrt{c^2+4a^2}-c\ge 0\), we have

$$\begin{aligned}&P(\mathscr {R}_{\beta _1}\le \beta _1|\mathbf{S })\nonumber \\&\quad =P\left( \left[ \mathscr {R}_{yy}-\mathscr {R}_{xx}+\sqrt{(\mathscr {R}_{yy}- \mathscr {R}_{xx})^2+4 \mathscr {R}_{xy}^2}\right] \Big /(2\mathscr {R}_{xy})\le \beta _1\Big |\mathbf{S }\right) \nonumber \\&\quad =P\left( 2\mathscr {R}_{xy}{\Big /}\left[ \sqrt{(\mathscr {R}_{yy}- \mathscr {R}_{xx})^2+4 \mathscr {R}_{xy}^2}-(\mathscr {R}_{yy}-\mathscr {R}_{xx})\right] \le \beta _1\Big |\mathbf{S }\right) \nonumber \\&\quad =P\left( 2\mathscr {R}_{xy}-\beta _1\left[ \sqrt{(\mathscr {R}_{yy}- \mathscr {R}_{xx})^2+4 \mathscr {R}_{xy}^2}-(\mathscr {R}_{yy}-\mathscr {R}_{xx})\right] \le 0\Big |\mathbf{S }\right) . \end{aligned}$$
(19)

Let \(f\triangleq 2\mathscr {R}_{xy}-\beta _1\left[ \sqrt{(\mathscr {R}_{yy}- \mathscr {R}_{xx})^2+4 \mathscr {R}_{xy}^2}-(\mathscr {R}_{yy}-\mathscr {R}_{xx})\right] \). Expand f in a first-order Taylor series at \(\mathbf{S }=\mathbf{S }^*=\varvec{\eta }=(\sigma _{xx},\sigma _{xy},\sigma _{yy})^T\), noting that

$$\begin{aligned} \begin{aligned} f\big |_{(\varvec{\eta },\varvec{\eta })}&=0,~\frac{\partial f}{\partial S_{xx}^*}\Bigg |_{(\varvec{\eta },\varvec{\eta })}=-\frac{\partial f}{\partial S_{xx}}\Bigg |_{(\varvec{\eta },\varvec{\eta })}=C,\\ \frac{\partial f}{\partial S_{xy}^*}\Bigg |_{(\varvec{\eta },\varvec{\eta })}&=-\frac{\partial f}{\partial S_{xy}}\Bigg |_{(\varvec{\eta },\varvec{\eta })}=\frac{C(\sigma _{yy}-\sigma _{xx})}{\sigma _{xy}},~\frac{\partial f}{\partial S_{yy}^*}\Bigg |_{(\varvec{\eta },\varvec{\eta })}=-\frac{\partial f}{\partial S_{yy}}\Bigg |_{(\varvec{\eta },\varvec{\eta })}=-C, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} C=\frac{2(\sigma _{xx}\sigma _{yy}-\sigma _{xy}^2)\sigma _{yy}\sigma _{xy}}{\sqrt{(\sigma _{yy}- \sigma _{xx})^2+4 \sigma _{xy}^2}}. \end{aligned}$$

Therefore, the probability in (19) is equivalent to

$$\begin{aligned} \begin{aligned} P\Bigg \{&\left. C(S_{xx}^*-\sigma _{xx})+\frac{C(\sigma _{yy}-\sigma _{xx})}{\sigma _{xy}}(S_{xy}^*-\sigma _{xy})-C(S_{yy}^*-\sigma _{yy})\right. \\ -&C(S_{xx}-\sigma _{xx})-\frac{C(\sigma _{yy}-\sigma _{xx})}{\sigma _{xy}}(S_{xy}-\sigma _{xy})+C(S_{yy}-\sigma _{yy}) +\epsilon _n\le 0 \Big |\mathbf{S }\Bigg \}, \end{aligned} \end{aligned}$$
(20)

where

$$\begin{aligned} \begin{aligned} \epsilon _n&=o_p(|S_{xx}-\sigma _{xx}|+|S_{xy}-\sigma _{xy}|+|S_{yy}-\sigma _{yy}|\\&\quad +|S_{xx}^*-\sigma _{xx}|+|S_{xy}^*-\sigma _{xy}|+|S_{yy}^*-\sigma _{yy}|)\\&=o_p(n^{-1/2}). \end{aligned} \end{aligned}$$

Thus, (20) is equivalent to

$$\begin{aligned}&P\Bigg \{\sqrt{n}\left[ \left( S_{xx}^*-\sigma _{xx}\right) +\frac{\sigma _{yy}-\sigma _{xx}}{\sigma _{xy}}\left( S_{xy}^*-\sigma _{xy}\right) -(S_{yy}^*-\sigma _{yy})\right] \nonumber \\&\quad \le \sqrt{n}\left[ \left( S_{xx}-\sigma _{xx}\right) +\frac{\sigma _{yy}-\sigma _{xx}}{\sigma _{xy}}\left( S_{xy}-\sigma _{xy}\right) -(S_{yy}-\sigma _{yy})\right] +\frac{\epsilon _n}{C} \Big |\mathbf{S }\Bigg \}.\qquad \qquad \end{aligned}$$
(21)

By Lemma 2, we conclude that

$$\begin{aligned} A_n\triangleq \sqrt{n}\left( (S_{xx},S_{xy},S_{yy})^T-(\sigma _{xx},\sigma _{xy},\sigma _{yy})^T\right) \overset{L}{\longrightarrow }N_3(\mathbf{0 },\varvec{\varOmega }_{ss}), \end{aligned}$$

where \(\varvec{\varOmega }_{ss}\) is the \(3\times 3\) block formed by the last three rows and columns of \(\varvec{\varOmega }\) in Lemma 2 with \(\mathbf{Z }_t=(x_t,~y_t)^T\).

Finally, let \(f(u,v,w)=u+\frac{\sigma _{yy}-\sigma _{xx}}{\sigma _{xy}}v-w\) and \(\epsilon _n'(A_n^*,A_n)=\sqrt{n}\epsilon _n/C\overset{P}{\longrightarrow }0\). By Lemma 1, the conditional probability in (21) converges in law to U(0, 1), which completes the proof.
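
To connect the theorem back to practice, the following rough simulation sketch (ours, under the same Wishart assumption for \((E_{11}^*,E_{12}^*,E_{22}^*)\) as in the sketch of Appendix 1, with data generated from the structural model with \(\lambda =1\) implied by (2)) estimates the empirical coverage of the equal-tailed FGPQ interval; by the theorem, it should approach the nominal \(1-\alpha \) as n grows.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(2)
beta1, sigma2, sigma_eps2 = 1.0, 2.0, 1.0      # structural parameters, lambda = 1
Sigma = np.array([[sigma2 + sigma_eps2, beta1 * sigma2],
                  [beta1 * sigma2, beta1 ** 2 * sigma2 + sigma_eps2]])  # covariance of (x, y) as in (2)
n, n_rep, n_draws, alpha = 50, 1000, 2000, 0.05

def fgpq_draws(S_xx, S_xy, S_yy, n, n_draws, seed):
    # (E11*, E12*, E22*) as entries of a standard Wishart_2(n-1) matrix (our assumption),
    # pushed through (16) and then (17).
    W = wishart.rvs(df=n - 1, scale=np.eye(2), size=n_draws, random_state=seed)
    E11, E12, E22 = W[:, 0, 0], W[:, 0, 1], W[:, 1, 1]
    D_S, D_E = S_xx * S_yy - S_xy ** 2, E11 * E22 - E12 ** 2
    t = S_xy - np.sqrt(D_S / D_E) * E12
    s_xx = (n - 1) * (E22 * D_S / (S_yy * D_E) + t ** 2 / (S_yy * E22))
    s_xy = (n - 1) * t / E22
    s_yy = (n - 1) * S_yy / E22
    return (s_yy - s_xx + np.sqrt((s_yy - s_xx) ** 2 + 4 * s_xy ** 2)) / (2 * s_xy)

hits = 0
for r in range(n_rep):
    xy = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
    S = np.cov(xy, rowvar=False)
    draws = fgpq_draws(S[0, 0], S[0, 1], S[1, 1], n, n_draws, seed=r)
    lo, hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
    hits += (lo <= beta1 <= hi)

print(hits / n_rep)    # empirical coverage; should be close to 1 - alpha = 0.95
```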


Cite this article

Yan, L., Wang, R. & Xu, X. Fiducial inference in the classical errors-in-variables model. Metrika 80, 93–114 (2017). https://doi.org/10.1007/s00184-016-0593-9
