EIV regression with bounded errors in data: total ‘least squares’ with Chebyshev norm
We consider the linear regression model with stochastic regressors and stochastic errors in both the regressors and the dependent variable (the "structural EIV model"), where the regressors and errors satisfy general conditions that differ from the traditional assumptions on EIV models (such as Deming regression). Notably, we require neither independence of errors, nor identical distributions, nor zero means. The first main result is that the TLS estimator, with the traditional Frobenius norm replaced by the Chebyshev norm, is a consistent estimator of the regression parameters under the assumptions summarized below. The second main result is an algorithm for computing the estimator, which reduces the computation to a family of generalized linear-fractional programming problems, each efficiently solvable by interior point methods. Roughly speaking, the conditions under which our estimator works are: it is known which regressors are affected by random errors and which are observed exactly; the regressors satisfy a certain asymptotic regularity condition; all error distributions, both in the regressors and in the dependent variable, are bounded in absolute value by a common bound (the bound is unknown and is estimated); and, with high probability, we observe data points whose errors are close to the bound. We also generalize the method to the case where the error bounds for the dependent variable and the regressors differ, provided their ratios are known or estimable. The assumptions under which our estimator works cover many settings where the traditional TLS estimator is inconsistent.
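To illustrate the criterion behind the estimator: for a fixed parameter vector β, the smallest common bound ε for which entrywise corrections of size at most ε to the noisy regressors and to the dependent variable can make the corrected system consistent is max_i |x_iᵀβ − y_i| / (Σ_{j noisy} |β_j| + 1), since a row residual can be shifted by at most ε(Σ_{j noisy} |β_j| + 1). Minimizing this ratio over β is what connects the estimator to linear-fractional programming. The following sketch is purely illustrative and is not the paper's algorithm: it replaces the generalized linear-fractional programming solver with a crude grid search on simulated data, and all numbers (sample size, error bound, grid) are assumptions made here for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated structural EIV data: an exact intercept column plus one
# error-affected regressor; errors are bounded, not necessarily well-behaved.
n = 500
beta_true = np.array([1.0, 2.0])               # [intercept, slope]
x_true = rng.uniform(-5.0, 5.0, n)
delta = 0.3                                    # common error bound (unknown to the estimator)
x_obs = x_true + rng.uniform(-delta, delta, n)
y_obs = beta_true[0] + beta_true[1] * x_true + rng.uniform(-delta, delta, n)

X = np.column_stack([np.ones(n), x_obs])

def chebyshev_tls_objective(beta, X, y):
    """Smallest eps such that entrywise corrections of size <= eps to the
    noisy regressor column and to y make (X + DX) beta = y + Dy feasible:
    max_i |r_i| / (|beta_1| + 1). The intercept column is observed exactly,
    so only the noisy column contributes to the denominator."""
    r = X @ beta - y
    return np.max(np.abs(r)) / (np.abs(beta[1]) + 1.0)

# Crude grid search over (intercept, slope); the paper instead solves a
# family of generalized linear-fractional programs.
b0_grid = np.linspace(0.0, 2.0, 101)
b1_grid = np.linspace(1.0, 3.0, 101)
eps_hat, b0_hat, b1_hat = min(
    (chebyshev_tls_objective(np.array([b0, b1]), X, y_obs), b0, b1)
    for b0 in b0_grid for b1 in b1_grid
)
```

With bounded errors, `eps_hat` also serves as an estimate of the (unknown) common error bound: at the true parameters the residuals satisfy |r_i| ≤ δ(|β₁| + 1), so the minimized ratio cannot exceed δ.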
Keywords: Errors-in-variables · Measurement error models · Total least squares · Chebyshev matrix norm · Bounded error distributions · Generalized linear-fractional programming
Mathematics Subject Classification: 62J05 · 65C60 · 90C32
The work was supported by the Czech Science Foundation under grants P402/13-10660S (M. Hladík), P402/12/G097 (M. Černý) and P403/15/09663S (J. Antoch). J. Antoch also acknowledges the support from the BELSPO IAP P7/06 StUDyS network. We are also obliged to Tomáš Cipra, a senior member of DYME Research Center, for fruitful discussions.