How measurement error affects inference in linear regression

Measurement error biases OLS results. When the measurement error variance is known, either in absolute or in relative (reliability) form, adjustment is simple. We link the (known) estimators for these cases to GMM theory and provide simple derivations of their standard errors. Our focus is on the test statistics. We show monotonic relations between the t-statistics and R²s of three estimators: the (infeasible) estimator if there were no measurement error, the inconsistent OLS estimator, and the consistent estimator that corrects for measurement error; and we show the relation between the t-value and the magnitude of the assumed measurement error variance or reliability. We also discuss how standard errors can be computed when the measurement error variance or reliability is estimated rather than known, and we indicate how the estimators generalize to the panel data context, where we have to deal with dependence among observations. By way of illustration, we estimate a hedonic wine price function for different values of the reliability of the proxy used for the wine quality variable.


Introduction
As is well known from econometric textbooks (e.g., Baltagi 2011, sec. 5.3), measurement error in one or more regressors makes OLS estimators of linear regression models inconsistent. Often, the inconsistency will cause a bias toward zero, although this need not be the case and the bias can be away from zero (Wansbeek and Meijer 2000, sec. 2.3). But whatever the direction of the bias, the desire "to do something" about it has spawned a huge literature since the 1930s.
One strand in this literature derives (asymptotic) bounds on the estimators, thus limiting the extent of the problem. In case the measurement error is confined to a single regressor, OLS is biased toward zero while reverse regression is biased away from zero, thus offering estimated bounds on the coefficient. This classical result (Frisch 1934) does not extend to the case of multiple mismeasured regressors; then, outside information in the form of a bound on the measurement error covariance matrix is required to obtain estimators of bounds on the coefficients (Wansbeek and Meijer 2000, secs. 3.4 and 3.5).
But, not surprisingly, the focus in the literature is on coming up with a consistent estimator. One way to achieve this is through an instrumental variable. It may come from outside the model, but it can also be found within the model, provided the model is identified; this requires nonnormality of the regressors. Then, higher moments of the variables can be used as instruments (Geary 1942; Erickson and Whited 2002).
Another road to consistency lies open when the measurement error variance is known. Then, a consistent estimator is readily obtained by subtracting the measurement error covariance matrix from the covariance matrix of the observed regressors. Unlike in fields like physics and the medical sciences, in economics the measurement error variance is seldom known. Yet, researchers may have an idea about it or may just want to understand how their results vary with its magnitude. In practice, it will often not so much be about the absolute magnitude as about the magnitude relative to the observed variance, the reliability. For example, published psychological tests are routinely accompanied by a statement of their reliability (Fuller 1987, p. 5), and in their overview of measurement error in economics, Bound et al. (2001) present many of the results as reliabilities or as correlations between the observed value and the true value (the square roots of the reliabilities), although they present results on measurement error variances as well. As this illustrates, ideas about reliability may involve fixed numbers, but in practice they will often concern numbers imported from previous research, which are hence subject to sampling error that, depending on the relative sample sizes of the prior studies and the current study, may or may not be negligible. Buonaccorsi (2010, pp. 168-169) provides a critical assessment of the usefulness of externally estimated reliabilities.
In this paper, we consider inference for the linear regression model with measurement error in the context of these three increasingly realistic kinds of prior knowledge: known absolute variances, known reliabilities, and estimated reliabilities (or estimated measurement error variances). We will consider these three consecutively. For each case, we derive a consistent estimator of the regression coefficient and its asymptotic variance, both without and with assuming normality of the measurement errors.
An interesting issue concerns t-values. We can distinguish three: the t-value if there were no measurement error; the t-value when there is measurement error but it is neglected; and the t-value when the measurement error is accounted for. For the first two cases, known absolute variances and known reliabilities, we show that the t-values decrease as we move along this list. (The generality of the third case, estimated reliability or measurement error variance, defies analysis.) This greatly expands the findings in Meijer and Wansbeek (2000). The issue is relevant for applied researchers for two reasons: first, a regression coefficient can become insignificant due to measurement error and, second, correcting for the measurement error will not make an insignificant coefficient significant.
The paper is organized as follows. In Sect. 2, we consider the case of known measurement error variance. We describe the model, present the adapted estimator when the measurement error variance is known, and show that it is a method-of-moments (MM) estimator. We discuss estimating its variance in general and elaborate this under independence and normality. Section 3 discusses the F and t tests and shows that there is an ordering from high to low between the cases mentioned above (no measurement error, neglected measurement error, measurement error accounted for).
In Sect. 4, we turn to the case where the reliability rather than the measurement error variance is known. We derive the asymptotic variance, without and with assuming normality of the measurement errors. Analogous to Sect. 3, the issue of the ordering of the corresponding test statistics is addressed in Sect. 5.
Section 6 discusses how to handle the situation where the measurement error variance or reliability is not known but is consistently estimable, with a consistently estimated asymptotic variance.
In previous papers (Meijer et al. 2015, 2017), we have shown that panel data offer many additional possibilities for identification and estimation of measurement error models compared to (independent) cross-sectional data. Therefore, in Sect. 7, we investigate whether the preceding analysis can be extended from a cross-sectional to a panel data context, and whether, for the case of known or estimated measurement error variances or reliabilities, this makes identification and estimation easier or more difficult.
We then turn, in Sect. 8, to an empirical example. It concerns the Australian wine market. The price of wine is regressed on a number of variables including quality. Results are shown for different values of the reliability of the proxy variable that is used to quantify quality. Some concluding remarks are made in Sect. 9.
Throughout, we consider the linear regression model, which has also received most attention in the measurement error literature. A recent overview of measurement error focusing on nonlinear models is provided by Schennach (2016). Extending our results to nonlinear models is left for future research. It turns out that our estimators of the coefficients and our expressions for their asymptotic variances and the estimators of them are essentially the same as the ones presented in Fuller (1987, sec. 3.1.1), although this relation is far from obvious and our presentation is simpler and more in line with the economic literature. Some of our special cases and extensions are new, and in particular, our main contribution to the literature is given by the results comparing the magnitudes of test statistics.

Measurement error variance known
In this section, we introduce the model that we will study throughout and consider the case where the measurement error variance is known. Most of the results here have been described before in the literature (e.g., Fuller 1987; Wansbeek and Meijer 2000), but we present them here concisely as a reference for the rest of the paper, and we put this in a (generalized) method of moments framework, which simplifies the theoretical analyses.
Consider the following linear regression model with k regressors ξ_i, measured with error v_i:

y_i = ξ_i'β + ε_i,
x_i = ξ_i + v_i.

The y_i and x_i are observed. The reduced form, after eliminating the unobserved ξ_i, is

y_i = x_i'β + u_i, with u_i = ε_i − v_i'β.

Initially, assume that the observations are i.i.d. and that ε_i, ξ_i, and v_i are mutually independent with means 0, μ, and 0, respectively. We let

σ² = var(ε_i), Ω = var(v_i), (1)
σ_u² = var(u_i) = σ² + β'Ωβ. (2)

We collect the y_i in the n-vector y and the x_i in the n × k matrix X. Let Â = X'X/n and A = plim Â. As is well known, a major implication of this model is that the OLS estimator β̂_0 = (X'X)^{-1}X'y of β converges to β_0 = A^{-1}(A − Ω)β and is hence inconsistent, except for the trivial case that Ωβ = 0, or, equivalently, β_0 = β. In the following, we assume that Ωβ ≠ 0, that is, the model includes at least one mismeasured variable. If Ω is known, the inconsistency is easily removed by using the adapted OLS estimator

β̂ = (Â − Ω)^{-1}X'y/n. (3)

Let

h_i = x_i(y_i − x_i'β) + Ωβ. (4)

Then the model assumptions imply E(h_i) = 0, so (4) is a set of k valid moment equations. Solving h̄ = 0, with

h̄ = Σ_i h_i/n = X'(y − Xβ)/n + Ωβ, (5)

shows that the estimator β̂ in (3) is a method of moments (MM) estimator.
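As a numerical sketch of the adapted estimator (ours, not from the paper; all data-generating values are arbitrary), the following simulation contrasts the inconsistent OLS estimator with the corrected estimator β̂ = (Â − Ω)^{-1}X'y/n for one mismeasured regressor plus a constant:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, np.array([1.0, 0.5])

# True regressors (with a constant) and classical measurement error in x1.
xi = np.column_stack([rng.normal(2.0, 1.0, n), np.ones(n)])
omega_11 = 0.5                      # assumed known measurement error variance
v = np.column_stack([rng.normal(0.0, np.sqrt(omega_11), n), np.zeros(n)])
x = xi + v
y = xi @ beta + rng.normal(0.0, 0.3, n)

Omega = np.diag([omega_11, 0.0])    # measurement error covariance matrix
A_hat = x.T @ x / n                 # A-hat = X'X/n

beta_ols = np.linalg.solve(A_hat, x.T @ y / n)           # inconsistent OLS
beta_mm = np.linalg.solve(A_hat - Omega, x.T @ y / n)    # corrected, as in (3)
```

With these settings the OLS slope converges to roughly two thirds of the true value, while beta_mm recovers it.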

Residual variance
The OLS-based estimator σ̂_0² of the residual variance σ² is also inconsistent when there is measurement error. Since E(y_i − x_i'β)² = σ² + β'Ωβ, the residuals û_i = y_i − x_i'β̂ from the adapted estimator satisfy

Σ_i û_i²/n →p σ² + β'Ωβ, (6)

while for the OLS residuals,

σ_0² = plim σ̂_0² = σ² + β'ΩA^{-1}(A − Ω)β > σ². (7)

The strictness of the inequality is an implication of the assumption Ωβ ≠ 0. Through (6) we obtain σ̂² = Σ_i û_i²/n − β̂'Ωβ̂ as a consistent estimator of σ².
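A short simulation of the same kind (again ours; the numerical values are arbitrary) shows the corrected residual variance estimator σ̂² = Σ_i û_i²/n − β̂'Ωβ̂ at work:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
beta = np.array([1.0, 0.5])
sigma2 = 0.09                                   # true var(eps) = 0.3^2
xi = np.column_stack([rng.normal(2.0, 1.0, n), np.ones(n)])
Omega = np.diag([0.5, 0.0])
x = xi + np.column_stack([rng.normal(0, np.sqrt(0.5), n), np.zeros(n)])
y = xi @ beta + rng.normal(0.0, np.sqrt(sigma2), n)

b = np.linalg.solve(x.T @ x / n - Omega, x.T @ y / n)
u_hat = y - x @ b
sigma2_u_hat = (u_hat**2).mean()                # estimates sigma^2 + beta'Omega beta
sigma2_hat = sigma2_u_hat - b @ Omega @ b       # corrected residual variance
```

Here sigma2_u_hat converges to σ_u² = σ² + β'Ωβ, so subtracting β̂'Ωβ̂ recovers σ².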

Explained variation
We now consider the effect on R² and the way to correct for it. Let σ_y² be the population variance of the y_i, with μ_y = E(y_i). This is consistently estimated by the sample variance s_y². Furthermore, let R_0² be the R² from the OLS regression and let ρ_*² = (σ_y² − σ²)/σ_y² be the population R² of the regression of y_i on ξ_i. Then

plim R_0² = (σ_y² − σ_0²)/σ_y² < ρ_*².

So R² is underestimated when there is measurement error, but R̂_*² = 1 − σ̂²/s_y² is a consistent estimator of ρ_*².

Generalization
The assumptions we have stated above can be weakened without losing consistency of the estimator. Under weak regularity conditions, the MM estimator is consistent if E(h_i) = 0 (or, even weaker, plim_{n→∞} h̄ = 0). This weaker set of assumptions allows for dependence across observations (time series, panel data, clustered data) and heteroskedasticity in ε_i. It also allows for heteroskedasticity in v_i, but the assumption that Ω varies with ξ_i does not seem to offer much additional practical value. However, we will discuss extensions to the case where Ω is consistently estimated later, and in that situation, robustness to heteroskedasticity in v_i may be a desirable property.

The asymptotic variance
Since plim_{n→∞} ∂h̄/∂β' = −(A − Ω), MM theory implies that the asymptotic variance of β̂ is

avar(β̂) = (A − Ω)^{-1} E(h_i h_i') (A − Ω)^{-1}. (9)

A consistent estimator of this is obtained by replacing A − Ω with Â − Ω and E(h_i h_i') with

Ê(h_i h_i') = Σ_i ĥ_i ĥ_i'/n, with ĥ_i = x_i(y_i − x_i'β̂) + Ωβ̂. (10)

This expression was previously given in section 5.4.2 of Buonaccorsi (2010). Note that (9) is valid under heteroskedasticity of ε_i and v_i. With clustered data or other types of dependent data, the appropriate clustered or heteroskedasticity and autocorrelation consistent (HAC) covariance matrix replaces the covariance matrix (10). We can elaborate (9) when the measurement errors are normally distributed. Then we obtain, using (2), E(h_i h_i') = σ_u²A + Ωββ'Ω, leading to

avar(β̂) = (A − Ω)^{-1}(σ_u²A + Ωββ'Ω)(A − Ω)^{-1}.

To make this operational, we need to replace parameters by consistent estimators. In particular, a consistent estimator of σ_u² is σ̂_u² = σ̂² + β̂'Ωβ̂; it can be straightforwardly verified that this is equal to Σ_i û_i²/n. So

V̂ = (Â − Ω)^{-1}(σ̂_u²Â + Ωβ̂β̂'Ω)(Â − Ω)^{-1} (14)

is a simple-structured consistent estimator when the measurement errors are normal and ξ_i, ε_i, and v_i are mutually independent.
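The robust sandwich form of (9)-(10) can be sketched as follows (our code, with arbitrary simulated values; the moment contributions ĥ_i are built directly from (4)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
beta = np.array([1.0, 0.5])
xi = np.column_stack([rng.normal(2.0, 1.0, n), np.ones(n)])
Omega = np.diag([0.5, 0.0])
x = xi + rng.multivariate_normal(np.zeros(2), Omega, n)
y = xi @ beta + rng.normal(0.0, 0.3, n)

A_hat = x.T @ x / n
B = A_hat - Omega
beta_mm = np.linalg.solve(B, x.T @ y / n)

# Moment contributions h_i = x_i (y_i - x_i'beta) + Omega beta, as in (4).
u_hat = y - x @ beta_mm
h = x * u_hat[:, None] + Omega @ beta_mm
V_h = h.T @ h / n                          # estimate of E(h_i h_i')
B_inv = np.linalg.inv(B)
avar = B_inv @ V_h @ B_inv.T / n           # sandwich variance of beta-hat
se = np.sqrt(np.diag(avar))                # robust standard errors
```

The same skeleton accommodates clustered or HAC versions by replacing V_h with the corresponding long-run covariance estimate.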

Ordering of test statistics
We now turn to hypothesis testing. To obtain tractable results, we maintain the hypothesis that the measurement errors are normally distributed. Let U be a k × p matrix of full column rank, with p < k. Let β̃ be an estimator of β and Ṽ an estimator of its asymptotic variance matrix. Then a Wald test statistic for H_0: U'β = 0 is

T̃ = β̃'U(U'ṼU)^{-1}U'β̃.

This is compared to a chi-square distribution with p degrees of freedom. For comparing the test statistics based on different estimators, we compare their probability limits (scaled by n), τ = plim_{n→∞} n^{-1}T̃. In this comparison, we include the infeasible OLS estimator based on observing the ξ_i. For this infeasible estimator, the inconsistent OLS estimator, and the consistent MM estimator, we obtain, in this order, τ†, τ_0, and τ*, with σ², σ_0², and σ_u² defined in (1), (7), and (2), respectively, and Q†, Q_0, Q*, and c > 0 implicitly defined; the latter reflects the matrix ββ' in the expression of τ* and its precise form is immaterial.

Relation between the test statistics
To handle the Qs, we use the following result: for matrices F and H such that (F, H) is nonsingular and F'H = 0, and nonsingular S,

F(F'S^{-1}F)^{-1}F' = S − SH(H'SH)^{-1}H'S.

To prove the result, notice that both sides equal (F, 0) after postmultiplication by the nonsingular matrix (S^{-1}F, H). Let G be an orthogonal complement of U and consider the case where G is such that ΩG = 0 or, equivalently, WG = AG, where W = A − Ω; we will meet two instances of this below. Applying the result to the Qs shows that, under this condition, the three τs differ only through the residual variances in their denominators. Since σ² < σ_0², cf. (7), and σ_0² < σ_u², cf. (7) and (2), we conclude τ† > τ_0 > τ*.
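The identity used here, F(F'S^{-1}F)^{-1}F' = S − SH(H'SH)^{-1}H'S for nonsingular (F, H) with F'H = 0 and nonsingular S, is easy to verify numerically; in the sketch below (ours), S is taken symmetric positive definite and H is built as an orthogonal complement of a random F:

```python
import numpy as np

rng = np.random.default_rng(2)
k, p = 5, 2

# Build F (k x p) and an orthogonal complement H (k x (k-p)) with F'H = 0.
F = rng.normal(size=(k, p))
Q, _ = np.linalg.qr(F, mode="complete")   # full orthonormal basis
H = Q[:, p:]                              # columns spanning the null space of F'

# A symmetric positive definite S (nonsingular, as the lemma requires).
M = rng.normal(size=(k, k))
S = M @ M.T + np.eye(k)

lhs = F @ np.linalg.inv(F.T @ np.linalg.inv(S) @ F) @ F.T
rhs = S - S @ H @ np.linalg.inv(H.T @ S @ H) @ H.T @ S
```

Postmultiplying either side by (S^{-1}F, H) indeed gives (F, 0), which is the one-line proof in the text.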

F and t test
The first instance of ΩG = 0 occurs when testing the null hypothesis that all coefficients except the intercept are zero. The Wald test is then the asymptotic version of the standard F test. Let the kth element of x_i be 1 and let e_k be the kth unit vector (the kth column of I_k). The relevant statistic is obtained by letting U be I_k without its last column, with orthogonal complement G = e_k; clearly, ΩG = 0. Thus, the null hypothesis is rejected less often when using the OLS estimator based on the observed x_i than when using (if we could) the OLS estimator based on the true ξ_i. More interestingly and somewhat paradoxically (because the estimated coefficients are typically larger and the estimated residual variance is smaller), the null hypothesis is rejected less often when using the consistent MM estimator than when using the inconsistent OLS estimator based on the x_i. Hence, the finding of a significant relation may not survive when measurement error is accounted for. Another interesting aspect of the ordering of the statistics is that it clearly distinguishes between the case where there is no measurement error and the case where there is measurement error but its variance is known. From a first-order perspective, there is no difference, as β can be (simply) estimated consistently in both cases, but in the latter case it is harder to detect a significant relationship between the variables.
The other instance of ΩG = 0 arises when there is measurement error in a single regressor only, the first one, say, and the null hypothesis is β_1 = 0. Then Ω is proportional to e_1e_1' and U = e_1, so G is I_k without its first column. The Wald test statistic is then the square of the t test statistic. The same ordering as above applies, with the same comments. This generalizes a result from Meijer and Wansbeek (2000). For the case of regression with a single regressor, the result τ† > τ_0 was already given by Bloch (1978).

Known reliability
Information about measurement error variances, if available, is more likely to be of the relative than the absolute form. For example, Fuller (1987, Table 1.1.1) lists the reliability of a number of socioeconomic variables, as computed from repeated measurements by the U.S. Census Bureau. Income, for instance, has a reliability of 85%. Bound et al. (2001, sec. 6) list a large amount of empirical evidence about measurement error in surveys, and most (though not all) of this is presented in terms of correlations or variance ratios, which directly translate into reliabilities. By way of another example, after performing a factor analysis of the independence of central banks, De Haan et al. (2003) produced an indicator of the latent variable "central bank independence" and listed its (estimated) reliability.
In this case, it is natural to assume that the measurement errors of the different variables are independent. So Ω is now a diagonal matrix, and we know its diagonal elements as fractions of the variances of the observed regressors:

Ω_jj = (1 − ρ_j)(A_jj − μ_j²), (19)

with ρ_j the reliability of the jth regressor. The means μ_j now enter the picture as unknown parameters, requiring their own moment conditions. Let, for i = 1, …, n,

W_i = diag[(1 − ρ_j)(x_ij − μ_j)²], (20)

and extend h_i with the moment conditions x_i − μ, with x̄, W̄, and h̄ the sample averages. Setting h̄ = 0 and solving for β and μ readily gives μ̂ = x̄ and

β̂ = (Â − Ω̂)^{-1}X'y/n, (21)

with

Ω̂ = diag[(1 − ρ_j)(Â_jj − x̄_j²)]. (22)

Analogously, the consistent estimator of the error variance σ² is now

σ̂² = Σ_i û_i²/n − β̂'Ω̂β̂ (23)

instead of (7). Since Ω̂ now depends on the data, we have, instead of (9), an adapted asymptotic variance expression, (24). Expressions (21), (23), and (24) can be found in the Stata manual's description of its eivreg command as of version 16 (StataCorp 2019a).
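The known-reliability estimator, with Ω̂ built from the reliabilities and the sample moments of the observed regressors, can be sketched as follows (our simulation; the reliability 1/1.5 follows from the chosen variances, and the constant is assigned reliability 1):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
beta = np.array([1.0, 0.5])
xi = np.column_stack([rng.normal(2.0, 1.0, n), np.ones(n)])
x = xi + np.column_stack([rng.normal(0, np.sqrt(0.5), n), np.zeros(n)])
y = xi @ beta + rng.normal(0.0, 0.3, n)

# Reliability of x1: var(xi1)/var(x1) = 1/1.5 here; the constant has rho = 1.
rho = np.array([1.0 / 1.5, 1.0])

mu_hat = x.mean(axis=0)
A_hat = x.T @ x / n
# Omega-hat implied by the known reliabilities: (1 - rho_j) * sample var(x_j).
Omega_hat = np.diag((1.0 - rho) * (np.diag(A_hat) - mu_hat**2))
beta_rel = np.linalg.solve(A_hat - Omega_hat, x.T @ y / n)
```

The only change relative to the known-variance case is that Ω̂ is computed from the data, which is also why the asymptotic variance expression changes.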

Estimation in a structural equation modeling program
The linear regression model with measurement error is a special case of the general class of structural equation models (SEMs); see, e.g., Wansbeek and Meijer (2000, ch. 8). Most general-purpose statistical software packages have a SEM module, and there are also standalone programs for estimating them. They generally allow simple restrictions on the parameters, so estimating the model with known measurement error variance in such a program is straightforward. Estimating the model with known reliability is slightly less straightforward, however. For example, the sem command in Stata allows for specifying the known reliability, but it then computes (in our notation) the implied measurement error variance from the specified reliability and the sample variance of the observed regressor and treats this as known, instead of the reliability itself (StataCorp 2019b, p. 577), and Lockwood and McCaffrey (2020) report that this leads to noticeably biased standard errors and propose using the bootstrap or the theory of M-estimation to obtain correct standard errors for this procedure. However, the proper way to specify known reliability in a SEM is to impose a linear relation between the variance of ξ and the variance of the relevant element(s) of v: var(v_ij) = [(1 − ρ_j)/ρ_j] var(ξ_ij), which in Stata's sem procedure can be done through a specification like variance(xi1@c1 e.x1@(0.2*c1/0.8)), where xi1 is ξ_1, e.x1 is the measurement error of the error-ridden variable x1 (i.e., v_1), 0.8 is ρ_j (and 0.2 = 1 − ρ_j), and c1 indicates a free parameter. In many other structural equation programs, such linear constraints can be imposed analogously.

Asymptotic variance
Analogous to what we did in Sect. 2.4 for the case of known Ω, we derive an explicit expression for the asymptotic variance of β̂ for the case of known reliabilities. We assume homoskedasticity and normality of the v_i, as we can obtain a manageable expression only then. Let G = diag[(1 − ρ_j)β_j], with e_j the jth unit vector of dimension k. We can now write h_1i in terms of these quantities and want to find an expression for E(h_1i h_1i').
To do so, let P_{k,k} be the symmetric commutation matrix of order k² × k². Using the method of repeated conditioning (Merckens and Wansbeek 1989; Wansbeek and Meijer 2000, p. 366), we readily obtain an expression for E(h_1i h_1i') in which "∗" denotes the Hadamard (element-wise) product of two matrices of equal dimensions. Collecting terms, we obtain the asymptotic variance. So, with hats as usual indicating the substitution of consistent estimators, we get the estimator (25), with now, slightly adapting from (2), σ̂_u² = σ̂² + β̂'Ω̂β̂ = Σ_i û_i²/n, with Ω̂ as given in (22). So the asymptotic variance for the case of known reliabilities is different from the one for the case of known Ω, cf. (14), and quite a bit more complex.

Test statistics in the case of known reliability
The results for the R² from the case with known Ω immediately carry over to the case with known reliability, except that in the computation of ρ̂_*², Ω̂ is used instead of Ω. The results for the Wald test also carry over, but less trivially so.
For comparing Wald tests, τ_0 and τ† are the same as before, because they do not use any information about the measurement error. However, the expression for τ* is different now. Consider first the case of the joint test of whether all coefficients except the constant are zero, that is, the Wald version of the standard F test. As discussed above, this corresponds to U being the first k − 1 columns of I_k and its orthogonal complement being e_k. Define Σ_1 = U'(A − μμ')U and Ω_1 = U'ΩU; that is, these are the variance matrices of x_i and v_i, respectively, with their last element (corresponding to the constant) omitted. The expression for τ* can then be rewritten using Lemma 1 in "Appendix A", with G_1 and Ω_1 the upper-left (k − 1) × (k − 1) submatrices of G and Ω, respectively, or, equivalently, G_1 = U'GU and Ω_1 = U'ΩU. Comparing this with the corresponding expression for τ_0 shows that if the variance term appearing in τ* is at least σ_0², then τ_0 ≥ τ*. Therefore, we investigate the difference between these two terms, which again uses Lemma 1. After some algebra, we find that the difference equals R'SR, where S is a symmetric positive semidefinite matrix, which implies that the difference is itself symmetric and positive semidefinite and therefore τ† > τ_0 ≥ τ*. The matrix Q_{k−1} appearing in S is a symmetric idempotent matrix (e.g., Wansbeek and Meijer 2000, p. 361), as is M_L, so it follows that S is symmetric and positive semidefinite. This result generalizes, again after some algebra, to other tests of restrictions of the form U'β = 0 that do not involve the constant and that still satisfy ΩG = 0 (with G the orthogonal complement of U) as in Sect. 3. (Hence, all mismeasured regressors are included in the test.) So, by and large, the results for known measurement error variance carry over to the case of known reliability, but with some additional restrictions.

Estimated reliability
Often, we may not strictly "know" the reliability (or measurement error variance), but we can consistently estimate it. Using the resulting estimate as if it were the known reliability gives consistent estimators of the parameters of interest. However, treating the estimate as the true value leads to underestimated standard errors for the coefficients of interest: the estimator is a two-step estimator, and the default second-step standard errors do not take the stochastic uncertainty of the first-step estimator into account.
One way to correct this would be to stack the moment conditions of the estimator of the model of interest as discussed in this paper and the moment conditions of the estimator of the measurement error variance (or reliability), using similar techniques as, for example, in Meijer and Wansbeek (2007). As discussed in that paper, if the first-step estimator is overidentified, the generalized method of moments (GMM) estimator from stacking the moment conditions differs slightly from the two-step estimator. This may not be a "problem" at all, as the joint estimator is asymptotically at least as efficient, but it may be computationally or interpretationally more complicated, or less robust to misspecification. To obtain the two-step estimator, the first-step moment conditions have to be replaced by a set of asymptotically equivalent moment conditions that just-identify the estimators, leading to a two-step MM estimator. In some cases, the measurement error variance (or reliability) is estimated from a different sample. In that case, correct standard errors can be obtained by a relatively straightforward correction to the default standard errors. Specifically, let the parameters from the first step (reliabilities, measurement error variances, possibly additional auxiliary parameters) be collected in the parameter vector κ. Then typically √m(κ̂ − κ) (where m is the first-step sample size) is asymptotically normally distributed with mean zero and variance matrix V_κ, say, and the first-step estimation produces a consistent estimator V̂_κ. The second-step moment conditions are h̄(β; κ̂) = 0 and, treating κ̂ as if it were the known κ, we obtain the asymptotic variance matrix V̂_β, say, which is of the form Ĝ_β^{-1}V̂_h(Ĝ_β')^{-1}, where Ĝ_β = ∂h̄/∂β' evaluated in (β̂; κ̂), and V̂_h is a consistent estimator of E(h_i h_i').
The corrected variance matrix is obtained by expanding h̄(β̂; κ̂) around the true κ, with n the second-step sample size and Ĝ_κ = ∂h̄/∂κ' evaluated in (β̂; κ̂), and using the independence of √n h̄(β; κ) and √m(κ̂ − κ), leading to

V̂_β,corr = Ĝ_β^{-1}[V̂_h + (n/m)Ĝ_κV̂_κĜ_κ'](Ĝ_β')^{-1}.

See, for example, Inoue and Solon (2010) for a similar approach in the case of two-sample instrumental variables estimators, and Wooldridge (2002, p. 356) for an analogous approach for two-step M estimators. It is also possible to arrive at this starting from the formulas in Fuller (1987, chap. 3), but this is more involved. We apply this general theory to the specific case of a single regressor (the first one) with measurement error. First, assume that the measurement error variance is estimated from an independent sample of size m to be λ̂, with variance v̂_λ, so Ω = λe_1e_1'. Since then ∂h̄/∂λ = β_1e_1, the adaptation of the expression given in (9) amounts to adding the term v̂_λβ̂_1²(Â − Ω̂)^{-1}e_1e_1'(Â − Ω̂)^{-1} to the estimated variance of β̂. Second, assume that the reliability is estimated to be ρ̂_1, with variance v̂_ρ1; then (19) is evaluated at ρ̂_1, and (24) is adapted analogously.
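The two-sample correction can be sketched as follows (our simulation, not from the paper; the first step estimates λ from m pairs of repeated measurements, whose difference has variance 2λ, and the first-step variance formula assumes normality):

```python
import numpy as np

rng = np.random.default_rng(4)
beta = np.array([1.0, 0.5])

# First step: estimate lambda from an independent sample of m repeated
# measurements of the same underlying quantity.
m = 2_000
true_lambda = 0.5
xi_rep = rng.normal(2.0, 1.0, m)
x_a = xi_rep + rng.normal(0, np.sqrt(true_lambda), m)
x_b = xi_rep + rng.normal(0, np.sqrt(true_lambda), m)
d = x_a - x_b                                  # differences out the true value
lam_hat = d.var(ddof=1) / 2                    # var(d) = 2 * lambda
v_lam_hat = d.var(ddof=1) ** 2 / (2 * (m - 1)) # var of lam_hat under normality

# Second step: corrected regression in the main sample.
n = 20_000
xi = np.column_stack([rng.normal(2.0, 1.0, n), np.ones(n)])
x = xi + np.column_stack([rng.normal(0, np.sqrt(true_lambda), n), np.zeros(n)])
y = xi @ beta + rng.normal(0.0, 0.3, n)

e1 = np.array([1.0, 0.0])
Omega = lam_hat * np.outer(e1, e1)
B = x.T @ x / n - Omega
b = np.linalg.solve(B, x.T @ y / n)

u = y - x @ b
h = x * u[:, None] + Omega @ b
V_h = h.T @ h / n
B_inv = np.linalg.inv(B)
G_kappa = b[0] * e1                            # d h-bar / d lambda = beta_1 e_1
V_naive = B_inv @ V_h @ B_inv.T / n            # treats lam_hat as known
g = B_inv @ G_kappa
V_corr = V_naive + v_lam_hat * np.outer(g, g)  # adds first-step uncertainty
```

With these sample sizes the first-step uncertainty dominates, so the corrected standard error for the mismeasured coefficient is markedly larger than the naive one.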

Extension to panel data
So far we have considered the case of a single cross section. We now consider the case of a panel data model, where measurement error issues are equally relevant; see, for example, Baltagi (2005, sec. 10.1). As documented by Meijer et al. (2015, 2017), panel data (with independent cross-sectional units) imply additional opportunities for identifying and estimating measurement error models. We now investigate to what extent the analysis for the cross-sectional case we studied so far still essentially holds in the panel data context. The direct generalization of the cross-sectional model to the panel data case with time dimension T is

y_it = ξ_it'β + ε_it, x_it = ξ_it + v_it,

where t = 1, …, T denotes the time index, and for simplicity we assume a balanced panel. We leave the covariance structure over time of ε_it unrestricted. Let Ω_t = E(v_it v_it') and Ω = Σ_{t=1}^T Ω_t. Extending (4) to the panel case, let h_it = x_it(y_it − x_it'β) + Ω_tβ and

h_i = Σ_t h_it = X_i'(y_i − X_iβ) + Ωβ, (27)

where y_i is the vector that stacks the y_it, t = 1, …, T, and X_i is the T × k matrix whose tth row is x_it'. If ε_it and ξ_it are uncorrelated (contemporaneous exogeneity), E(h_it) = 0 and thus E(h_i) = 0, so this is a valid moment condition and, with X = (X_1', …, X_n')' and y = (y_1', …, y_n')', β̂ is the method-of-moments estimator of β from (27). It is basically the pooled OLS estimator corrected for measurement error by using Ω, supposedly known. The usual robust estimator of its variance takes care of correlation over time and hence covers the random effects case, with the random individual effects implicitly included in ε_i. With individual fixed effects, that is, ε_it = α_i + r_it with α_i potentially correlated with ξ_it, the fixed effects need to be eliminated, which is typically done by the within transformation or first differencing (e.g., Baltagi 2005, pp. 13, 136).
After such a transformation, the resulting data contain combinations of measurement errors from multiple time points: v_it − Σ_{s=1}^T v_is/T in the case of the within transformation, and v_it − v_i,t−1 in the case of first differencing. The variances of these terms depend on the Ω_t in more complicated ways, and if the measurement errors are serially correlated, they also depend on the covariances between the measurement errors across time. Hence, in order to correct for measurement error, information on the measurement error structure over time has to be known in addition to knowledge of Ω. The simplest (and strongest) assumption would be that Ω_t = Ω̄ does not vary over time and that the measurement errors are serially uncorrelated. Then var(v_it − v_i,t−1) = 2Ω̄ and var(v_it − Σ_{s=1}^T v_is/T) = Ω̄(T − 1)/T, which leads to straightforward adaptations of (27) for the transformed data.
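The two variance results for the transformed measurement errors are easy to confirm by simulation (our code, scalar case):

```python
import numpy as np

rng = np.random.default_rng(5)
n, T = 200_000, 4
omega_bar = 0.5   # time-constant variance, serially uncorrelated errors

v = rng.normal(0.0, np.sqrt(omega_bar), size=(n, T))

v_within = v - v.mean(axis=1, keepdims=True)   # within transformation
v_fd = v[:, 1:] - v[:, :-1]                    # first differences

var_within = v_within[:, 0].var()   # theory: omega_bar * (T - 1) / T
var_fd = v_fd[:, 0].var()           # theory: 2 * omega_bar
```

With Ω̄ = 0.5 and T = 4, the two sample variances converge to 0.375 and 1.0, respectively.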
In the case of knowledge of the reliability, a leading case is again that the reliability is constant over time. First, consider the case without fixed effects. Let, as in the cross-sectional case, all Ω_t be diagonal, with

Ω_jjt = (1 − ρ_j)(A_jjt − μ_jt²), (29)

with A_jjt the jth diagonal element of A_t = E(x_it x_it'). Furthermore, let

W_it = diag[(1 − ρ_j)(x_ijt − μ_jt)²], (30)

where μ_jt is the jth element of μ_t = E(ξ_it). Consequently, E(W_it) = Ω_t. Let W_i = Σ_t W_it and let M be the T × k matrix with tth row equal to μ_t'. The moment condition for the cross-sectional case as given in (20) then generalizes directly. So, also in the case of known reliability, the analysis for a single cross section carries over to the panel data case in a straightforward way. Now, consider the case with fixed effects and assume the measurement errors are serially uncorrelated. Let a tilde denote the within transformation. We then obtain the within-transformed model, with Ω_t as in (29). In this case, with W_it as in (30), a valid moment h_it can be constructed for this case as well. An analogous expression can be obtained in the case of first differencing. In this section, we have only scratched the surface. The presence of panel data allows for a large number of potential assumptions about how the measurement errors evolve over time and how these can be used to estimate the coefficients consistently. Moreover, we have not discussed dynamic panel data, in which the lagged dependent variable is a regressor (e.g., Baltagi 2005, chap. 8), which is associated with a host of econometric issues that we have not discussed here. However, the cases discussed here serve as illustrations of how one can derive consistent estimators for such cases.

Empirical example
To illustrate the above, we estimate a hedonic price function that specifies the price of wine as a function of its attributes or characteristics; see Oczkowski and Doucouliagos (2015) for a review and meta-analysis. In part, the literature recognizes that wine quality influences prices, and most studies employ a subjective quality score from a wine guide as an indicator of quality. Only a few studies, however, have recognized the consequent measurement error that arises because expert quality scores only imperfectly reflect some underlying notion of latent wine quality. Oczkowski (2001) employs an instrumental variable estimator using multiple expert scores to consistently estimate the relation between price and latent quality. In contrast, Lecocq and Visser (2006) do adjust their price-quality estimates for the attenuation bias associated with expert scores; however, their adjustment formula ignores the impact of other (nonquality score) regressors on the attenuation bias, and no adjustments are made for standard errors.
Our example focuses on Australian premium wines available during 2015 and an average quality score from four expert tasters, Geddes (2015), Oliver (2015), Hooke (2015), and Halliday (2015). We estimate an equation, (31), in which ln(Price_i) is regressed on Q_i, Vintage_i, Region_i, and Variety_i, where Price_i is the recommended retail price in 2015 measured in Australian dollars (Halliday 2015); Q_i is an average quality score measured out of 100, with coefficient γ; Vintage_i is the year in which the grapes were harvested; Region_i is a series of dummy variables depicting the region from where the grapes were sourced; and Variety_i is a series of dummies representing the variety, blend, or style of wine. Descriptive summary statistics of the data are provided in Table 1.
The quality score is an average of four expert scores, where the scores are standardized using a nonparametric distribution transformation to reflect the Halliday (2015) rating; see Cardebat and Paroissien (2015). Effectively, the other three scores are transformed to have the same quantiles as Halliday (2015). The standardized scores have similar means across the average and individual scores. However, as expected, the standard deviation for the average score (1.62) is smaller than that of the individual expert scores (2.20). The estimated standardized Cronbach's alpha reliability coefficient for the four experts is α = 0.728. The quality variable captures both the preferences of consumers for higher quality wines and the increased costs of producing better quality wines.
The vintage variable captures the preferences held by some consumers for older wines, the increased costs of producing wines which are long-lived, and the costs of storing wines. In the sample, approximately 90% of the wines come from recent vintages (2012, 2013, and adjacent years). We estimate (31) using the estimator in (21), allowing Q_i to suffer from measurement error, but the other regressors not, for a range of reliability values for Q_i, starting at 1.0 (uncorrected least squares) and decreasing in increments of 0.1, and also including the estimated reliability of 0.728 for the data set. There is a lower limit for the proposed reliability, because the implied covariance matrix of (y_i, ξ_i) needs to be positive (semi)definite. Effectively, reliabilities below this limit cannot add any additional explanatory power to the model. This lower limit is the R² from the regression of the quality score on the other regressors in (31) and ln(Price_i). In our case, this is R² = 0.546, and therefore we only present estimates for reliabilities of 0.60 and higher.
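The lower admissible reliability just described is simply the R² of an auxiliary regression. A minimal sketch with simulated stand-in data (the variable names and data-generating process are our own, not the paper's):

```python
import numpy as np

def r_squared(y, X):
    """R^2 from an OLS regression of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Simulated stand-in: Q is the error-ridden quality score,
# Z the other regressors, lnp the log price.
rng = np.random.default_rng(1)
n = 400
Z = rng.normal(size=(n, 3))
q_star = rng.normal(size=n) + Z @ np.array([0.3, -0.2, 0.1])
Q = q_star + rng.normal(scale=0.6, size=n)
lnp = 0.3 * q_star + Z @ np.array([0.1, 0.2, -0.1]) + rng.normal(scale=0.3, size=n)

# Lower admissible reliability: R^2 of Q on the other regressors and log price
lower_bound = r_squared(Q, np.column_stack([Z, lnp]))
print(round(lower_bound, 3))
```

Assumed reliabilities must lie between this bound and 1 for the implied covariance matrix to remain positive (semi)definite.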
The estimates of (31) for various reliabilities are reported in Table 2. The standard attenuation bias adjustment is evident, with the quality score point estimate (γ̂) monotonically rising from 0.211 for no correction to 0.413 for a reliability of 0.60. For the estimated alpha of 0.728, the quality score estimate is 0.316, an additional 10.5 percentage points increase in price per quality point compared to the uncorrected estimate. This is economically important: correcting for measurement error on average leads to an additional $5.18 (in $AUD) per quality score point.
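The attenuation correction at work here can be sketched for the case of a single mismeasured regressor with known reliability. With clean covariates Z, the OLS slope on Q converges to γ(λ − R²)/(1 − R²), where λ is the reliability of Q and R² is that of the regression of Q on Z, so multiplying by the reciprocal factor restores consistency. A simulation under our own assumed parameter values (not the paper's data):

```python
import numpy as np

def reliability_corrected_slope(y, Q, Z, reliability):
    """Correct the OLS slope on an error-ridden regressor Q for attenuation,
    given its (assumed known) reliability, with clean covariates Z.
    Correction: b_ols * (1 - R2) / (reliability - R2), where R2 is from
    regressing Q on Z (measurement error assumed uncorrelated with Z)."""
    n = len(y)
    X = np.column_stack([np.ones(n), Q, Z])
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]
    Z1 = np.column_stack([np.ones(n), Z])
    fit = Z1 @ np.linalg.lstsq(Z1, Q, rcond=None)[0]
    r2 = 1 - ((Q - fit) ** 2).sum() / ((Q - Q.mean()) ** 2).sum()
    return b_ols, b_ols * (1 - r2) / (reliability - r2)

# Simulation: true slope 0.3, reliability of Q about 0.7
rng = np.random.default_rng(2)
n = 20000
Z = rng.normal(size=(n, 2))
q_star = 0.5 * Z[:, 0] + rng.normal(size=n)
y = 0.3 * q_star + Z @ np.array([0.2, -0.1]) + rng.normal(scale=0.5, size=n)
sigma_v2 = 0.55  # measurement error variance
Q = q_star + rng.normal(scale=np.sqrt(sigma_v2), size=n)
lam = q_star.var() / (q_star.var() + sigma_v2)  # reliability of Q
b_ols, b_corr = reliability_corrected_slope(y, Q, Z, lam)
print(round(b_ols, 3), round(b_corr, 3))
```

The uncorrected slope is attenuated toward zero, while the corrected estimate recovers the true value of 0.3 up to sampling noise, mirroring the pattern in Table 2.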
For the estimated alpha, the corrected quality coefficient estimate is approximately 50% higher than its OLS counterpart. This difference is similar to Oczkowski's (2001) finding for the difference between 2SLS and OLS estimates in latent variable models of wine reputation and price, using Australian wines assessed in 1999 and 2000 (n = 276). Lecocq and Visser (2006) found differences between measurement-error-corrected and uncorrected estimates of 24% for a 1992 Bordeaux sample (n = 519), 85% for a 1993 Burgundy sample (n = 613), and 73% for a 2001 Bordeaux sample (n = 255). In general, the estimates differ across time and samples, but they point to substantial differences between measurement-error-corrected and uncorrected quality-price estimates for wine. The robust standard errors for γ̂ are based on (24).

In Fig. 1, we return to our reference model, but consider the situation where we know either the measurement error variance (left) or the reliability (right), and illustrate graphically the relation between the assumed measurement error variance or reliability and the estimation results. The t-values shown here are based on the robust variance estimates (9) and (24). This shows again that with increasing measurement error variance and decreasing reliability, the coefficient and the R² increase, but the t-value decreases, although the latter not monotonically for an assumed reliability close to 1. The t-value graphs using the estimated variances (14) and (25) based on the normality assumption (not shown) are qualitatively similar, but the t-values are a bit higher, as in Table 2, and their relation with the assumed reliability is monotonic, confirming the theoretical analysis. However, regardless of the specific assumptions made, the coefficient of the quality rating remains highly statistically significant.
Fig. 1 Coefficient and t-value of the quality rating variable, and the resulting (corrected) R², as a function of the measurement error variance (left) and reliability (right) of the quality rating, using robust standard errors

Up until now, we have assumed ignorance about the reliability of Q_i as a proxy of the true quality, Q*_i, say. However, we can say more when we are willing to assume that the scores Q_i1, Q_i2, Q_i3, and Q_i4 given by the four expert tasters, after demeaning, satisfy a one-factor model, Q_im = b_m Q*_i + w_im, where the b_m are the factor loadings and the w_im are the error terms, with variances ω²_m and zero covariances. By way of normalization, we set the variance of Q*_i equal to one. The case of no measurement error corresponds to ω²_m = 0 for all m; the experts agree. The quality variable Q_i was constructed as the average over the expert scores. So, with bars denoting the average over m, Q_i = b̄ Q*_i + w̄_i, and the reliability of Q_i as a proxy for Q*_i can now be expressed as ρ = b̄² / (b̄² + Σ_m ω²_m / 16). We estimated the b_m and ω²_m with Stata's sem module using the original scores, and find ρ̂ = 0.7286, which is almost identical to the Cronbach's alpha value of 0.728 mentioned earlier. With this reliability, the estimate of the quality rating coefficient is 0.316, while the implied R² of the regression is 0.788.
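Given factor-model estimates, the reliability of the averaged score follows directly from the formula above: since w̄_i is the mean of four independent errors, its variance is Σ_m ω²_m / 16. A minimal sketch with hypothetical loadings and error variances (illustrative values, not the paper's sem estimates):

```python
import numpy as np

def average_score_reliability(loadings, error_vars):
    """Reliability of the average of M indicators under a one-factor model
    Q_im = b_m Q*_i + w_im with var(Q*_i) = 1 and independent errors:
    rho = bbar^2 / (bbar^2 + sum(error_vars) / M^2)."""
    b = np.asarray(loadings, dtype=float)
    w = np.asarray(error_vars, dtype=float)
    M = len(b)
    bbar = b.mean()
    # var(average score) = bbar^2 * var(Q*) + var(mean of the M errors)
    return bbar**2 / (bbar**2 + w.sum() / M**2)

# Hypothetical loadings and error variances for four expert scores
rho = average_score_reliability([1.3, 1.2, 1.1, 1.0], [1.6, 1.8, 1.5, 2.0])
print(round(rho, 3))
```

Plugging in estimated loadings and error variances from any factor-analysis routine yields the reliability to feed into the correction formulas.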

Discussion
It is well known that measurement error is pervasive in economic data and that it tends to bias estimators that do not correct for measurement error in the explanatory variables. We rigorously analyzed the linear regression model with measurement error, where either the variance matrix of the measurement errors is known or the reliabilities of the regressors are known. Although these cases have been discussed in the literature, we bring the results together concisely within the framework of GMM theory. We also discussed some special cases, in particular normality of the measurement errors and measurement error in only a single regressor. For these cases, the expressions simplify greatly. Furthermore, we derived expressions for the related case where measurement error variance or reliability is not known, but consistently estimated, either from the same sample or an independent sample.
Our main focus is on the effects of measurement errors on the t-statistics and hence on statistical significance. We compare the t-statistic of the consistent estimator with the t-statistic of the (inconsistent) OLS estimator and the t-statistic of the (infeasible) estimator if there was no measurement error, and show that they are ordered, with the t-statistic of the consistent estimator being closest to zero and the t-statistic of the (infeasible) estimator being largest in absolute value. This holds for both the case with known measurement error variance and the case with known reliability. We also greatly generalized our earlier finding that the t-value decreases with the assumed measurement error variance and showed that the t-value also decreases with decreasing assumed reliability of the regressor. These results use normality of the measurement errors, as general results for robust standard errors cannot be obtained. Our empirical results suggest that the results largely carry over to robust inference, but there may be some minor departures from monotonicity.
We have also developed extensions of these estimators to panel data, which comes with a number of additional issues and opportunities. In particular, we now have to consider whether the measurement errors are serially correlated, whether they are stationary, whether there are random or fixed effects in the model of interest, and whether the model is static or dynamic. We have derived estimators for some illustrative cases in static panel data models with and without fixed effects, which also serve as guides to how one could derive estimators in a specific panel data application with more general assumptions.
We illustrated the results by estimating a hedonic regression for the price of Australian wines. We showed the sensitivity of the coefficient of the quality indicator to the assumed reliability of this indicator: This coefficient ranges from 0.2 without measurement error (reliability = 1) to 0.4 when reliability is 0.6. This also has consequences for the implied R 2 of the regression (which goes up with decreased reliability) and the t-statistic of the error-ridden regressor (which goes down with decreased reliability). However, in this particular regression, the coefficient of quality always remains statistically significant.
In the empirical study, the quality indicator was obtained as the average of four independent ratings of the quality of the same wine. By assuming a linear factor analysis model for these four ratings, we were able to estimate the reliability of the quality indicator, which is about 0.73. Taking this as the known reliability, point estimates and other statistics follow from our formulas.

Compliance with ethical standards
Conflict of interest The authors declare that they have no conflicts of interest.
Human and animal rights This article does not contain any studies with human participants or animals performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Auxiliary lemma
Lemma 1 Let U be the k × (k − 1) matrix consisting of the first k − 1 columns of I_k. If D is a (k − 1) × (k − 1) nonsingular matrix and m is a k-vector such that m ≠ UU′m, then X = UDU′ + mm′ is nonsingular and U′X⁻¹U = D⁻¹.
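A quick numerical check of the lemma (our own verification sketch, for k = 4): the condition m ≠ UU′m amounts to the last entry of m being nonzero.

```python
import numpy as np

# Numerical check of Lemma 1 for k = 4: U is the first k-1 columns of I_k,
# D nonsingular, m a k-vector with nonzero last entry (so m != U U' m).
rng = np.random.default_rng(3)
k = 4
U = np.eye(k)[:, : k - 1]
D = rng.normal(size=(k - 1, k - 1)) + 2 * np.eye(k - 1)  # nonsingular here
m = rng.normal(size=k)
m[-1] = 1.5  # ensure the last entry is nonzero
X = U @ D @ U.T + np.outer(m, m)
lhs = U.T @ np.linalg.inv(X) @ U  # should equal D^{-1}
ok = np.allclose(lhs, np.linalg.inv(D))
print(ok)
```

The check follows from blockwise inversion: the top-left block of X is D + m₁m₁′ (with m₁ the first k − 1 entries of m), and its Schur complement with respect to the last diagonal entry m_k² reduces exactly to D.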