On the variance parameter estimator in general linear models


In the present note we consider general linear models where the covariates may be both random and non-random, and where the only restrictions on the error terms are that they are independent and have finite fourth moments. For this class of models we analyse the variance parameter estimator. In particular we obtain finite sample size bounds for the variance of the variance parameter estimator which are independent of covariate information regardless of whether the covariates are random or not. For the case with random covariates this immediately yields bounds on the unconditional variance of the variance estimator—a situation which in general is analytically intractable. The situation with random covariates is illustrated in an example where a certain vector autoregressive model which appears naturally within the area of insurance mathematics is analysed. Further, the obtained bounds are sharp in the sense that both the lower and upper bound will converge to the same asymptotic limit when scaled with the sample size. By using the derived bounds it is simple to show convergence in mean square of the variance parameter estimator for both random and non-random covariates. Moreover, the derivation of the bounds for the above general linear model is based on a lemma which applies in greater generality. This is illustrated by applying the used techniques to a class of mixed effects models.

Let X be an n × p_x matrix of covariates with rank(X) = p_x, with n > p_x. Further, let Σ be a symmetric, almost surely strictly positive definite n × n matrix. This ensures that we may define Σ^{1/2} in the standard way using orthogonalization. The class of GLMs which will be studied in the present note are of the form

y = Xβ + σΣ^{1/2}e,   (1)

where β is a p_x × 1 vector, σ > 0 is a scalar and e is some random n × 1 vector whose elements are independent. Moreover, we assume that e has, conditional on X and Σ, mean 0 and covariance I, together with common central fourth moment μ_4 ≥ 1 (since μ_4 := E[e_i^4] ≥ Var(e_i)^2 = 1). Here I denotes the n × n identity matrix. The standard generalized least squares estimator of β, conditional on X and Σ, is given by

β̂ = (X′Σ^{-1}X)^{-1}X′Σ^{-1}y,

see for instance Seber and Lee (2003, Sec. 3.10) for this and more on the general linear model. Moreover, an unbiased estimator of σ^2 (conditional on X and Σ), and the estimator we will focus on in the present note, is given by

σ̂^2 := (y − Xβ̂)′Σ^{-1}(y − Xβ̂)/(n − p_x).   (2)

The results below will of course remain valid, with the obvious changes, if we consider an estimator normalized with some other, non-degenerate, function of n and p_x, for instance simply n. It is important to note that when X is assumed to be random it is assumed that Σ can be observed perfectly. That is, we are not dealing with an errors-in-variables model, which would lead to problems such as biased estimators. In Example 3 we comment on the situation where the regression coefficients, rather than the covariates, are allowed to be random, i.e. we consider mixed effects models.
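As a purely numerical illustration of (1) and (2) (not part of the original note), the following Python sketch simulates the model for one fixed X and Σ with Gaussian errors and checks by Monte Carlo that the estimator (2) is conditionally unbiased; all dimensions and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_x, sigma2 = 50, 3, 2.0

# Fixed illustrative covariates X and a positive definite Sigma.
X = rng.standard_normal((n, p_x))
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)         # symmetric, strictly positive definite
S_half = np.linalg.cholesky(Sigma)      # any S with S S' = Sigma works for simulation
beta = np.array([1.0, -2.0, 0.5])

def sigma2_hat(y, X, Sigma):
    """Unbiased variance estimator (2): GLS residual quadratic form over n - p_x."""
    Si = np.linalg.inv(Sigma)
    beta_hat = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)   # GLS estimator of beta
    r = y - X @ beta_hat
    return (r @ Si @ r) / (len(y) - X.shape[1])

# Monte Carlo check of conditional unbiasedness: E[sigma2_hat | X, Sigma] = sigma^2.
draws = [
    sigma2_hat(X @ beta + np.sqrt(sigma2) * S_half @ rng.standard_normal(n), X, Sigma)
    for _ in range(2000)
]
print(np.mean(draws))   # should be close to sigma2 = 2.0
```

Note that a Cholesky factor is used in place of the symmetric square root Σ^{1/2}; any factor S with SS′ = Σ yields errors with the required covariance.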
For the results below it will be useful to define

P := X(X′Σ^{-1}X)^{-1}X′Σ^{-1},

which corresponds to the projection matrix associated with the linear model (1), and to also define the idempotent matrices V := Σ^{-1/2}PΣ^{1/2} and K := I − V.
In particular, using the above it is possible to rewrite σ̂^2 according to

σ̂^2 = (σ^2/p_k) e′Ke,   (3)

where p_k := rank(K) = n − p_x; see the proof of Proposition 1 for more details. We will henceforth focus on properties of the variance of the variance parameter estimator σ̂^2. One can note that the special case of a sample variance in the non-Gaussian setting, i.e. an intercept-only GLM, was treated already in e.g. Cramér (1946, Eq. 27.4.2). Other similar results are obtained in the theory of minimum variance component estimation, see e.g. Rao (1970, 1971) and the proof of Seber and Lee (2003, Thm. 3.4). More general results can be found in for instance Dette et al. (1998), where estimation of the variance parameter in the case of nonparametric regression is treated. Lemma 1 may be seen as a special case of the corresponding expression for the mean squared error (MSE) from Dette et al. (1998, Eq. 6). In Example 3 we discuss extensions to mixed effects models and comment on the results in Li (2012), which extend the analysis in Dette et al. (1998) w.r.t. mixed effects. We will return to these comparisons in more detail below.
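The identity (3) is easy to verify numerically. The sketch below, with an assumed diagonal Σ and illustrative parameter values, computes σ̂^2 both from the GLS residuals as in (2) and from the quadratic form σ^2 e′Ke/p_k, and confirms that the two agree.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_x, sigma2 = 30, 2, 1.5

X = rng.standard_normal((n, p_x))
Sigma = np.diag(rng.uniform(0.5, 2.0, n))   # a simple diagonal Sigma for illustration
S_inv = np.linalg.inv(Sigma)
S_half = np.sqrt(Sigma)                      # Sigma^{1/2} (elementwise sqrt, Sigma diagonal)
S_mhalf = np.linalg.inv(S_half)              # Sigma^{-1/2}

e = rng.standard_normal(n)
y = X @ np.array([1.0, -1.0]) + np.sqrt(sigma2) * S_half @ e

# sigma2_hat via the GLS residual form (2) ...
beta_hat = np.linalg.solve(X.T @ S_inv @ X, X.T @ S_inv @ y)
r = y - X @ beta_hat
s2_residual = (r @ S_inv @ r) / (n - p_x)

# ... and via the quadratic-form representation (3): sigma^2 e'Ke / p_k with K = I - V.
V = S_mhalf @ X @ np.linalg.solve(X.T @ S_inv @ X, X.T) @ S_mhalf
K = np.eye(n) - V
s2_quadform = sigma2 * (e @ K @ e) / (n - p_x)

print(abs(s2_residual - s2_quadform))   # zero up to floating point error
```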
The general problem formulation above, of course, relies on the theory of random quadratic forms. For more on this topic, see e.g. Eaton (1983), Mathai et al. (2012) and Seber and Lee (2003) and the references therein.
The results that we obtain for the variance of the variance estimator are based on the fact that Var(σ̂^2 | X, Σ) can be calculated explicitly. For the particular situation of interest this variance is obtained using the following result from Plackett (1960, Eq. (2), p. 16), which we state in the following lemma:

Lemma 1 (Plackett 1960) Let z be an n × 1 vector of independent random variables with mean 0, common variance σ^2 and common standardised fourth central moment μ_4 := E[z_i^4]/σ^4, and let W denote an arbitrary n × n matrix. It then holds that

Var(z′Wz) = σ^4 ( tr(W^2) + tr(WW′) + (μ_4 − 3) ∑_i W_ii^2 ).   (4)

N.B. The last sum in (4) corresponds to the sum of the squared diagonal elements of W, which should not be confused with tr(W^2). The proof of Lemma 1 is given in Plackett (1960), and a more general version can be found, without proof, in Atiqullah (1962a), whose proof can be found in Seber and Lee (2003, Thm. 1.6). See also the derivation of the MSE expressions given in Dette et al. (1998) and Li (2012, Lemma 3).
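Formula (4), as reconstructed above, can be checked by simulation. The sketch below uses uniform errors, for which the standardised fourth moment is 9/5, and an arbitrary non-symmetric W; the matrix size and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
W = rng.standard_normal((n, n))   # arbitrary W, not necessarily symmetric

# z_i i.i.d. uniform on (-sqrt(3), sqrt(3)): mean 0, variance 1, E z_i^4 = 9/5.
sigma2, mu4 = 1.0, 1.8

# Right-hand side of (4) with sigma^2 = 1 and standardised fourth moment mu4.
theory = sigma2**2 * (np.trace(W @ W) + np.trace(W @ W.T)
                      + (mu4 - 3.0) * np.sum(np.diag(W) ** 2))

Z = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(200_000, n))
q = np.einsum('ij,jk,ik->i', Z, W, Z)   # z'Wz for each simulated z
print(theory, q.var())                  # the two values should be close
```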
Further, the main objective of the current note is to obtain finite sample bounds for the variance of the variance estimator of the general linear model defined by (1) when the covariates may be both random and non-random. In order to prove such bounds we will make use of the following lemma:

Lemma 2 Given that W from Lemma 1 is idempotent and symmetric, and may be decomposed as W = I − U with U idempotent and symmetric of rank p_u, it follows that the variance expression (4) may be bounded, for p_u ≤ n/2, according to

ν_n − κ_n ≤ Var(z′Wz) ≤ ν_n if μ_4 ≥ 3, and ν_n ≤ Var(z′Wz) ≤ ν_n − κ_n if μ_4 < 3,

where ν_n := σ^4(μ_4 − 1)(n − p_u) and κ_n := σ^4(μ_4 − 3)p_u; for p_u > n/2 the bound ν_n − κ_n is replaced by 2σ^4(n − p_u).

Note that Lemma 2 only relies on Lemma 1 in terms of the explicit form of the variance given by (4), a fact which will be exploited further in Example 3 below. The usefulness of Lemma 2 becomes apparent when the decomposition of W is in terms of a U with p_u being a constant (much) smaller than n. This is what will be exploited in Corollaries 1 and 2, and it is the motivation for why the bounds in Lemma 2 are expressed in terms of p_u instead of p_w := n − p_u. Further, note that the split between p_u ≤ n/2 and p_u > n/2 ascertains that all bounds are positive.

Proof of Lemma 2 Since W is idempotent and symmetric, tr(W^2) = tr(WW′) = n − p_u, and (4) reduces to

Var(z′Wz) = 2σ^4(n − p_u) + σ^4(μ_4 − 3) ∑_i W_ii^2 = 2σ^4(n − p_u) + σ^4(μ_4 − 3) ∑_i (1 − U_ii)^2.   (5)

Further, expanding the square and noting that ∑_i U_ii = p_u yields

∑_i (1 − U_ii)^2 = n − 2p_u + ∑_i U_ii^2.

Now, since U is idempotent and symmetric it follows that U_ii = ∑_j U_ij^2 ≥ U_ii^2, so that 0 ≤ U_ii ≤ 1, and in turn that

0 ≤ ∑_i U_ii^2 ≤ ∑_i U_ii = p_u.

Inserting the lower and upper bound into (5) finishes the proof of the lemma for p_u ≤ n/2. The corresponding bounds for p_u > n/2 are obtained by noting that W_ii^2 ≥ 0.
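Because (4) is an explicit function of the diagonal of W when W is symmetric and idempotent, the bounds of Lemma 2 can be verified exactly, without simulation. The sketch below assumes the reconstructed definitions ν_n = σ^4(μ_4 − 1)(n − p_u) and κ_n = σ^4(μ_4 − 3)p_u from the statement of the lemma, together with an illustrative μ_4 = 6.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p_u = 20, 4                 # the p_u <= n/2 branch of the lemma
sigma2, mu4 = 1.0, 6.0         # mu4 = 6 corresponds to, e.g., Laplace-type errors

# Symmetric idempotent U: orthogonal projection onto a random p_u-dim subspace.
Q, _ = np.linalg.qr(rng.standard_normal((n, p_u)))
U = Q @ Q.T
W = np.eye(n) - U              # symmetric, idempotent, rank n - p_u

# Exact variance from (4): for symmetric idempotent W both traces equal n - p_u.
var_exact = sigma2**2 * (2 * (n - p_u) + (mu4 - 3.0) * np.sum(np.diag(W) ** 2))

nu_n = sigma2**2 * (mu4 - 1.0) * (n - p_u)
kappa_n = sigma2**2 * (mu4 - 3.0) * p_u
print(nu_n - kappa_n <= var_exact <= nu_n)   # True when mu4 >= 3
```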
We may now state our main result:

Proposition 1 Consider the general linear model given by (1) and let (i) Σ be a random symmetric almost surely positive definite n × n matrix and X be a random n × p_x matrix of almost surely full column rank, (ii) the error term components defining the random n × 1 vector e be independent with, conditional on X and Σ, mean 0, variance 1 and common fourth central moments μ_4.

It then holds that, for p_x ≤ n/2,

ν_n − κ_n ≤ Var(σ̂^2 | X, Σ) ≤ ν_n if μ_4 ≥ 3, and ν_n ≤ Var(σ̂^2 | X, Σ) ≤ ν_n − κ_n if μ_4 < 3,

and the same bounds hold for the unconditional variance Var(σ̂^2), where ν_n := σ^4(μ_4 − 1)/(n − p_x) and κ_n := σ^4(μ_4 − 3)p_x/(n − p_x)^2.

The proof of the conditional bounds in Proposition 1 is based on the fact that σ̂^2 may be expressed according to (3), together with an application of Lemmas 1 and 2. Moreover, note that the conditional variance bounds for σ̂^2 are deterministic regardless of whether the covariates X are random or not. This fact, combined with the fact that σ̂^2 is unbiased conditional on X and Σ, together with an application of the variance decomposition, proves the unconditional variance bounds in Proposition 1. The full proof of Proposition 1 is given in the appendix.
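Analogously to Lemma 2, the conditional bounds of Proposition 1 can be checked deterministically for any given X and Σ, since the conditional variance is an explicit function of the diagonal of K. The sketch below assumes the reconstructed κ_n = σ^4(μ_4 − 3)p_x/(n − p_x)^2 together with an illustrative diagonal Σ and μ_4 = 6.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p_x, sigma2, mu4 = 40, 2, 2.0, 6.0

X = rng.standard_normal((n, p_x))
Sigma = np.diag(rng.uniform(0.5, 2.0, n))
S_inv = np.linalg.inv(Sigma)
S_mhalf = np.diag(1.0 / np.sqrt(np.diag(Sigma)))   # Sigma^{-1/2} for diagonal Sigma

# K = I - V with V = Sigma^{-1/2} X (X' Sigma^{-1} X)^{-1} X' Sigma^{-1/2}.
V = S_mhalf @ X @ np.linalg.solve(X.T @ S_inv @ X, X.T) @ S_mhalf
K = np.eye(n) - V

# Exact conditional variance of sigma2_hat, combining (3) with Lemma 1.
p_k = n - p_x
var_cond = (sigma2**2 / p_k**2) * (2 * p_k + (mu4 - 3.0) * np.sum(np.diag(K) ** 2))

nu_n = sigma2**2 * (mu4 - 1.0) / (n - p_x)
kappa_n = sigma2**2 * (mu4 - 3.0) * p_x / (n - p_x) ** 2
print(nu_n - kappa_n <= var_cond <= nu_n)   # True: the bounds are deterministic
```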
We can also state a finite sample upper bound on the difference between the conditional and unconditional variances, together with convergence of these, using the bounds in Proposition 1.

Corollary 1 If the assumptions of Proposition 1 hold, then |Var(σ̂^2 | X, Σ) − Var(σ̂^2)| ≤ |κ_n| for all n with p_x ≤ n/2, and hence

Var(σ̂^2 | X, Σ) − Var(σ̂^2) → 0   (7)

as n → ∞.
Remark 1 It is of course possible to state the rate of convergence in (7) by noting that |κ_n| = σ^4|μ_4 − 3|p_x/(n − p_x)^2 = O(n^{-2}) holds for all n. Since Corollary 1 is a convergence result in terms of n where p_x is a fixed quantity, the primary interest is in the situation where p_x ≤ n/2.
By Corollary 1 it follows that Var(σ̂^2) → 0 uniformly as n → ∞, and therefore we can state the following result.
Corollary 2 If the assumptions of Proposition 1 hold, then σ̂^2 → σ^2 in mean square as n → ∞.

Remark 2 The estimator σ̂^2 given by (2) is a special case of the variance parameter estimator σ^2_ON defined for the nonparametric mixed effects models analysed in Li (2012), see their Eq. (11). One can also note that the mixed effects models treated in Li (2012) are an extension of the model class treated in Dette et al. (1998). An alternative proof of the consistency of σ̂^2 stated in Corollary 2 in the case with fixed covariates hence follows from Li (2012, Thm. 1) by neglecting the random effects part of their model. Further, as already commented upon above, in Dette et al. (1998) and Li (2012) the focus is on the MSE of the variance parameter estimator. Moreover, their MSE expressions are stated in terms of asymptotic equivalence, i.e. using "o(1/n)". Thus, it is not straightforward to use their expressions to obtain bounds similar to those provided by Proposition 1 (or Lemma 2) without carefully inspecting the o(1/n) terms.

Examples
Example 1 Consider the following vector autoregressive model:

X_{t+1} = X_t β_t + σ_t Σ_t^{1/2} e_{t+1},   (8)

where all dimensions are in accordance with the linear model given by (1). Model (8) is closely connected to the distribution-free Chain-Ladder model, which is a widely used actuarial reserving model, where t denotes time and X_t denotes the amount of payments made in the time interval [0, t], see e.g. Mack (1993). More specifically, the Chain-Ladder model assumes that X_t is n × 1, (X_t)_i ≥ 0 and that Σ_t := diag(X_t).
In this situation we may analyse the variance of the variance parameter estimator, σ̂_t^2, both conditional on X_t as well as unconditionally, using Proposition 1 and its corollaries. In either situation the above results provide us with finite sample bounds as well as ascertaining that σ̂_t^2 is a (mean square) consistent estimator of σ_t^2. A practical application of the finite sample bounds is for the Chain-Ladder model w.r.t. the discussion of the appropriateness of using conditional versus unconditional prediction error, see e.g. Buchwalder et al. (2006) and Lindholm et al. (2019). Moreover, the relevance of using finite sample bounds is also apparent in many real-world insurance applications with highly aggregated data, where the sample size often is around n = 10 with p_x = 1 ≪ n. Further, the Chain-Ladder model assumes a diagonal structure of Σ_t which, even though it makes diag(V) = diag(P), does not make the variance bounds any more explicit.
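As a toy illustration of this example (with an assumed Mack-type form of the recursion (8) and invented parameter values), one development step can be simulated and the GLS estimators computed; note that with Σ_t = diag(X_t) the GLS estimator of the development factor reduces to the classical chain-ladder factor.

```python
import numpy as np

rng = np.random.default_rng(5)
n, f, sigma2 = 10, 1.4, 0.05    # n parallel observations, development factor f

# One development step of a Mack-type recursion: X_next = f*X + sigma*sqrt(X)*e,
# i.e. p_x = 1 and Sigma_t = diag(X_t) in the notation of model (1).
X_t = rng.uniform(50.0, 150.0, n)
e = rng.standard_normal(n)
X_next = f * X_t + np.sqrt(sigma2) * np.sqrt(X_t) * e

# GLS with Sigma_t = diag(X_t): f_hat is the classical chain-ladder factor and
# s2_hat is Mack's variance estimator, an instance of (2) with n - p_x = n - 1.
f_hat = X_next.sum() / X_t.sum()
s2_hat = np.sum((X_next - f_hat * X_t) ** 2 / X_t) / (n - 1)
print(f_hat, s2_hat)
```

With only n = 10 observations, which is typical for aggregated reserving data, the finite sample bounds of Proposition 1 are directly relevant for assessing the accuracy of s2_hat.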
For more on other models than the distribution free Chain-Ladder model that are used in an insurance context, see e.g. Kremer (1984) and Lindholm et al. (2017).
Example 2 One example of a more restricted sub-class of the GLMs from (1) is obtained when we assume that Σ = I and explicitly include an intercept, i.e. X contains a column of ones. In this situation it is shown in Seber and Lee (2003, Eq. 10.12) that 1/n ≤ V_ii ≤ 1, which makes it possible to tighten e.g. the lower bound in Lemma 2 when μ_4 > 3 and, hence, to tighten the corresponding bound in Proposition 1 to ν_n − κ_n + σ^4(μ_4 − 3)/(n(n − p_x)^2), and analogously for the upper bound when μ_4 ≤ 3.
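The leverage bounds quoted from Seber and Lee (2003, Eq. 10.12) are easy to confirm numerically for Σ = I with an intercept; the design matrix below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p_x = 25, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p_x - 1))])  # with intercept

# With Sigma = I, V reduces to the hat matrix X (X'X)^{-1} X'.
V = X @ np.linalg.solve(X.T @ X, X.T)
d = np.diag(V)
print(d.min(), d.max())   # the leverages satisfy 1/n <= V_ii <= 1
```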

Example 3
In order to illustrate the usefulness of the techniques of the present paper we will now exploit the essentially model-free aspects of Lemma 1 and, in particular, Lemma 2. As already noted after stating Lemma 2, the results of Lemma 2 are purely algebraic, given that one has somehow arrived at a variance expression of the form (4). Purely as an illustration, consider the following special case of a mixed effects model introduced in Atiqullah (1962b):

y = Aθ + Bτ + ε,   (9)

where y is n × 1, and where θ, which is p_a × 1, and ε, which is n × 1, are independent random vectors whose components all have mean 0 and variances σ_θ^2 and σ^2, respectively, together with the kurtoses μ_4^θ (= γ_{2θ} + 3) and μ_4 (= γ_2 + 3), and where τ is a p_b × 1 vector of fixed effects. Further, A and B are assumed to be of full column rank p_a and p_b, respectively. Note that, compared with (1), the model given by (9) is defined in terms of Σ = I, in accordance with Atiqullah (1962b). Further, unbiased estimators of σ_θ^2 and σ^2, given in Atiqullah (1962b) in terms of quadratic forms defined by matrices G, H and R, are stated in (10) and (11), where p_g, p_h and p_r denote the ranks of G, H and R. Moreover, in Atiqullah (1962b) it is stated that the introduced variance parameter estimators have the variances given by (12) and (13), where ξ = σ_θ^2/σ^2 and where h and r correspond to the vectors of diagonal elements of H and R. Regarding σ̂^2, we may start off by noting that (13) may be re-written on the same form as (4) from Lemma 1 and, since R is idempotent, Lemma 2 applies. Moreover, by using the same arguments as those used in the proof of Proposition 1, it follows that p_r = n − p_a − p_b, where both p_a and p_b are constants, thus ascertaining L^2-consistency, and the corollaries of Proposition 1 hold as well.
Concerning the variance of σ̂_θ^2, the expression in (12) cannot be approached using Lemma 2. It is, however, possible to make use of trace inequalities to show that, when e.g. μ_4^θ, μ_4 > 3, Var(σ̂_θ^2) is bounded from below by a strictly positive constant for all n, see (14), where p_u := rank(U) is a constant; the details are given in the appendix. That is, σ̂_θ^2 is not L^2-consistent. It is, however, possible to obtain sharper finite sample bounds on Var(σ̂_θ^2) by using other trace inequalities, a matter not pursued further in the present note. For other examples and a deeper discussion concerning the performance of estimators of variance components, see e.g. Christensen (2019, Ex. 5.1.1 and Sec. 5.4). Even though the above mixed effects example is only intended for illustration purposes, without any claim of practical relevance, it is still worth commenting on the relation to the results in Li (2012). In Li (2012) mixed effects nonparametric regression is considered w.r.t. the MSE of the total variance parameter estimator, that is, the situation with σ = σ_θ. Consequently, the decomposition from Atiqullah (1962b) is not covered as a special case. Moreover, by setting σ = σ_θ above, the resulting estimator is not contained in the estimators covered in Li (2012, Thm. 1) either. For more on differences compared with Li (2012) (and Dette et al. 1998), see Remark 2 above.
Acknowledgements Open access funding provided by Stockholm University. The authors acknowledge beneficial discussion with Rolf Sundberg. The authors are also most grateful to Ronald Christensen for the comments on an earlier version of the paper which resulted in the split between p_u ≤ n/2 and p_u > n/2 in Lemma 2, and for pointing out that we had missed that R from the current Example 3 is idempotent, which made it possible to more easily connect Example 3 to the results in the current paper and to the reference Christensen (2019). We also want to acknowledge the comments from an anonymous reviewer which pointed out a number of typos and provided comments which we believe have improved the paper.

Compliance with ethical standards
Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix A. Proofs
Proof of Proposition 1 We start by reformulating the linear model of (1) as follows:

ỹ = X̃β + σe, where ỹ := Σ^{-1/2}y and X̃ := Σ^{-1/2}X.

Now let ê := ỹ − X̃β̂. It then follows that σ̂^2 = ê′ê/(n − p_x), which may be expressed in terms of e by noting that

ê = Kỹ = K(X̃β + σe) = σKe,

since KX̃ = 0. Hence, we have σ̂^2 = σ^2/(n − p_x) e′K′Ke.
Further, since V is idempotent and symmetric it follows that K inherits these properties as well, and we arrive at

σ̂^2 = σ^2/(n − p_x) e′Ke.

Finally, since V is idempotent we know that rank(V) = tr(V) = tr(Σ^{-1/2}PΣ^{1/2}) = tr(P), and by repeating this argument it follows that tr(P) = tr(I_{p_x}) = p_x, that is,

p_k = rank(K) = n − p_x.

This ascertains that the conditions of Lemmas 1 and 2 are fulfilled, which proves the conditional part of Proposition 1.
The unconditional variance bounds are obtained by first noting that σ̂^2 is unbiased and then applying a variance decomposition. This argument of course also holds for biased estimators, as long as E[σ̂^2 | X, Σ] is non-random, e.g. when the normalization constant is n rather than n − p_x.

Proof of (14)
We will now show that σ̂_θ^2 is not in general L^2-consistent. If we restrict our attention to μ_4, μ_4^θ ≥ 3, it follows that (12) may be bounded according to

Var(σ̂_θ^2) ≥ 2σ^4((p_r p_h + p_h^2) + p_r ξ(ξ tr(U^2) + 2 tr(U))) / (p_r tr(U)^2),

since the second factor consists of quadratic forms and non-negative functions/constants. Further, note that by construction it is assumed that (A′LA)^{-1} exists, i.e. A′LA is positive definite and of full column rank p_a = rank(A), from which it follows that 0 < p_h = p_u = p_a = O(1) as a function of n. Hence the right-hand side above is bounded from below by a strictly positive constant for all n, which proves (14).

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.