
A test for heteroscedasticity in functional linear models

Original Paper · Published in TEST

Abstract

We propose a new test to validate the assumption of homoscedasticity in a functional linear model. We consider a minimum distance measure of heteroscedasticity in functional data, which is zero in the case where the variance is constant and positive otherwise. We derive an explicit form of the measure, propose an estimator for the quantity, and show that an appropriately standardized version of the estimator is asymptotically normally distributed under both the null (homoscedasticity) and alternative hypotheses. We extend this result for residuals from functional linear models and develop a bootstrap diagnostic test for the presence of heteroscedasticity under the postulated model. Moreover, our approach also allows testing for “relevant” deviations from the homoscedastic variance structure and constructing confidence intervals for the proposed measure. We investigate the performance of our method using extensive numerical simulations and a data example.



Author information


Corresponding author

Correspondence to Pramita Bagchi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 237 KB)

A Technical details

A.1 Proof of Theorem 2

First, consider the sequence of random elements \((T_n) \in L^2([0,1]^2)\) defined as

$$\begin{aligned} T_n(u,v) = n^{1/2}\Big [A_n(u,v) - B_n^2(u,v) - \int _0^1 c_t^2(u,v)\text {d}t + \Big \{\int _0^1 c_t(u,v) \text {d}t\Big \}^2\Big ]. \end{aligned}$$
(A.1)

We show that this sequence of random processes converges in distribution to a zero-mean Gaussian process \({\mathcal {G}}\) in \(L^2([0,1]^2)\) with covariance kernel \(\kappa \) as \(n \rightarrow \infty \).

To show this, we will use Theorem 2 from Cremers and Kadelka (1986) and show that the finite-dimensional distributions of \(T_n\) converge to the finite-dimensional distributions of \({\mathcal {G}}\) as \(n \rightarrow \infty \) and that Condition (4.3) from Cremers and Kadelka (1986) is satisfied. By Remark 3 from Cremers and Kadelka (1986), it is enough to show that there exists an integrable function \(f:[0,1]^2 \mapsto [0,\infty )\) such that

$$\begin{aligned}&E (\vert T_n(u,v)\vert ^2) \le f(u,v), \end{aligned}$$
(A.2)
$$\begin{aligned}&E (\vert T_n(u,v)\vert ^2) \rightarrow E( \vert {\mathcal {G}}(u,v) \vert ^2), \end{aligned}$$
(A.3)

for all \(n \in {\mathbb {N}}\) and all \(0 \le u,v \le 1\), where the convergence in (A.3) is as \(n \rightarrow \infty \).

Equation (A.2) follows from our assumption \(E\{X_j^4(u)\} \le K(u)\) for some integrable function K, and (A.3) is a direct consequence of the finite-dimensional distributional convergence established below.

To establish this finite-dimensional convergence, it is enough to show that

$$\begin{aligned} \{T_n(u_1,v_1),\dots ,T_n(u_d,v_d)\} \rightarrow \{{\mathcal {G}}(u_1,v_1),\dots , {\mathcal {G}}(u_d,v_d)\} \end{aligned}$$
(A.4)

in distribution as \(n \rightarrow \infty \) for all \(d \ge 1\) and almost every \(0 \le u_1,\dots ,u_d, v_1,\dots ,v_d \le 1\).

To this end, we first show that the vector

$$\begin{aligned} I_n(u_1,v_1,\dots , u_d, v_d) = n^{1/2}\left( \begin{array}{l} A_n(u_1,v_1) - \int _0^1 c_t^2(u_1,v_1)\text {d}t\\ \vdots \\ A_n(u_d,v_d) - \int _0^1 c_t^2(u_d,v_d)\text {d}t\\ B_n(u_1,v_1) - \int _0^1 c_t(u_1,v_1)\text {d}t \\ \vdots \\ B_n(u_d,v_d) - \int _0^1 c_t(u_d,v_d)\text {d}t \end{array}\right) \end{aligned}$$

converges in distribution to a multivariate normal variable as \(n \rightarrow \infty \). An application of the multivariate delta method then proves (A.4).

Without loss of generality, we will prove the convergence of \(I_n\) for \(d=1\). The general case can be established similarly with additional notation. First,

$$\begin{aligned} I_n(u,v) = n^{1/2}\left( \begin{array}{l} A_n(u,v) - E\{A_n(u,v)\}\\ B_n(u,v) - E\{B_n(u,v)\} \end{array}\right) + o(1) = {\tilde{I}}_n(u,v) + o(1). \end{aligned}$$

Therefore, it is enough to show that the first term converges in distribution to a multivariate normal random variable. To prove the last assertion, we use the Cramér–Wold device and show that for any \(a \in {\mathbb {R}}^2\), the linear combination \(a'{\tilde{I}}_n\) converges in distribution to a normal random variable as \(n \rightarrow \infty \). Now,

$$\begin{aligned} a'{\tilde{I}}_n = n^{-1/2}\sum _{j=1}^n\{Y_j - E(Y_j)\}, \end{aligned}$$

where \(Y_j = a_1R_j(u) R_j(v) R_{j+2}(u) R_{j+2}(v) + a_2R_j(u) R_j(v)\). The variables \(Y_j - E(Y_j)\) are m-dependent with \(m=4\). We can use the m-dependent central limit theorem (see the last Corollary from Orey 1958) to establish the claimed convergence. The justification for the application of this Corollary is presented in the supplement.
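As a brief reading aid (the full verification is given in the supplement), the m-dependence can be seen from the difference structure of the \(R_j\): assuming, as in the proof of Lemma 2 below, that \(R_j(u) = X_j(u) - X_{j-1}(u)\) and that the curves \(X_j\) are independent across \(j\), each \(Y_j\) is a function of the four curves \(X_{j-1}, X_j, X_{j+1}, X_{j+2}\) only, say

$$\begin{aligned} Y_j = h(X_{j-1}, X_j, X_{j+1}, X_{j+2}) \end{aligned}$$

for a generic map \(h\) determined by \(a\), \(u\) and \(v\). Consequently, \(Y_j\) and \(Y_{j+k}\) involve disjoint sets of curves, and are therefore independent, whenever \(k > 3\), which is consistent with the stated m-dependence.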

Finally, write \(I_n(u_1,v_1,\dots ,u_d,v_d) = n^{1/2}(V_n - v_0)\), where \(V_n\) denotes the vector with entries \(A_n(u_i,v_i)\) and \(B_n(u_i,v_i)\), \(i = 1,\dots ,d\), and \(v_0\) denotes the corresponding vector of centering terms \(\int _0^1 c_t^2(u_i,v_i)\text {d}t\) and \(\int _0^1 c_t(u_i,v_i)\text {d}t\). Then

$$\begin{aligned} \{T_n(u_1,v_1),\dots ,T_n(u_d,v_d)\} = n^{1/2}\big [g_d(V_n) - g_d(v_0)\big ], \end{aligned}$$

where \(g_d: {\mathbb {R}}^{2d} \rightarrow {\mathbb {R}}^d\) is defined as \(g_d(x_1,\dots ,x_d,y_1,\dots ,y_d) = (x_1-y_1^2,\dots ,x_d-y_d^2)\). Thus, an application of the multivariate delta method (Lehmann and Casella 1998) proves (A.4).

To obtain the covariance kernel, say \(\kappa \{(u_1,v_1),(u_2,v_2)\}\), of the Gaussian process \({\mathcal {G}}\), we consider \(I_n(u_1,v_1,u_2,v_2)\). The covariance kernel is essentially the off-diagonal entry of the limiting covariance matrix of \(\{{\mathcal {G}}(u_1,v_1),{\mathcal {G}}(u_2,v_2)\}\). It can be calculated by first computing the asymptotic covariance matrix of \(I_n(u_1,v_1,u_2,v_2)\), which is a \(4 \times 4\) matrix, and then applying the delta method with \(g_2: {\mathbb {R}}^4 \rightarrow {\mathbb {R}}^2\), where \(g_2(x_1,x_2,x_3,x_4) = (x_1 - x_3^2, x_2 - x_4^2)\); see Theorem 8.22, page 61, of Lehmann and Casella (1998). The details of this calculation are given in the supplement.
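Schematically, and only as a reading aid for the calculation deferred to the supplement, if \(\Sigma \) denotes the \(4 \times 4\) limiting covariance matrix of \(I_n(u_1,v_1,u_2,v_2)\), the delta method yields

$$\begin{aligned} \{{\mathcal {G}}(u_1,v_1),{\mathcal {G}}(u_2,v_2)\} \sim N_2\big (0, J\Sigma J^{\top }\big ), \qquad J = \left( \begin{array}{cccc} 1 & 0 & -2\int _0^1 c_t(u_1,v_1)\text {d}t & 0\\ 0 & 1 & 0 & -2\int _0^1 c_t(u_2,v_2)\text {d}t \end{array}\right) , \end{aligned}$$

where \(J\) is the Jacobian of \(g_2\) evaluated at the vector of centering terms, so that \(\kappa \{(u_1,v_1),(u_2,v_2)\}\) is the off-diagonal entry of \(J\Sigma J^{\top }\).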

Finally, \(n^{1/2}({\widehat{M}}_n - M_0^2) = F(T_n)\), where \(F: L^2([0,1]^2) \rightarrow {\mathbb {R}}\) is a continuous map defined by

$$\begin{aligned} F(g) = \int _0^1 \int _0^1 g (u,v)\text {d}u\text {d}v. \end{aligned}$$
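That \(F\) is indeed continuous follows from the Cauchy–Schwarz inequality: for any \(g \in L^2([0,1]^2)\),

$$\begin{aligned} \vert F(g)\vert \le \Big \{\int _0^1 \int _0^1 g^2(u,v)\text {d}u\text {d}v\Big \}^{1/2}, \end{aligned}$$

so \(F\) is a bounded, and hence continuous, linear functional on \(L^2([0,1]^2)\).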

An application of the continuous mapping theorem gives

$$\begin{aligned} n^{1/2}({\widehat{M}}_n - M^2_0) \rightarrow \int _0^1 \int _0^1 {\mathcal {G}}(u,v)\text {d}u \text {d}v \end{aligned}$$

in distribution as \(n \rightarrow \infty \), which in turn implies

$$\begin{aligned} n^{1/2}\big ({\widehat{M}}_n - M^2_0\big ) \rightarrow N\Big [0,\int _0^1\int _0^1\int _0^1\int _0^1 \kappa \{(u_1,v_1),(u_2,v_2)\}\text {d}u_1\text {d}v_1\text {d}u_2\text {d}v_2\Big ] \end{aligned}$$

in distribution as \(n \rightarrow \infty \).
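As a reading aid, note that, by the Cauchy–Schwarz inequality, for each fixed \((u,v)\)

$$\begin{aligned} \int _0^1 c_t^2(u,v)\text {d}t - \Big \{\int _0^1 c_t(u,v) \text {d}t\Big \}^2 \ge 0, \end{aligned}$$

with equality if and only if \(c_t(u,v)\) does not depend on \(t\). Hence the centering term appearing in (A.1), integrated over \((u,v)\), vanishes exactly under homoscedasticity and is positive otherwise, in line with the description of the minimum distance measure in the abstract.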

A.2 Proof of Lemma 2

Let \({\widetilde{A}}_n\) and \({\widetilde{B}}_n\) be the versions of \(A_n\) and \(B_n\) defined in (4.1), i.e., we define

$$\begin{aligned} {\widehat{R}}_j(u) = \hat{X}_j(u) - \hat{X}_{j-1}(u) \end{aligned}$$

and

$$\begin{aligned} {\widetilde{A}}_n(u,v) =&\frac{1}{4(n-3)} \sum _{j=2}^{n-2} {\widehat{R}}_j(u) {\widehat{R}}_j(v) {\widehat{R}}_{j+2}(u) {\widehat{R}}_{j+2}(v),\\ {{\widetilde{B}}}_n(u,v) =&\frac{1}{2(n-1)}\sum _{j=2}^{n} {\widehat{R}}_j(u) {\widehat{R}}_j(v). \end{aligned}$$
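For concreteness, the following is a minimal numerical sketch of how \({\widetilde{A}}_n\), \({\widetilde{B}}_n\) and the plug-in quantity \(\int _0^1\int _0^1 \{{\widetilde{A}}_n(u,v) - {\widetilde{B}}_n^2(u,v)\}\text {d}u\text {d}v\) could be computed when the estimated curves \(\hat{X}_1,\dots ,\hat{X}_n\) are observed on a common equispaced grid. The function name, the array layout and the Riemann-sum approximation of the integrals are illustrative choices, not part of the paper, and the standardization and bootstrap calibration of the test are not reproduced here.

```python
import numpy as np

def residual_statistic(X_hat):
    """Riemann-sum approximation of the plug-in quantity
    int_0^1 int_0^1 { A~_n(u,v) - B~_n(u,v)^2 } du dv.

    X_hat : (n, p) array whose j-th row is the estimated curve X^_j
            evaluated on an equispaced grid of p points in [0, 1].
    """
    n, p = X_hat.shape
    # successive differences R^_j = X^_j - X^_{j-1}, j = 2, ..., n
    R = X_hat[1:] - X_hat[:-1]          # shape (n - 1, p); row 0 is R^_2
    # R^_j(u) R^_j(v) on the grid, shape (n - 1, p, p)
    RR = R[:, :, None] * R[:, None, :]
    # A~_n(u, v): sum over j = 2, ..., n - 2 of R^_j R^_j R^_{j+2} R^_{j+2}
    A = (RR[:-2] * RR[2:]).sum(axis=0) / (4.0 * (n - 3))
    # B~_n(u, v): sum over j = 2, ..., n of R^_j R^_j
    B = RR.sum(axis=0) / (2.0 * (n - 1))
    # grid average approximates the double integral over [0, 1]^2
    return float(np.mean(A - B ** 2))
```

Under a variance profile that changes over \(j\), the returned value is positive on average for large \(n\), while under a constant variance it fluctuates around zero; this is only meant to illustrate the structure of the estimator, not to replace the calibrated test.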

We have to show

$$\begin{aligned} \int _0^1 \int _0^1 {\widetilde{T}}_n(u,v) \text {d}u \text {d}v= \int _0^1 \int _0^1 T_n(u,v) \text {d}u \text {d}v + o_p(1), \end{aligned}$$

where

$$\begin{aligned} {\widetilde{T}}_n(u,v) = n^{1/2}\Big [{\widetilde{A}}_n(u,v) - {\widetilde{B}}_n^2(u,v) - \int _0^1c_t^2(u,v)\text {d}t + \Big \{\int _0^1 c_t(u,v) \text {d}t\Big \}^2 \Big ], \end{aligned}$$

and \(T_n\) is defined as in (A.1). In particular, we will show that

$$\begin{aligned} n^{1/2}\int _0^1 \int _0^1\Big \{{\widetilde{A}}_n(u,v) - A_n(u,v)\Big \}\text {d}u\text {d}v =o_p(1), \end{aligned}$$
(A.5)
$$\begin{aligned} n^{1/2}\int _0^1 \int _0^1\Big \{{\widetilde{B}}^2_n(u,v) - {B}^2_n(u,v)\Big \}\text {d}u\text {d}v = o_p(1). \end{aligned}$$
(A.6)

We will prove (A.5), and the proof of (A.6) is similar. To this end, we first write

$$\begin{aligned} {\widehat{R}}_j(u) = R_j(u) + r_{j,n}(u) \end{aligned}$$

and note that, by assumption, \(\sup _{j}\sup _{u \in [0,1]}\vert r_{j,n}(u)\vert = o_p(n^{-1/4})\). We write

$$\begin{aligned} n^{1/2}\Big \{{\widetilde{A}}_n(u,v) - A_n(u,v)\Big \} =&n^{1/2} \frac{1}{4(n-3)}\sum _{j=2}^{n-2}r_{j+2,n}(u) R_j(u)R_j(v)R_{j+2}(v)\\&+ n^{1/2} \frac{1}{4(n-3)}\sum _{j=2}^{n-2} r_{j+2,n}(v) R_j(u)R_j(v)R_{j+2}(u)\\&+ n^{1/2}\frac{1}{4(n-3)}\sum _{j=2}^{n-2} r_{j,n}(v)R_j(u)R_{j+2}(u)R_{j+2}(v)\\&+ n^{1/2} \frac{1}{4(n-3)}\sum _{j=2}^{n-2} r_{j,n}(u)R_j(v)R_{j+2}(u)R_{j+2}(v)\\&+ o_p(1). \end{aligned}$$
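The \(o_p(1)\) term in the last display collects the remaining terms of the expansion, each of which contains at least two of the remainders \(r_{\cdot ,n}\). As a sketch of why such terms are negligible under the bound \(\sup _j\sup _{u}\vert r_{j,n}(u)\vert = o_p(n^{-1/4})\) and the moment assumptions on the curves, a representative term satisfies

$$\begin{aligned}&\Big \vert n^{1/2} \frac{1}{4(n-3)}\sum _{j=2}^{n-2} r_{j,n}(u)\, r_{j+2,n}(v)\, R_j(v)R_{j+2}(u)\Big \vert \\&\quad \le n^{1/2}\Big \{\sup _j \sup _{u} \vert r_{j,n}(u)\vert \Big \}^2 \times \frac{1}{4(n-3)}\sum _{j=2}^{n-2} \vert R_j(v)R_{j+2}(u)\vert = n^{1/2}\, o_p(n^{-1/2})\, O_p(1) = o_p(1), \end{aligned}$$

and terms involving three or four remainder factors are of even smaller order.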

Then,

$$\begin{aligned}&\left| n^{1/2} \frac{1}{4(n-3)}\sum _{j=2}^{n-2} r_{j+2,n}(u) R_j(u)R_j(v)R_{j+2}(v) \right| \\ \le&\sup _j \sup _{u} \vert r_{j+2,n}(u) \vert \times n^{1/2} \left| \frac{1}{4(n-3)} \sum _{j=2}^{n-2} R_j(u)R_j(v)R_{j+2}(v)\right| . \end{aligned}$$

Finally, since \(R_j\) and \(R_{j+2}\) are independent (they involve disjoint sets of curves),

$$\begin{aligned}&E\Big \{\frac{1}{4(n-3)}\sum _{j=2}^{n-2} R_j(u)R_j(v)R_{j+2}(v)\Big \}\\&\quad = \frac{1}{4(n-3)}\sum _{j=2}^{n-2} E\Big \{R_j(u)R_j(v)\Big \} E\Big \{R_{j+2}(v)\Big \} = O(n^{-2\gamma }). \end{aligned}$$

Variance calculations similar to the calculation of \(\nu ^2\) show

$$\begin{aligned} \text {Var}\Big \{n^{1/2}\frac{1}{4(n-3)}\sum _{j=2}^{n-2} R_j(u)R_j(v)R_{j+2}(v)\Big \} = {O(1)}. \end{aligned}$$

Similar bounds can be derived for the other three terms. Noting that \(\sup _j\sup _{u} \vert r_{j+2,n}(u) \vert = o_p(1)\), we conclude that for any \(0 \le u,v \le 1,\)

$$\begin{aligned} n^{1/2}\Big \{{\widetilde{A}}_n(u,v) - A_n(u,v)\Big \} = o_p(1). \end{aligned}$$

Asymptotic tightness of \(n^{1/2}\Big \{{\widetilde{A}}_n(u,v) - A_n(u,v)\Big \}\) follows from condition (4.3) of Cremers and Kadelka (1986) as in the proof of Theorem 2, which in turn proves (A.5) by the continuous mapping theorem.


About this article


Cite this article

Cameron, J., Bagchi, P. A test for heteroscedasticity in functional linear models. TEST 31, 519–542 (2022). https://doi.org/10.1007/s11749-021-00786-8

