# Smoothing noisy data with spline functions

Estimating the correct degree of smoothing by the method of generalized cross-validation


## Summary

Smoothing splines are well known to provide nice curves which smooth discrete, noisy data. We obtain a practical, effective method for estimating the optimum amount of smoothing from the data. Derivatives can be estimated from the data by differentiating the resulting (nearly) optimally smoothed spline.
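The procedure summarized above can be sketched numerically. The following is a minimal discrete analog, not the paper's exact polynomial-spline construction: it replaces the roughness integral with a second-difference penalty (the discrete stand-in for \(\int (f'')^2\) when *m* = 2), forms the influence matrix *A*(λ), scores each candidate λ by the generalized cross-validation criterion \(V(\lambda) = \frac{1}{n}\|(I - A(\lambda))y\|^2 / [\frac{1}{n}\operatorname{Tr}(I - A(\lambda))]^2\), and keeps the fit at the minimizing λ. The function name `gcv_smooth` and the test signal are hypothetical, chosen for illustration.

```python
import numpy as np

def gcv_smooth(y, lams):
    """Discrete analog of GCV-based spline smoothing (m = 2).

    For each candidate lam, solve the penalized least-squares problem
        min_f (1/n) * ||f - y||^2 + lam * ||D f||^2,
    where D takes second differences (a discrete stand-in for the
    integral of (f'')^2), then pick lam by minimizing the GCV score
        V(lam) = (1/n)||(I - A)y||^2 / [(1/n) Tr(I - A)]^2.
    Returns (V at the chosen lam, chosen lam, smoothed values).
    """
    n = len(y)
    # Second-difference operator D: (n - 2) x n, rows [1, -2, 1].
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    I = np.eye(n)
    best = None
    for lam in lams:
        # Influence matrix A(lam): the smoothed values are A(lam) @ y,
        # from the normal equations (I + n*lam*D'D) f = y.
        A = np.linalg.solve(I + n * lam * D.T @ D, I)
        resid = (I - A) @ y
        V = (resid @ resid / n) / (np.trace(I - A) / n) ** 2
        if best is None or V < best[0]:
            best = (V, lam, A @ y)
    return best

# Hypothetical test signal: a smooth g observed with Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
g = np.sin(2 * np.pi * t)
y = g + 0.1 * rng.standard_normal(50)
V, lam_hat, f_hat = gcv_smooth(y, np.logspace(-8, 0, 30))
print(lam_hat, np.mean((f_hat - g) ** 2))
```

Note that no estimate of the error variance σ² is needed: the GCV score is computed from the residuals and the trace of *I* − *A*(λ) alone, which is the practical appeal of the method.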

We consider the model
$$y_i = g(t_i) + \varepsilon_i, \quad i = 1, 2, \ldots, n, \quad t_i \in [0, 1],$$
where \(g \in W_2^{(m)} = \{f : f, f', \ldots, f^{(m-1)} \text{ abs. cont.}, f^{(m)} \in \mathcal{L}_2[0,1]\}\), and the \(\{\varepsilon_i\}\) are random errors with \(E\varepsilon_i = 0\), \(E\varepsilon_i\varepsilon_j = \sigma^2\delta_{ij}\). The error variance \(\sigma^2\) may be unknown. As an estimate of \(g\) we take the solution \(g_{n,\lambda}\) to the problem: find \(f \in W_2^{(m)}\) to minimize
$$\frac{1}{n}\sum\limits_{j = 1}^n (f(t_j) - y_j)^2 + \lambda \int\limits_0^1 (f^{(m)}(u))^2\,du.$$
The function \(g_{n,\lambda}\) is a smoothing polynomial spline of degree \(2m - 1\). The parameter λ controls the tradeoff between the "roughness" of the solution, as measured by \(\int_0^1 [f^{(m)}(u)]^2\,du\), and the infidelity to the data, as measured by \(\frac{1}{n}\sum_{j=1}^n (f(t_j) - y_j)^2\), and so governs the average square error \(R(\lambda; g) = R(\lambda)\) defined by
$$R(\lambda ) = \frac{1}{n}\sum\limits_{j = 1}^n {(g_{n,\lambda } (t_j ) - g(t_j ))^2 }.$$

We provide an estimate \(\hat \lambda\), called the generalized cross-validation estimate, for the minimizer of \(R(\lambda)\). The estimate \(\hat \lambda\) is the minimizer of \(V(\lambda)\) defined by
$$V(\lambda ) = \frac{1}{n}\parallel (I - A(\lambda ))y\parallel ^2 /\left[ {\frac{1}{n}{\text{Trace(}}I - A(\lambda ))} \right]^2,$$
where \(y = (y_1, \ldots, y_n)^t\) and \(A(\lambda)\) is the \(n \times n\) matrix satisfying \((g_{n,\lambda}(t_1), \ldots, g_{n,\lambda}(t_n))^t = A(\lambda)y\). We prove that there exists a sequence of minimizers \(\tilde \lambda = \tilde \lambda (n)\) of \(EV(\lambda)\) such that, as the (regular) mesh \(\{t_i\}_{i=1}^n\) becomes finer, \(\mathop {\lim }\limits_{n \to \infty } ER(\tilde \lambda )/\mathop {\min }\limits_\lambda ER(\lambda ) \downarrow 1\). A Monte Carlo experiment with several smooth \(g\)'s was run with \(m = 2\), \(n = 50\), and several values of \(\sigma^2\); typical values of \(R(\hat \lambda )/\mathop {\min }\limits_\lambda R(\lambda )\) were found to be in the range 1.01–1.4. The derivative \(g'\) of \(g\) can be estimated by \(g'_{n,\hat \lambda } (t)\). In the Monte Carlo examples tried, the minimizer of
$$R_D (\lambda ) = \frac{1}{n}\sum\limits_{j = 1}^n (g'_{n,\lambda } (t_j ) - g'(t_j ))^2$$
tended to be close to the minimizer of \(R(\lambda)\), so that \(\hat \lambda\) was also a good value of the smoothing parameter for estimating the derivative.

## Subject Classifications

MOS: 65D10, 65D25; CR: 5.17

## Copyright information

© Springer-Verlag 1979