Efficient parameter estimation for parabolic SPDEs based on a log-linear model for realized volatilities

We construct estimators for the parameters of a parabolic SPDE with one spatial dimension based on discrete observations of a solution in time and space on a bounded domain. We establish central limit theorems for a high-frequency asymptotic regime. The asymptotic variances are shown to be substantially smaller than those of existing estimation methods, and asymptotic confidence intervals are directly feasible. Our approach builds upon realized volatilities and their asymptotic characterization as the response of a log-linear model with a spatial explanatory variable. This yields efficient estimators based on realized volatilities with optimal rates of convergence and minimal variances. We demonstrate efficiency gains compared to previous estimation methods numerically and in Monte Carlo simulations.


Introduction
Dynamic models based on stochastic partial differential equations (SPDEs) have recently attracted great interest, in particular their statistical calibration, see, for instance, [9], [8], [2] and [1]. Bibinger and Trabs [4], Cialenco and Huang [6] and Chong [5] have independently of one another studied parameter estimation for parabolic SPDEs based on power variation statistics of time increments when a solution of the SPDE is observed discretely in time and space. Bibinger and Trabs [4] pointed out the relation of their estimators to realized volatilities, which are well known as key statistics for financial high-frequency data in econometrics. We develop estimators based on these realized volatilities which significantly improve upon the M-estimation from [4]. Our new estimators attain smaller asymptotic variances, they are explicit functions of realized volatilities, and we can readily provide asymptotic confidence intervals. Since generalized estimation approaches for small noise asymptotics in Kaino and Uchida [12], rate-optimal estimation for more general observation schemes in Hildebrandt and Trabs [11], long-span asymptotics in Kaino and Uchida [13], and estimation with two spatial dimensions in Tonaki et al. [15] have been built upon the M-estimator from [4], we expect that our new method is of interest to further improve parameter estimation for SPDEs. Our theoretical framework is the same as in [4]. We consider for (t, y) ∈ R_+ × [0, 1] a linear parabolic SPDE

$$\mathrm{d}X_t(y) = \Big(\theta_2 \frac{\partial^2 X_t(y)}{\partial y^2} + \theta_1 \frac{\partial X_t(y)}{\partial y} + \theta_0 X_t(y)\Big)\,\mathrm{d}t + \sigma\,\mathrm{d}B_t(y),$$

with one space dimension. The bounded spatial domain is the unit interval [0, 1], which can easily be generalized to an arbitrary bounded interval.
Although estimation methods in the case of an unbounded spatial domain are expected to be similar, the theory is significantly different, see Bibinger and Trabs [3]. (B_t(y)) is a cylindrical Brownian motion in a Sobolev space on [0, 1]. The initial value X_0(y) = ξ(y) is assumed to be independent of (B_t(y)). We work with Dirichlet boundary conditions: X_t(0) = X_t(1) = 0, for all t ∈ R_+.
A specific example is the SPDE used for the term structure model of Cont [7].
Existence and uniqueness of a mild solution of the SPDE (1), written dX_t(y) = A_θ X_t(y) dt + σ dB_t(y) with differential operator A_θ, is a classical result, see Chapter 6.5 in [?]. The mild solution is given by the variation-of-constants formula

$$X_t = \exp(tA_\theta)\,\xi + \sigma \int_0^t \exp\big((t-s)A_\theta\big)\,\mathrm{d}B_s,$$

with a Bochner integral, where exp(t A_θ) is the strongly continuous heat semigroup. We focus on parameter estimation based on discrete observations of this solution (X_t(y)) on the unit square (t, y) ∈ [0, 1] × [0, 1]. The spatial observation points y_j, j = 1, …, m, have at least distance δ > 0 from the boundaries, at which the solution is zero by the Dirichlet conditions.
This asymptotic regime with more observations in time than in space is natural for most applications. Hildebrandt and Trabs [11] and [4] showed that in this regime the realized volatilities are sufficient to estimate the parameters with the optimal rate (m_n n)^{-1/2}, while [11] establish different optimal convergence rates when the condition m_n/√n → 0 is violated and propose rate-optimal estimators for that setup based on double increments in space and time. Our proofs and results could be generalized to non-equidistant observations in time, when their distances decay at the same order, but the observation scheme would affect the asymptotic variances and thus complicate the results. In contrast, there is no difference between equidistant and non-equidistant observations in space, since the spatial covariances are not used for estimation and are asymptotically negligible for our results under Assumption 1.
While [4] focused first on estimating the volatility when the parameters θ_1 and θ_2 are known, we consider the estimation of the curvature parameter κ in Section 2. We present an estimator for known σ_0² and a robustification for the case of unknown σ_0². In Section 3 we develop a novel estimator for both parameters, (σ_0², κ), which significantly improves upon the M-estimator from Section 4 of [4]. It is based on a log-linear model for RV_n(y) with the spatial explanatory variable y. Section 4 is devoted to the implementation and numerical results. We draw a numerical comparison of asymptotic variances and show the new estimators' improvement over existing methods. We demonstrate significant efficiency gains for finite-sample applications in a Monte Carlo simulation study. All proofs are given in Section 5.

Curvature estimation
Section 3 of [4] addressed the estimation of σ² in (1) when θ_1 and θ_2 are known. Here, we focus on the estimation of κ from (4), first when σ_0² is known and then for unknown volatility. The volatility estimator by [4], based on observations in one spatial point y_j, used the realized volatility RV_n(y_j). The central limit theorem with √n rate for this estimator from Theorem 3.3 in [4] yields equivalently (5), with Γ ≈ 0.75 a numerical constant analytically determined by a series of covariances. Since the marginal processes of X_t(y) in time have regularity 1/4, the scaling factor 1/√n for RV_n(y_j) is natural. To estimate κ consistently we need observations in at least two distinct spatial points. A key observation in [4] was that, under Assumption 1, realized volatilities in different spatial observation points de-correlate asymptotically. From (5) we can hence write (6), with Z_j i.i.d. standard normal and remainders R_{n,j}, which turn out to be asymptotically negligible for the asymptotic distribution of the estimators. This equation and an expansion of the logarithm yield the approximation (7). In the upcoming example we briefly discuss optimal estimation in a related simple statistical model.
The expected value and variance of this mle are

$$\mathbb{E}[\hat\mu] = \mu, \qquad \operatorname{Var}(\hat\mu) = \Big(\sum_{i} \varsigma_i^{-2}\Big)^{-1}.$$
Note that ς_i^{-2} can be viewed as the Fisher information of observing Y_i. The efficiency of the mle in this model is implied by standard asymptotic statistics.
If we have independent observations with the same expectations and variances as in Example 1, but not necessarily normally distributed, the estimator μ̂ from (9) can be shown to be the linear unbiased estimator with minimal variance.
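A short numerical sketch of this variance-weighted estimator follows; the function name and the data are illustrative and not part of the paper's estimation code.

```python
import numpy as np

def weighted_mean(y, var):
    """Variance-weighted mean: the mle in the Gaussian model of Example 1
    and the minimum-variance linear unbiased estimator in general."""
    w = 1.0 / np.asarray(var, dtype=float)   # inverse variances = Fisher informations
    return np.sum(w * np.asarray(y, dtype=float)) / np.sum(w)

# With equal variances the estimator reduces to the sample mean.
assert np.isclose(weighted_mean([1.0, 2.0, 3.0], [1.0, 1.0, 1.0]), 2.0)

# An observation with tiny variance dominates the estimate.
assert abs(weighted_mean([0.0, 10.0], [1e-8, 1.0])) < 1e-3
```

The weights proportional to the inverse variances mirror the Fisher-information interpretation above.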
Then, the estimator (10) satisfies, as n → ∞, the central limit theorem (clt) (11). Typically δ will be small and the asymptotic variance close to 3Γπ. Assumption 2 poses a mild restriction on the initial condition ξ and is stated at the beginning of Section 5. The logarithm yields a variance-stabilizing transformation for (5), and the delta-method readily yields a clt for log realized volatilities with constant asymptotic variances. This implies a clt for the estimator as Δ_n → 0, also when 1 < m < ∞ is fixed. The proof of (11) is, however, not obvious and is based on an application of a clt for weakly dependent triangular arrays by [14].
If σ_0² is unknown, the estimator (10) is infeasible. Considering differences for different spatial points in (7) yields a natural generalization of Example 1 and of the estimator (10) for this case, given in (12). This estimator also achieves the parametric rate of convergence √(n m_n), it is asymptotically unbiased and satisfies a clt. Its asymptotic variance is, however, much larger than the one in (11).
Then, the estimator (12) satisfies, as n → ∞, the clt (13). In particular, consistency of the estimator holds as n → ∞, also if m ≥ 2 remains fixed. The clts (11) and (13) are feasible, i.e. the asymptotic variances are known constants and do not hinge on any unknown parameters. Hence, asymptotic confidence intervals can be constructed based on the theorems.
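The construction of such a feasible confidence interval can be sketched as follows; the clt is taken in the generic form √n_eff (estimate − parameter) → N(0, avar) with known asymptotic variance, and the numeric values for the estimate and `avar` are hypothetical placeholders, not the constants from (11) or (13).

```python
from statistics import NormalDist

def asymptotic_ci(estimate, avar, n_eff, level=0.95):
    """Two-sided asymptotic confidence interval from a feasible clt:
    sqrt(n_eff) * (estimate - parameter) -> N(0, avar), with avar known."""
    q = NormalDist().inv_cdf(0.5 + level / 2.0)   # standard normal quantile, e.g. 1.96
    half = q * (avar / n_eff) ** 0.5
    return estimate - half, estimate + half

# Hypothetical numbers: a curvature estimate 2.0 at rate sqrt(n * m), n = 1000, m = 11.
lo, hi = asymptotic_ci(2.0, avar=3.0, n_eff=1000 * 11)
assert lo < 2.0 < hi
```

Since the asymptotic variance is a known constant, no plug-in estimation step is needed here.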

Asymptotic log-linear model for realized volatilities and least squares estimation
Applying the logarithm to (6) and a first-order Taylor expansion for the rescaled realized volatilities yields the model (LLM), with Z_j i.i.d. standard normal and remainders R̃_{n,j}, which turn out to be asymptotically negligible for the asymptotic distribution of the estimators. When we ignore the remainders R̃_{n,j}, estimating −κ is directly equivalent to estimating the slope parameter in a simple ordinary linear regression model with normal errors. The intercept parameter in the model (LLM) is a strictly monotone transformation of σ_0². To exploit the analogy of (LLM) to a log-linear model, it is useful to recall some standard results on least squares estimation for linear regression.
Example 2. In a simple ordinary linear regression model Y_j = α + β x_j + ε_j, j = 1, …, m, with white noise (ε_j), homoscedastic with variance Var(ε_j) = σ², least squares estimation yields (15a) and (15b) with the sample averages Ȳ = m^{-1} Σ_{j=1}^m Y_j and x̄ = m^{-1} Σ_{j=1}^m x_j. The estimators (15a) and (15b) are known to be BLUE (best linear unbiased estimators) by the famous Gauß-Markov theorem, i.e. they have minimal variances among all linear and unbiased estimators. In the normal linear model, if ε_j i.i.d. ∼ N(0, σ²), the least squares estimator coincides with the mle and standard results imply asymptotic efficiency. The variance-covariance matrix of (α̂, β̂) is well known. For the derivation of (15a)-(15e) in this example see, for instance, Example 7.2-1 of [16].
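The closed-form least squares solution recalled above translates directly into code; a minimal sketch, with illustrative function name and data:

```python
import numpy as np

def ols_line(x, y):
    """Least squares slope and intercept in the simple linear model
    Y_j = alpha + beta * x_j + eps_j (cf. (15a) and (15b))."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    beta = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
    alpha = ybar - beta * xbar
    return alpha, beta

# Noise-free check: data on a line are recovered exactly.
x = np.linspace(0.05, 0.95, 11)
alpha, beta = ols_line(x, 1.5 - 2.0 * x)
assert np.isclose(alpha, 1.5) and np.isclose(beta, -2.0)
```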
We give this elementary example, since our estimator and the asymptotic variance-covariance matrix of our estimator will be in line with the translation of the example to our model (LLM).
The M-estimation of [4] was based on the parametric regression model (16) with non-standard observation errors (δ_{n,j}). The proposed estimator (17), defined as a minimizer of a least squares contrast, was shown to be rate-optimal and asymptotically normally distributed in Theorem 4.2 of [4]. In view of the analogy of (LLM) to an ordinary linear regression model, however, it appears clear that the estimation method of [4] is inefficient, since ordinary least squares is applied to a model with heteroscedastic errors. In fact, generalized least squares could render a more efficient estimator related to our new methods. In model (16), the variances of δ_{n,j} depend on j via the factor exp(−2κ y_j). This induces, moreover, that the asymptotic variance-covariance matrix of the estimator (17) depends on the parameter (σ_0², κ). In line with the least squares estimator from Example 2, the asymptotic distribution of our estimator will not depend on the parameter.
Writing ȳ = m^{-1} Σ_{j=1}^m y_j for the average of the spatial observation points, the least squares estimator for the slope in (LLM) gives the estimator (18a) for κ, and the estimator for the intercept is (18b). We shall prove that the OLS-estimator (18a) for κ in our log-linear model is in fact identical to the estimator κ̂_{n,m} from (12).
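The least squares estimation in the log-linear model can be sketched on synthetic data as follows; the data-generating constants and noise level are hypothetical, and the mapping of the intercept to σ_0² via (14) is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical synthetic data following the log-linear model (LLM):
# log of rescaled realized volatility = alpha - kappa * y_j + noise.
m, kappa_true, alpha_true = 11, 6.0, 0.4
y = np.linspace(0.05, 0.95, m)                  # spatial points, delta = 0.05
log_rv = alpha_true - kappa_true * y + 0.05 * rng.standard_normal(m)

# OLS in the log-linear model: kappa is minus the slope (18a);
# the intercept (18b) is a monotone transformation of sigma_0^2 via (14).
X = np.column_stack([np.ones(m), y])
alpha_hat, slope_hat = np.linalg.lstsq(X, log_rv, rcond=None)[0]
kappa_hat = -slope_hat
assert abs(kappa_hat - kappa_true) < 0.5
```

The estimator is an explicit transformation of the data, so no numerical optimization is required, in contrast to the M-estimation of [4].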
The estimators (18a) and (18b) satisfy, as n → ∞, a bivariate clt with an explicit asymptotic variance-covariance matrix. In particular, consistency of the estimators holds as n → ∞, also if m ≥ 2 remains fixed. Different from the typical situation with an unknown noise variance in Example 2, the noise variance in (LLM) is a known constant. Therefore, different from Theorem 4.2 of [4], our central limit theorem is readily feasible and provides asymptotic confidence intervals.
An application of the multivariate delta method yields the bivariate clt for the estimation errors of the two point estimators.
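Schematically, this delta method step reads as follows, with ϑ the parameter vector, ϑ̂ the least squares estimator, Σ the asymptotic variance-covariance matrix from Theorem 3, and g a differentiable map sending the intercept to σ_0² via the inverse of (14); this is the generic statement, not a reproduction of the paper's display.

```latex
\sqrt{n\,m_n}\,\bigl(g(\hat\vartheta)-g(\vartheta)\bigr)
  \xrightarrow{\;d\;}
  \mathcal N\Bigl(0,\;\nabla g(\vartheta)\,\Sigma\,\nabla g(\vartheta)^{\top}\Bigr).
```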
Corollary 4. Under the assumptions of Theorem 3, a bivariate clt holds for the pair ((σ̂_0²)^{LS}, κ̂^{LS}), where (σ̂_0²)^{LS} is obtained from α̂^{LS} with the inverse of (14). Here, the parameter σ_0² naturally occurs in the asymptotic variance of the estimated volatility and in the asymptotic covariance. The transformation, or plug-in, however, still readily yields asymptotic confidence intervals.
Numerical illustration and simulations

Numerical comparison of asymptotic variances
The top panel of Figure 1 compares the asymptotic variances for estimating the curvature κ of our new estimators with that of the minimum contrast estimator of [4]. We fix δ = 0.05. While the asymptotic variance-covariance matrix of our new estimator is rather simple and explicit, the one in [4] is more complicated, but it can be computed explicitly from their Eq. (21)-(23).
The uniformly smallest variance is the one of κ̂_{n,m} from (10). It is visualized by the yellow curve, which is constant in κ, i.e. the asymptotic variance does not hinge on the parameter. This estimator, however, requires that the volatility σ_0² is known. It is thus fair to compare the asymptotic variance of the minimum contrast estimator from [4] only to the least squares estimator based on the log-linear model, since both work for unknown σ_0². While the asymptotic variance of the new estimator, visualized by the black curve, does not hinge on the parameter value, the variance of the estimator by [4] (brown curve) is particularly large when κ is far from zero. None of the curves in the top panel of Figure 1 depends on the value of σ_0². Our least squares estimator in the log-linear model uniformly dominates the estimator from [4]. For δ → 0, the asymptotic variances of the two least squares estimators would coincide in κ = 0. However, due to the different dependence on δ, the asymptotic variance of the estimator from [4] is larger than that of our new estimator also in κ = 0. The lower panel of Figure 1 shows the ratios of the asymptotic variances of the two least squares estimators for both unknown parameters. On the left we see the ratio for curvature estimation, determined by the ratio of the black and brown curves from the graphic in the top panel. On the right we see the ratio of the asymptotic variances for estimating σ_0², as a function of κ. This ratio does not hinge on the value of σ_0².

Monte Carlo simulation study
Fig. 1 Asymptotic variances of κ̂_{n,m} from (18a) and the estimator from [4], for δ = 0.05, and for different values of κ. Lower panel: Ratio of asymptotic variances of new method using (18a) and (18b) vs. [4], left for estimating κ, right for σ_0².

The simulation of the SPDE is based on its spectral decomposition (21) and an exact simulation of the Ornstein-Uhlenbeck coordinate processes. In [4] a truncation method was suggested to approximate the infinite series Σ_{k=1}^∞ x_k(t) e_k(y) in (21) by a finite sum Σ_{k=1}^K x_k(t) e_k(y), up to some spectral cut-off frequency K which needs to be set sufficiently large. In Kaino and Uchida [13] this procedure was adopted, but they observed that choosing K too small results in a strong systematic bias of simulated estimates. A sufficiently large K depends on the number of observations, but even for moderate sample sizes K = 10^5 was recommended by [13]. This leads to tediously long computation times, as reported in [13]. A nice improvement for observations on an equidistant grid in time and space has been presented by Hildebrandt [10], using a replacement method instead of the truncation method. The replacement method approximates addends with large frequencies in the Fourier series using a suitable set of independent random vectors instead of simply cutting off these terms. The spectral frequency at which replacement starts can be set much smaller than the cut-off K for truncation. We thus use Algorithm 3.2 from [10] here, which allows us to simulate a solution of the SPDE with the same precision as the truncation method while reducing the computation time considerably. For instance, for n = 10000 and m = 100, the computation time of the truncation method on a standard computer is almost 6 hours, while the replacement method requires less than one minute. In [10] we also find bounds on the total variation distance between the approximated and the true distribution, allowing to select an appropriate trade-off between precision and computation time. We implement the method with 20·m_n as the spectral frequency at which replacement starts. We simulate observations on equidistant grid points in time and space. We illustrate results for a spatial resolution with m = 11 and a temporal resolution with n = 1000. This is in line with Assumption 1. We simulated results for m = 100 and n = 10000 as well. Although the ratio of spatial and temporal observations is more critical then, the normalized estimation results were similar. If the condition m_n n^{-1/2} → 0 is violated, however, we see that the variances of the estimators decrease at a slower rate than (n·m_n)^{-1/2}. While the numerical computation of estimators as in [4] relies on optimization algorithms, the implementation of our estimators is simple, since they rely on explicit transformations of the data. We use the programming language R and provide a package for these simulations on GitHub. We use the standard R density plots with Gaussian kernels and bandwidths selected by Silverman's rule of thumb. The dotted lines give the corresponding densities of the asymptotic limit distributions. We can report that analogous plots for different parameter values of σ_0² look (almost) identical. With increasing values of n and m, the fit of the asymptotic distributions becomes more accurate; otherwise
the plots look similar as long as m ≤ √n. As expected, the efficiency gains of the new method are much more relevant for larger curvature. In particular, in the third plot from the left, for κ = 6, the new estimator significantly outperforms the one from [4]. In the first plot from the left, for κ = 1, instead, the two estimators have similar empirical distributions. The fit of the asymptotic normal distributions is reasonably good for all estimators. This is illustrated more clearly in the QQ-normal plots in Figure 4. Using the true value of σ_0², as expected, the estimator κ̂_{n,m} outperforms the other methods. We compare it to our new least squares estimator in the second and fourth plots from the left.
Figure 3 draws a similar comparison of estimated volatilities σ_0², for unknown κ, using the estimator (18b) from the log-linear model and the estimator from [4]. While for κ = 1 in the left panel the performance of both methods is similar, for κ = 6 in the right panel our new estimator outperforms the previous one. Figure 5 gives the QQ-normal plots for the estimation of σ_0². All plots are based on 1000 Monte Carlo iterations. The QQ-normal plots compare standardized estimation errors to the standard normal. For the estimator from [4] we use an estimated asymptotic variance based on plug-in, while for our new estimators the asymptotic variances are known constants.

Proofs of the theorems

Preliminaries
The asymptotic analysis is based on the eigendecomposition of the SPDE. The eigenfunctions (e_k) and eigenvalues (−λ_k) of the self-adjoint differential operator A_θ satisfy A_θ e_k = −λ_k e_k, where all eigenvalues are negative and (e_k)_{k≥1} forms an orthonormal basis of the Hilbert space H_θ := {f : [0, 1] → R : ‖f‖_θ < ∞} with ⟨f, g⟩_θ := ∫_0^1 e^{yθ_1/θ_2} f(y) g(y) dy and the induced norm ‖·‖_θ. Let ξ ∈ H_θ be the initial condition. We impose the same mild regularity condition on X_0 = ξ as in Assumption 2.2 of [4]. This assumption is more general than the one in [11], where (X_t(y)) is started in equilibrium, and it is satisfied for all sufficiently regular functions ξ. We refer to Section 2 of [4] for more details on the probabilistic structure.
For the solution X_t(y) from (3), we have the spectral decomposition (21), in which the coordinate processes x_k satisfy Ornstein-Uhlenbeck dynamics (22). The leading term of the estimation error is a sum of terms ζ_{n,i}, where ζ_{n,i} includes the ith squared increment of the realized volatility RV_n(y_j). Note that summation over time (increments) is always indexed by i, and summation over spatial points by j. Although this leading term is linear in the realized volatilities, we cannot directly adopt a clt from [4] due to the different structure of the weights. Thus, we require an original proof of the clt, for which we can reuse some ingredients from [4].
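The spectral decomposition (21) with Ornstein-Uhlenbeck coordinates (22) also underlies the truncation simulation discussed in Section 4. A minimal sketch of an exact simulation of the truncated series follows; it assumes the eigenstructure e_k(y) = √2 sin(πky) e^{-κy/2} with κ = θ_1/θ_2 and λ_k = θ_1²/(4θ_2) − θ_0 + π²k²θ_2 as in [4], and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters; eigenstructure as in [4] (an assumption of this sketch):
# e_k(y) = sqrt(2) sin(pi k y) exp(-kappa y / 2),  kappa = theta1 / theta2,
# lambda_k = theta1^2 / (4 theta2) - theta0 + pi^2 k^2 theta2.
theta0, theta1, theta2, sigma = 0.0, 0.2, 0.05, 1.0
kappa = theta1 / theta2
n, m, K = 1000, 11, 2000            # time steps, spatial points, spectral cut-off
dt = 1.0 / n
y = np.linspace(0.05, 0.95, m)      # spatial points, delta = 0.05

k = np.arange(1, K + 1)
lam = theta1**2 / (4 * theta2) - theta0 + np.pi**2 * k**2 * theta2
e_ky = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, y)) * np.exp(-kappa * y / 2)

# Exact Ornstein-Uhlenbeck recursion for the coordinates x_k, started in 0:
# x_k(t + dt) = exp(-lam_k dt) x_k(t) + sigma * sqrt((1 - exp(-2 lam_k dt)) / (2 lam_k)) Z.
x = np.zeros(K)
a = np.exp(-lam * dt)
s = sigma * np.sqrt((1 - np.exp(-2 * lam * dt)) / (2 * lam))
X = np.empty((n + 1, m))
X[0] = x @ e_ky
for i in range(1, n + 1):
    x = a * x + s * rng.standard_normal(K)
    X[i] = x @ e_ky                 # truncated series sum_{k<=K} x_k(t) e_k(y)

assert X.shape == (n + 1, m) and np.isfinite(X).all()
```

The replacement method of [10], used for the actual simulation study, substitutes the high-frequency addends by suitable random vectors instead of truncating them.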
We begin with the asymptotic variance. We can adopt Lemma 6.4 and Proposition 6.5 from [4] and obtain, for any η ∈ (0, 1), a bound on the covariance of n^{-1/2} RV_n(y) and n^{-1/2} RV_n(u) for any spatial points y and u. We obtain the asymptotic variance as the limit of a Riemann sum; the assumption that y_1 = δ and y_m = 1 − δ is used only for the convergence of the Riemann sum in the last step. For the covariances, we used Assumption 1 and an elementary estimate. Since Δ_n^{1/2} m_n log(m_n) → 0 under Assumption 1, the remainders are negligible. Next, we establish a covariance inequality for the empirical characteristic function: there exists a constant C such that the inequality holds for all t ∈ R. Let u ≥ 2; the case u = 1 can be derived separately and is in fact easier. By the spectral decomposition (21), the increments of the Ornstein-Uhlenbeck processes (x_k(t)) from (22) contain terms which, with the notation (23) for squared increments, we can write in a decomposition where A_1 is defined by A_2 and Q_{b+u}^v, and B_r, r = 1, 2, include the sums over A_r. Analogous terms A_r have been considered in Proposition 6.6 of [4]. This decomposition is useful, since B_2 is independent of Q_a^b. Analogously to the proof of Proposition 6.6 in [4], we have for all j a bound with some constant C, and a further bound from Eq. (59) of [4]. Thereby, we obtain the covariance inequality (29). Note that for any γ ∈ R², S_{m_n} G_j^γ is uniformly in j bounded by a constant, such that the structure for proving a covariance inequality for the empirical characteristic function and a Lyapunov condition is analogous to the one-dimensional case, with ξ_{n,i}^γ := ⟨γ, Ξ_{n,i}⟩ and with the same terms A_1(y_j) and A_2(y_j) as in the proof of (29). Therefore, using the same bounds as in the proof of (29), we obtain an analogous estimate for some constant C. As m_n² Δ_n → 0, we conclude the Lyapunov condition, which together with (34) and the asymptotic variance-covariance structure yields the clt.



Fig. 2 Comparison of empirical distributions of normalized estimation errors for κ from simulation with n = 1000, m = 11, σ_0² = 1, κ = 1 in the left two columns and κ = 6 in the right two columns. Grey is for κ̂^{LS}_{n,m}, brown for the estimator by [4] and yellow for κ̂_{n,m}.

Figure 2 compares empirical distributions of normalized estimation errors, first of κ̂^{LS}_{n,m} (grey) vs. [4] (brown), and second of κ̂_{n,m} with known σ_0² (yellow) compared to κ̂^{LS}_{n,m} (grey), for small curvature κ = 1 in the left two columns and larger curvature κ = 6 in the right two columns. The plots are based on a Monte Carlo simulation with 1000 iterations, with n = 1000, m = 11, and σ_0² = 1.

$$\operatorname{Var}\bigg(\sum_{j=1}^{m_n} y_j\, \frac{RV_n(y_j)}{\sqrt n}\, e^{\kappa y_j}\bigg) = \sum_{j=1}^{m_n} y_j^2 \operatorname{Var}\bigg(\frac{RV_n(y_j)}{\sqrt n}\, e^{\kappa y_j}\bigg) + \sum_{j \neq l} y_j\, y_l \operatorname{Cov}\bigg(\frac{RV_n(y_j)}{\sqrt n}\, e^{\kappa y_j},\; \frac{RV_n(y_l)}{\sqrt n}\, e^{\kappa y_l}\bigg)$$