1 Introduction

Under the assumption that one path of the first \(N\) Fourier modes of the solution of a Stochastic Partial Differential Equation (SPDE) is observed continuously over a finite time interval, the parameter estimation problem for the drift coefficient has been studied by several authors, starting with the seminal paper [5]. Consistency and asymptotic normality of MLE type estimators are well understood, at least for equations driven by additive noise; see for instance the survey paper [9] for linear SPDEs, [3] for nonlinear equations, and references therein. Generally speaking, the statistical inference theory for SPDEs has not gone far beyond the fundamental properties of MLE estimators, although important and interesting classes of SPDEs driven by various noises have been studied. The first attempt to study the hypothesis testing problem for SPDEs is due to [4], where we investigated a simple hypothesis for the drift/viscosity coefficient of a stochastic fractional heat equation driven by additive noise, white in time and colored in space. Therein, we established ‘the proper asymptotic classes’ of tests for which one can find ‘asymptotically the most powerful tests’, i.e. tests with the fastest rate of error convergence. Moreover, we provided explicit forms of such tests in two asymptotic regimes: large time asymptotics \(T\rightarrow \infty \), and increasing number of Fourier modes \(N\rightarrow \infty \). By its very nature, the theory developed in [4] is based on the asymptotic behavior \(T,N\rightarrow \infty \), and a natural follow-up question is how large \(T\) or \(N\) should be taken so that the Type I and Type II errors of these tests are smaller than a given threshold. The main goal of this paper is to develop feasible methods to estimate and control the Type I and Type II errors when \(T\) and \(N\) are finite. 
As in [4], we are interested in Likelihood Ratio type rejection regions \(R_{T}=\{U_T^N: \ln L(\theta _0,\theta _1,U_T^N)\ge \eta T\}\) and \(R_{N}=\{U_T^N: \ln L(\theta _0,\theta _1,U_T^N)\ge \zeta M_N\}\), where \(U_T^N\) is the projection of the solution onto the space generated by the first \(N\) Fourier modes, \(L\) is the likelihood ratio, \(M_N\) is a constant that depends on the first \(N\) eigenvalues of the Laplacian, and \(\eta ,\zeta \) are constants that depend on \(T\) and \(N\). We derive explicit expressions for \(\eta \) and \(\zeta \), and thresholds for \(T\), and respectively for \(N\), that guarantee that the corresponding statistical errors are smaller than a given upper bound. This comes at the cost that these tests are no longer the most powerful in the class of tests proposed in [4]. The key ideas, and the proofs of the main results, are based on the sharp large deviation principles (both in time and in the spectral spatial component) developed in [4]. On top of the theoretical part, we also present some numerical experiments as a coarse verification of the main theorems. We establish some bounds for the numerical approximation errors, which also serve as a preliminary effort in studying statistical inference problems for SPDEs under discrete observations. Finally, we mention that the case of large \(T\) and \(N=1\) corresponds to the classical one-dimensional Ornstein–Uhlenbeck process, and even in this case, to the best of our knowledge, the obtained results are novel.

The paper is organized as follows. In Sect. 1.1 we set up the problem, introduce the necessary notations, and discuss why for the tests proposed in [4] it is hard to find explicit expressions for \(T\) and \(N\) that control the statistical errors. Since the sharp large deviation principles from [4] play a fundamental role in the derivation of the main results, we briefly present them in Sect. 1.2. Section 2 is devoted to the case of large time asymptotics, with the number of observable Fourier modes \(N\) being fixed. We show how to choose \(T\) and \(\eta \) such that both Type I and Type II errors, associated with the rejection region \(R_T\), are bounded by a given threshold. Similarly, in Sect. 3 we study the case of large \(N\) while keeping the time horizon \(T\) fixed. In Sect. 4 we illustrate the theoretical results by means of numerical simulations. We start with a description of the numerical methods, and derive some error bounds for the numerical approximations. In particular, we show that while the thresholds for \(T,N\) derived in Sects. 2 and 3 are conservative, as one may expect, they still provide a robust practical framework for controlling the statistical errors. Finally, in Sect. 5 we discuss the advantages and drawbacks of the current results and briefly elaborate on possible theoretical and practical methods of solving some of the open problems.

1.1 Setup of the problem and some auxiliary results

In this section we set up the main equation, briefly recall the problem setting of hypothesis testing for the drift coefficient, and present some needed results from [4]. We also give the motivation that leads to the proposed problems.

As in [4], on a filtered probability space \((\Omega ,\mathcal {F},\{\mathcal {F}_t\}_{t\ge 0},\mathbb {P})\) we consider the following stochastic evolution equation

$$\begin{aligned} \begin{aligned} \mathrm{d }\!U(t,x) + \theta (-\Delta )^\beta U(t,x)\mathrm{d }\!t&= \sigma \sum _{k\in \mathbb {N}} \lambda _k^{-\gamma }h_k(x)\mathrm{d }\!w_k(t),\quad t\in [0,T],\\ \ U(0,x)&= U_0, \ x\in G, \end{aligned} \end{aligned}$$
(1.1)

where \(\theta >0\), \(\beta >0, \ \gamma \ge 0\), \(\sigma \in \mathbb {R}{\setminus }\{0\}\), \(U_0\in H^s(G)\) for some \(s\in \mathbb {R}\), the \(w_k\)’s are independent standard Brownian motions, \(G\) is a bounded and smooth domain in \(\mathbb {R}^d\), \(\Delta \) is the Laplace operator on \(G\) with zero boundary conditions, and the \(h_k\)’s are the eigenfunctions of \(\Delta \). It is well known that \(\{h_k\}_{k\in \mathbb {N}}\) forms a complete orthonormal system in \(L^2(G)\). We denote by \(\rho _k\) the eigenvalue corresponding to \(h_k\), and put \(\lambda _k:=\sqrt{-\rho _k}, \ k\in \mathbb {N}\). Under some fairly general assumptions, Eq. (1.1) admits a unique solution in the appropriate Sobolev spaces (see for instance [4]).

We assume that all parameters are known, except the drift/viscosity coefficient \(\theta \), which is the parameter of interest, and we use the spectral approach (for more details see the survey paper [9]) to derive MLE type estimators for \(\theta \). In what follows, we denote by \(u_k, k\in \mathbb {N},\) the Fourier coefficients of the solution \(U\) of (1.1) with respect to \(h_k, k\in \mathbb {N}\), i.e. \(u_k(t) = (U(t),h_k)_0, k\in \mathbb {N}\). Let \(H_N\) be the finite dimensional subspace of \(L^2(G)\) generated by \(\{h_k\}_{k=1}^N\), denote by \(P_N\) the projection operator of \(L^2(G)\) onto \(H_N\), and put \(U^N = P_NU\), or equivalently \(U^N:=(u_1,\ldots ,u_N)\). Note that each Fourier mode \(u_k, k\in \mathbb {N}\), is an Ornstein–Uhlenbeck process with dynamics given by

$$\begin{aligned} \mathrm{d }\!u_k = -\theta \lambda _k^{2\beta } u_k\mathrm{d }\!t + \sigma \lambda _k^{-\gamma } \mathrm{d }\!w_k(t), \quad u_k(0) = (U_0,h_k), \ t\ge 0. \end{aligned}$$
(1.2)
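As an illustration of the mode dynamics (1.2), the following Python sketch simulates the first \(N\) Fourier modes with an Euler–Maruyama scheme; it assumes the one-dimensional case with \(\lambda _k=k\) (e.g. \(G=(0,\pi )\), \(\beta =1\)), and all parameter values are hypothetical.

```python
import numpy as np

def simulate_modes(theta, sigma=1.0, beta=1.0, gamma=0.0,
                   N=4, T=5.0, n_steps=1000, seed=0):
    """Euler-Maruyama sketch of the OU mode dynamics (1.2), with lambda_k = k."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    lam = np.arange(1, N + 1, dtype=float)      # lambda_k = k (illustrative, d = 1)
    u = np.zeros((n_steps + 1, N))              # zero initial condition, u_k(0) = 0
    for i in range(n_steps):
        u[i + 1] = (u[i] - theta * lam**(2 * beta) * u[i] * dt
                    + sigma * lam**(-gamma) * np.sqrt(dt) * rng.standard_normal(N))
    return u

u = simulate_modes(theta=1.0)
```

Each mode mean-reverts at rate \(\theta \lambda _k^{2\beta }\), so higher modes decorrelate faster and have the smaller stationary variance \(\sigma ^2\lambda _k^{-2\beta -2\gamma }/(2\theta )\), which the simulated path reproduces.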

We denote by \(\mathbb {P}^{N,T}_{\theta }\) the probability measure on \(C([0,T]; H_N)\backsimeq C([0,T]; \mathbb {R}^N)\) generated by \(U^N\). The measures \(\mathbb {P}^{N,T}_{\theta }, \ \theta >0\), are equivalent to each other, with the Radon–Nikodym derivative, or the Likelihood Ratio, of the form

$$\begin{aligned} L(\theta _0,\theta ;U^N_T)=\frac{\mathrm{d }\mathbb {P}^{N,T}_{\theta }}{\mathrm{d }\mathbb {P}^{N,T}_{\theta _{0}}}&= \exp \Bigg (-\!(\theta -\theta _0)\sigma ^{-2}\sum _{k=1}^N\lambda _k^{2\beta +2\gamma } \nonumber \\&\quad \times \, \Bigg (\int _0^Tu_k(t)du_k(t)+\frac{1}{2}(\theta +\theta _0)\lambda _k^{2\beta }\int _0^Tu_k^2(t)dt\Bigg )\Bigg ), \nonumber \\ \end{aligned}$$
(1.3)

where \(U_T^N\) denotes the trajectory of \(U^N\) over the time interval \([0,T]\). Maximizing the log-likelihood ratio with respect to \(\theta \), we get the maximum likelihood estimator (MLE)

$$\begin{aligned} \hat{\theta }_T^N = -\frac{\sum _{k=1}^{N}\lambda _k^{2\beta +2\gamma }\int _0^T u_k(t)du_k(t)}{\sum _{k=1}^{N}\lambda _k^{4\beta +2\gamma }\int _0^T u_k^2(t)dt}, \quad N\in \mathbb {N}, T>0. \end{aligned}$$
(1.4)
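As a plausibility check of (1.4), the following hypothetical Python sketch evaluates the MLE on a discretely sampled simulated path, replacing \(\int _0^Tu_k\,du_k\) by an Itô sum and \(\int _0^Tu_k^2\,dt\) by a Riemann sum; again \(\lambda _k=k\) and all parameter values are illustrative, and the effect of such discretizations is discussed in Sect. 4.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, beta, gamma = 1.0, 1.0, 1.0, 0.0   # true parameters (illustrative)
N, T, n_steps = 4, 50.0, 50_000
dt = T / n_steps
lam = np.arange(1, N + 1, dtype=float)           # lambda_k = k (illustrative, d = 1)

# Euler-Maruyama path of the Fourier modes (1.2)
u = np.zeros((n_steps + 1, N))
for i in range(n_steps):
    u[i + 1] = (u[i] - theta * lam**(2 * beta) * u[i] * dt
                + sigma * lam**(-gamma) * np.sqrt(dt) * rng.standard_normal(N))

# MLE (1.4): Ito sums for int u_k du_k, Riemann sums for int u_k^2 dt
du = np.diff(u, axis=0)
num = np.sum(lam**(2 * beta + 2 * gamma) * np.sum(u[:-1] * du, axis=0))
den = np.sum(lam**(4 * beta + 2 * gamma) * np.sum(u[:-1]**2, axis=0) * dt)
theta_hat = -num / den
```

With \(M=\sum _{k=1}^4\lambda _k^{2\beta }=30\), the asymptotic standard deviation \(\sqrt{2\theta /(TM)}\approx 0.037\) here, so \(\hat{\theta }_T^N\) should land close to the true value \(\theta =1\).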

In [4], we established the strong consistency and asymptotic normality of the MLE, when \(T\) or \(N\) goes to infinity.

In this work we consider a simple hypothesis testing problem for \(\theta \), assuming that the parameter \(\theta \) can take only two values \(\theta _0,\theta _1\), with the null and the alternative hypotheses as follows

$$\begin{aligned}&\fancyscript{H}_0:\quad \theta =\theta _0, \\&\fancyscript{H}_1:\quad \theta =\theta _1. \end{aligned}$$

Without loss of generality, we assume that \(\theta _1>\theta _0\) and \(\sigma >0\). Throughout, we fix a significance level \(\alpha \in (0,1)\). Suppose that \(R\in \mathcal {B}(C([0,T];\mathbb {R}^N))\) is a rejection region for the test, i.e. if \(U_T^N\in R\) we reject the null hypothesis and accept the alternative. The quantity \(\mathbb {P}_{\theta _0}^{N,T}(R)\) is called the Type I error of the test \(R\), and respectively \(1-\mathbb {P}_{\theta _1}^{N,T}(R)\) is called the Type II error. Naturally, we seek rejection regions with Type I error smaller than the significance level \(\alpha \):

$$\begin{aligned} \mathcal {K}_{\alpha }:=\left\{ R\in \mathcal {B}(C([0,T];\mathbb {R}^N)): \mathbb {P}^{N,T}_{\theta _0}(R)\le \alpha \right\} \!. \end{aligned}$$

Let us denote by \(R^*\) the rejection region (likelihood ratio test) of the form

$$\begin{aligned} R^*=\{U_T^N: L(\theta _0,\theta _1,U_T^N)\ge c_{\alpha }\}, \end{aligned}$$

where \(c_\alpha \in \mathbb {R}\) is such that \(\mathbb {P}^{N,T}_{\theta _0}(L(\theta _0,\theta _1,U_T^N)\ge c_{\alpha })=\alpha \). In [4] we proved that \(R^*\) is the most powerful test (i.e. it has the smallest Type II error) in the class \(\mathcal {K}_\alpha \),

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _1}(R)\le \mathbb {P}^{N,T}_{\theta _1}(R^*),\qquad \hbox { for all } R\in \mathcal {K}_{\alpha }. \end{aligned}$$

While this gives a complete theoretical answer to the simple hypothesis testing problem, generally speaking there is no explicit formula for the constant \(c_\alpha \). The main contribution of [4] was to find computable rejection regions, and the appropriate class of tests, by a so-called asymptotic approach. Two asymptotic regimes were studied: large time asymptotics, with the number of Fourier modes \(N\) fixed, and a large number of Fourier modes, with the time horizon fixed. We outline here the case of large time asymptotics. Let \((R_T^\sharp )_{T\in \mathbb {R}_+}\) and \(\mathcal {K}_{\alpha }^\sharp \) be defined as follows:

$$\begin{aligned} \mathcal {K}_{\alpha }^\sharp&= \left\{ (R_T): \limsup _{T\rightarrow \infty }\left( \mathbb {P}^{N,T}_{\theta _0}(R_T)-\alpha \right) \sqrt{T}\le \alpha _1\right\} \!, \\ R_T^\sharp&= \left\{ U_T^N: L(\theta _0,\theta _1,U_T^N)\ge c^\sharp _{\alpha }(T)\right\} \!,\\ c^\sharp _{\alpha }(T)&= \exp \left( -\frac{(\theta _1-\theta _0)^2}{4\theta _0}MT -\frac{\theta _1^2-\theta _0^2}{2\theta _0} \sqrt{\frac{MT}{2\theta _0}}q_{\alpha }\right) \!, \\ M&= \sum _{k=1}^N\lambda _k^{2\beta }, \end{aligned}$$

where \(q_\alpha \) is the \(\alpha \)-quantile of the standard Gaussian distribution, and \(\alpha _1\) is a constant that depends on \(\alpha \). The class \(\mathcal {K}_\alpha ^\sharp \) essentially consists of tests whose Type I errors converge to \(\alpha \) from above with excess at most of order \(\alpha _1T^{-1/2}\). It was proved that
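The threshold \(c^\sharp _{\alpha }(T)\) is fully explicit; for concreteness, the small Python sketch below evaluates it under the illustrative choice \(\lambda _k=k\) (all inputs hypothetical).

```python
from math import exp, sqrt
from statistics import NormalDist

def c_sharp_alpha(alpha, theta0, theta1, T, N, beta=1.0):
    """Threshold c_alpha^sharp(T) from the display above, with lambda_k = k."""
    M = sum(k**(2 * beta) for k in range(1, N + 1))   # M = sum of lambda_k^{2 beta}
    q = NormalDist().inv_cdf(alpha)                   # alpha-quantile of N(0, 1)
    return exp(-(theta1 - theta0)**2 / (4 * theta0) * M * T
               - (theta1**2 - theta0**2) / (2 * theta0)
               * sqrt(M * T / (2 * theta0)) * q)
```

The first term in the exponent is linear in \(MT\) and dominates, so the threshold decays rapidly as \(T\) or \(N\) grows.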

$$\begin{aligned} \underset{T\rightarrow \infty }{\liminf } \frac{1-\mathbb {P}^{N,T}_{\theta _1}(R_T)}{1-\mathbb {P}^{N,T}_{\theta _1}(R_{T}^\sharp )}\ge 1,\qquad \hbox { for all } (R_T)_{T\in \mathbb {R}_+}\in \mathcal {K}_{\alpha }^\sharp . \end{aligned}$$
(1.5)

In other words, \(R_T^\sharp \) has the fastest rate of convergence of the Type II error, as \(T\rightarrow \infty \), in the class \(\mathcal {K}_{\alpha }^\sharp \). We proved analogous results for \(N\rightarrow \infty \), with \(T\) being fixed, by taking

$$\begin{aligned} {R}_N^\sharp&= \left\{ U_T^N: L(\theta _0,\theta _1,U_T^N)\ge \widetilde{c}_{\alpha }(N)\right\} \!, \quad N\in \mathbb {N}, \\ \widetilde{\mathcal {K}}_\alpha ^\sharp&= \left\{ (R_N): \limsup _{N\rightarrow \infty }\left( \mathbb {P}^{N,T}_{\theta _0}(R_N)-\alpha \right) \sqrt{M}\le \widetilde{\alpha }_1\right\} , \end{aligned}$$

where \(\widetilde{c}_{\alpha }(N)\) is a constant depending on \(N\) and \(\alpha \) only, and \(\widetilde{\alpha }_1\) is a constant that depends on \(\alpha \). We refer the reader to [4] for further details.

However, being asymptotic in nature, these results do not indicate how large \(T\) (or \(N\)) should be taken to guarantee that the error is smaller than a desired tolerance. The main goal of this manuscript is to investigate the corresponding error estimates for fixed values of \(T\) and \(N\).

Let us start with a heuristic discussion of why for the tests \(R^\sharp _T\) and \(R^\sharp _N\) one cannot easily find computable expressions for \(T\) or \(N\) that guarantee certain bounds on the statistical errors. As shown in [4, Lemma 3.13], for sufficiently large \(T\), we have the following asymptotic expansion under the null hypothesis \(\fancyscript{H}_0\):

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0}(R_T^\sharp )=\alpha +\alpha _1T^{-1/2}+O(T^{-1}). \end{aligned}$$

Hence, for \(T\) large enough, we have the estimate

$$\begin{aligned} \left| \mathbb {P}^{N,T}_{\theta _0}(R_T^\sharp )-\alpha \right| \le C_1T^{-1/2}, \end{aligned}$$

where \(C_1\) is a constant independent of \(T\). Similarly (cf. [4, Lemma 3.21]), we have the asymptotic expansions

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0}({R}_N^\sharp )&= \alpha +\widetilde{\alpha }_1M^{-1/2}+o(M^{-1/2}), \quad \hbox { if } \ \beta /d>1/2, \\ \mathbb {P}^{N,T}_{\theta _0}({R}_N^\sharp )&= \alpha + \left( \widetilde{\alpha }_1+\sqrt{\frac{2\beta /d+1}{c^{\beta }}} \widetilde{\alpha }_2\right) M^{-1/2}+o(M^{-1/2}), \quad \hbox { if } \ \beta /d=1/2. \end{aligned}$$

Since \(\lambda _k\sim k^{1/d}\), for \(\beta /d\ge 1/2\), we get

$$\begin{aligned} \left| \mathbb {P}^{N,T}_{\theta _0}( {R}_N^\sharp )-\alpha \right| \le C_2N^{-\beta /d-1/2}, \end{aligned}$$

where \(C_2\) is a constant independent of \(N\).

Due to the lack of knowledge of the behavior of the higher order terms in the above asymptotics, practically speaking, the constants \(C_1\) and \(C_2\) cannot be easily determined. The case of a large number of Fourier modes is especially intricate, since the asymptotic expansion of the Type I error is done in terms of \(M\) rather than \(N\). To overcome this technical problem, we propose a new test, which may not be asymptotically the most powerful, but which is convenient for error estimation. Moreover, we validate the obtained results by numerical simulations.

1.2 Sharp large deviation principle

The main results presented in this paper, and the ideas behind them, rely on the sharp large deviation bounds obtained in [4]. While the sharp deviation results for the large time asymptotics \(T\rightarrow \infty \) are comparable in certain respects with those for stochastic ODEs (cf. [2, 7, 8]), the results for a large number of Fourier modes \(N\rightarrow \infty \) are new, and by analogy we also refer to them as a sharp large deviation principle. For convenience, we briefly present some of the needed results here too.

Generally speaking, we seek asymptotic expansions of the form

$$\begin{aligned} T^{-1}\ln \mathbb {E}_{\theta }\left[ \exp \left( \epsilon \ln L(\theta _0,\theta _1,U_T^N)\right) \right] =\mathcal {L}(\epsilon )+T^{-1}\mathcal {H}(\epsilon )+T^{-1}\mathcal {R}(\epsilon ), \end{aligned}$$

for \(\theta =\theta _0\) or \(\theta = \theta _1\), where \(\mathcal {L}, \ \mathcal {H}\) are explicit functions of \(\epsilon , N, \theta _0,\theta _1\), and \(\mathcal {R}\) is a residual term. Similarly, we look for an asymptotic expansion of \(M^{-1}\ln \mathbb {E}_{\theta }\left[ \exp \left( \epsilon \ln L(\theta _0,\theta _1,U_T^N)\right) \right] \), while \(T\) is fixed. With these at hand, we find a convenient representation of the probabilities

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _j}\left( \ln L(\theta _0,\theta _1,U_T^N) \le (\hbox {or}\ge ) \varpi \right) \!, \quad j=0,1, \end{aligned}$$

where \(\varpi \) has the form \(\eta T\) or \(\eta M\) for some constant \(\eta \). Below we present the explicit expressions for the functions \(\mathcal {L},\mathcal {H},\mathcal {R}\). Although the formulas are somewhat cumbersome, their particular form is less important at this stage.

Along these lines, we adopt the notation

$$\begin{aligned} \mathcal {L}_T^j(\epsilon )&:= T^{-1}\ln \mathbb {E}_{\theta _j}\left[ \exp \left( \epsilon \ln L(\theta _0,\theta _1,U_T^N)\right) \right] \!, \\ \mathcal {L}_N^j(\epsilon )&:= M^{-1}\ln \mathbb {E}_{\theta _j}\left[ \exp \left( \epsilon \ln L(\theta _0,\theta _1,U_T^N)\right) \right] , \end{aligned}$$

for \(j=0,1\). The following expansions hold true

$$\begin{aligned} \mathcal {L}_T^j(\epsilon )&= M\mathcal {L}_j(\epsilon ) +T^{-1}N\mathcal {H}_j(\epsilon )+T^{-1}\mathcal {R}_j(\epsilon ), \end{aligned}$$
(1.6)
$$\begin{aligned} \mathcal {L}_N^j(\epsilon )&= T\mathcal {L}_j(\epsilon ) +NM^{-1}\mathcal {H}_j(\epsilon )+M^{-1}\mathcal {R}_j(\epsilon ), \end{aligned}$$
(1.7)

where \(\epsilon >-\frac{\theta _j^2}{\theta _1^2-\theta _0^2}\), and where

$$\begin{aligned} \mathcal {L}_j(\epsilon )&= \frac{1}{2}\left( \theta _j+(\theta _1-\theta _0)\epsilon - \sqrt{\theta _j^2+(\theta _1^2-\theta _0^2)\epsilon }\right) \!, \\ \mathcal {H}_j(\epsilon )&= -\frac{1}{2}\ln \left( \frac{1}{2}+\frac{1}{2}\mathcal {D}_j(\epsilon )\right) \!,\qquad \mathcal {D}_j(\epsilon )=\frac{\theta _j+(\theta _1 -\theta _0)\epsilon }{\sqrt{\theta _j^2+(\theta _1^2-\theta _0^2)\epsilon }},\\ \mathcal {R}_j(\epsilon )&= -\frac{1}{2}\sum _{k=1}^N\ln \left( 1+\frac{1-\mathcal {D}_j(\epsilon )}{1+\mathcal {D}_j(\epsilon )} \exp \left( -2\lambda _k^{2\beta }T\sqrt{\theta _j^2 +(\theta _1^2-\theta _0^2)\epsilon }\right) \right) \!. \end{aligned}$$
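These functions are elementary to implement; the Python sketch below (with the illustrative choice \(\lambda _k=k\)) also makes visible some built-in consistency checks: since \(\mathbb {E}_{\theta _0}[L]=\mathbb {E}_{\theta _1}[L^{-1}]=1\), all three functions must vanish at \(\epsilon =1\) for \(j=0\), at \(\epsilon =-1\) for \(j=1\), and at \(\epsilon =0\).

```python
from math import sqrt, log, exp

def D_j(j, eps, th0, th1):
    """D_j(eps) from the display above; valid for eps > -th_j^2/(th1^2 - th0^2)."""
    thj = (th0, th1)[j]
    return (thj + (th1 - th0) * eps) / sqrt(thj**2 + (th1**2 - th0**2) * eps)

def L_j(j, eps, th0, th1):
    thj = (th0, th1)[j]
    return 0.5 * (thj + (th1 - th0) * eps - sqrt(thj**2 + (th1**2 - th0**2) * eps))

def H_j(j, eps, th0, th1):
    return -0.5 * log(0.5 + 0.5 * D_j(j, eps, th0, th1))

def R_j(j, eps, th0, th1, T, N, beta=1.0):
    thj = (th0, th1)[j]
    d = D_j(j, eps, th0, th1)
    s = sqrt(thj**2 + (th1**2 - th0**2) * eps)
    # lambda_k = k is an illustrative one-dimensional choice
    return -0.5 * sum(log(1 + (1 - d) / (1 + d) * exp(-2 * k**(2 * beta) * T * s))
                      for k in range(1, N + 1))
```

Note also that \(\mathcal {R}_j\) decays exponentially fast in \(T\), which is why it is a genuinely residual term.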

Using these results, one can show that the following identities are satisfied,

$$\begin{aligned}&\mathbb {P}^{N,T}_{\theta _j}\left( (-1)^j\ln L(\theta _0,\theta _1,U_T^N)\ge (-1)^j\eta T\right) =A_T^jB_T^j, \end{aligned}$$
(1.8)
$$\begin{aligned}&\mathbb {P}^{N,T}_{\theta _j} \left( (-1)^j\ln L(\theta _0,\theta _1,U_T^N)\ge (-1)^j\eta M\right) =\widetilde{A}_N^j\widetilde{B}_N^j, \end{aligned}$$
(1.9)

with

$$\begin{aligned} A_T^j&= \exp \left[ T(\mathcal {L}_T^j(\epsilon _{\eta }^j)-\eta \epsilon _{\eta }^j)\right] \!,\qquad \widetilde{A}_N^j= \exp \left[ M(\mathcal {L}_N^j(\widetilde{\epsilon }_{\eta }^j) -\eta \widetilde{\epsilon }_{\eta }^j) \right] \!, \nonumber \\ B_T^j&= \mathbb {E}_T^j\left( \exp \left[ -\epsilon _{\eta }^j(\ln L(\theta _0,\theta _1,U_T^N)-\eta T)\right] \mathbb {1}_{\{(-1)^j\ln L(\theta _0,\theta _1,U_T^N)\ge (-1)^j\eta T\}}\right) \!,\nonumber \\ \widetilde{B}_N^j&= \mathbb {E}_N^j\left( \exp \left[ -\widetilde{\epsilon }_{\eta }^j(\ln L(\theta _0,\theta _1,U_T^N)-\eta M)\right] \mathbb {1}_{\{(-1)^j\ln L(\theta _0,\theta _1,U_T^N)\ge (-1)^j\eta M\}}\right) \!,\nonumber \\ \end{aligned}$$
(1.10)

where \(\eta \) is a number which may depend on \(T\) and \(N\), \(\epsilon _{\eta }^j\) and \(\widetilde{\epsilon }_{\eta }^j\) are numbers which depend on \(\eta \), \(\mathbb {E}_T^j\) and \(\mathbb {E}_N^j\) are the expectations under \(\mathbb {Q}_T^j\) and \(\mathbb {Q}_N^j\) respectively with

$$\begin{aligned} \frac{d\mathbb {Q}_T^j}{d\mathbb {P}^{N,T}_{\theta _j}}&= \exp \left( \epsilon _{\eta }^j\ln L(\theta _0,\theta _1,U_T^N)-T\mathcal {L}_T^j(\epsilon _{\eta }^j)\right) \!, \end{aligned}$$
(1.11)
$$\begin{aligned} \frac{d\mathbb {Q}_N^j}{d\mathbb {P}^{N,T}_{\theta _j}}&= \exp \left( \widetilde{\epsilon }_{\eta }^j\ln L(\theta _0,\theta _1,U_T^N)-M\mathcal {L}_N^j(\widetilde{\epsilon }_{\eta }^j)\right) \!. \end{aligned}$$
(1.12)

By taking \(\epsilon _{\eta }^j\) or \(\widetilde{\epsilon }_{\eta }^j\) such that \(M\mathcal {L}_j'(\epsilon _{\eta }^j)=\eta \) or \(T\mathcal {L}_j'(\widetilde{\epsilon }_{\eta }^j)=\eta \), for \(\eta <(\theta _1-\theta _0)M/2\) or \(\eta <(\theta _1-\theta _0)T/2\) respectively, we get

$$\begin{aligned} \epsilon _{\eta }^j&= \frac{(\theta _1^2-\theta _0^2)^2M^2-4\theta _j^2(-2\eta +(\theta _1-\theta _0)M)^2}{4(\theta _1^2-\theta _0^2)(-2\eta +(\theta _1-\theta _0)M)^2}, \end{aligned}$$
(1.13)
$$\begin{aligned} \widetilde{\epsilon }_{\eta }^j&= \frac{(\theta _1^2-\theta _0^2)^2T^2-4\theta _j^2(-2\eta +(\theta _1-\theta _0)T)^2}{4(\theta _1^2-\theta _0^2)(-2\eta +(\theta _1-\theta _0)T)^2}, \end{aligned}$$
(1.14)

and then by direct computations we find that

$$\begin{aligned} A_T^j&= \exp \left( -I_j(\eta )T\right) \exp \left[ N\mathcal {H}_j(\epsilon _{\eta }^j)+\mathcal {R}_j(\epsilon _{\eta }^j)\right] , \end{aligned}$$
(1.15)
$$\begin{aligned} \widetilde{A}_N^j&= \exp \left( -\widetilde{I}_j(\eta )M\right) \exp \left[ N\mathcal {H}_j(\widetilde{\epsilon }_{\eta }^j)+\mathcal {R}_j (\widetilde{\epsilon }_{\eta }^j)\right] , \end{aligned}$$
(1.16)

where

$$\begin{aligned} I_j(\eta )&= -\frac{(4\theta _j\eta +(-1)^j(\theta _1-\theta _0)^2M)^2}{8(2\eta -(\theta _1-\theta _0)M) (\theta _1^2-\theta _0^2)},\nonumber \\ \quad \widetilde{I}_j(\eta )&= -\frac{(4\theta _j\eta +(-1)^j(\theta _1-\theta _0)^2T)^2}{8(2\eta -(\theta _1-\theta _0)T)(\theta _1^2-\theta _0^2)}. \end{aligned}$$
(1.17)
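The saddle points (1.13) and the rate functions (1.17) are likewise simple closed-form expressions. The sketch below (hypothetical parameter values) verifies numerically that (1.13) solves \(M\mathcal {L}_j'(\epsilon _{\eta }^j)=\eta \), that \(\epsilon _{\eta }^0-\epsilon _{\eta }^1=1\) (a relation used again in Sect. 2), and that \(I_0\) vanishes at the left endpoint of the interval in (1.18).

```python
from math import sqrt

def L_j(j, eps, th0, th1):
    """L_j(eps) from the expansion above."""
    thj = (th0, th1)[j]
    return 0.5 * (thj + (th1 - th0) * eps - sqrt(thj**2 + (th1**2 - th0**2) * eps))

def eps_eta(j, eta, th0, th1, M):
    """Saddle point (1.13), valid for eta < (th1 - th0) * M / 2."""
    thj = (th0, th1)[j]
    g = -2 * eta + (th1 - th0) * M
    return (((th1**2 - th0**2)**2 * M**2 - 4 * thj**2 * g**2)
            / (4 * (th1**2 - th0**2) * g**2))

def I_j(j, eta, th0, th1, M):
    """Rate function (1.17) for the large-time regime."""
    sign = 1 if j == 0 else -1                     # (-1)^j
    return -((4 * (th0, th1)[j] * eta + sign * (th1 - th0)**2 * M)**2
             / (8 * (2 * eta - (th1 - th0) * M) * (th1**2 - th0**2)))
```

For instance, with \(\theta _0=1\), \(\theta _1=2\), \(M=30\) and \(\eta =3\), a central finite difference of \(M\mathcal {L}_0\) at \(\epsilon _{\eta }^0\) recovers \(\eta \) to high accuracy.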

Finally, also in [4], we derived the following large deviation principles for the considered SPDEs:

$$\begin{aligned}&\lim _{T\rightarrow \infty }T^{-1}\ln \mathbb {P}^{N,T}_{\theta _0}\left( T^{-1}\ln L(\theta _0,\theta _1,U_T^N)\ge \eta \right) =-I_0(\eta ),\nonumber \\&\quad \eta \in \left( -\frac{(\theta _1-\theta _0)^2}{4\theta _0}M, \frac{\theta _1-\theta _0}{2}M\right) \!, \end{aligned}$$
(1.18)
$$\begin{aligned}&\lim _{T\rightarrow \infty }T^{-1}\ln \mathbb {P}^{N,T}_{\theta _1}\left( T^{-1}\ln L(\theta _0,\theta _1,U_T^N)\ge \eta \right) =-I_1(\eta ),\nonumber \\&\quad \eta \in \left( \frac{(\theta _1-\theta _0)^2}{4\theta _1}M, \frac{\theta _1-\theta _0}{2}M\right) \!, \end{aligned}$$
(1.19)

It should be mentioned that in [4] the relations (1.6)–(1.19) were derived only under the alternative hypothesis \(\theta =\theta _1\); however, the corresponding results for \(\theta =\theta _0\) are obtained in a very similar manner. The main difference is that \(\theta _1\) in the PDE obtained via the Feynman–Kac formula is replaced by \(\theta _0\), but the method of solving it remains the same. Some parts of these derivations are technically involved, but we felt it unnecessary to reproduce them here.

2 The case of large times

Throughout this section, we assume that the number of Fourier modes \(N\) is fixed. Recall that, without loss of generality, we assume that \(\theta _1>\theta _0\) (the obtained results are symmetric otherwise). We still consider tests of the form \(R_T=\{U_T^N: L(\theta _0,\theta _1,U_T^N)\ge c_\alpha (T)\}\), but for the sake of convenience we write them equivalently as

$$\begin{aligned} R_T=\{U_T^N: \ln L(\theta _0,\theta _1,U_T^N)\ge \eta T\}, \end{aligned}$$
(2.1)

where, unless specified otherwise, \(\eta \) is an arbitrary number which may depend on \(N\) and \(T\). Our goal is to find a proper expression for \(\eta \) such that for \(T\) larger than a certain number, the Type I and II errors are always smaller than a chosen threshold. Clearly, we are looking for an \(\eta \) that is a bounded function of \(T\). Using the results on large deviations from Sect. 1.2, we first give an argument for how to derive a proper expression for \(\eta \), followed by the main results and their detailed proofs.

Following the large deviation principle (1.18), let us assume that \(\eta \) is such that

$$\begin{aligned} -\!\frac{(\theta _1-\theta _0)^2}{4\theta _0}M<\eta <\frac{\theta _1-\theta _0}{2}M. \end{aligned}$$
(2.2)

Then, we have that \(\epsilon _{\eta }^0>0\), and hence \(B_T^0\le 1.\) Consequently, in view of (1.8), to get an upper bound for the Type I error, it is enough to estimate \(A_T^0\). By (1.15), combined with (1.18), we note that \(\exp \left( -I_0(\eta )T\right) \) is the dominant term of the asymptotic expansion of the Type I error. Since we have an explicit expression for the residual part \(\exp \left[ N\mathcal {H}_0(\epsilon _{\eta }^0)+\mathcal {R}_0(\epsilon _{\eta }^0)\right] \), this suggests that if we simply set the dominant part equal to the significance level \(\alpha \), that is

$$\begin{aligned} \exp \left( -I_0(\eta )T\right) =\alpha , \end{aligned}$$
(2.3)

we may be able to control the Type I error by a much simpler function. Equation (2.3) has two solutions, and since \(\eta \) has to satisfy (2.2), we choose

$$\begin{aligned} \eta =-\frac{(\theta _1-\theta _0)^2}{4\theta _0}M +\frac{(\theta _1^2-\theta _0^2)\ln \alpha }{2\theta _0^2T}+ \frac{\theta _1^2-\theta _0^2}{2\theta _0^2} \sqrt{-\theta _0MT^{-1}\ln \alpha +T^{-2}\ln ^2\alpha }.\nonumber \\ \end{aligned}$$
(2.4)

Clearly \(\eta \) is a bounded function of \(T\). Moreover, \(\eta \) indeed satisfies (2.2), a point made clear by (2.6) below.
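As a numerical sanity check, one can verify that (2.4) indeed solves (2.3): plugging \(\eta \) into the rate function \(I_0\) of (1.17) must return exactly \(T^{-1}\ln (1/\alpha )\). A sketch with hypothetical values \(\theta _0=1\), \(\theta _1=2\), \(M=30\) (e.g. \(N=4\), \(\lambda _k=k\), \(\beta =1\)):

```python
from math import log, sqrt, exp

th0, th1, alpha, M, T = 1.0, 2.0, 0.05, 30.0, 10.0   # illustrative values
la = log(alpha)

# eta from (2.4)
eta = (-(th1 - th0)**2 / (4 * th0) * M
       + (th1**2 - th0**2) * la / (2 * th0**2 * T)
       + (th1**2 - th0**2) / (2 * th0**2)
       * sqrt(-th0 * M * la / T + la**2 / T**2))

# rate function I_0 from (1.17)
I0 = -((4 * th0 * eta + (th1 - th0)**2 * M)**2
       / (8 * (2 * eta - (th1 - th0) * M) * (th1**2 - th0**2)))

dominant = exp(-I0 * T)   # equals alpha, i.e. (2.3) holds
```

For these values the computed \(\eta \) also lies inside the interval (2.2), as claimed.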

Next we present the first main result of this paper that shows how large \(T\) has to be so that the Type I error is smaller than a given tolerance level.

Theorem 2.1

Assume that the rejection region has the form

$$\begin{aligned} R^0_T=\left\{ U^N_T: \ln L(\theta _0,\theta _1,U_T^N)\ge \eta T\right\} \!, \end{aligned}$$

where \(\eta \) is given by (2.4). If

$$\begin{aligned} T\ge \max \left\{ -\frac{256\theta _0\ln \alpha }{(\theta _1-\theta _0)^2M}, -\frac{16\ln \alpha }{\theta _0 M}, -\frac{4(1+\varrho )^2(\theta _1 -\theta _0)^2(N+1)^2\ln \alpha }{\varrho ^2(\theta _1+\theta _0)^2 M\theta _0}\right\} \!,\nonumber \\ \end{aligned}$$
(2.5)

then the Type I error satisfies the following bound

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0}\left( R^0_T\right) \le (1+\varrho )\alpha , \end{aligned}$$

where \(\varrho \) denotes a given threshold of error tolerance.

Proof

Let us consider

$$\begin{aligned} \Delta \eta :=\eta +\frac{(\theta _1-\theta _0)^2}{4\theta _0}M&= \frac{\theta _1^2-\theta _0^2}{2\theta _0^2}\frac{-\theta _0MT^{-1}\ln \alpha }{-T^{-1}\ln \alpha +\sqrt{-\theta _0MT^{-1}\ln \alpha +T^{-2}\ln ^2\alpha }}\nonumber \\&\le \frac{1}{2}(\theta _1^2-\theta _0^2)\sqrt{-\theta _0^{-3}M\ln \alpha }T^{-1/2}. \end{aligned}$$
(2.6)

Note that \(\Delta \eta >0\), which implies that \(\eta > -(\theta _1-\theta _0)^2M/4\theta _0\). Moreover, since \(\Delta \eta \rightarrow 0\), as \(T\rightarrow \infty \), we also have that \(\eta <(\theta _1-\theta _0)M/2\), for sufficiently large \(T\), and hence (2.2) is satisfied.

Substituting (2.4) into (1.13), by direct evaluations, we deduce

$$\begin{aligned} \epsilon _{\eta }^0&= \frac{2\theta _0(\theta _1^2-\theta _0^2)M\Delta \eta -4\theta _0^2\Delta \eta ^2}{(\theta _1^2-\theta _0^2)((\theta _1^2-\theta _0^2)M/(2\theta _0) -2\Delta \eta )^2}\le \frac{2\theta _0M\Delta \eta }{((\theta _1^2-\theta _0^2)M/(2\theta _0)-2\Delta \eta )^2}.\qquad \quad \,\, \end{aligned}$$
(2.7)

By (2.6) and (2.7), we conclude that, if

$$\begin{aligned} (\theta _1^2-\theta _0^2)\sqrt{-\theta _0^{-3}M\ln \alpha }T^{-1/2} \le (\theta _1^2-\theta _0^2)M/(4\theta _0), \end{aligned}$$
(2.8)

then we have the following estimate

$$\begin{aligned} 0<\epsilon _{\eta }^0\le \frac{32\theta _0^3\Delta \eta }{(\theta _1^2-\theta _0^2)^2M}\le \frac{16\sqrt{-\theta _0^3\ln \alpha }}{(\theta _1^2-\theta _0^2)\sqrt{M}}T^{-1/2}. \end{aligned}$$
(2.9)

A straightforward inspection of the derivative of \(\mathcal {D}_0(\epsilon )\) shows that \(\mathcal {D}_0(\epsilon )\) is decreasing for \(\epsilon <\frac{\theta _0}{\theta _1+\theta _0}\), and tends to 1 as \(\epsilon \rightarrow 0+\). Thus, using (2.9), if

$$\begin{aligned} \frac{16\sqrt{-\theta _0^3\ln \alpha }}{(\theta _1^2-\theta _0^2)\sqrt{M}}T^{-1/2}<\frac{\theta _0}{\theta _1+\theta _0}, \end{aligned}$$
(2.10)

then we can guarantee that \(0<\mathcal {D}_0(\epsilon _{\eta }^0)<1\). From here, under the assumption that (2.8) and (2.10) hold true, we have

$$\begin{aligned} \exp \left[ \mathcal {R}_0(\epsilon _{\eta }^0)\right] =\prod _{k=1}^N\left( 1+\frac{1-\mathcal {D}_0(\epsilon _{\eta }^0)}{1 +\mathcal {D}_0(\epsilon _{\eta }^0)} \exp \left( -2\lambda _k^{2\beta }T\sqrt{\theta _0^2 +(\theta _1^2-\theta _0^2) \epsilon _{\eta }^0}\right) \right) ^{-1/2}<1.\nonumber \\ \end{aligned}$$
(2.11)

Using the fact that \(\sqrt{1+x}<1+x/2\) for \(x>0\), we get

$$\begin{aligned} \mathcal {D}_0(\epsilon _{\eta }^0)\ge \frac{\theta _0 +(\theta _1-\theta _0)\epsilon _{\eta }^0}{\theta _0+(\theta _1^2-\theta _0^2) \epsilon _{\eta }^0/(2\theta _0)}. \end{aligned}$$

Therefore, under (2.8) and (2.10), we obtain

$$\begin{aligned} \mathcal {D}_0(\epsilon _{\eta }^0)-1&\ge -\frac{(\theta _1-\theta _0)^2}{2\theta _0 \left( \theta _0+(\theta _1^2-\theta _0^2)\epsilon _{\eta }^0/(2\theta _0)\right) } \epsilon _{\eta }^0 \ge -\frac{(\theta _1-\theta _0)^2}{2\theta ^{2}_{0} } \epsilon _{\eta }^0 \\&\ge -\frac{8(\theta _1-\theta _0)\sqrt{-\ln \alpha }}{\left( \theta _1+\theta _0\right) \sqrt{M{\theta _0}}}T^{-1/2}. \end{aligned}$$

From the above, and by means of the Bernoulli inequality, we continue

$$\begin{aligned} \exp \left[ N\mathcal {H}_0(\epsilon _{\eta }^0)\right]&= \left( 1+\frac{1}{2}\left( \mathcal {D}_0(\epsilon _{\eta }^0) -1\right) \right) ^{-N/2}\nonumber \\&\le \left( 1+\frac{1}{2}\left( \mathcal {D}_0(\epsilon _{\eta }^0) -1\right) \right) ^{-\lfloor (N+1)/2\rfloor }\nonumber \\&\le \left( 1+\frac{\lfloor (N+1)/2\rfloor }{2} \left( \mathcal {D}_0(\epsilon _{\eta }^0)-1\right) \right) ^{-1}\nonumber \\&\le \left( 1-\frac{2(N+1)(\theta _1-\theta _0)\sqrt{-\ln \alpha }}{\left( \theta _1+\theta _0\right) \sqrt{M\theta _{0}}}T^{-1/2}\right) ^{-1}. \end{aligned}$$
(2.12)

Note that the above inequalities hold true if all the terms in the parentheses are positive, for which it is enough to assume that

$$\begin{aligned} \frac{2(N+1)(\theta _1-\theta _0)\sqrt{-\ln \alpha }}{\left( \theta _1+\theta _0\right) \sqrt{M\theta _{0}}}T^{-1/2}<1. \end{aligned}$$
(2.13)

Recall that \(\epsilon _{\eta }^0>0\), and hence \(B_T^0\le 1\). Using (1.8) and (2.3), combined with (2.11) and (2.12), we conclude that

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0}\left( R^0_T\right) =A_T^0B_T^0\le \alpha \left( 1-\frac{2(N+1)(\theta _1-\theta _0)\sqrt{-\ln \alpha }}{\left( \theta _1+\theta _0\right) \sqrt{M\theta _{0}}}T^{-1/2}\right) ^{-1}. \end{aligned}$$

Thus, in order to make the Type I error satisfy the desired upper bound \(\mathbb {P}^{N,T}_{\theta _0}\left( R^0_T\right) \le (1+\varrho )\alpha \), it is sufficient to require that

$$\begin{aligned} T\ge -\frac{4(1+\varrho )^2(\theta _1 -\theta _0)^2(N+1)^2\ln \alpha }{\varrho ^2(\theta _1+\theta _0)^2M\theta _0}, \end{aligned}$$
(2.14)

under the assumption that (2.8), (2.10) and (2.13) hold true, which is guaranteed by the original assumption (2.5). This concludes the proof. \(\square \)
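Unlike the constants \(C_1, C_2\) of Sect. 1.1, the bound (2.5) is directly computable. The following sketch evaluates the minimal admissible \(T\) under the illustrative choice \(\lambda _k=k\), with hypothetical parameter values.

```python
from math import log

def T_min_type1(alpha, rho, th0, th1, N, beta=1.0):
    """Smallest T allowed by (2.5); lambda_k = k is an illustrative choice."""
    M = sum(k**(2 * beta) for k in range(1, N + 1))
    la = log(alpha)
    return max(-256 * th0 * la / ((th1 - th0)**2 * M),
               -16 * la / (th0 * M),
               -4 * (1 + rho)**2 * (th1 - th0)**2 * (N + 1)**2 * la
               / (rho**2 * (th1 + th0)**2 * M * th0))
```

For \(\theta _0=1\), \(\theta _1=2\), \(N=4\), \(\alpha =0.05\), \(\varrho =0.1\), the third term dominates, reflecting the \((N+1)^2/\varrho ^2\) price of controlling the correction factor in (2.12); all three terms scale linearly in \(-\ln \alpha \), so a stricter significance level directly lengthens the required observation window.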

Next we study the estimation of the Type II error in the same large time regime.

Theorem 2.2

Assume that the test \(R^0_T\) is given as in Theorem 2.1. If

$$\begin{aligned} T\ge \max \left\{ -\frac{16(\theta _1^2 +16\theta _0^2)\ln \alpha }{\theta _0(\theta _1 -\theta _0)^2M},-\frac{16\ln \alpha }{\theta _0 M}, -\frac{4(1+\varrho )^2(\theta _1 -\theta _0)^2(N+1)^2\ln \alpha }{\varrho ^2(\theta _1+\theta _0)^2{M\theta _{0}}}\right\} ,\nonumber \\ \end{aligned}$$
(2.15)

then the Type II error admits the following upper bound

$$\begin{aligned} 1-\mathbb {P}^{N,T}_{\theta _1}\left( R^0_T\right) \le (1+\varrho ) \exp \left( -\frac{(\theta _1-\theta _0)^2}{16\theta _0}MT\right) . \end{aligned}$$
(2.16)

Proof

Let \(\eta \) be as in (2.4). By direct evaluations, one can show that

$$\begin{aligned} \mathcal {H}_1(\epsilon _{\eta }^1)=\mathcal {H}_0(\epsilon _{\eta }^0),\qquad \mathcal {R}_1(\epsilon _{\eta }^1)=\mathcal {R}_0(\epsilon _{\eta }^0). \end{aligned}$$

Recall that, from the previous theorem, assuming that (2.5) holds true, we have that

$$\begin{aligned} \exp \left[ N\mathcal {H}_1(\epsilon _{\eta }^1)+\mathcal {R}_1(\epsilon _{\eta }^1)\right] = \exp \left[ N\mathcal {H}_0(\epsilon _{\eta }^0) +\mathcal {R}_0(\epsilon _{\eta }^0)\right] \le 1+\varrho . \end{aligned}$$
(2.17)

In view of (2.6) and (1.17), if we further require that

$$\begin{aligned} (\theta _1^2-\theta _0^2)\sqrt{-\theta _0^{-3}M\ln \alpha }T^{-1/2} \le \frac{(\theta _1^2-\theta _0^2)(\theta _1-\theta _0)}{4\theta _0\theta _1}M, \end{aligned}$$
(2.18)

it can be easily deduced that

$$\begin{aligned} \exp \left( -I_1(\eta )T\right) \le \exp \left( -\frac{(\theta _1 -\theta _0)^2}{16\theta _0}MT\right) . \end{aligned}$$
(2.19)

By (2.9), assuming that (2.10) holds true, we also have that

$$\begin{aligned} \epsilon _{\eta }^1=\epsilon _{\eta }^0-1<\frac{\theta _0}{\theta _1+\theta _0}-1<0, \end{aligned}$$

and hence

$$\begin{aligned} B_T^1=\mathbb {E}_T^1\left( \exp \left[ -\epsilon _{\eta }^1(\ln L(\theta _0,\theta _1,U_T^N)-\eta T)\right] \mathbb {1}_{\{\ln L(\theta _0,\theta _1,U_T^N)\le \eta T\}}\right) <1.\qquad \end{aligned}$$
(2.20)

Note that (1.8)–(1.15) imply that

$$\begin{aligned} 1-\mathbb {P}^{N,T}_{\theta _1}\left( R^0_T\right)&= \mathbb {P}^{N,T}_{\theta _1}\left( \ln L(\theta _0,\theta _1,U_T^N)\le \eta T\right) =A_T^1B_T^1\\&= \exp \left( -I_1(\eta )T\right) \exp \left[ N\mathcal {H}_1(\epsilon _{\eta }^1) +\mathcal {R}_1(\epsilon _{\eta }^1)\right] B_T^1. \end{aligned}$$

Therefore, (2.16) follows from (2.17), (2.19) and (2.20), under the assumption that (2.5) and (2.18) are satisfied, which is guaranteed by (2.15). This finishes the proof. \(\square \)
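For completeness, the bound (2.15) and the resulting Type II estimate (2.16) can be evaluated numerically in the same spirit. The sketch below is an illustrative helper under the same conventions (function names are ours; \(M=\sum _{k=1}^N\lambda _k^{2\beta }\) is passed in directly):

```python
import math

def time_threshold_type2(theta0, theta1, N, alpha, rho, M):
    """Lower bound (2.15) on T under which the Type II estimate (2.16)
    holds; M = sum of lambda_k^(2*beta) over the first N modes."""
    la = math.log(alpha)
    t1 = -16 * (theta1 ** 2 + 16 * theta0 ** 2) * la / (theta0 * (theta1 - theta0) ** 2 * M)
    t2 = -16 * la / (theta0 * M)
    t3 = (-4 * (1 + rho) ** 2 * (theta1 - theta0) ** 2 * (N + 1) ** 2 * la
          / (rho ** 2 * (theta1 + theta0) ** 2 * M * theta0))
    return max(t1, t2, t3)

def type2_bound(theta0, theta1, rho, M, T):
    """Right-hand side of (2.16): upper bound on the Type II error."""
    return (1 + rho) * math.exp(-(theta1 - theta0) ** 2 / (16 * theta0) * M * T)
```

As (2.16) suggests, the bound decays exponentially in \(MT\), so even moderate increases of \(T\) (or of the number of observed modes) sharpen it considerably.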

3 The case of large number of Fourier modes

In this section we study the error estimates for the case of a large number of Fourier modes \(N\), while the time horizon \(T\) is fixed. The key ideas and the method itself are similar to those developed in the previous section. We consider tests of the form

$$\begin{aligned} R_N=\{U_T^N: \ln L(\theta _0,\theta _1,U_T^N)\ge \zeta M\}, \end{aligned}$$
(3.1)

where \(\zeta \) is some number depending on \(N\) and \(T\), and where, as before, \(M:=\sum _{k=1}^N\lambda _k^{2\beta }\). The goal is to find \(\zeta \), as a bounded function of \(N\), that will allow us to control the statistical errors when the number of Fourier modes \(N\) is large.

Similarly to the large-time case, for \(\zeta >-\frac{(\theta _1-\theta _0)^2}{4\theta _0}T\), we have that \(\widetilde{\epsilon }_{\zeta }^0>0\), and hence \(\widetilde{B}_N^0\le 1\). Thus, it is enough to estimate \(\widetilde{A}_N^0\), and for the same reasons as in Sect. 2, we let \(\exp \left( -\widetilde{I}_0(\zeta )M\right) =\alpha \), and derive that the natural candidate for \(\zeta \) has the following form

$$\begin{aligned} \zeta =-\frac{(\theta _1-\theta _0)^2}{4\theta _0}T +\frac{(\theta _1^2-\theta _0^2) \ln \alpha }{2\theta _0^2M}+ \frac{\theta _1^2-\theta _0^2}{2\theta _0^2}\sqrt{-\theta _0TM^{-1}\ln \alpha + M^{-2}\ln ^2\alpha }.\nonumber \\ \end{aligned}$$
(3.2)
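The candidate threshold (3.2) is explicit and easy to evaluate; a minimal Python sketch (illustrative only, the function name is ours) reads:

```python
import math

def zeta_threshold(theta0, theta1, T, M, alpha):
    """Candidate threshold zeta of (3.2) for the test R_N^0, obtained by
    solving exp(-I_0(zeta) * M) = alpha; M = sum of lambda_k^(2*beta)."""
    la = math.log(alpha)
    d2 = theta1 ** 2 - theta0 ** 2
    return (-(theta1 - theta0) ** 2 / (4 * theta0) * T
            + d2 * la / (2 * theta0 ** 2 * M)
            + d2 / (2 * theta0 ** 2) * math.sqrt(-theta0 * T * la / M + la ** 2 / M ** 2))
```

One can check that the last two terms sum to a nonnegative quantity, so this \(\zeta \) indeed satisfies the condition \(\zeta >-\frac{(\theta _1-\theta _0)^2}{4\theta _0}T\) required above.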

Next we provide the result on how large \(N\) should be (for a fixed \(T\)) in order to guarantee that the Type I and Type II errors are smaller than a given tolerance level.

Theorem 3.1

Consider the test

$$\begin{aligned} R^0_N=\left\{ U^N_T: \ln L(\theta _0,\theta _1,U_T^N)\ge \zeta M\right\} \!, \end{aligned}$$

where \(\zeta \) is given by (3.2).

  1. (i)

    If

    $$\begin{aligned} M&\ge -\frac{16\ln \alpha }{\theta _0T}\max \left\{ \frac{16\theta _0^2}{(\theta _1 -\theta _0)^2},1\right\} \quad \hbox {and}\quad \frac{M}{(N+1)^2}\nonumber \\&\ge -\frac{4(1+\varrho )^2(\theta _1-\theta _0)^2\ln \alpha }{\varrho ^2(\theta _1+\theta _0)^2T\theta _{0}}, \end{aligned}$$
    (3.3)

    then the Type I error has the following upper bound estimate

    $$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0}\left( R^0_N\right) \le (1+\varrho )\alpha , \end{aligned}$$
    (3.4)

    where \(\varrho \) denotes a given threshold of error tolerance.

  2. (ii)

    If

    $$\begin{aligned} M&\ge -\frac{16\ln \alpha }{\theta _0 T} \max \left\{ \frac{(\theta _1^2+16\theta _0^2)}{(\theta _1-\theta _0)^2},1\right\} \quad \hbox {and}\quad \frac{M}{(N+1)^2}\nonumber \\&\ge -\frac{4(1+\varrho )^2(\theta _1-\theta _0)^2\ln \alpha }{\varrho ^2(\theta _1+\theta _0)^2T\theta _{0}}, \end{aligned}$$
    (3.5)

    we have the following estimate for Type II error

    $$\begin{aligned} 1-\mathbb {P}^{N,T}_{\theta _1}\left( R^0_N\right) \le (1+\varrho )\exp \left( -\frac{(\theta _1-\theta _0)^2}{16\theta _0^2}MT\right) . \end{aligned}$$
    (3.6)

The proof is similar to the proofs of Theorem 2.1 and Theorem 2.2, and we omit it here.

4 Numerical experiments

In this section we give a simple illustration of the theoretical results from the previous sections by means of numerical simulations. Besides showing the behavior of the Type I and Type II errors for the test \(R^0\) proposed in this paper, we also display the simulation results for the \(R^\sharp \) test mentioned in Sect. 1.1 and discussed in [4]. We start with a description of the numerical scheme used for simulation of trajectories of the solution (more precisely, of the Fourier modes), and provide a brief argument on the error estimates of the corresponding Monte Carlo experiments associated with this scheme. In the second part of the section, we focus on the numerical interpretation of the theoretical results obtained in Sects. 2 and 3.

We use the standard Euler–Maruyama scheme to numerically approximate the trajectories of the Fourier modes \(u_k(t)\) given by Eq. (1.2), and we apply the Monte Carlo method to estimate the Type I and Type II errors. We partition the time interval \([0,T]\) into \(n\) equally spaced time intervals \(0=t_0<t_1<\cdots <t_n=T\), with \(\Delta T=T/n = t_{i}-t_{i-1}\), for \(1\le i\le n\). Let \(m\) denote the number of trials in the Monte Carlo experiment for each Fourier mode. Assume that \(u_k^j(t_i)\) is the true value of the \(k\)-th Fourier mode at time \(t_i\) of the \(j\)-th trial in the Monte Carlo simulation. Then, for every \(1\le k\le N\), \(1\le j \le m\), we approximate \(u_k^j(t_i)\) according to the following recursion formula

$$\begin{aligned} \widetilde{u}_k^j(t_{i}) = \widetilde{u}_k^j(t_{i-1}) - \theta \lambda _k^{2\beta } \widetilde{u}_k^j(t_{i-1}) \Delta T + \sigma \lambda _k^{-\gamma } \xi _{k,i}^j,\quad \widetilde{u}_k^j(t_0)= u_k(0), \quad 1\le i \le n.\nonumber \\ \end{aligned}$$
(4.1)

where \(\xi _{k,i}^j\) are i.i.d. Gaussian random variables with zero mean and variance \(\Delta T\). In what follows, we will investigate how to approximate the Type I and Type II errors of \(R^0\) test using \(\widetilde{u}_k^j(t_{i})\)’s, and how the numerical errors are related to \(n\), \(m\), \(T\) and \(N\).
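The recursion (4.1) translates directly into code. The sketch below is a Python rendition (the paper's own experiments were done in MATLAB), assuming \(\lambda _k=k\) as in this section; the function name and interface are ours:

```python
import numpy as np

def simulate_modes(theta, N, T, n, beta=1.0, gamma=0.0, sigma=1.0, u0=0.0, rng=None):
    """Euler-Maruyama recursion (4.1) for the first N Fourier modes.

    Returns (u, xi): u has shape (N, n+1) with the simulated path of each
    mode, and xi has shape (N, n) with the Gaussian increments used
    (needed later to approximate the stochastic integrals).
    Assumes lambda_k = k."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    lam = np.arange(1, N + 1, dtype=float)
    xi = rng.normal(0.0, np.sqrt(dt), size=(N, n))   # i.i.d. N(0, dt)
    u = np.empty((N, n + 1))
    u[:, 0] = u0
    for i in range(1, n + 1):
        drift = -theta * lam ** (2 * beta) * u[:, i - 1] * dt
        u[:, i] = u[:, i - 1] + drift + sigma * lam ** (-gamma) * xi[:, i - 1]
    return u, xi
```

With \(\sigma =0\) the recursion reduces to the deterministic relation \(\widetilde{u}_k(t_i)=(1-\theta \lambda _k^{2\beta }\Delta T)\,\widetilde{u}_k(t_{i-1})\), which gives a simple sanity check of the implementation.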

Throughout this section we consider Eq. (1.1), and consequently (4.1), with \(\beta =1\), in one dimensional space \(d=1\), with the random forcing term being the space-time white noise \(\gamma =0, \ \sigma =1\). We also assume that the spatial domain is \(G=[0,\pi ]\) and the initial value is \(U_0=0\). In this case \(\lambda _k=k, \ k\in \mathbb {N}\). We fix the parameters of interest to be \(\theta _0=0.1\) and \(\theta _1=0.2\). The general case is treated analogously; the authors feel that a complete and detailed analysis of the numerical results is beyond the scope of the current publication. The numerical simulations presented here are intended to show a simple analysis of the proposed methods. We performed simulations for other sets of parameters, and the numerical results were in concordance with the theoretical ones. For example, for the case of large times, if one increases \(N\), then the statistical errors reach the threshold for smaller values of \(T\): more information improves the rate of convergence. Similarly, when increasing \(T\) in the case of asymptotics in \(N\), one needs to take fewer Fourier modes to bring the statistical errors below the threshold. Different ranges and magnitudes of the parameter of interest \(\theta \) were considered, and the outcomes are similar to those presented below. All simulations and computations are done in MATLAB and the source code is available from the authors upon request.

4.1 Description and analysis of the numerical experiments

Throughout, \(C\) denotes a constant whose value may vary from line to line, and whenever formulas or results are indexed by \(j\), we mean that they hold true for all \(1\le j\le m\). Using (1.3) and Itô's formula, we get

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0} (R_T^0)&= \mathbb {P}^{N,T}_{\theta _0} (\ln L(\theta _0,\theta _1,U_T^N)\ge \eta T) \nonumber \\&= \mathbb {P}^{N,T}_{\theta _0}\left( -\sum _{k=1}^N\lambda _k^{2\beta +2\gamma } \left( \int _0^Tu_k(t)du_k(t)\right. \right. \nonumber \\&\quad \left. \left. +\,\, \frac{\theta _1+\theta _0}{2\theta _0}\int _0^Tu_k \left( \sigma \lambda _k^{-\gamma }dw_k-du_k\right) \right) \ge \frac{\sigma ^{2}\eta T}{\theta _1-\theta _0}\right) \nonumber \\&= \mathbb {P}^{N,T}_{\theta _0}\left( \sum _{k=1}^N\lambda _k^{2\beta +2\gamma } \left( \frac{\theta _1-\theta _0}{2}\left( u_k^2(T)-\sigma ^2 \lambda _k^{-2\gamma }T\right) \right. \right. \nonumber \\&\quad \left. \left. -\,\, (\theta _1+\theta _0) \sigma \lambda _k^{-\gamma }\int _0^Tu_kdw_k\right) \ge \frac{2\theta _0\sigma ^{2}\eta T}{\theta _1-\theta _0}\right) \nonumber \\&= \mathbb {P}^{N,T}_{\theta _0}\left( \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{T}}X_T-Y_T/\sqrt{T}\ge \frac{2\theta _0 \sigma \Delta \eta }{\theta _1^2-\theta _0^2 }\sqrt{T}\right) , \end{aligned}$$
(4.2)

where \(\eta \) and \(\Delta \eta \) are given by (2.4) and (2.6) respectively, and

$$\begin{aligned} X_T:=\sum _{k=1}^N\lambda _k^{2\beta +2\gamma }u_k^2(T),\qquad Y_T:=\sum _{k=1}^N\lambda _k^{2\beta +\gamma }\int _0^Tu_kdw_k. \end{aligned}$$

We approximate \(X_T\) and \(Y_T\) as follows

$$\begin{aligned} \widetilde{X}_{n,T}^j:=\sum _{k=1}^N\lambda _k^{2\beta +2\gamma } \widetilde{u}_k^j(t_{n})^2,\qquad \widetilde{Y}_{n,T}^j:=\sum _{k=1}^N\lambda _k^{2\beta +\gamma }\sum _{i=1}^n \widetilde{u}_k^j(t_{i-1}) \xi _{k,i}^j. \end{aligned}$$

Define

$$\begin{aligned} \widetilde{R}_{n,T}^{0,j}:=\left\{ \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{T}}\widetilde{X}_{n,T}^j-\widetilde{Y}_{n,T}^j/\sqrt{T} \ge \frac{2\theta _0 \sigma \Delta \eta }{\theta _1^2-\theta _0^2 }\sqrt{T}\right\} \!. \end{aligned}$$

Then, naturally, the approximation of \(\mathbb {P}^{N,T}_{\theta _0}(R_T^0)\) is given by

$$\begin{aligned} \widetilde{\mathcal {P}}_{\theta _0}^{m,n,N,T}(R_T^0):= \frac{1}{m}\sum _{j=1}^m \mathbb {1}_{\widetilde{R}_{n,T}^{0,j}}. \end{aligned}$$
(4.3)
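The estimator (4.3) can then be sketched as follows, specializing to \(\beta =1\), \(\gamma =0\), \(\lambda _k=k\) as in this section. This is an illustrative Python helper of our own naming; the rejection threshold (the right-hand side of (4.2), which involves \(\Delta \eta \) from (2.6)) is passed in as a precomputed number rather than re-derived here:

```python
import numpy as np

def mc_type1_error(theta0, theta1, N, T, n, m, threshold, sigma=1.0, rng=None):
    """Monte Carlo estimator (4.3) of the Type I error of R_T^0.

    `threshold` is the precomputed right-hand side of (4.2), i.e.
    2*theta0*sigma*Delta_eta / (theta1^2 - theta0^2) * sqrt(T), with
    Delta_eta as in (2.6).  Specializes to beta = 1, gamma = 0,
    lambda_k = k, so that lambda_k^(2b+2g) = lambda_k^(2b+g) = k^2."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    lam = np.arange(1, N + 1, dtype=float)
    rejected = 0
    for _ in range(m):
        u = np.zeros(N)      # Euler-Maruyama state u-tilde(t_{i-1})
        Y = 0.0              # running approximation Y-tilde of Y_T
        for _i in range(n):
            xi = rng.normal(0.0, np.sqrt(dt), size=N)
            Y += np.sum(lam ** 2 * u * xi)          # uses u at t_{i-1}
            u = u - theta0 * lam ** 2 * u * dt + sigma * xi
        X = np.sum(lam ** 2 * u ** 2)               # X-tilde at t_n = T
        stat = ((theta1 - theta0) / (2 * sigma * (theta1 + theta0) * np.sqrt(T)) * X
                - Y / np.sqrt(T))
        rejected += int(stat >= threshold)
    return rejected / m
```

The simulation is run under the null \(\theta _0\), and the empirical frequency of the event \(\widetilde{R}_{n,T}^{0,j}\) approximates the Type I error, in line with (4.3).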

Following [1, Chapter 8], one can prove that

$$\begin{aligned} \mathbb {E}\left| \left( Y_T-\widetilde{Y}_{n,T}^j\right) \Bigg /\sqrt{T}\right| ^2=O(\Delta T),\qquad \mathbb {E}\left| X_T-\widetilde{X}_{n,T}^j\right| = O(\Delta T). \end{aligned}$$
(4.4)

Consequently, for any \(\epsilon >0\), we have

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0}\left( \widetilde{R}_{n,T}^{0,j}\right)&\le \mathbb {P}^{N,T}_{\theta _0}\left( \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{T}}X_T-Y_T/\sqrt{T}\ge \frac{2\theta _0 \sigma \Delta \eta }{\theta _1^2-\theta _0^2 }\sqrt{T}-\epsilon \right) \\&\quad +\,\,\mathbb {P}^{N,T}_{\theta _0}\left( \left| Y_T-\widetilde{Y}_{n,T}^j \right| /\sqrt{T}\ge \epsilon /2\right) \nonumber \\&\quad +\,\,\mathbb {P}^{N,T}_{\theta _0}\left( \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{T}}\left| X_T-\widetilde{X}_{n,T}^j\right| \ge \epsilon /2\right) . \end{aligned}$$

According to [4, Lemma 3.13], for large enough \(T\), the following estimate holds true

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0}\left( \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{T}}X_T-Y_T/\sqrt{T}\ge \frac{2\theta _0 \sigma \Delta \eta }{\theta _1^2-\theta _0^2 }\sqrt{T}-\epsilon \right) \le \mathbb {P}^{N,T}_{\theta _0} (R_T^0)(1+C\epsilon ). \end{aligned}$$

By the above results, and Chebyshev inequality, we conclude that

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0}\left( \widetilde{R}_{n,T}^{0,j}\right)&\le \mathbb {P}^{N,T}_{\theta _0} (R_T^0)(1+C\epsilon )+C\epsilon ^{-1} \mathbb {E}\left| X_T-\widetilde{X}_{n,T}^j\right| /\sqrt{T}\nonumber \\&\quad +\, C\epsilon ^{-2}\mathbb {E}\left| \left( Y_T-\widetilde{Y}_{n,T}^j\right) /\sqrt{T}\right| ^2. \end{aligned}$$

Similarly, we have that

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0}\left( \widetilde{R}_{n,T}^{0,j}\right)&\ge \mathbb {P}^{N,T}_{\theta _0} (R_T^0)(1-C\epsilon )-C\epsilon ^{-1}\mathbb {E}\left| X_T -\widetilde{X}_{n,T}^j\right| /\sqrt{T}\nonumber \\&\quad -\, C\epsilon ^{-2}\mathbb {E}\left| \left( Y_T-\widetilde{Y}_{n,T}^j\right) /\sqrt{T}\right| ^2. \end{aligned}$$

Combining the above two inequalities, we obtain that, for any \(\epsilon >0\),

$$\begin{aligned} \left| \mathbb {P}^{N,T}_{\theta _0}\left( \widetilde{R}_{n,T}^{0,j}\right) -\mathbb {P}^{N,T}_{\theta _0} (R_T^0) \right|&\le C \epsilon \mathbb {P}^{N,T}_{\theta _0} (R_T^0)+ C\epsilon ^{-1}\mathbb {E}\left| X_T-\widetilde{X}_{n,T}^j\right| /\sqrt{T} \\&\quad +\,\, C\epsilon ^{-2}\mathbb {E}\left| \left( Y_T-\widetilde{Y}_{n,T}^j\right) /\sqrt{T}\right| ^2. \end{aligned}$$

This implies that

$$\begin{aligned} \left| \mathbb {P}^{N,T}_{\theta _0}\left( \widetilde{R}_{n,T}^{0,j}\right) -\mathbb {P}^{N,T}_{\theta _0} (R_T^0) \right| \le C_0\Delta T^{1/3}, \end{aligned}$$
(4.5)

where \(C_0\) is a constant, which is small as long as \(\mathbb {P}^{N,T}_{\theta _0} (R_T^0)\) is small. It is straightforward to check that for large \(T\)

$$\begin{aligned} \hbox {Var}\left( \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{T}}X_T-Y_T/\sqrt{T}\right) \le C, \end{aligned}$$

where \(C\) is a constant independent of \(T\). From here and using (4.4), one can also show that

$$\begin{aligned} \hbox {Var}\left( \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{T}}\widetilde{X}_{n,T}^j -\widetilde{Y}_{n,T}^j/\sqrt{T}\right)&= \text {Var}\left( \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{T}}X_T-Y_T/\sqrt{T}\right) \\&\quad +\,O(\Delta T). \end{aligned}$$

This implies that the error of Monte Carlo simulations can be controlled by \(m^{-1/2}\) uniformly with respect to \(T\) and \(n\). Therefore, we have the following error estimate

$$\begin{aligned} \left| \widetilde{\mathcal {P}}_{\theta _0}^{m,n,N,T}(R_T^0)-\mathbb {P}^{N,T}_{\theta _0} (R_T^0)\right| \le C_1\Delta T^{1/3} + C_2 m^{-1/2}, \end{aligned}$$
(4.6)

which holds true with high probability (within the confidence interval of the Monte Carlo experiment). Here \(C_1\) is a constant which depends on \(\mathbb {P}^{N,T}_{\theta _0} (R_T^0)\) (usually small), and \(C_2\) is a constant which depends only on the confidence level of the Monte Carlo simulations. Thus, the estimator \(\widetilde{\mathcal {P}}_{\theta _0}^{m,n,N,T}(R_T^0)\) can be made arbitrarily close to the true value of \(\mathbb {P}^{N,T}_{\theta _0} (R_T^0)\) with arbitrarily high probability, as long as we take a small enough time step \(\Delta T\) and a large enough number of trials \(m\) in the Monte Carlo simulations.

To approximate the value of \(\mathbb {P}^{N,T}_{\theta _0} (R_T^\sharp )\), similarly to (4.2), we obtain

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0}&(R_T^\sharp )= \mathbb {P}^{N,T}_{\theta _0}\left( \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{T}}X_T-Y_T/\sqrt{T}\ge -\sigma q_\alpha \sqrt{M/2\theta _0}\right) \!, \end{aligned}$$

and we define

$$\begin{aligned} \widetilde{R}_{n,T}^{\sharp ,j}:=\left\{ \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{T}}\widetilde{X}_{n,T}^j-\widetilde{Y}_{n,T}^j/\sqrt{T} \ge -\sigma q_\alpha \sqrt{M/2\theta _0}\right\} \!. \end{aligned}$$

Then, the approximation of \(\mathbb {P}^{N,T}_{\theta _0}(R_T^\sharp )\) is given by

$$\begin{aligned} \widetilde{\mathcal {P}}_{\theta _0}^{m,n,N,T}(R_T^\sharp ):= \frac{1}{m}\sum _{j=1}^m \mathbb {1}_{\widetilde{R}_{n,T}^{\sharp ,j}}. \end{aligned}$$
(4.7)

Following the same proof we obtain error estimates similar to (4.6) for \(R_T^\sharp \).

Next we present some numerical results that validate relationship (4.6). In Table 1, we list simulation results of (4.3) for various values of the time step \(\Delta T\) (or number of time steps \(n\)), while keeping fixed the time horizon \(T=100\), the number of Monte Carlo trials \(m=20,000\), and the number of Fourier modes \(N=3\). For convenience, we present the same results in graphical form in Fig. 1.

Fig. 1

Type I error as a function of number of time steps \(n\). Graphical interpretation of Table 1

Table 1 Type I error for various time steps \(\Delta T\) (or number of time steps \(n\))

As shown in Fig. 1, the value of \(\widetilde{\mathcal {P}}_{\theta _0}^{m,n,N,T}(R_T^0)\), and respectively \(\widetilde{\mathcal {P}}_{\theta _0}^{m,n,N,T}(R_T^\sharp )\), rapidly decays (approximately up to the point when \(n=1000\) or \(\Delta T = 0.1\)), and then steadily approaches a certain ‘asymptotic level’, which, as suggested by (4.6), should be the true value of \(\mathbb {P}_{\theta _0}^{N,T}(R_{T}^0)\) (or \(\mathbb {P}_{\theta _0}^{N,T}(R_{T}^\sharp )\)). This assumes a reasonably large value of \(m\), in our case \(m=20,000\). When \(\Delta T\) gets smaller, we notice small fluctuations around that ‘asymptotic level’, which are errors induced by the Monte Carlo method, and one can increase the number of trials to locate that true value more precisely. In our case the fluctuations are negligible compared to the order of \(\alpha \).

Now we fix the time horizon \(T\), and vary the number of Fourier modes \(N\). Similarly to derivation of (4.2), we have

$$\begin{aligned} \mathbb {P}^{N,T}_{\theta _0} (R_N^0)= \mathbb {P}^{N,T}_{\theta _0}\left( \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{M}}X_T-Y_T/\sqrt{M}\ge \frac{2\theta _0 \sigma \Delta \zeta }{\theta _1^2-\theta _0^2 }\sqrt{M}\right) , \end{aligned}$$

where

$$\begin{aligned} \Delta \zeta =\frac{(\theta _1^2-\theta _0^2)\ln \alpha }{2\theta _0^2M}+ \frac{\theta _1^2-\theta _0^2}{2\theta _0^2}\sqrt{-\theta _0TM^{-1}\ln \alpha +M^{-2}\ln ^2\alpha }. \end{aligned}$$

Next, we define

$$\begin{aligned} \widetilde{R}_{n,N}^{0,j}:=\left\{ \frac{(\theta _1-\theta _0)}{2\sigma (\theta _1+\theta _0)\sqrt{M}}\widetilde{X}_{n,T}^j-\widetilde{Y}_{n,T}^j/\sqrt{M} \ge \frac{2\theta _0 \sigma \Delta \zeta }{\theta _1^2-\theta _0^2 }\sqrt{M}\right\} , \end{aligned}$$

and approximate the probability \(\mathbb {P}^{N,T}_{\theta _0}(R_N^0)\) by

$$\begin{aligned} \widetilde{\mathcal {P}}_{\theta _0}^{m,n,N,T}(R_N^0):= \frac{1}{m}\sum _{j=1}^m \mathbb {1}_{\widetilde{R}_{n,N}^{0,j}}. \end{aligned}$$
(4.8)

One can prove that for some \(\nu \ge 0\),

$$\begin{aligned} \mathbb {E}\left| \left( Y_T-\widetilde{Y}_{n,T}^j\right) /\sqrt{M}\right| ^2=O(N^{\nu }/n),\qquad \mathbb {E}\left| X_T-\widetilde{X}_{n,T}^j\right| = O(N^{\nu }/n). \end{aligned}$$
(4.9)

Following the same procedure as for large time asymptotics, we get

$$\begin{aligned} \left| \widetilde{\mathcal {P}}_{\theta _0}^{m,n,N,T}(R_N^0)-\mathbb {P}^{N,T}_{\theta _0} (R_N^0)\right| \le C_1N^{\nu /3}n^{-1/3} + C_2 m^{-1/2}, \end{aligned}$$
(4.10)

where \(C_1\) is a constant which depends on \(\mathbb {P}^{N,T}_{\theta _0} (R_N^0)\), and \(C_2\) is a constant which depends on the confidence level of Monte Carlo experiment.

Similar results are obtained for the approximation of \(\mathbb {P}^{N,T}_{\theta _0} (R_N^\sharp )\) and the Type II errors \(\mathbb {P}^{N,T}_{\theta _1} (R_N^0)\), \(\mathbb {P}^{N,T}_{\theta _1} (R_N^\sharp )\), \(\mathbb {P}^{N,T}_{\theta _1} (R_T^0)\) and \(\mathbb {P}^{N,T}_{\theta _1} (R_T^\sharp )\), and for brevity we will omit them here.

We conclude that the errors due to the numerical approximations considered above are negligible. Hence, the numerical methods we propose are suitable for our purposes of computing the statistical errors of the \(R_T^0\), \(R_T^\sharp \), \(R_N^0\) and \(R_N^\sharp \) tests, and we use them for the derivation of all numerical results in the next subsections.

4.2 Numerical tests for large times

We start with the case of large times \(T\) and fixed \(N\), and the results discussed in Sect. 2. We take \(N=3\), i.e. we observe one path of the first three Fourier modes of the solution \(u\) over some time interval \([0,T]\). For convenience, we denote by \(T_b^1\), and respectively \(T_b^2\), the lower bound thresholds for \(T\) from Theorem 2.1, relation (2.5), and respectively Theorem 2.2, relation (2.15). In Table 2, we list the Type I error \(\mathbb {P}_{\theta _0}^{N,T}\left( R_{T}^0\right) \), along with the corresponding values of \(T_b^1\), for various values of \(\alpha \). Note that for all values of \(\alpha \), the Type I error is smaller than the threshold \((1+\varrho )\alpha \), and, as expected, on the conservative side.

Table 2 \(T=T_b^1\) given by Theorem 2.1 and Type I error for various \(\alpha \)

In Table 3 we show that for \(T\ge T_b^1\), the error remains smaller than the chosen bound. In fact, the Type I error is decreasing as \(T\) gets larger, with all other parameters fixed.

Table 3 Type I error for various \(T\ge T_b^1\), with \(T_b^1\) as in Theorem 2.1

As already mentioned, the statistical test \(R^\sharp _T\) derived in [4], while asymptotically the most powerful in \(\mathcal {K}^\sharp _\alpha \), does not guarantee that the statistical errors stay below the threshold for a fixed finite \(T\); only asymptotically will they be smaller than \(\alpha \). Indeed, as Table 3 shows, the Type I error for \(R^\sharp _T\) fluctuates around \(\alpha =0.05\), with no pattern. That was the very reason we proposed the tests \(R^0\).

To illustrate the results from Theorem 2.2, and the behavior of the Type II error \(1-\mathbb {P}_{\theta _1}^{N,T}\left( R_{T}^0\right) \), one needs to look at very large values of \(T\), which is beyond our technical possibilities and the goal of this paper. We only give the results for some reasonably large values of \(T\); see Table 4. Note that indeed the Type II error decreases as the time \(T\) gets larger. Here we also show the corresponding results for the test \(R^\sharp _T\).

Table 4 Type II errors for various \(T\); Illustration of Theorem 2.2

4.3 Numerical tests for large number of Fourier modes

Now we perform a similar analysis by varying the number of Fourier modes \(N\), while the time horizon \(T=1\) is fixed. As mentioned above, the case of large \(N\) is much more delicate, and as it turns out, according to the numerical results presented in Table 5, the bounds for the statistical errors from Theorem 3.1 are on the conservative side. The decay of the errors obtained in our numerical simulations is much faster than suggested by the theoretical results, which from a practical point of view is a desirable feature.

Table 5 Type I errors for various \(N\); Theorem 3.1

5 Concluding remarks

On discrete sampling. Eventually, in real-life experiments, the random field would be measured/sampled on a discrete grid, both in the time and the spatial domain. It is true that the main results are based on continuous time sampling, and may appear to be mostly of theoretical interest. However, as argued in Sect. 4, the main ideas of this paper and [4] have a good prospect of being applied to the case of discrete sampling too. The error bounds of the numerical results presented herein contribute to the preliminary effort of studying the statistical inference problems for SPDEs in the discrete sampling framework. To the best of our knowledge, there are no results on statistical inference for SPDEs with fully discretely observed data (both in time and space). We outline here how to apply our results to discrete sampling, with rigorous proofs deferred to our future studies. If we assume that the first \(N\) Fourier modes are observed at some discrete time points, then, to apply the theory presented here, one essentially has to approximate some integrals, including some stochastic integrals, the convergence of each of which is well understood. Of course, the exact rates of convergence still need to be established. The connection between discrete observations in space and the approximation of the Fourier coefficients is more intricate. A natural way is to use the discrete Fourier transform for such approximations. While it is intuitively clear that increasing the number of observed spatial points will allow the computation of a larger number of Fourier coefficients, it is less obvious, in our opinion, how to prove consistency of the estimators, asymptotic normality, and the corresponding properties of the hypothesis testing problem.

On derivation of other tests. We want to mention that the (sharp) large deviations, appropriately used, can lead to other practically important families of tests. In fact, it is not difficult to observe that, if we take \(R_T\) with

$$\begin{aligned} \eta \in \left( -\frac{(\theta _1-\theta _0)^2}{4\theta _0}M,\frac{(\theta _1 -\theta _0)^2}{4\theta _1}M\right) , \end{aligned}$$

then both the Type I and Type II errors will go to zero, as \(T\rightarrow \infty \). Clearly, the motivation for doing this is to have both errors as small as possible. Moreover, for such \(\eta \) the statistical errors will go to zero exponentially fast. Of course, this will not be the most powerful test in the sense of [4], since such a choice of \(\eta \) will reduce the exponential rate of convergence of the Type II error. However, by shrinking the class of tests, one may ensure that \(R_T\) remains ‘asymptotically the most powerful’ in the new class. For example, once the asymptotic properties of the errors are well understood, one can consider a new class of tests of the form

$$\begin{aligned} \mathcal {K}_\alpha =\left\{ (R_T): \limsup _{T\rightarrow \infty }\left( T^{\alpha _2} \exp \left( I(\eta )T+\eta T\right) \mathbb {P}^{N,T}_{\theta _0}(R_T)-\alpha _0\right) T^{\alpha _3}\le \alpha _1\right\} \!, \end{aligned}$$

where \(\alpha _i\) (\(0\le i\le 3\)) are some parameters to be determined. Then, employing the same methodology as in [4], one can show that \(R_T\) is the most powerful in \(\mathcal {K}_\alpha \), with only slight modification of some technical results. Similar ideas can lead to corresponding results for \(N\rightarrow \infty \).

On composite hypothesis. Despite the fact that simple hypothesis testing problems are rarely used in practice, the efforts of this work, as well as those from [4], should be seen as a starting point of a systematic study of general hypothesis testing problems and goodness of fit tests for stochastic evolution equations in infinite dimensional spaces. As pointed out in [4], the development of an ‘asymptotic theory’ for the composite hypothesis testing problem will follow naturally, and consequently one can extend the results of this paper to the case of composite tests.