1 Introduction

This paper focuses on an important aspect of the study of regression models with correlated observations: the estimation of functional characteristics of the random noise. When considering this problem, the unknown parameter of the regression function becomes a nuisance parameter and complicates the analysis of the noise. To neutralise its presence, we must estimate the parameter and then build estimators, say, of the spectral density parameter of a stationary random noise using residuals, that is, the differences between the values of the observed process and the fitted regression function.

So, in the first step we employ the least squares estimator (LSE) of the unknown parameter of the nonlinear regression, chosen for its relative simplicity. Asymptotic properties of the LSE in nonlinear regression models have been studied by many authors. Numerous results on the subject can be found in the monographs by Ivanov and Leonenko (1989) and Ivanov (1997).

In the second step we use the residual periodogram to estimate the unknown parameter of the noise spectral density using the Whittle-type contrast process (Whittle 1951, 1953).

The results obtained to date on the Whittle minimum contrast estimator (MCE) form a well-developed theory that covers various mathematical models of stochastic processes and random fields. Some publications on the topic are Hannan (1970, 1973), Dunsmuir and Hannan (1976), Guyon (1982), Rosenblatt (1985), Fox and Taqqu (1986), Dahlhaus (1989), Heyde and Gay (1989, 1993), Giraitis and Surgailis (1990), Giraitis and Taqqu (1999), Gao et al. (2001), Gao (2004), Leonenko and Sakhno (2006), Bahamonde and Doukhan (2017), Ginovyan and Sahakyan (2017), Avram et al. (2010), Anh et al. (2004), Bai et al. (2016), Ginovyan et al. (2014), Giraitis et al. (2017).

In the article by Koul and Surgailis (2000) the asymptotic properties of the Whittle estimator of the spectral density parameters of strongly dependent random noise in a linear regression model were studied in a discrete-time setting.

In the paper by Ivanov and Prykhod’ko (2016), sufficient conditions for consistency and asymptotic normality of the Whittle estimator of the spectral density parameter of Gaussian stationary random noise in a continuous-time nonlinear regression model were obtained using the residual periodogram. The current paper continues this research, extending it to the case of Lévy-driven linear random noise and more general classes of regression functions, including trigonometric ones. We follow the scheme of the proof used in the Gaussian case (Ivanov and Prykhod’ko 2016) and some results of the papers Avram et al. (2010) and Anh et al. (2004). For linear random noise the proofs rely on essentially different types of limit theorems. In comparison with the Gaussian case this leads to special conditions on the linear Lévy-driven random noise and to new consistency and asymptotic normality conditions.

In the present publication a continuous-time model is considered. However, the results obtained can also be applied to discrete-time observations using statements such as Theorem 3 of Alodat and Olenko (2017) or Lemma 1 of Leonenko and Taufer (2006).

2 Setting

Consider a regression model

$$\begin{aligned} X(t)=g(t,\,\alpha _0)+\varepsilon (t),\ t\ge 0, \end{aligned}$$
(1)

where \(g{:}\;(-\gamma ,\,\infty )\times \mathcal {A}_\gamma \ \rightarrow \ \mathbb {R}\) is a continuous function, \(\mathcal {A}\subset \mathbb {R}^q\) is an open convex set, \(\mathcal {A}_\gamma =\bigcup \limits _{\Vert e\Vert \le 1}\left(\mathcal {A}+\gamma e\right)\), \(\gamma \) is some positive number, \(\alpha _0\in \mathcal {A}\) is the true value of the unknown parameter, and \(\varepsilon \) is the random noise described below.

Remark 1

The assumption about the domain \((-\gamma ,\,\infty )\) for the function g in t is of a technical nature and does not affect possible applications. This assumption makes it possible to formulate the condition \(\mathbf{N }_2\), which is used in the proof of Lemma 7.

Throughout the paper \((\Omega ,\,\mathcal {F},\,\hbox {P})\) denotes a complete probability space.

A Lévy process L(t), \(t\ge 0\), is a stochastic process with independent and stationary increments, continuous in probability, with sample paths which are right-continuous with left limits (càdlàg) and \(L(0)=0\). For a general treatment of Lévy processes we refer to Applebaum (2009) and Sato (1999).

Let \((a,\,b,\,\Pi )\) denote a characteristic triplet of the Lévy process L(t), \(t\ge 0\), that is for all \(t\ge 0\)

$$\begin{aligned} \log \hbox {E}\exp \left\{ \mathrm {i}zL(t)\right\} =t\kappa (z) \end{aligned}$$

for all \(z\in \mathbb {R}\), where

$$\begin{aligned} \kappa (z)=\mathrm {i}az-\frac{1}{2}bz^2 +\int \limits _{\mathbb {R}}\,\left(e^{\mathrm {i}zu}-1-\mathrm {i}z\tau (u)\right)\Pi (du),\ z\in \mathbb {R}, \end{aligned}$$
(2)

where \(a\in \mathbb {R}\), \(b\ge 0\), and \(\tau \) is a fixed truncation function (e.g., \(\tau (u)=u\mathbb {I}_{[-1,\,1]}(u)\)).

The Lévy measure \(\Pi \) in (2) is a Radon measure on \(\mathbb {R}\backslash \{0\}\) such that \(\Pi (\{0\})=0\), and

$$\begin{aligned} \int \limits _{\mathbb {R}}\,\min (1,\,u^2)\Pi (du)<\infty . \end{aligned}$$

It is known that L(t) has finite pth moment for \(p>0\) (\(\hbox {E}|L(t)|^p<\infty \)) if and only if

$$\begin{aligned} \int \limits _{|u|\ge 1}\,|u|^p\Pi (du)<\infty , \end{aligned}$$

and L(t) has finite pth exponential moment for \(p>0\) (\(\hbox {E}\left[e^{pL(t)}\right]<\infty \)) if and only if

$$\begin{aligned} \int \limits _{|u|\ge 1}\,e^{pu}\Pi (du)<\infty , \end{aligned}$$
(3)

see, e.g., Sato (1999), Theorem 25.3.

If L(t), \(t\ge 0\), is a Lévy process with characteristics \((a,\,b,\,\Pi )\), then the process \(-L(t)\), \(t\ge 0\), modified to be càdlàg, is also a Lévy process with characteristics \((-a,\,b,\,\tilde{\Pi })\), where \(\tilde{\Pi }(A)=\Pi (-A)\) for each Borel set A (Anh et al. 2002).

We introduce a two-sided Lévy process L(t), \(t\in \mathbb {R}\), defined for \(t<0\) as an independent copy of \(-L(-t)\).

Let \(\hat{a}\,:\,\mathbb {R}\rightarrow \mathbb {R}_+\) be a measurable function. We consider the Lévy-driven continuous-time linear (or moving average) stochastic process

$$\begin{aligned} \varepsilon (t)=\int \limits _{\mathbb {R}}\, \hat{a}(t-s)dL(s),\ t\in \mathbb {R}. \end{aligned}$$
(4)

For a causal process (4), \(\hat{a}(t)=0\) for \(t<0\).

In the sequel we assume that

$$\begin{aligned} \hat{a}\in L_1(\mathbb {R})\cap L_2(\mathbb {R})\ \text {or}\ \hat{a}\in L_2(\mathbb {R}) \ \text {with}\ \hbox {E}L(1)=0. \end{aligned}$$
(5)

Under the condition (5) and

$$\begin{aligned} \int \limits _{\mathbb {R}}\,u^2\Pi (du)<\infty , \end{aligned}$$

the stochastic integral in (4) is well-defined in \(L_2(\Omega )\) in the sense of stochastic integration introduced in Rajput and Rosinski (1989).

The popular choices for the kernel in (4) are Gamma-type kernels (a simulation sketch is given after the list):

  • \(\hat{a}(t)=t^\alpha e^{-\lambda t}\mathbb {I}_{[0,\,\infty )}(t)\), \(\lambda >0\), \(\alpha >-\frac{1}{2}\);

  • \(\hat{a}(t)=e^{-\lambda t}\mathbb {I}_{[0,\,\infty )}(t)\), \(\lambda >0\) (Ornstein-Uhlenbeck process);

  • \(\hat{a}(t)=e^{-\lambda |t|}\), \(\lambda >0\) (well-balanced Ornstein-Uhlenbeck process).
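
To illustrate the model (4), the following minimal sketch (Python; the function names, the driving compound Poisson process and the Riemann-sum discretisation are illustrative assumptions, not part of the theory) simulates a causal moving average with the Ornstein-Uhlenbeck kernel \(\hat{a}(t)=e^{-\lambda t}\mathbb {I}_{[0,\,\infty )}(t)\) driven by a centered compound Poisson Lévy process.

```python
import numpy as np

def simulate_ou_noise(T=200.0, dt=0.01, lam=1.0, rate=2.0, seed=None):
    """Crude Riemann-sum approximation of eps(t) = int_{-inf}^{t} exp(-lam*(t-s)) dL(s),
    where L is a centered compound Poisson process (N(0,1) jumps, intensity `rate`).
    Only an illustration of the moving-average form (4)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    # increments of the centered compound Poisson Levy process on [t_j, t_j + dt)
    counts = rng.poisson(rate * dt, size=n)
    dL = np.array([rng.normal(size=c).sum() for c in counts])  # E L(1) = 0
    t = np.arange(n) * dt
    kernel = np.exp(-lam * t)                 # causal kernel a_hat(u), u >= 0
    # eps(t_k) ~ sum_{j <= k} a_hat(t_k - t_j) dL_j  (discrete causal convolution)
    eps = np.convolve(dL, kernel)[:n]
    return t, eps

if __name__ == "__main__":
    t, eps = simulate_ou_noise(seed=1)
    print(eps[:5])
```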

\(\mathbf A _1\). The process \(\varepsilon \) in (1) is a measurable causal linear process of the form (4), where the two-sided Lévy process L is such that \(\hbox {E}L(1)=0\), \(\hat{a}\in L_1(\mathbb {R})\cap L_2(\mathbb {R})\). Moreover, the Lévy measure \(\Pi \) of L(1) satisfies (3) for some \(p>0\).

From the condition \(\mathbf{A }_1\) it follows (Anh et al. 2002) that for any \(r\ge 1\)

$$\begin{aligned} \log \hbox {E}\exp \left\{ \mathrm {i}\sum \limits _{j=1}^r\,z_j\varepsilon (t_j)\right\} = \int \limits _{\mathbb {R}}\,\kappa \left(\sum \limits _{j=1}^r\,z_j\hat{a}\left(t_j-s\right)\right)ds. \end{aligned}$$
(6)

In turn, from (6) it can be seen that the stochastic process \(\varepsilon \) is stationary in the strict sense.

Denote by

$$\begin{aligned} \begin{aligned} m_r(t_1,\,\ldots ,\,t_r)&=\hbox {E}\varepsilon (t_1)\ldots \varepsilon (t_r),\\ c_r(t_1,\,\ldots ,\,t_r)&=\mathrm {i}^{-r}\left.\dfrac{\partial ^r}{\partial z_1\ldots \partial z_r}\,\log \hbox {E}\exp \left\{ \mathrm {i}\sum \limits _{j=1}^r\,z_j\varepsilon (t_j)\right\} \,\right|_{z_1 = \cdots =z_r = 0} \end{aligned} \end{aligned}$$

the moment and cumulant functions, respectively, of order \(r,\ r\ge 1\), of the process \(\varepsilon \). Thus \(m_2(t_1,\,t_2)=B(t_1-t_2)\), where

$$\begin{aligned} B(t)=d_2\int \limits _{\mathbb {R}}\,\hat{a}(t+s)\hat{a}(s)ds,\ t\in \mathbb {R}, \end{aligned}$$

is a covariance function of \(\varepsilon \), and the fourth moment function

$$\begin{aligned} \begin{aligned} m_4(t_1,\,t_2,\,t_3,\,t_4)&=c_4(t_1,\,t_2,\,t_3,\,t_4)+m_2(t_1,\,t_2)m_2(t_3,\,t_4)\\&\ \quad +\,m_2(t_1,\,t_3)m_2(t_2,\,t_4)+m_2(t_1,\,t_4)m_2(t_2,\,t_3). \end{aligned} \end{aligned}$$
(7)

The explicit expression for cumulants of the stochastic process \(\varepsilon \) can be obtained from (6) by direct calculations:

$$\begin{aligned} c_r(t_1,\,\ldots ,\,t_r)=d_r\int \limits _{\mathbb {R}}\,\prod \limits _{j=1}^r\,\hat{a}\left(t_j-s\right)ds, \end{aligned}$$
(8)

where \(d_r\) is the rth cumulant of the random variable L(1). In particular,

$$\begin{aligned} d_2=\hbox {E}L^2(1)=-\kappa ^{(2)}(0),\ \ \ d_4=\hbox {E}L^4(1) - 3\left(\hbox {E}L^2(1)\right)^2. \end{aligned}$$
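
For instance, if \(L(t)=N(t)-\mu t\), where N is a Poisson process of intensity \(\mu >0\), then all cumulants of L(1) of order \(r\ge 2\) are equal to \(\mu \), so that

$$\begin{aligned} d_2=d_4=\mu ,\ \ \ \dfrac{d_4}{d_2^2}=\dfrac{1}{\mu }. \end{aligned}$$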

Under the condition \(\mathbf{A }_1\), the spectral densities of the stationary process \(\varepsilon \) of all orders exist and can be obtained from (8) as

$$\begin{aligned} f_r(\lambda _1,\,\ldots ,\,\lambda _{r-1})=(2\pi )^{-r+1}d_r\cdot a\left(-\sum \limits _{j=1}^{r-1}\,\lambda _j\right)\cdot \prod _{j=1}^{r-1}\,a(\lambda _j), \end{aligned}$$
(9)

where \(a\in L_2(\mathbb {R})\), \(a(\lambda )=\int \limits _{\mathbb {R}}\,\hat{a}(t)e^{-\mathrm {i}\lambda t}dt\), \(\lambda \in \mathbb {R}\), provided the complex-valued functions \(f_r\in L_1\left(\mathbb {R}^{r-1}\right)\), \(r>2\); see, e.g., Avram et al. (2010) for definitions of the spectral densities of higher order \(f_r,\ r\ge 3\).

For \(r=2\), we denote the spectral density of the second order by

$$\begin{aligned} f(\lambda )=f_2(\lambda )=(2\pi )^{-1}d_2a(\lambda )a(-\lambda )=(2\pi )^{-1}d_2\left|a(\lambda )\right|^2. \end{aligned}$$
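
For example, for the Ornstein-Uhlenbeck kernel \(\hat{a}(t)=e^{-\lambda _0t}\mathbb {I}_{[0,\,\infty )}(t)\), \(\lambda _0>0\), one has \(a(\lambda )=\left(\lambda _0+\mathrm {i}\lambda \right)^{-1}\) and

$$\begin{aligned} f(\lambda )=\dfrac{d_2}{2\pi \left(\lambda _0^2+\lambda ^2\right)},\ \lambda \in \mathbb {R}, \end{aligned}$$

so that in the parametrisation of the condition \(\mathbf{A }_2\)(ii) below one may take \(\theta ^{(1)}=\lambda _0\) and let \(\theta ^{(2)}\) collect the parameters of L that determine \(d_2\).
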
\(\mathbf A _2\). (i):

Spectral densities (9) of all orders \(f_r\in L_1(\mathbb {R}^{r-1})\), \(r\ge 2\);

(ii):

\(a(\lambda )=a\left(\lambda ,\,\theta ^{(1)}\right)\), \(d_2=d_2\left(\theta ^{(2)}\right)\), \(\theta =\left(\theta ^{(1)},\,\theta ^{(2)}\right)\in \Theta _\tau \), \(\Theta _\tau =\bigcup \limits _{\Vert e\Vert < 1}(\Theta +\tau e)\), \(\tau >0\) is some number, \(\Theta \subset \mathbb {R}^m\) is a bounded open convex set, that is \(f(\lambda )=f(\lambda ,\,\theta )\), \(\theta \in \Theta _\tau \), and the true value of the parameter is \(\theta _0\in \Theta \);

(iii):

\(f(\lambda ,\,\theta )>0\), \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\).

In the condition \(\mathbf{A }_2\)(ii) above, \(\theta ^{(1)}\) represents the parameters of the kernel \(\hat{a}\) in (4), while \(\theta ^{(2)}\) represents the parameters of the Lévy process.

Remark 2

The last part of the condition \(\mathbf{A }_1\) is fully used in the proof of Lemma 5 and Theorem B.1 in “Appendix B”. The condition \(\mathbf{A }_2\)(i) is fully used just in the proof of Lemma 5. When we refer to these conditions in other places of the text we use them partially: see, for example, Lemma 3, where we need only the existence of \(f_4\).

Definition 1

The least squares estimator (LSE) of the parameter \(\alpha _0\in \mathcal {A}\) obtained by observations of the process \(\left\{ X(t),\ t\in [0,T]\right\} \) is said to be any random vector \(\widehat{\alpha }_T=(\widehat{\alpha }_{1T},\,\ldots ,\,\widehat{\alpha }_{qT})\in \mathcal {A}^c\) (\(\mathcal {A}^c\) is the closure of \(\mathcal {A}\)), such that

$$\begin{aligned} S_T\left(\widehat{\alpha }_T\right)= \min \limits _{\alpha \in \mathcal {A}^c}\,S_T(\alpha ),\ S_T(\alpha )=\int \limits _0^T\,\left(X(t)-g(t,\,\alpha )\right)^2dt. \end{aligned}$$

We consider the residual periodogram

$$\begin{aligned} I_T(\lambda ,\,\widehat{\alpha }_T)=(2\pi T)^{-1}\left|\int \limits _0^T\,\left(X(t)-g(t,\,\widehat{\alpha }_T)\right) e^{-\mathrm {i}t\lambda }dt\right|^2,\ \lambda \in \mathbb {R}, \end{aligned}$$

and the Whittle contrast field

$$\begin{aligned} U_T(\theta ,\,\widehat{\alpha }_T)=\int \limits _{\mathbb {R}}\,\left(\log f(\lambda ,\,\theta ) +\dfrac{I_T\left(\lambda ,\,\widehat{\alpha }_T\right)}{f(\lambda ,\,\theta )}\right)w(\lambda )d\lambda ,\ \theta \in \Theta ^c, \end{aligned}$$
(10)

where \(w(\lambda ),\ \lambda \in \mathbb {R}\), is an even nonnegative bounded Lebesgue measurable function, for which the integral (10) is well-defined. The existence of integral (10) follows from the condition \(\mathbf{C }_4\) introduced below.

Definition 2

The minimum contrast estimator (MCE) of the unknown parameter \(\theta _0\in \Theta \) is said to be any random vector \(\widehat{\theta }_T=\left(\widehat{\theta }_{1T},\ldots ,\widehat{\theta }_{mT}\right)\) such that

$$\begin{aligned} U_T\left(\widehat{\theta }_T,\,\widehat{\alpha }_T\right)=\min \limits _{\theta \in \Theta ^c}\,U_T\left(\theta ,\widehat{\alpha }_T\right). \end{aligned}$$

The minimum in Definition 2 is attained due to the continuity of the integral (10) in \(\theta \in \Theta ^c\), which follows from the condition \(\mathbf{C }_4\) introduced below.
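
For orientation, the two-step procedure of Definitions 1 and 2 can be sketched numerically as follows (Python; the regression function, the Ornstein-Uhlenbeck-type spectral density, the weight w, and the replacement of the integrals over [0, T] and \(\mathbb {R}\) by sums on finite grids are illustrative assumptions, not part of the theory).

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative regression function g(t, alpha) and spectral density f(lambda, theta);
# both are assumptions made only for this sketch.
def g(t, alpha):
    return alpha[0] * np.exp(-alpha[1] * t)

def f_spec(lam, theta):
    return theta[1] / (2.0 * np.pi * (theta[0] ** 2 + lam ** 2))

def lse(t, x, alpha_init):
    """Step 1 (Definition 1): minimise S_T(alpha) on a time grid."""
    S_T = lambda a: np.trapz((x - g(t, a)) ** 2, t)
    return minimize(S_T, alpha_init, method="Nelder-Mead").x

def residual_periodogram(t, resid, lam_grid):
    """I_T(lambda, alpha_hat) evaluated on a frequency grid from the residuals."""
    T = t[-1] - t[0]
    ft = np.array([np.trapz(resid * np.exp(-1j * lam * t), t) for lam in lam_grid])
    return np.abs(ft) ** 2 / (2.0 * np.pi * T)

def whittle_mce(t, resid, lam_grid, w, theta_init):
    """Step 2 (Definition 2): minimise a discretised contrast field (10)."""
    I_T = residual_periodogram(t, resid, lam_grid)
    def U_T(theta):
        f = f_spec(lam_grid, theta)
        return np.trapz((np.log(f) + I_T / f) * w, lam_grid)
    return minimize(U_T, theta_init, method="Nelder-Mead").x

# usage with observations x of X(t) on the grid t:
# alpha_hat = lse(t, x, np.array([1.0, 0.5]))
# theta_hat = whittle_mce(t, x - g(t, alpha_hat), lam_grid, w, np.array([1.0, 1.0]))
```

In this sketch `lse` realises Definition 1 on a time grid, and `whittle_mce` minimises a discretised version of the contrast field (10) computed from the residual periodogram.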

3 Consistency of the minimum contrast estimator

Suppose the function \(g(t,\,\alpha )\) in (1) is continuously differentiable with respect to \(\alpha \in \mathcal {A}^c\) for any \(t\ge 0\), and its derivatives \(g_i(t,\,\alpha )=\dfrac{\partial }{\partial \alpha _i}g(t,\,\alpha )\), \(i=\overline{1,q}\), are locally integrable with respect to t. Let

$$\begin{aligned} d_T(\alpha )=\hbox {diag}\Bigl (d_{iT}(\alpha ),\ i=\overline{1,q}\Bigr ),\ d_{iT}^2(\alpha )=\int \limits _0^T\,g_i^2(t,\,\alpha )dt, \end{aligned}$$

and \(\underset{T\rightarrow \infty }{\liminf }\,T^{-\frac{1}{2}}d_{iT}(\alpha )>0\), \(i=\overline{1,q}\), \(\alpha \in \mathcal {A}\).

Set

$$\begin{aligned} \Phi _T(\alpha _1,\,\alpha _2)=\int \limits _0^T\, (g(t,\,\alpha _1)-g(t,\,\alpha _2))^2dt,\ \alpha _1,\,\alpha _2\in \mathcal {A}^c. \end{aligned}$$

We assume that the following conditions are satisfied.

\(\mathbf{C }_1\).:

The LSE \(\widehat{\alpha }_T\) is a weakly consistent estimator of \(\alpha _0\in \mathcal {A}\) in the sense that

$$\begin{aligned} T^{-\frac{1}{2}}d_T(\alpha _0)\left(\widehat{\alpha }_T-\alpha _0\right)\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \text {as}\ T\rightarrow \infty . \end{aligned}$$
\(\mathbf{C }_2\).:

There exists a constant \(c_0<\infty \) such that for any \(\alpha _0\in \mathcal {A}\) and \(T>T_0\), where \(c_0\) and \(T_0\) may depend on \(\alpha _0\),

$$\begin{aligned} \Phi _T(\alpha ,\,\alpha _0)\le c_0\Vert d_T(\alpha _0)\left(\alpha -\alpha _0\right)\Vert ^2,\ \alpha \in \mathcal {A}^c. \end{aligned}$$

The fulfillment of the conditions C\(_1\) and C\(_2\) is discussed in more detail in “Appendix A”. We also need three more conditions.

\(\mathbf{C }_3\).:

\(f(\lambda ,\,\theta _1)\ne f(\lambda ,\,\theta _2)\) on a set of positive Lebesgue measure once \(\theta _1\ne \theta _2\), \(\theta _1,\theta _2\in \Theta ^c\).

\(\mathbf{C }_4\).:

The functions \(w(\lambda )\log f(\lambda ,\,\theta )\), \(\dfrac{w(\lambda )}{f(\lambda ,\,\theta )}\) are continuous with respect to \(\theta \in \Theta ^c\) almost everywhere in \(\lambda \in \mathbb {R}\), and

(i):

\(w(\lambda )\left|\log f(\lambda ,\,\theta )\right|\le Z_1(\lambda )\), \(\theta \in \Theta ^c\), almost everywhere in \(\lambda \in \mathbb {R}\), and \(Z_1(\cdot )\in L_1(\mathbb {R})\);

(ii):

\(\sup \limits _{\lambda \in \mathbb {R},\,\theta \in \Theta ^c}\,\dfrac{w(\lambda )}{f(\lambda ,\,\theta )} =c_1<\infty \).

\(\mathbf{C }_5\).:

There exists an even positive Lebesgue measurable function \(v(\lambda )\), \(\lambda \in \mathbb {R}\), such that

(i):

\(\dfrac{v(\lambda )}{f(\lambda ,\,\theta )}\) is uniformly continuous in \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\);

(ii):

\(\sup \limits _{\lambda \in \mathbb {R}}\,\dfrac{w(\lambda )}{v(\lambda )}<\infty \).

Theorem 1

Under conditions \(\mathbf{A }_1, \mathbf{A }_2, \mathbf{C }_1\)–\(\mathbf{C }_5\), \(\widehat{\theta }_T\ \overset{\hbox {P}}{\longrightarrow }\ \theta _0\), as \(T\rightarrow \infty \).

To prove the theorem we need some additional assertions.

Lemma 1

Under condition \(\mathbf{A }_1\)

$$\begin{aligned} \nu ^*_T=T^{-1}\int \limits _0^T\,\varepsilon ^2(t)dt\ \overset{\hbox {P}}{\longrightarrow }\ B(0),\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$

Proof

For any \(\rho >0\) by Chebyshev inequality and (7)

$$\begin{aligned} \begin{aligned} \hbox {P}\left\{ \left|\nu ^*_T-B(0)\right|\ge \rho \right\}&\le \rho ^{-2}T^{-2}\int \limits _0^T\int \limits _0^T\,c_4(t,t,s,s)dtds+ \\&\quad +\,2\rho ^{-2}T^{-2}\int \limits _0^T\int \limits _0^T\,B^2(t-s)dtds=I_1+I_2. \end{aligned} \end{aligned}$$

From \(\mathbf{A }_1\) it follows that \(I_2=O(T^{-1})\). Using expression (8) for cumulants of the process \(\varepsilon \) we get

$$\begin{aligned} \begin{aligned} I_1&= d_4\rho ^{-2}T^{-2}\int \limits _0^T\int \limits _0^T\int \limits _{\mathbb {R}}\,\hat{a}^2(t-u)\hat{a}^2(s-u)dudtds \\&=d_4\rho ^{-2}T^{-2}\int \limits _0^T\,\left(\int \limits _{\mathbb {R}}\,\hat{a}^2(t-u)\left(\int \limits _0^T\,\hat{a}^2(s-u)ds\right)du\right)dt \le d_4\rho ^{-2}\left\Vert\hat{a}\right\Vert_2^4T^{-1}, \end{aligned} \end{aligned}$$

where \(\left\Vert\hat{a}\right\Vert_2=\left(\int \limits _{\mathbb {R}}\,\hat{a}^2(u)du\right)^{\frac{1}{2}}\), that is \(I_1=O(T^{-1})\) as well. \(\square \)

Let

$$\begin{aligned} \begin{aligned} \mathrm {F}_T^{(k)}\left(u_1,\,\ldots ,\,u_k\right) =\mathrm {F}_T^{(k)}\left(u_1,\,\ldots ,\,u_{k-1}\right)&=(2\pi )^{-(k-1)}T^{-1} \int \limits _{[0,T]^k}\, e^{\mathrm {i}\sum \limits _{j=1}^kt_j u_j}dt_1\ldots dt_k\\&=(2\pi )^{-(k-1)}T^{-1}\prod \limits _{j=1}^k\, \dfrac{\sin \frac{Tu_j}{2}}{\frac{u_j}{2}}, \end{aligned} \end{aligned}$$

with \(u_k=-\left(u_1+\ldots +u_{k-1}\right)\), \(u_j\in \mathbb {R}\), \(j=\overline{1,k}\).

The functions \(\mathrm {F}_T^{(k)}\left(u_1,\ldots ,u_k\right)\), \(k\ge 3\), are multidimensional analogues of the Fejér kernel; for \(k=2\) we obtain the usual Fejér kernel.

The next statement is based on the results of Bentkus (1972a, b) and Bentkus and Rutkauskas (1973).

Lemma 2

Let the function \(G\left(u_1,\,\ldots ,\,u_k\right)\), \(u_k=-\left(u_1+\ldots +u_{k-1}\right)\), be bounded and continuous at the point \(\left(u_1,\,\ldots ,\,u_{k-1}\right)=(0,\,\ldots ,\,0)\). Then

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }\,\int \limits _{\mathbb {R}^{k-1}}\,\mathrm {F}_T^{(k)}\left(u_1,\,\ldots ,\,u_{k-1}\right) G\left(u_1,\,\ldots ,\,u_k\right)du_1\ldots du_{k-1}=G(0,\,\ldots ,\,0). \end{aligned}$$

We set

$$\begin{aligned} \begin{aligned} g_T(\lambda ,\,\alpha )&=\int \limits _0^T\,e^{-\mathrm {i}\lambda t}g(t,\,\alpha )dt,\quad s_T(\lambda ,\,\alpha )=g_T(\lambda ,\,\alpha _0)-g_T(\lambda ,\,\alpha ),\\ \varepsilon _T(\lambda )&=\int \limits _0^T\,e^{-\mathrm {i}\lambda t}\varepsilon (t)dt,\quad I_T^{\varepsilon }(\lambda )=(2\pi T)^{-1}\left|\varepsilon _T(\lambda )\right|^2, \end{aligned} \end{aligned}$$

and write the residual periodogram in the form

$$\begin{aligned} I_T\left(\lambda ,\,\widehat{\alpha }_T\right)=I_T^{\varepsilon }(\lambda )+(\pi T)^{-1}\hbox {Re}\left\{ \varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\right\} +(2\pi T)^{-1}\left|s_T(\lambda ,\,\widehat{\alpha }_T)\right|^2. \end{aligned}$$

Let \(\varphi =\varphi (\lambda ,\,\theta )\), \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\), be a weight function that is even and Lebesgue measurable with respect to the variable \(\lambda \) for each fixed \(\theta \). We have

$$\begin{aligned} J_T(\varphi ,\,\widehat{\alpha }_T)= & {} \int \limits _{\mathbb {R}}\,I_T(\lambda ,\,\widehat{\alpha }_T)\varphi (\lambda ,\,\theta )d\lambda = \int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda )\varphi (\lambda ,\,\theta )d\lambda \\&+\,(\pi T)^{-1}\int \limits _{\mathbb {R}}\,\hbox {Re}\left\{ \varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\right\} \varphi (\lambda ,\,\theta )d\lambda \\&+\,(2\pi T)^{-1} \int \limits _{\mathbb {R}}\,\left|s_T(\lambda ,\,\widehat{\alpha }_T)\right|^2 \varphi (\lambda ,\,\theta )d\lambda \\= & {} J_T^{\varepsilon }(\varphi )+J_T^{(1)}(\varphi )+J_T^{(2)}(\varphi ). \end{aligned}$$

Suppose

$$\begin{aligned} \varphi (\lambda ,\,\theta )\ge 0,\ \sup \limits _{\lambda \in \mathbb {R},\,\theta \in \Theta ^c}\,\varphi (\lambda ,\,\theta )= c(\varphi )<\infty . \end{aligned}$$
(11)

Then by the Plancherel identity, the Cauchy–Schwarz inequality and condition \(\mathbf{C }_2\),

$$\begin{aligned} \sup \limits _{\theta \in \Theta ^c}\,\left|J_T^{(1)}(\varphi )\right|\le 2c(\varphi )\left(\nu ^*_T\right)^{\frac{1}{2}} \left(T^{-1}\Phi _T\left(\widehat{\alpha }_T,\,\alpha _0\right)\right)^{\frac{1}{2}}\le 2c(\varphi )\left(c_0\nu ^*_T\right)^{\frac{1}{2}} \left\Vert T^{-\frac{1}{2}}d_T(\alpha _0)\left(\widehat{\alpha }_T-\alpha _0\right)\right\Vert . \end{aligned}$$

Taking into account conditions \(\mathbf{A }_1, \mathbf{C }_1, \mathbf{C }_2\) and the result of Lemma 1 we obtain

$$\begin{aligned} \sup \limits _{\theta \in \Theta ^c}\,\left|J_T^{(1)}(\varphi )\right|\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
(12)

On the other hand,

$$\begin{aligned} \sup \limits _{\theta \in \Theta ^c}\,J_T^{(2)}(\varphi )\le c(\varphi )T^{-1}\Phi _T\left(\widehat{\alpha }_T,\,\alpha _0\right)\le c(\varphi )c_0\left\Vert T^{-\frac{1}{2}}d_T(\alpha _0)\left(\widehat{\alpha }_T-\alpha _0\right)\right\Vert ^2, \end{aligned}$$

and again, thanks to \(\mathbf{C }_1, \mathbf{C }_2\),

$$\begin{aligned} \sup \limits _{\theta \in \Theta ^c}\,J_T^{(2)}(\varphi )\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
(13)

Lemma 3

Suppose conditions \(\mathbf{A }_1, \mathbf{A }_2\) are fulfilled and the weight function \(\varphi (\lambda ,\,\theta )\) introduced above satisfies (11). Then, as \(T\rightarrow \infty \),

$$\begin{aligned} J_T^{\varepsilon }(\varphi )\ \overset{\hbox {P}}{\longrightarrow }\ J(\varphi )=\int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0) \varphi (\lambda ,\,\theta )d\lambda ,\ \theta \in \Theta ^c. \end{aligned}$$

Proof

The lemma is in fact an application of the reasoning of Lemma 2 in Anh et al. (2002) and Theorem 1 in Anh et al. (2004) to the linear process (4). It is sufficient to prove

$$\begin{aligned} (1)\ \hbox {E}J_T^\varepsilon (\varphi )\ \longrightarrow \ J(\varphi );\ \ \ (2)\ J_T^\varepsilon (\varphi )-\hbox {E}J_T^\varepsilon (\varphi )\ \overset{\hbox {P}}{\longrightarrow }\ 0. \end{aligned}$$

Omitting parameters \(\theta _0\), \(\theta \) in some formulas below we derive

$$\begin{aligned} \begin{aligned} \hbox {E}J_T^{\varepsilon }(\varphi )&=\int \limits _{\mathbb {R}}\,G_2(u)\mathrm {F}_T^{(2)}(u)du,\ \ G_2(u)=\int \limits _{\mathbb {R}}\,f(\lambda +u)\varphi (\lambda )d\lambda ;\\ T\hbox {Var}J_T^\varepsilon (\varphi )&=2\pi \int \limits _{\mathbb {R}^3}\,G_4(u_1,\,u_2,\,u_3) \mathrm {F}_T^{(4)}(u_1,\,u_2,\,u_3) du_1du_2du_3,\\ G_4(u_1,\,u_2,\,u_3)&=2\int \limits _{\mathbb {R}}\,f(\lambda +u_1)f(\lambda -u_3)\varphi (\lambda )\varphi (\lambda +u_1+u_2)d\lambda \\&\quad +\,\int \limits _{\mathbb {R}^2}\,f_4(\lambda +u_1,\,-\lambda +u_2,\,\mu +u_3)\varphi (\lambda ) \varphi (\mu )d\lambda d\mu \\&=2G_4^{(1)}(u_1,\,u_2,\,u_3)+G_4^{(2)}(u_1,\,u_2,\,u_3). \end{aligned} \end{aligned}$$

To apply Lemma 2 we have to show that the functions \(G_2(u)\), \(u\in \mathbb {R}\); \(G_4^{(1)}(\mathrm {u})\), \(G_4^{(2)}(\mathrm {u})\), \(\mathrm {u}=(u_1,\,u_2,\,u_3)\in \mathbb {R}^3\), are bounded and continuous at the origin.

Boundedness of \(G_2\) follows from (11). Thanks to (11)

$$\begin{aligned} \underset{\mathrm {u}\in \mathbb {R}^3}{\sup }\,\left|G_4^{(1)}(\mathrm {u})\right|\le c^2(\varphi )\Vert f\Vert _2^2<\infty ,\ \ \Vert f\Vert _2=\left(\int \limits _{\mathbb {R}}\,f^2(\lambda ,\,\theta _0)d\lambda \right)^{\frac{1}{2}}. \end{aligned}$$

On the other hand, by (9)

$$\begin{aligned} |G_4^{(2)}(u_1,\,u_2,\,u_3)|\le & {} d_4(2\pi )^{-3}\int \limits _{\mathbb {R}}\,\left|a(\lambda +u_1)a(-\lambda +u_2)\right|\varphi (\lambda )d\lambda \\&\cdot \int \limits _{\mathbb {R}}\,\left|a(\mu +u_3)a(-\mu -u_1-u_2-u_3)\right|\varphi (\mu )d\mu \\= & {} d_4\cdot (2\pi )^{-3}\cdot I_3\cdot I_4,\\ I_3\le & {} 2\pi c(\varphi )d_2^{-1}\int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0)d\lambda =2\pi c(\varphi )d_2^{-1}B(0). \end{aligned}$$

Integral \(I_4\) admits the same upper bound. So,

$$\begin{aligned} \underset{\mathrm {u}\in \mathbb {R}^3}{\sup }\,\left|G_4^{(2)}(\mathrm {u})\right|\le (2\pi )^{-1}\gamma _2c^2(\varphi )B^2(0), \end{aligned}$$

where \(\gamma _2=\dfrac{d_4}{d_2^2}>0\) is the excess of the L(1) distribution, and the functions \(G_2\), \(G_4^{(1)}\), \(G_4^{(2)}\) are bounded. The continuity of these functions at the origin follows from the conditions of Lemma 3 as well. \(\square \)

Corollary 1

If \(\varphi (\lambda ,\,\theta )=\dfrac{w(\lambda )}{f(\lambda ,\,\theta )}\), then under conditions \(\mathbf{A }_1, \mathbf{A }_2, \mathbf{C }_1, \mathbf{C }_2\) and \(\mathbf{C }_4\)

$$\begin{aligned} U_T(\theta ,\,\widehat{\alpha }_T)\ \overset{\hbox {P}}{\longrightarrow }\ U(\theta ) =\int \limits _{\mathbb {R}}\,\left(\log f(\lambda ,\,\theta )+ \dfrac{f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta )}\right)w(\lambda )d\lambda ,\ \theta \in \Theta ^c. \end{aligned}$$

Consider the Whittle contrast function

$$\begin{aligned} K(\theta _0,\,\theta )=U(\theta )-U(\theta _0)=\int \limits _{\mathbb {R}}\, \left(\dfrac{f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta )}-1- \log \dfrac{f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta )}\right) w(\lambda )d\lambda \ge 0, \end{aligned}$$

with \(K(\theta _0,\,\theta )=0\) if and only if \(\theta =\theta _0\), since \(x-1-\log x\ge 0\) for \(x>0\) with equality only at \(x=1\), and due to \(\mathbf{C }_3\).

Lemma 4

If the conditions \(\mathbf{A }_1, \mathbf{A }_2, \mathbf{C }_1, \mathbf{C }_2, \mathbf{C }_4\) and \(\mathbf{C }_5\) are satisfied, then

$$\begin{aligned} \sup \limits _{\theta \in \Theta ^c}\,\left|U_T(\theta ,\,\widehat{\alpha }_T)- U(\theta )\right|\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$

Proof

Let \(\{\theta _j,\ j=\overline{1,N_{\delta }}\}\) be a \(\delta \)-net of the set \(\Theta ^c\). Then

$$\begin{aligned} \begin{aligned}&\sup \limits _{\theta \in \Theta ^c}\left|U_T(\theta ,\,\widehat{\alpha }_T)-U(\theta )\right|\le \\&\quad \le \sup \limits _{\Vert \theta _1-\theta _2\Vert \le \delta } \left|U_T(\theta _1,\,\widehat{\alpha }_T)-U(\theta _1) -(U_T(\theta _2,\,\widehat{\alpha }_T)-U(\theta _2))\right|\\&\qquad +\max \limits _{1\le j\le N_{\delta }} \left|U_T(\theta _j,\,\widehat{\alpha }_T)-U(\theta _j)\right|, \end{aligned} \end{aligned}$$

and for any \(\rho \ge 0\)

$$\begin{aligned} \begin{aligned} \hbox {P}\left\{ \sup \limits _{\theta \in \Theta ^c}\,\left| U_T(\theta ,\,\widehat{\alpha }_T)-U(\theta )\right|\ge \rho \right\} \le P_1+P_2, \end{aligned} \end{aligned}$$

with

$$\begin{aligned} P_2=\hbox {P}\left\{ \max \limits _{1\le j\le N_{\delta }}\,\left| U_T(\theta _j,\,\widehat{\alpha }_T)-U(\theta _j)\right| \ge \dfrac{\rho }{2}\right\} \ \rightarrow \ 0,\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$

by Corollary 1. On the other hand,

$$\begin{aligned} \begin{aligned} P_1&=\hbox {P}\left\{ \sup \limits _{\Vert \theta _1-\theta _2\Vert \le \delta }\, \Bigl |U_T(\theta _1,\,\widehat{\alpha }_T)-U(\theta _1)- \left(U_T(\theta _2,\,\widehat{\alpha }_T)-U(\theta _2)\right) \Bigr |\ge \frac{\rho }{2}\right\} \\&\le \hbox {P}\left\{ \sup \limits _{\Vert \theta _1-\theta _2\Vert \le \delta }\, \left|\int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda ) \left(\dfrac{w(\lambda )}{f(\lambda ,\,\theta _1)}- \dfrac{w(\lambda )}{f(\lambda ,\,\theta _2)}\right)d\lambda \right|\right.\\&\quad +\,\sup \limits _{\Vert \theta _1-\theta _2\Vert \le \delta }\, \left|\int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0) \left(\dfrac{w(\lambda )}{f(\lambda ,\,\theta _1)} -\dfrac{w(\lambda )}{f(\lambda ,\,\theta _2)}\right)d\lambda \right|\\&\quad +\,2\left.\sup \limits _{\theta \in \Theta ^c} \left|J_T^{(1)}\left(\dfrac{w}{f}\right)\right| +2\sup \limits _{\theta \in \Theta ^c}\,J_T^{(2)}\left(\dfrac{w}{f}\right) \ge \dfrac{\rho }{2}\right\} . \end{aligned} \end{aligned}$$
(14)

By the condition \(\mathbf{C }_5\)(i)

$$\begin{aligned} \sup \limits _{\Vert \theta _1-\theta _2\Vert \le \delta }\, \left|\int \limits _{\mathbb {R}}\, I_T^{\varepsilon }(\lambda ) \left(\dfrac{w(\lambda )}{f(\lambda ,\,\theta _1)} -\dfrac{w(\lambda )}{f(\lambda ,\,\theta _2)}\right)d\lambda \right| \le \eta (\delta )\int \limits _{\mathbb {R}}\, I_T^{\varepsilon }(\lambda )\dfrac{w(\lambda )}{v(\lambda )}d\lambda , \end{aligned}$$

where

$$\begin{aligned} \eta (\delta )=\sup \limits _{\lambda \in \mathbb {R},\, \Vert \theta _1-\theta _2\Vert \le \delta }\, \left|\dfrac{v(\lambda )}{f(\lambda ,\,\theta _1)} -\dfrac{v(\lambda )}{f(\lambda ,\,\theta _2)}\right|\ \rightarrow \ 0,\ \delta \rightarrow 0. \end{aligned}$$

Since by Lemma 3 and the condition \(\mathbf{C }_5\)(ii)

$$\begin{aligned} \int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda )\dfrac{w(\lambda )}{v(\lambda )} d\lambda \overset{\hbox {P}}{\longrightarrow }\int \limits _{\mathbb {R}}\,f(\lambda , \theta _0)\dfrac{w(\lambda )}{v(\lambda )} d\lambda ,\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$

and the 2nd term under the probability sign in (14) can be made arbitrarily small by choosing \(\delta \), we conclude that \(P_1\rightarrow 0\), as \(T\rightarrow \infty \), taking into account that the 3rd and the 4th terms converge to zero in probability, thanks to (12) and (13), if \(\varphi =\dfrac{w}{f}\). \(\square \)

Proof of Theorem 1

By Definition 2 for any \(\rho >0\)

$$\begin{aligned} \begin{aligned}&\hbox {P}\left\{ \left\Vert\widehat{\theta }_T-\theta _0\right\Vert\ge \rho \right\} =\hbox {P}\left\{ \left\Vert\widehat{\theta }_T-\theta _0\right\Vert\ge \rho ;\ U_T(\widehat{\theta }_T,\,\widehat{\alpha }_T)\le U_T(\theta _0,\,\widehat{\alpha }_T)\right\} \\&\quad \le \hbox {P}\left\{ \inf \limits _{\Vert \theta -\theta _0\Vert \ge \rho }\, \left(U_T(\theta ,\,\widehat{\alpha }_T) -U_T(\theta _0,\,\widehat{\alpha }_T)\right)\le 0\right\} \\&\quad = \hbox {P}\left\{ \inf \limits _{\Vert \theta -\theta _0\Vert \ge \rho }\, \Bigl [U_T(\theta ,\,\widehat{\alpha }_T)-U(\theta ) -(U_T(\theta _0,\,\widehat{\alpha }_T)-U(\theta _0)) +K(\theta _0,\theta )\Bigr ]\le 0\right\} \\&\quad \le \hbox {P}\left\{ \inf \limits _{\Vert \theta -\theta _0\Vert \ge \rho }\, \Bigl [U_T(\theta ,\,\widehat{\alpha }_T)-U(\theta ) -(U_T(\theta _0,\,\widehat{\alpha }_T)-U(\theta _0))\Bigr ] +\inf \limits _{\Vert \theta -\theta _0\Vert \ge \rho } K(\theta _0,\theta )\le 0\right\} \\&\quad \le \hbox {P}\left\{ \sup \limits _{\theta \in \Theta ^c}\, \left|U_T(\theta ,\,\widehat{\alpha }_T)-U(\theta )\right| +\left|U_T(\theta _0,\,\widehat{\alpha }_T)-U(\theta _0)\right| \ge \inf \limits _{\Vert \theta -\theta _0\Vert \ge \rho }\, K(\theta _0,\,\theta )\right\} \ \rightarrow \ 0, \end{aligned} \end{aligned}$$

when \(T\rightarrow \infty \) due to Lemma 4 and the property of the contrast function K. \(\square \)

4 Asymptotic normality of minimum contrast estimator

The first three conditions relate to properties of the regression function \(g(t,\,\alpha )\) and the LSE \(\widehat{\alpha }_T\). They are commented on in “Appendix B”.

\({\mathbf{N }}_1\).:

The normed LSE \(d_T(\alpha _0)\left(\widehat{\alpha }_T-\alpha _0\right)\) is asymptotically, as \(T\rightarrow \infty \), normal \(N(0,\,\Sigma _{_{LSE}})\), \(\Sigma _{_{LSE}}=\left(\Sigma _{_{LSE}}^{ij}\right)_{i,j=1}^q\).

Set

$$\begin{aligned} g'(t,\,\alpha )=\dfrac{\partial }{\partial t}g(t,\,\alpha );\ \ \ \Phi '_T(\alpha _1,\,\alpha _2) =\int \limits _0^T\,\left(g'(t,\,\alpha _1)-g'(t,\,\alpha _2)\right)^2dt,\ \alpha _1,\,\alpha _2\in \mathcal {A}^c. \end{aligned}$$
\(\mathbf N _2\).:

The function \(g(t,\,\alpha )\) is continuously differentiable with respect to \(t\ge 0\) for any \(\alpha \in \mathcal {A}^c\), and for any \(\alpha _0\in \mathcal {A}\) and \(T>T_0\) there exists a constant \(c_0'\) (\(T_0\) and \(c'_0\) may depend on \(\alpha _0\)) such that

$$\begin{aligned} \Phi _T'(\alpha ,\,\alpha _0) \le c_0'\Bigl \Vert d_T(\alpha _0)\left(\alpha -\alpha _0\right)\Bigr \Vert ^2,\ \alpha \in \mathcal {A}^c. \end{aligned}$$

Let

$$\begin{aligned} g_{il}(t,\,\alpha )= & {} \dfrac{\partial ^2}{\partial \alpha _i \partial \alpha _l}g(t,\,\alpha ),\ \ d_{il,T}^2(\alpha )=\int \limits _0^T\,g_{il}^2(t,\,\alpha )dt,\ \ i,l=\overline{1,q},\\ v(r)= & {} \left\{ x\in \mathbb {R}^q\,:\,\Vert x\Vert <r\right\} ,\ r>0. \end{aligned}$$
\(\mathbf N _3\).:

The function \(g(t,\,\alpha )\) is twice continuously differentiable with respect to \(\alpha \in \mathcal {A}^c\) for any \(t\ge 0\), and for any \(R\ge 0\) and all sufficiently large T (\(T>T_0(R)\))

(i):

\(d_{iT}^{-1}(\alpha _0)\sup \limits _{t\in [0,T],\,u\in v^c(R)}\, \left|g_i\left(t,\,\alpha _0+d_T^{-1}(\alpha _0)u\right)\right|\le c^i(R)T^{-\frac{1}{2}}\), \(i=\overline{1,q}\);

(ii):

\(d_{il,T}^{-1}(\alpha _0)\sup \limits _{t\in [0,T],\,u\in v^c(R)}\, \left|g_{il}\left(t,\,\alpha _0+d_T^{-1}(\alpha _0)u\right)\right|\le c^{il}(R)T^{-\frac{1}{2}}\), \(i,l=\overline{1,q}\);

(iii):

\(d_{iT}^{-1}(\alpha _0)d_{lT}^{-1}(\alpha _0) d_{il,T}(\alpha _0)\le \tilde{c}^{il}T^{-\frac{1}{2}}\), \(i,l=\overline{1,q}\),

with positive constants \(c^i\), \(c^{il}\), \(\tilde{c}^{il}\), possibly, depending on \(\alpha _0\).

We assume also that the function \(f(\lambda ,\,\theta )\) is twice differentiable with respect to \(\theta \in \Theta ^c\) for any \(\lambda \in \mathbb {R}\).

Set

$$\begin{aligned} f_i(\lambda ,\,\theta )=\dfrac{\partial }{\partial \theta _i} f(\lambda ,\,\theta ),\ \ \ f_{ij}(\lambda ,\,\theta )=\dfrac{\partial ^2}{\partial \theta _i \partial \theta _j}f(\lambda ,\,\theta ), \end{aligned}$$

and introduce the following conditions.

\(\mathbf{N}_4\).:
(i):

For any \(\theta \in \Theta ^c\) the functions \(\varphi _i(\lambda )=\dfrac{f_i(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )\), \(\lambda \in \mathbb {R}\), \(i=\overline{1,m}\), possess the following properties:

(1):

\(\varphi _i\in L_\infty (\mathbb {R})\cap L_1(\mathbb {R})\);

(2):

\(\overset{+\infty }{\underset{-\infty }{\hbox {Var}}}\,\varphi _i<\infty \);

(3):

\(\underset{\eta \rightarrow 1}{\lim }\,\underset{\lambda \in \mathbb {R}}{\sup }\, \left|\varphi _i(\eta \lambda )-\varphi _i(\lambda )\right|=0\) ;

(4):

\(\varphi _i\) are differentiable and \(\varphi '_i\) are uniformly continuous on \(\mathbb {R}\).

(ii):

\(\dfrac{|f_i(\lambda ,\,\theta )|}{f(\lambda ,\,\theta )}w(\lambda ) \le Z_2(\lambda )\), \(\theta \in \Theta \), \(i=\overline{1,m}\), almost everywhere in \(\lambda \in \mathbb {R}\) and \(Z_2(\cdot )\in L_1(\mathbb {R})\).

(iii):

The functions \(\dfrac{f_i(\lambda ,\,\theta ) f_j(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )\), \(\dfrac{f_{ij}(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )\) are continuous with respect to \(\theta \in \Theta ^c\) for each \(\lambda \in \mathbb {R}\) and

$$\begin{aligned} \dfrac{f_i^2(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda ) +\dfrac{|f_{ij}(\lambda ,\,\theta )|}{f(\lambda ,\,\theta )}w(\lambda )\le a_{ij}(\lambda ),\ \lambda \in \mathbb {R},\ \theta \in \Theta ^c, \end{aligned}$$

where \(a_{ij}(\cdot )\in L_1(\mathbb {R})\), \(i,j=\overline{1,m}\).

\(\mathbf N _5\).:
(i):

\(\dfrac{f_i^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}w(\lambda )\), \(\dfrac{f_{ij}(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )\), \(i,j=\overline{1,m}\), are bounded functions in \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\);

(ii):

There exists an even positive Lebesgue measurable function \(v(\lambda ),\ \lambda \in \mathbb {R}\), such that the functions \(\dfrac{f_i(\lambda ,\,\theta ) f_j(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )\), \(\dfrac{f_{ij}(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )\), \(i,j=\overline{1,m}\), are uniformly continuous in \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\);

(iii):

\(\underset{\lambda \in \mathbb {R}}{\sup }\,\dfrac{w(\lambda )}{v(\lambda )}<\infty \).

Conditions \(\mathbf N _5\)(iii) and \(\mathbf C _5\)(ii) look the same; however, the function v in these conditions must satisfy the different conditions \(\mathbf N _5\)(ii) and \(\mathbf C _5\)(i), and therefore, generally speaking, the functions v in these two conditions can be different.

The next three matrices appear in the formulation of Theorem 2:

$$\begin{aligned} \begin{aligned} W_1(\theta )&=\int \limits _{\mathbb {R}}\,\nabla _\theta \log f(\lambda ,\,\theta )\nabla _\theta '\log f(\lambda ,\,\theta )w(\lambda )d\lambda ,\\ W_2(\theta )&=4\pi \int \limits _{\mathbb {R}}\,\nabla _\theta \log f(\lambda ,\,\theta )\nabla _\theta '\log f(\lambda ,\,\theta ) w^2(\lambda )d\lambda ,\\ V(\theta )&=\gamma _2\int \limits _{\mathbb {R}}\,\nabla _\theta \log f(\lambda ,\,\theta )w(\lambda )d\lambda \int \limits _{\mathbb {R}}\,\nabla _\theta '\log f(\lambda ,\theta )w(\lambda )d\lambda , \end{aligned} \end{aligned}$$

where \(\gamma _2=\dfrac{d_4}{d_2^2}>0\) is the excess of the random variable L(1), \(\nabla _\theta \) is a column vector-gradient, \(\nabla _\theta '\) is a row vector-gradient.

\(\mathbf N _6\). Matrices \(W_1(\theta )\) and \(W_2(\theta )\) are positive definite for \(\theta \in \Theta \).

Theorem 2

Under conditions \(\mathbf{A }_1, \mathbf{A }_2, \mathbf{C }_1\)\(\mathbf{C }_5\) and \(\mathbf{N }_1\)\(\mathbf{N }_6\) the normed MCE \(T^{\frac{1}{2}}(\widehat{\theta }_T-\theta _0)\) is asymptotically, as \(T\rightarrow \infty \), normal with zero mean and covariance matrix

$$\begin{aligned} W(\theta )=W_1^{-1}(\theta _0)\left(W_2(\theta _0)+V(\theta _0)\right) W_1^{-1}(\theta _0). \end{aligned}$$
(15)
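
As a complement to (15), the following sketch (Python; the finite-difference gradients, the truncated frequency grid, and the particular density and weight in the usage comment are illustrative assumptions) shows how the matrices \(W_1\), \(W_2\), V and the covariance matrix W can be evaluated numerically for a given parametric spectral density.

```python
import numpy as np

def sandwich_covariance(f, theta0, w, lam_grid, gamma2, h=1e-6):
    """Numerical evaluation of W = W1^{-1} (W2 + V) W1^{-1} from (15).

    f(lam, theta): parametric spectral density, theta0: true parameter,
    w(lam): weight function, gamma2: excess d4/d2^2 of L(1)."""
    theta0 = np.asarray(theta0, dtype=float)
    m = theta0.size
    # grad_theta log f(lam, theta0) by central finite differences, one row per coordinate
    grad = np.empty((m, lam_grid.size))
    for i in range(m):
        e = np.zeros(m); e[i] = h
        grad[i] = (np.log(f(lam_grid, theta0 + e)) - np.log(f(lam_grid, theta0 - e))) / (2 * h)
    wv = w(lam_grid)
    W1 = np.trapz(grad[:, None, :] * grad[None, :, :] * wv, lam_grid, axis=-1)
    W2 = 4 * np.pi * np.trapz(grad[:, None, :] * grad[None, :, :] * wv ** 2, lam_grid, axis=-1)
    b = np.trapz(grad * wv, lam_grid, axis=-1)   # vector of integrals of grad log f times w
    V = gamma2 * np.outer(b, b)
    W1inv = np.linalg.inv(W1)
    return W1inv @ (W2 + V) @ W1inv

# usage (illustrative OU-type density and Gaussian weight):
# f = lambda lam, th: th[1] / (2 * np.pi * (th[0] ** 2 + lam ** 2))
# W = sandwich_covariance(f, [1.0, 1.0], lambda lam: np.exp(-lam ** 2),
#                         np.linspace(-50.0, 50.0, 20001), gamma2=1.0)
```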

The proof of the theorem is preceded by several lemmas. The next statement is Theorem 5.1 of Avram et al. (2010) formulated in a form convenient for us.

Lemma 5

Let the stochastic process \(\varepsilon \) satisfy \(\mathbf A _1\), \(\mathbf A _2\), let the spectral density \(f\in L_p(\mathbb {R})\), and let a function \(b\in L_q(\mathbb {R})\bigcap L_1(\mathbb {R})\), where \(\dfrac{1}{p}+\dfrac{1}{q}=\dfrac{1}{2}\). Let

$$\begin{aligned} \hat{b}(t)=\int \limits _{\mathbb {R}}\,e^{i\lambda t}b(\lambda )d\lambda \end{aligned}$$
(16)

and

$$\begin{aligned} Q_T=\int \limits _0^T\int \limits _0^T\, \left(\varepsilon (t)\varepsilon (s) -B(t-s)\right)\hat{b}(t-s)dtds. \end{aligned}$$
(17)

Then the central limit theorem holds:

$$\begin{aligned} T^{-\frac{1}{2}}Q_T\ \Rightarrow \ N(0,\,\sigma ^2),\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$

where “\(\Rightarrow \)” means convergence in distribution,

$$\begin{aligned} \sigma ^2=16\pi ^3\int \limits _{\mathbb {R}}\,b^2(\lambda ) f^2(\lambda )d\lambda +\gamma _2\left(2\pi \int \limits _{\mathbb {R}}\, b(\lambda )f(\lambda )d\lambda \right)^2, \end{aligned}$$
(18)

where \(\gamma _2=\dfrac{d_4}{d_2^2}>0\) is the excess of the random variable L(1). In particular, the statement is true for \(p=2\) and \(q=\infty \).

An alternative form of Lemma 5 is given in Bai et al. (2016). We formulate their Theorem 2.1 in a form convenient for us.

Lemma 6

Let the stochastic process \(\varepsilon \) be such that \(\hbox {E}L(1)=0\), \(\hbox {E}L^4(1)<\infty \), and \(Q_T\) be as in (17). Assume that \(\hat{a}\in L_p(\mathbb {R})\cap L_2(\mathbb {R})\), \(\hat{b}\) is of the form (16) with even function \(b\in L_1(\mathbb {R})\) and \(\hat{b}\in L_q(\mathbb {R})\) with

$$\begin{aligned} 1\le p,\,q\le 2,\ \ \dfrac{2}{p}+\dfrac{1}{q}\ge \dfrac{5}{2}, \end{aligned}$$

then

$$\begin{aligned} T^{-\frac{1}{2}}Q_T\ \Rightarrow \ N(0,\,\sigma ^2),\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$

where \(\sigma ^2\) is given in (18).

Remark 3

It is important to note that the conditions of Lemma 5 are given in the frequency domain, while Lemma 6 employs time-domain conditions.

Theorems similar to Lemmas 5 and 6 can be found in the paper by Giraitis et al. (2017), where the case of martingale differences was considered. An overview of analogous results for different types of processes is given in the paper by Ginovyan et al. (2014).

Set

$$\begin{aligned} \Delta _T(\varphi )=T^{-\frac{1}{2}}\int \limits _{\mathbb {R}}\, \varepsilon _T(\lambda )\overline{s_T(\lambda ,\,\widehat{\alpha }_T)} \varphi (\lambda )d\lambda . \end{aligned}$$

Lemma 7

Suppose the conditions \(\mathbf{A }_1, \mathbf{A }_2, \mathbf{C }_2, \mathbf{N }_1\)–\(\mathbf{N }_3\) are fulfilled, \(\varphi (\lambda )\), \(\lambda \in \mathbb {R}\), is a bounded differentiable function satisfying relation (3) of condition \(\mathbf{N }_4\)(i), and, moreover, the derivative \(\varphi '(\lambda )\), \(\lambda \in \mathbb {R}\), is uniformly continuous on \(\mathbb {R}\). Then

$$\begin{aligned} \Delta _T(\varphi )\overset{\hbox {P}}{\longrightarrow }0\ \text {as}\ T\rightarrow \infty . \end{aligned}$$

Proof

Let \(B_\sigma \) be the set of all bounded entire functions on \(\mathbb {R}\) of exponential type \(0\le \sigma <\infty \) (see “Appendix C”), and \(\delta >0\) is an arbitrarily small number. Then there exists a function \(\varphi _\sigma \in B_\sigma \), \(\sigma =\sigma (\delta )\), such that

$$\begin{aligned} \sup \limits _{\lambda \in \mathbb {R}}\, |\varphi (\lambda )-\varphi _\sigma (\lambda )|<\delta . \end{aligned}$$

Let \(T_n(\varphi _\sigma ;\,\lambda )=\sum \limits _{j=-n}^n\, c_j^{(n)}e^{\mathrm {i}j\frac{\sigma }{n}\lambda },\ n\ge 1\), be a sequence of the Levitan polynomials that corresponds to \(\varphi _\sigma \). For any \(\Lambda >0\) there exists \(n_0=n_0(\delta ,\,\Lambda )\) such that for \(n>n_0\)

$$\begin{aligned} \sup \limits _{\lambda \in [-\Lambda ,\Lambda ]}\, |\varphi _\sigma (\lambda )-T_n(\varphi _\sigma ;\,\lambda )|\le \delta . \end{aligned}$$

Write

$$\begin{aligned} \Delta _T(\varphi )= \Delta _T(\varphi -\varphi _\sigma ) +\Delta _T(\varphi _\sigma -T_n)+\Delta _T(T_n), \end{aligned}$$

So, under the condition \(\mathbf{C }_2\), for any \(\rho >0\)

The probability \(P_4\rightarrow 0\), as \(T\rightarrow \infty \), and under the condition \(\mathbf{N }_1\), for sufficiently large T (we will write \(T>T_0\)), the probability \(P_3\) can be made less than a preassigned number by choosing \(\delta >0\) for a fixed \(\rho >0\).

Since the function \(\varphi _\sigma \in B_\sigma \) and the corresponding sequence of Levitan polynomials \(T_n\) are bounded by the same constant, we obtain

$$\begin{aligned} |\Delta _T(\varphi _\sigma -T_n)|\le & {} \delta T^{-\frac{1}{2}} \int \limits _{-\Lambda }^{\Lambda }\, \left|\varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\right|d\lambda \\&+\,2c(\varphi _\sigma )T^{-\frac{1}{2}} \int \limits _{\mathbb {R}\backslash [-\Lambda ,\Lambda ]}\, \left|\varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\right|d\lambda =D_1+D_2. \end{aligned}$$

The integral in the term \(D_1\) can be majorized by an integral over \(\mathbb {R}\) and bounded as earlier. We have further

$$\begin{aligned} \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}=(\mathrm {i}\lambda )^{-1} \left[e^{\mathrm {i}\lambda T}(g(T,\,\alpha _0)-g(T,\,\widehat{\alpha }_T)) -(g(0,\,\alpha _0)-g(0,\,\widehat{\alpha }_T)) -\overline{s_T'(\lambda ,\,\widehat{\alpha }_T)}\right], \end{aligned}$$

where \(\overline{s_T'(\lambda ,\,\widehat{\alpha }_T)} =\int \limits _0^T\, e^{-\mathrm {i}\lambda t}(g'(t,\,\alpha _0)-g'(t,\,\widehat{\alpha }_T))dt\).

Under the Lemma conditions

Obviously,

$$\begin{aligned} g(T,\,\widehat{\alpha }_T)- g(T,\,\alpha _0)=\sum \limits _{i=1}^q\,g_i(T,\,\alpha ^*_T)\left(\widehat{\alpha }_{iT}-\alpha _{i0}\right), \end{aligned}$$

\(\alpha ^*_T=\alpha _0+\eta \left(\widehat{\alpha }_T-\alpha _0\right)\), \(\eta \in (0,\,1)\), \(d_T(\alpha _0)\left(\alpha ^*_T-\alpha _0\right)= \eta d_T(\alpha _0)\left(\widehat{\alpha }_T-\alpha _0\right)\), and for any \(\rho >0\) and \(i=\overline{1,q}\)

By condition \(\mathbf N _3\)(i) for any \(R\ge 0\)

$$\begin{aligned} \begin{aligned} P_5&\le \hbox {P}\left\{ \left(d_{iT}^{-1}(\alpha _0)\sup \limits _{t\in [0,T],\,\Vert u\Vert \le R}\, \left|g_i\left(t,\,\alpha _0+d_T^{-1}(\alpha _0)u\right)\right|\right)\cdot \left(d_{iT}^{-1}(\alpha _0)\left|\widehat{\alpha }_{iT}-\alpha _{i0}\right|\right)\ge \rho \right\} \\&\le \hbox {P}\left\{ T^{-\frac{1}{2}}d_{iT}^{-1}(\alpha _0)\left|\widehat{\alpha }_{iT}-\alpha _{i0}\right|\ge \frac{\rho }{c^i(R)}\right\} \ \rightarrow \ 0,\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned} \end{aligned}$$

according to \(\mathbf{N }_1\) (or \(\mathbf{C }_1\)). On the other hand, by condition \(\mathbf{N }_1\) the value R can be chosen so that for \(T>T_0\) the probability \(P_6\) becomes less than a preassigned number.

So,

$$\begin{aligned} g(T,\,\widehat{\alpha }_T)- g(T,\,\alpha _0)\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$

and, similarly, \(g(0,\,\widehat{\alpha }_T)- g(0,\,\alpha _0)\ \overset{\hbox {P}}{\longrightarrow }\ 0\), as \(T\rightarrow \infty \).

Moreover, for any \(\rho >0\)

and the second probability is equal to zero, if \(\Lambda >\frac{R}{\rho }\).

Thus for any fixed \(\rho >0\), similarly to the probability \(P_3\), the probability \(P_7=\hbox {P}\{D_2\ge \rho \}\) for \(T>T_0\) can be made less than a preassigned number by the choice of the value \(\Lambda \).

Consider

$$\begin{aligned} \Delta _T(T_n)= & {} T^{-\frac{1}{2}}\sum \limits _{j=-n}^n\,c_j^{(n)} \int \limits _{\mathbb {R}}\,\varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)} e^{\mathrm {i}j\frac{\sigma }{n}\lambda }d\lambda ,\\ \overline{s_T(\lambda ,\,\widehat{\alpha }_T)} e^{\mathrm {i}j\frac{\sigma }{n}\lambda }= & {} \int \limits _{\frac{j\sigma }{n}}^{T+\frac{j\sigma }{n}}\, e^{\mathrm {i}\lambda t}\left(g\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right) -g\left(t-j\dfrac{\sigma }{n},\,\widehat{\alpha }_T\right)\right)dt,\ j=\overline{-n,n}. \end{aligned}$$

It means that

$$\begin{aligned} \begin{aligned} \Delta _T(T_n)&=2\pi \sum \limits _{j=1}^n\,c_j^{(n)}T^{-\frac{1}{2}} \int \limits _{\frac{j\sigma }{n}}^T\,\varepsilon (t) \left(g\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right) -g\left(t-j\dfrac{\sigma }{n},\,\widehat{\alpha }_T\right)\right)dt\\&\quad +\,2\pi \sum \limits _{j=-n}^0\,c_j^{(n)}T^{-\frac{1}{2}} \int \limits _0^{T+\frac{j\sigma }{n}}\,\varepsilon (t) \left(g\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right) -g\left(t-j\dfrac{\sigma }{n},\,\widehat{\alpha }_T\right)\right)dt. \end{aligned} \end{aligned}$$

For \(j>0\) consider the value

$$\begin{aligned} \begin{aligned}&T^{-\frac{1}{2}}\int \limits _{\frac{j\sigma }{n}}^T\,\varepsilon (t) \left(g\left(t-j\dfrac{\sigma }{n},\,\widehat{\alpha }_T\right)- g\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right)\right)dt\\&\quad =\sum \limits _{i=1}^q\,\left(T^{-\frac{1}{2}}d_{iT}^{-1}(\alpha _0)\int \limits _{\frac{j\sigma }{n}}^T\, \varepsilon (t)g_i\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right)dt\right) d_{iT}(\alpha _0)(\widehat{\alpha }_{iT} - \alpha _{i0})\\&\qquad +\dfrac{1}{2}\sum \limits _{i,k=1}^q\,\left(T^{-\frac{1}{2}} \int \limits _{\frac{j\sigma }{n}}^T\,\varepsilon (t) g_{ik}\left(t-j\dfrac{\sigma }{n},\,\alpha _T^{*}\right)dt\right) (\widehat{\alpha }_{iT}-\alpha _{i0}) \left(\widehat{\alpha }_{kT}-\alpha _{k0}\right)\\&\quad =S_{1T}+\frac{1}{2}S_{2T}, \end{aligned} \end{aligned}$$

\(\alpha _T^{*}=\alpha _0+\bar{\eta }\left(\widehat{\alpha }_T-\alpha _0\right)\), \(\bar{\eta }\in (0,\,1)\).

Note that for \(i=\overline{1,q}\)

$$\begin{aligned} d_{iT}(\alpha _0)\left(\widehat{\alpha }_{iT}-\alpha _{i0}\right)\Rightarrow N(0,\,\Sigma _{_{LSE}}^{ii}),\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$

by the condition \(\mathbf{N }_1\). Moreover,

$$\begin{aligned} \begin{aligned}&\hbox {E}\left(T^{-\frac{1}{2}}d_{iT}^{-1}(\alpha _0)\int \limits _{\frac{j\sigma }{n}}^T\, \varepsilon (t)g_i\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right)dt\right)^2\\&\quad =T^{-1}d_{iT}^{-2}(\alpha _0)\int \limits _{\frac{j\sigma }{n}}^T\int \limits _{\frac{j\sigma }{n}}^T\,B(t-s) g_i\left(t-j\dfrac{\sigma }{n},\,\alpha _0\right)g_i\left(s-j\dfrac{\sigma }{n},\,\alpha _0\right)dtds\\&\quad \le \left(T^{-2}\int \limits _0^T\int \limits _0^T\,B^2(t-s)dtds\right)^{\frac{1}{2}}=O\left(T^{-\frac{1}{2}}\right), \end{aligned} \end{aligned}$$

since

$$\begin{aligned} T^{-1}\int \limits _0^T\int \limits _0^T\,B^2(t-s)dtds\ \rightarrow \ 2\pi \Vert f\Vert _2^2,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$

This means that the sum \(S_{1T}\overset{\hbox {P}}{\longrightarrow }0\), as \(T\rightarrow \infty \).

For the general term \(S_{2T}^{ik}\) of the sum \(S_{2T}\) and any \(\rho >0\), \(R>0\),

Using assumptions \(\mathbf{N }_3\)(ii) and \(\mathbf{N }_3\)(iii), we get, as in the estimation of the probability \(P_5\),

$$\begin{aligned} \begin{aligned} \left|S_{2T}^{ik}\right|&\le \left(T^{-\frac{1}{2}}\int \limits _{\frac{j\sigma }{n}}^T\, |\varepsilon (t)|dt\right)\cdot \left(d_{ik,T}^{-1}(\alpha _0)\sup \limits _{t\in [0,T],\,u\in v^c(R)}\, \left|g_{ik}\left(t,\,\alpha _0+d_T^{-1}(\alpha _0)u\right)\right|\right)\\&\quad \cdot \Bigl (d_{iT}^{-1}(\alpha _0)d_{kT}^{-1}(\alpha _0) d_{ik,T}(\alpha _0)\Bigr )\cdot \left|d_{iT}(\alpha _0)(\widehat{\alpha }_{iT}-\alpha _{i0})\right|\cdot \left|d_{kT}(\alpha _0)(\widehat{\alpha }_{kT}-\alpha _{k0})\right|\\&\le c^{ik}(R)\tilde{c}^{ik}T^{-\frac{3}{2}}\int \limits _0^T\, |\varepsilon (t)|dt\cdot \left|d_{iT}(\alpha _0)(\widehat{\alpha }_{iT}-\alpha _{i0})\right|\cdot \left|d_{kT}(\alpha _0)(\widehat{\alpha }_{kT}-\alpha _{k0})\right|. \end{aligned} \end{aligned}$$

By Lemma 1

$$\begin{aligned} T^{-\frac{3}{2}}\int \limits _0^T\, |\varepsilon (t)|dt\le \frac{1}{2}T^{-\frac{1}{2}}+\frac{1}{2} T^{-\frac{3}{2}}\int \limits _0^T\, \varepsilon ^2(t)dt\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$

So, by condition \(\mathbf{N }_1\), \(P_8\rightarrow 0\), as \(T\rightarrow \infty \), that is \(S_{2T}\ \overset{\hbox {P}}{\longrightarrow }\ 0\), as \(T\rightarrow \infty \). For \(j\le 0\) the reasoning is similar, and

$$\begin{aligned} \Delta _T(T_n)\overset{\hbox {P}}{\longrightarrow }0,\ T\rightarrow \infty . \end{aligned}$$

\(\square \)

Lemma 8

Let the function \(\varphi (\lambda ,\,\theta )w(\lambda )\) be continuous in \(\theta \in \Theta ^c\) for each fixed \(\lambda \in \mathbb {R}\) with

$$\begin{aligned} |\varphi (\lambda ,\,\theta )|\le \varphi (\lambda ),\ \theta \in \Theta ^c,\ \text {and}\ \varphi (\cdot )w(\cdot )\in L_1(\mathbb {R}). \end{aligned}$$

If \(\theta _T^{*}\overset{\hbox {P}}{\longrightarrow }\theta _0\), then

$$\begin{aligned} I\left(\theta _T^{*}\right)=\int \limits _{\mathbb {R}}\, \varphi \left(\lambda ,\,\theta _T^{*}\right)w(\lambda )d\lambda \ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\, \varphi (\lambda ,\,\theta _0)w(\lambda )d\lambda =I(\theta _0). \end{aligned}$$

Proof

By the Lebesgue dominated convergence theorem the integral \(I(\theta )\), \(\theta \in \Theta ^c\), is a continuous function. The further argument is standard. For any \(\rho >0\) and \(\varepsilon =\dfrac{\rho }{2}\) we find \(\delta >0\) such that \(|I(\theta )-I(\theta _0)|<\varepsilon \) whenever \(\Vert \theta -\theta _0\Vert <\delta \). Then

$$\begin{aligned} \hbox {P}\left\{ |I(\theta _T^{*})-I(\theta _0)|\ge \rho \right\} \le P_9+P_{10}, \end{aligned}$$

where

$$\begin{aligned} P_9=\hbox {P}\left\{ |I(\theta _T^{*})-I(\theta _0)|\ge \dfrac{\rho }{2},\ \Vert \theta _T^{*}-\theta _0\Vert <\delta \right\} =0, \end{aligned}$$

due to the choice of \(\varepsilon \), and

$$\begin{aligned} P_{10}=\hbox {P}\left\{ |I(\theta _T^{*})-I(\theta _0)|\ge \dfrac{\rho }{2},\ \Vert \theta _T^{*}-\theta _0\Vert \ge \delta \right\} \ \rightarrow \ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$

\(\square \)

Lemma 9

If the conditions \(\mathbf{A }_1, \mathbf{C }_2\) are satisfied and \(\sup \limits _{\lambda \in \mathbb {R},\,\theta \in \Theta ^c}\, |\varphi (\lambda ,\,\theta )|=c(\varphi )<\infty \), then

$$\begin{aligned} \begin{aligned} T^{-1}\int \limits _{\mathbb {R}}\,\varphi (\lambda ,\,\theta _T^{*}) \varepsilon _T(\lambda )\overline{s_T(\lambda ,\,\widehat{\alpha }_T)} d\lambda&\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty ,\\ T^{-1}\int \limits _{\mathbb {R}}\,\varphi (\lambda ,\,\theta _T^{*}) |s_T(\lambda ,\,\widehat{\alpha }_T)|^2d\lambda&\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned} \end{aligned}$$

Proof

These relations are similar to (12), (13), and can be obtained in the same way. \(\square \)

Lemma 10

Suppose that under conditions \(\mathbf{A }_1, \mathbf{A }_2\) there exist an even positive Lebesgue measurable function \(v(\lambda )\), \(\lambda \in \mathbb {R}\), and a function \(\varphi (\lambda ,\,\theta )\), \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\), even and Lebesgue measurable in \(\lambda \) for any fixed \(\theta \in \Theta ^c\), such that

(i):

\(\varphi (\lambda ,\,\theta )v(\lambda )\) is uniformly continuous in \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\);

(ii):

\(\underset{\lambda \in \mathbb {R}}{\sup }\,\dfrac{w(\lambda )}{v(\lambda )}<\infty \);

(iii):

\(\underset{\lambda \in \mathbb {R},\ \theta \in \Theta ^c}{\sup }\,|\varphi (\lambda ,\,\theta )|w(\lambda )<\infty \). Suppose also that \(\theta _T^{*}\overset{\hbox {P}}{\longrightarrow }\theta _0\), then, as \(T\rightarrow \infty \),

$$\begin{aligned} \int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda )\varphi (\lambda ,\,\theta _T^{*})w(\lambda )d\lambda \ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0)\varphi (\lambda ,\,\theta _0)w(\lambda )d\lambda . \end{aligned}$$

Proof

We have

$$\begin{aligned} \begin{aligned} \int \limits _{\mathbb {R}}\, I_T^{\varepsilon }(\lambda )\varphi (\lambda ,\,\theta _T^{*})w(\lambda )d\lambda =\int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda ) \bigl (\varphi (\lambda ,\,\theta _T^{*}) -\varphi (\lambda ,\,\theta _0)\bigr )v(\lambda ) \dfrac{w(\lambda )}{v(\lambda )}d\lambda&\\ +\int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda )\varphi (\lambda ,\,\theta _0)w(\lambda )d\lambda&=I_5+I_6. \end{aligned} \end{aligned}$$

By Lemma 3 and the condition (iii)

$$\begin{aligned} I_6\ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\, f(\lambda ,\,\theta _0)\varphi (\lambda ,\,\theta _0)w(\lambda ) d\lambda ,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
(19)

On the other hand, for any \(r>0\) under the condition (i) there exists \(\delta =\delta (r)\) such that for \(\left\Vert\theta _T^{*}-\theta _0\right\Vert<\delta \)

$$\begin{aligned} |I_5|\le r \int \limits _{\mathbb {R}}\, I_T^{\varepsilon }\dfrac{w(\lambda )}{v(\lambda )}d\lambda , \end{aligned}$$
(20)

and by the condition (ii)

$$\begin{aligned} \int \limits _{\mathbb {R}}\,I_T^{\varepsilon } \dfrac{w(\lambda )}{v(\lambda )}d\lambda \ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0) \dfrac{w(\lambda )}{v(\lambda )}d\lambda . \end{aligned}$$
(21)

The relations (19)–(21) prove the lemma. \(\square \)

Proof of Theorem 2

By definition of the MCE \(\widehat{\theta }_T\), formally using the Taylor formula, we get

$$\begin{aligned} 0=\nabla _\theta U_T(\widehat{\theta }_T,\,\widehat{\alpha }_T) =\nabla _\theta U_T(\theta _0,\,\widehat{\alpha }_T) +\nabla _\theta \nabla _\theta 'U_T(\theta _T^{*},\,\widehat{\alpha }_T) (\widehat{\theta }_T-\theta _0). \end{aligned}$$
(22)

Since there is no vector-valued Taylor formula, (22) must be understood coordinatewise; that is, each row of the vector equality (22) depends on its own random vector \(\theta _T^{*}\) such that \(\Vert \theta _T^{*}-\theta _0\Vert \le \Vert \widehat{\theta }_T-\theta _0\Vert \). In turn, from (22) we obtain formally

$$\begin{aligned} T^{\frac{1}{2}}(\widehat{\theta }_T-\theta _0)=\left(\nabla _\theta \nabla _\theta ' U_T(\theta _T^{*},\,\widehat{\alpha }_T)\right)^{-1} \left(-T^{\frac{1}{2}}\nabla _\theta U_T(\theta _0,\,\widehat{\alpha }_T)\right). \end{aligned}$$

Since the condition \(\mathbf{N }_4\) implies the possibility of differentiation under the sign of the integrals in (10), we have

$$\begin{aligned} \begin{aligned} -T^{\frac{1}{2}}\nabla _\theta&U_T(\theta _0,\,\widehat{\alpha }_T)=-T^{\frac{1}{2}}\int \limits _{\mathbb {R}}\, \left(\nabla _\theta \log f(\lambda ,\,\theta _0)+\nabla _\theta \left(\dfrac{1}{f(\lambda ,\,\theta _0)}\right)I_T(\lambda ,\,\widehat{\alpha }_T)\right) w(\lambda )d\lambda \\&=T^{\frac{1}{2}}\int \limits _{\mathbb {R}}\, \left(\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)} I_T^{\varepsilon }(\lambda )-\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta _0)}\right) w(\lambda )d\lambda \\&\quad +\,(2\pi )^{-1}T^{-\frac{1}{2}}\int \limits _{\mathbb {R}}\, \left(2\hbox {Re}\left\{ \varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\right\} +|s_T(\lambda ,\,\widehat{\alpha }_T)|^2\right)\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)} w(\lambda )d\lambda \\&=A_T^{(1)}+ A_T^{(2)}+ A_T^{(3)}. \end{aligned} \end{aligned}$$
(23)

Similarly

$$\begin{aligned} \begin{aligned} \nabla _\theta \nabla _\theta '&U_T(\theta _T^{*},\,\widehat{\alpha }_T)=\int \limits _{\mathbb {R}}\, \left(\nabla _\theta \nabla _\theta ' \log f(\lambda ,\,\theta _T^{*})+\nabla _\theta \nabla _\theta ' \left(\dfrac{1}{f(\lambda ,\,\theta _T^{*})}\right)I_T(\lambda ,\,\widehat{\alpha }_T)\right) w(\lambda )d\lambda \\&=\int \limits _{\mathbb {R}}\,\left\{ \left( \dfrac{\nabla _\theta \nabla _\theta 'f(\lambda ,\,\theta _T^{*})}{f(\lambda ,\,\theta _T^{*})} -\dfrac{\nabla _\theta f(\lambda ,\,\theta _T^{*}) \nabla _\theta ' f(\lambda ,\,\theta _T^{*})}{f^2(\lambda ,\,\theta _T^{*})}\right)\right.\\&\quad +\left(2\dfrac{\nabla _\theta f(\lambda ,\,\theta _T^{*})\nabla _\theta ' f(\lambda ,\,\theta _T^{*})}{f^3(\lambda ,\,\theta _T^{*})} -\dfrac{\nabla _\theta \nabla _\theta 'f(\lambda ,\,\theta _T^{*})}{f^2(\lambda ,\,\theta _T^{*})}\right)\times \\&\quad \times \left.(I_T^{\varepsilon }(\lambda )+(\pi T)^{-1}\hbox {Re}\{\varepsilon _T(\lambda ) \overline{s_T(\lambda ,\,\widehat{\alpha }_T)}\} +(2\pi T)^{-1} |s_T(\lambda ,\,\widehat{\alpha }_T)|^2)\right\} w(\lambda )d\lambda \\&=B_T^{(1)}+B_T^{(2)}+B_T^{(3)}+B_T^{(4)}, \end{aligned} \end{aligned}$$
(24)

where the terms \(B_T^{(3)}\) and \(B_T^{(4)}\) contain values \(\hbox {Re}\{\varepsilon _T(\lambda )\overline{s_T(\lambda ,\widehat{\alpha }_T)}\}\) and \(|s_T(\lambda ,\widehat{\alpha }_T)|^2\), respectively.

Bearing in mind the first part of the condition \(\mathbf N _4\)(i), we take in Lemma 7 the functions

$$\begin{aligned} \varphi (\lambda )=\varphi _i(\lambda ) =\dfrac{f_i(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda ),\ i=\overline{1,m}. \end{aligned}$$

Then in the formula (23) \(A_T^{(2)}\ \overset{\hbox {P}}{\longrightarrow }\ 0\), as \(T\rightarrow \infty \).

Consider the term \(A_T^{(3)}=(a_{iT}^{(3)})_{i=1}^m\) in the sum (23):

$$\begin{aligned} a_{iT}^{(3)}=(2\pi )^{-1}T^{-\frac{1}{2}}\int \limits _{\mathbb {R}}\, |s_T(\lambda ,\,\widehat{\alpha }_T)|^2\varphi _i(\lambda )d\lambda , \end{aligned}$$

where \(\varphi _i(\lambda )\) are as before. Under conditions \(\mathbf{C }_1, \mathbf{C }_2, \mathbf{N }_1\) and (1) of \(\mathbf{N }_4\)(i), \(A_T^{(3)}\ \overset{\hbox {P}}{\longrightarrow }\ 0\), as \(T\rightarrow \infty \), because

$$\begin{aligned} |a_{iT}^{(3)}|\le & {} c(\varphi _i)T^{-\frac{1}{2}} \Phi _T(\widehat{\alpha }_T,\,\alpha _0)\\\le & {} c(\varphi _i)c_0 \Vert T^{-\frac{1}{2}}d_T(\alpha _0)\left(\widehat{\alpha }_T-\alpha _0\right)\Vert \;\Vert d_T(\alpha _0)\left(\widehat{\alpha }_T- \alpha _0\right)\Vert \ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$

Examine the behaviour of the terms \(B_T^{(1)},\ldots ,B_T^{(4)}\) in formula (24). Under conditions \(\mathbf{C }_1\) and \(\mathbf{N }_4\)(iii) we can use Lemma 8 with the functions

$$\begin{aligned} \varphi (\lambda ,\,\theta )=\varphi _{ij}(\lambda ,\,\theta )=\dfrac{f_{ij}(\lambda ,\,\theta )}{f(\lambda ,\,\theta )},\ \dfrac{f_i(\lambda ,\,\theta ) f_j(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )},\ i,j=\overline{1,m}, \end{aligned}$$

to obtain the convergence

$$\begin{aligned} B_T^{(1)}\ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\, \left(\dfrac{\nabla _\theta \nabla _\theta 'f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta _0)}-\dfrac{\nabla _\theta f(\lambda ,\,\theta _0) \nabla _\theta 'f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)}\right) w(\lambda )d\lambda ,\ \text {as}\ T\rightarrow \infty . \end{aligned}$$
(25)

Under the condition \(\mathbf{N }_5\)(i) we can use Lemma 9 with functions

$$\begin{aligned} \varphi (\lambda ,\,\theta )=\varphi _{ij}(\lambda ,\,\theta )= \dfrac{f_{ij}(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )} w(\lambda ),\ \dfrac{f_i(\lambda ,\,\theta ) f_j(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )},\ i,j=\overline{1,m}, \end{aligned}$$

to obtain that

$$\begin{aligned} B_T^{(3)}\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ B_T^{(4)}\ \overset{\hbox {P}}{\longrightarrow }\ 0,\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$

Under conditions \(\mathbf{C }_1\) and \(\mathbf{N }_5\)

$$\begin{aligned} B_T^{(2)}\ \overset{\hbox {P}}{\longrightarrow }\ \int \limits _{\mathbb {R}}\, \left(2\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)\nabla _\theta ' f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)} -\dfrac{\nabla _\theta \nabla _\theta 'f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta _0)}\right)w(\lambda )d\lambda , \end{aligned}$$
(26)

if in conditions (i) and (iii) of Lemma 10 we take

$$\begin{aligned} \varphi (\lambda ,\,\theta )=\varphi _{ij}(\lambda ,\,\theta )= \dfrac{f_i(\lambda ,\,\theta )f_j(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )},\ \dfrac{f_{ij}(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )},\ i,j=\overline{1,m}. \end{aligned}$$

So, under conditions \(\mathbf{C }_1, \mathbf{C }_2, \mathbf{N }_4\)(iii) and \(\mathbf{N }_5\)

$$\begin{aligned} \begin{aligned} \nabla _\theta \nabla _\theta 'U_T(\theta _T^{*},\,\widehat{\alpha }_T)\ \overset{\hbox {P}}{\longrightarrow }\&\int \limits _{\mathbb {R}}\,\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)\nabla _\theta ' f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)}w(\lambda )d\lambda \\ =&\int \limits _{\mathbb {R}}\,\nabla _\theta \log f(\lambda ,\,\theta _0)\nabla _\theta '\log f(\lambda ,\,\theta _0)w(\lambda )d\lambda =W_1(\theta _0), \end{aligned} \end{aligned}$$
(27)

because \(W_1(\theta _0)\) is the sum of the right hand sides of (25) and (26).

From the facts obtained it follows that to complete the proof of Theorem 2 it remains to study the asymptotic behaviour of the vector \(A_T^{(1)}\) from (23):

$$\begin{aligned} A_T^{(1)}= T^{\frac{1}{2}}\int \limits _{\mathbb {R}}\,\left(\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)}I_T^{\varepsilon }(\lambda )-\dfrac{\nabla _\theta f(\lambda ,\,\theta _0)}{f(\lambda ,\,\theta _0)}\right) w(\lambda )d\lambda . \end{aligned}$$

We will take

$$\begin{aligned} \begin{aligned} \varphi _i(\lambda )=&\dfrac{f_i(\lambda ,\,\theta _0)}{f^2(\lambda ,\,\theta _0)}w(\lambda ),\ i=\overline{1,m},\\ \Psi (\lambda )=&\sum \limits _{i=1}^m\,u_i\varphi _i(\lambda ),\ \mathrm {u}=\left(u_1,\,\ldots ,\,u_m\right)\in \mathbb {R}^m,\\ Y_T=&\int \limits _{\mathbb {R}}\,I_T^{\varepsilon }(\lambda )\Psi (\lambda )d\lambda ,\ \ \ Y=\int \limits _{\mathbb {R}}\,f(\lambda ,\,\theta _0)\Psi (\lambda )d\lambda , \end{aligned} \end{aligned}$$

and write

$$\begin{aligned} \left<A_T^{(1)},\,\mathrm {u}\right>=T^{\frac{1}{2}}(Y_T-\hbox {E}Y_T)+T^{\frac{1}{2}}(\hbox {E}Y_T-Y). \end{aligned}$$

Under conditions (1) and (2) of \(\mathbf{N }_4\)(i) (Bentkus 1972b; Ibragimov 1963) for any \(u\in \mathbb {R}^m\)

$$\begin{aligned} T^{\frac{1}{2}}(\hbox {E}Y_T-Y)\ \longrightarrow \ 0, \ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
(28)

On the other hand

$$\begin{aligned} T^{\frac{1}{2}}(Y_T-\hbox {E}Y_T)=T^{-\frac{1}{2}}\int \limits _0^T\int \limits _0^T\, \left(\varepsilon (t)\varepsilon (s) -B(t-s)\right)\hat{b}(t-s)dtds \end{aligned}$$

with

$$\begin{aligned} \hat{b}(t)=\int \limits _{\mathbb {R}}\,e^{\mathrm {i}\lambda t}\,(2\pi )^{-1}\Psi (\lambda )d\lambda . \end{aligned}$$

Thus we can apply Lemma 5 taking \(b(\lambda )=(2\pi )^{-1}\Psi (\lambda )\) in the formula (18) to obtain for any \(u\in \mathbb {R}^m\)

$$\begin{aligned} T^{\frac{1}{2}}(Y_T-\hbox {E}Y_T)\ \Rightarrow \ N(0,\,\sigma ^2),\ \ \text {as}\ \ T\rightarrow \infty , \end{aligned}$$
(29)

where

$$\begin{aligned} \begin{aligned} \sigma ^2=\,&4\pi \int \limits _{\mathbb {R}}\,\Psi ^2(\lambda ) f^2(\lambda ,\,\theta _0)d\lambda +\gamma _2\left(\int \limits _{\mathbb {R}}\,\Psi (\lambda ) f(\lambda ,\,\theta _0)d\lambda \right)^2. \end{aligned} \end{aligned}$$

By the Cramér–Wold device, the relations (28) and (29) are equivalent to the convergence

$$\begin{aligned} A_T^{(1)}\ \Rightarrow \ N\left(0,\,W_2(\theta _0)+V(\theta _0)\right),\ \ \text {as}\ \ T\rightarrow \infty . \end{aligned}$$
(30)

Now (15) follows from (27) and (30). \(\square \)

Remark 4

The conditions of Theorem 2 also imply that the conditions of Lemma 6 are fulfilled for the functions \(\hat{a}\) and \(\hat{b}\). Indeed, by condition \(\mathbf{A }_1\) we have \(\hat{a}\in L_1(\mathbb {R})\cap L_2(\mathbb {R})\) and can take \(p=1\) in Lemma 6. On the other hand, if we regard \(b=(2\pi )^{-1}\Psi \) as the original of a Fourier transform, then from \(\mathbf{N }_4\)(i)1) we have \(b\in L_1(\mathbb {R})\cap L_2(\mathbb {R})\). Then, according to the Plancherel theorem, \(\hat{b}\in L_2(\mathbb {R})\) and we can take \(q=2\) in Lemma 6. Thus

$$\begin{aligned} \frac{2}{p}+\frac{1}{q}=\frac{5}{2}, \end{aligned}$$

and the conclusion of Lemma 6 holds.

5 Example: The motion of a pendulum in a turbulent fluid

First of all we review a number of results discussed in Parzen (1962), Anh et al. (2002), and Leonenko and Papić (2019), see also references therein.

We examine the stationary Lévy-driven continuous-time autoregressive process \(\varepsilon (t),\ t\in \mathbb {R}\), of order two (CAR(2) process) in the under-damped case (see Leonenko and Papić 2019 for details).

The motion of a pendulum is described by the equation

$$\begin{aligned} \ddot{\varepsilon }(t)+2\alpha \dot{\varepsilon }(t)+\left(\omega ^2+\alpha ^2\right)\varepsilon (t)=\dot{L}(t),\ t\in \mathbb {R}, \end{aligned}$$
(31)

in which \(\varepsilon (t)\) is the displacement of the pendulum from its rest position, \(\alpha \) is a damping factor, and \(\dfrac{2\pi }{\omega }\) is the damped period of the pendulum (see, e.g., Parzen 1962, pp. 111–113).

We consider the Green function solution of the equation (31), in which \(\dot{L}\) is the Lévy noise, i.e. the derivative of a Lévy process in the distribution sense (see Anh et al. 2002; Leonenko and Papić 2019 for details). The solution can be defined as the linear process

$$\begin{aligned} \varepsilon (t)=\int \limits _{\mathbb {R}}\,\hat{a}(t-s)dL(s),\ t\in \mathbb {R}, \end{aligned}$$

where the Green function

$$\begin{aligned} \hat{a}(t)=e^{-\alpha t}\, \frac{\sin (\omega t)}{\omega }\,\mathbb {I}_{[0,\,\infty )}(t),\ \alpha >0. \end{aligned}$$
(32)
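A minimal simulation sketch of the linear process above: the stochastic integral is approximated by convolving the Green function (32) with increments of a driving Lévy process on a grid. Brownian motion is used as the simplest Lévy driver, and the step size and parameter values are illustrative choices only.

```python
# Minimal sketch: simulate eps(t) = int a_hat(t - s) dL(s) on a grid by convolving
# the Green function (32) with increments of a driving Levy process.
# Brownian motion is taken as the simplest Levy driver (illustrative assumption).
import numpy as np

rng = np.random.default_rng(0)
alpha, omega = 0.5, 2.0                 # damping factor and damped frequency (illustrative)
h = 0.01                                # time step of the simulation grid

t_ker = np.arange(0.0, 50.0, h)         # truncate the kernel: a_hat decays like exp(-alpha*t)
kernel = np.exp(-alpha * t_ker) * np.sin(omega * t_ker) / omega

n = 40000
dL = rng.normal(0.0, np.sqrt(h), size=n)          # standard Brownian increments (E L(1)^2 = 1)
eps = np.convolve(dL, kernel)[:n][len(t_ker):]    # drop a burn-in of the kernel length
print(eps[:5])                                    # an approximately stationary CAR(2) path
```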

Assuming \(\hbox {E}L(1)=0\), \(d_2=\hbox {E}L^2(1)<\infty \), we obtain

$$\begin{aligned} B(t)=d_2\int \limits _0^\infty \,\hat{a}(t+s)\hat{a}(s)ds=\frac{d_2}{4(\alpha ^2+\omega ^2)}\,e^{-\alpha |t|}\, \left(\frac{\sin (\omega |t|)}{\omega }+\frac{\cos (\omega t)}{\alpha }\right). \end{aligned}$$
(33)
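The closed form (33) can be checked against the defining integral by quadrature; a short sketch with illustrative parameter values:

```python
# Numerical check of the covariance formula (33):
#   B(t) = d2 * int_0^inf a_hat(t+s) a_hat(s) ds
#        = d2 / (4 (alpha^2 + omega^2)) * exp(-alpha|t|) * ( sin(omega|t|)/omega + cos(omega t)/alpha ).
import numpy as np
from scipy.integrate import quad

alpha, omega, d2 = 0.5, 2.0, 1.3        # illustrative values

def a_hat(t):                           # Green function (32) on [0, infinity)
    return np.exp(-alpha * t) * np.sin(omega * t) / omega

def B_integral(t):
    val, _ = quad(lambda s: a_hat(abs(t) + s) * a_hat(s), 0.0, np.inf)
    return d2 * val

def B_closed(t):
    return d2 / (4.0 * (alpha ** 2 + omega ** 2)) * np.exp(-alpha * abs(t)) * (
        np.sin(omega * abs(t)) / omega + np.cos(omega * t) / alpha)

for t in (0.0, 0.7, 2.5):
    print(t, B_integral(t), B_closed(t))   # the two values agree up to quadrature error
```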

The formula (33) for the covariance function of the process \(\varepsilon \) corresponds to the formula (2.12) in Leonenko and Papić (2019) for the correlation function

$$\begin{aligned} \hbox {Corr}\left(\varepsilon (t),\,\varepsilon (0)\right)=\frac{B(t)}{B(0)}= e^{-\alpha |t|}\,\left(\cos (\omega t)+\frac{\alpha }{\omega }\sin (\omega |t|)\right). \end{aligned}$$

On the other hand for \(\hat{a}(t)\) given by (32)

$$\begin{aligned} a(\lambda )=\int \limits _0^\infty \,e^{-i\lambda t}\hat{a}(t)dt=\frac{1}{\alpha ^2+\omega ^2-\lambda ^2+2i\alpha \lambda }. \end{aligned}$$

Then the positive spectral density of the stationary process \(\varepsilon \) can be written as (compare with Parzen 1962)

$$\begin{aligned} f_2(\lambda )=\frac{d_2}{2\pi }\left|a(\lambda )\right|^2=\frac{d_2}{2\pi }\cdot \frac{1}{\left(\lambda ^2-\alpha ^2-\omega ^2\right)^2+4\alpha ^2\lambda ^2},\ \lambda \in \mathbb {R}. \end{aligned}$$
(34)
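Similarly, one can verify numerically that the quadrature value of \(a(\lambda )\) matches the closed form and that \(\int _{\mathbb {R}}f_2(\lambda )d\lambda =B(0)\), as must be the case for a spectral density; a short sketch with illustrative values:

```python
# Numerical check of (34): the transfer function a(lambda) computed by quadrature
# matches the closed form, and the spectral density integrates to
# B(0) = d2 / (4 alpha (alpha^2 + omega^2)), in agreement with (33).
import numpy as np
from scipy.integrate import quad

alpha, omega, d2 = 0.5, 2.0, 1.3        # illustrative values

def a_hat(t):                           # Green function (32) on [0, infinity)
    return np.exp(-alpha * t) * np.sin(omega * t) / omega

def a_quad(lam):                        # int_0^inf exp(-i lambda t) a_hat(t) dt by quadrature
    re, _ = quad(lambda t: np.cos(lam * t) * a_hat(t), 0.0, np.inf)
    im, _ = quad(lambda t: -np.sin(lam * t) * a_hat(t), 0.0, np.inf)
    return re + 1j * im

def a_closed(lam):
    return 1.0 / (alpha ** 2 + omega ** 2 - lam ** 2 + 2j * alpha * lam)

def f2(lam):
    return d2 / (2.0 * np.pi) * abs(a_closed(lam)) ** 2

print(a_quad(1.5), a_closed(1.5))       # agree up to quadrature error
total, _ = quad(f2, -np.inf, np.inf)
print(total, d2 / (4.0 * alpha * (alpha ** 2 + omega ** 2)))   # int f2 dlambda = B(0)
```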

It is convenient to rewrite (34) in the form

$$\begin{aligned} f_2(\lambda )=f(\lambda ,\,\theta )=\frac{1}{2\pi }\cdot \frac{\beta }{\left(\lambda ^2-\alpha ^2-\gamma ^2\right)^2+4\alpha ^2\lambda ^2},\ \lambda \in \mathbb {R}, \end{aligned}$$
(35)

where \(\alpha =\theta _1\) is the damping factor, \(\beta =-\varkappa ^{(2)}(0)=d_2=\theta _2\), and \(\gamma =\omega =\theta _3\) is the damped cyclic frequency of the pendulum oscillations. Suppose that

$$\begin{aligned} \theta= & {} \left(\theta _1,\,\theta _2,\,\theta _3\right) = \left(\alpha ,\,\beta ,\,\gamma \right)\in \Theta \\= & {} \left(\underline{\alpha },\,\overline{\alpha }\right)\times \left(\underline{\beta },\,\overline{\beta }\right)\times \left(\underline{\gamma },\,\overline{\gamma }\right),\ \underline{\alpha },\underline{\beta },\underline{\gamma }>0,\ \overline{\alpha },\overline{\beta },\overline{\gamma }<\infty . \end{aligned}$$

The condition \(\mathbf{C }_3\) is fulfilled for spectral density (35).

Assume that

$$\begin{aligned} w(\lambda )=\left(1+\lambda ^2\right)^{-a},\ \lambda \in \mathbb {R},\ a>0. \end{aligned}$$

The precise value of \(a\) will be chosen below.

Obviously the functions \(w(\lambda )\log f(\lambda ,\,\theta )\), \(\frac{w(\lambda )}{f(\lambda ,\,\theta )}\) are continuous on \(\mathbb {R}\times \Theta ^c\). For any \(\Lambda >0\) the function \(\left|\log f(\lambda ,\,\theta )\right|\) is bounded on the set \([-\Lambda ,\,\Lambda ]\times \Theta ^c\). The number \(\Lambda \) can be chosen so that for \(\lambda \in \mathbb {R}\backslash [-\Lambda ,\,\Lambda ]\)

$$\begin{aligned} 1<\frac{8\pi }{\overline{\beta }}\underline{\alpha }^2\lambda ^2\le f^{-1}(\lambda ,\,\theta ) \le \frac{2\pi }{\underline{\beta }}\left(2\left(\lambda ^4+\left(\overline{\alpha }^2+\overline{\gamma }^2\right)^2\right)+4\overline{\alpha }^2\lambda ^2\right). \end{aligned}$$

Thus the function \(Z_1(\lambda )\) in the condition \(\mathbf{C }_4\)(i) exists.

As for condition \(\mathbf{C }_4\)(ii), if \(a\ge 2\), then

$$\begin{aligned} \sup \limits _{\lambda \in \mathbb {R},\,\theta \in \Theta ^c}\,\frac{w(\lambda )}{f(\lambda ,\,\theta )}<\infty . \end{aligned}$$

As a function v in condition \(\mathbf{C }_5\) we take

$$\begin{aligned} v(\lambda )=\left(1+\lambda ^2\right)^{-b},\ \lambda \in \mathbb {R},\ b>0. \end{aligned}$$

Obviously, if \(a\ge b\), then \(\sup \limits _{\lambda \in \mathbb {R}}\,\frac{w(\lambda )}{v(\lambda )}<\infty \) (condition \(\mathbf{C }_5\)(ii)), and the function \(\frac{v(\lambda )}{f(\lambda ,\,\theta )}\) is uniformly continuous in \((\lambda ,\,\theta )\in \mathbb {R}\times \Theta ^c\), if \(b\ge 2\) (condition \(\mathbf{C }_5\)(i)).

Further it will be helpful to use the notation \(s(\lambda )=\left(\lambda ^2-\alpha ^2-\gamma ^2\right)^2+4\alpha ^2\lambda ^2\). Then

$$\begin{aligned} \begin{aligned} f_\alpha (\lambda ,\,\theta )&=\frac{\partial }{\partial \alpha }\,f(\lambda ,\,\theta )=-\frac{2\alpha \beta }{\pi }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-2}(\lambda );\\ f_\beta (\lambda ,\,\theta )&=\frac{\partial }{\partial \beta }\,f(\lambda ,\,\theta )=\left(2\pi s(\lambda )\right)^{-1};\\ f_\gamma (\lambda ,\,\theta )&=\frac{\partial }{\partial \gamma }\,f(\lambda ,\,\theta )=\frac{2\beta \gamma }{\pi }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)s^{-2}(\lambda ). \end{aligned} \end{aligned}$$
(36)

To check the condition \(\mathbf{N }_4\)(i)1) consider the functions

$$\begin{aligned} \begin{aligned} \varphi _\alpha (\lambda )&=\frac{f_\alpha (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda ) =-\frac{4\pi \alpha }{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)w(\lambda );\\ \varphi _\beta (\lambda )&=\frac{f_\beta (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )=\frac{2\pi }{\beta ^2}s(\lambda )w(\lambda );\\ \varphi _\gamma (\lambda )&=\frac{f_\gamma (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda ) =\frac{8\pi \gamma }{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)w(\lambda ). \end{aligned} \end{aligned}$$
(37)

Then the condition \(\mathbf{N }_4\)(i)1) is satisfied for \(\varphi _\alpha \) and \(\varphi _\gamma \) when \(a>\frac{3}{2}\), for \(\varphi _\beta \) when \(a>\frac{5}{2}\). The same values of a are sufficient also to meet the condition \(\mathbf{N }_4\)(i)2).
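These integrability thresholds are easy to illustrate numerically. The sketch below builds \(\varphi _i=f_if^{-2}w\) directly from (35), with the derivatives of \(f\) taken by finite differences, and confirms that the integrals \(\int _{\mathbb {R}}|\varphi _i(\lambda )|d\lambda \) are finite for \(a=3\) (illustrative parameter values only):

```python
# Numerical illustration of the integrability thresholds for a = 3: the functions
#   phi_i(lambda) = f_i(lambda, theta) * f(lambda, theta)^{-2} * w(lambda)
# built from the spectral density (35) are absolutely integrable over R.
import numpy as np
from scipy.integrate import quad

theta = np.array([0.5, 1.3, 2.0])       # (alpha, beta, gamma), illustrative
a = 3.0

def f(lam, th):                         # spectral density (35)
    al, be, ga = th
    s = (lam ** 2 - al ** 2 - ga ** 2) ** 2 + 4.0 * al ** 2 * lam ** 2
    return be / (2.0 * np.pi * s)

def w(lam):
    return (1.0 + lam ** 2) ** (-a)

def phi(lam, i, h=1e-6):                # phi_i = (df/dtheta_i) / f^2 * w, derivative by finite differences
    e = np.zeros(3); e[i] = h
    df = (f(lam, theta + e) - f(lam, theta - e)) / (2.0 * h)
    return df / f(lam, theta) ** 2 * w(lam)

for i, name in enumerate(("alpha", "beta", "gamma")):
    val, _ = quad(lambda l: abs(phi(l, i)), -np.inf, np.inf, limit=200)
    print(f"integral of |phi_{name}| over R is approximately {val:.4f}")   # finite for a = 3
```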

To verify \(\mathbf{N }_4\)(i)3) fix \(\theta \in \Theta ^c\) and denote by \(\varphi (\lambda )\), \(\lambda \in \mathbb {R}\), any of the continuous functions \(\varphi _\alpha (\lambda )\), \(\varphi _\beta (\lambda )\), \(\varphi _\gamma (\lambda )\), \(\lambda \in \mathbb {R}\). Suppose \(|1-\eta |<\delta <\frac{1}{2}\). Then

$$\begin{aligned} \sup \limits _{\lambda \in \mathbb {R}}\,\left|\varphi (\eta \lambda )-\varphi (\lambda )\right|= & {} \max \left(\sup \limits _{\eta |\lambda |\le \Lambda }\,\left|\varphi (\eta \lambda )-\varphi (\lambda )\right|,\ \sup \limits _{\eta |\lambda |>\Lambda }\,\left|\varphi (\eta \lambda )-\varphi (\lambda )\right|\right)\\= & {} \max \left(s_1,\,s_2\right), \end{aligned}$$
$$\begin{aligned} s_2\le \sup \limits _{|\lambda |>\Lambda }\,\left|\varphi (\lambda )\right|+\sup \limits _{\eta |\lambda |>\Lambda }\,\left|\varphi (\lambda )\right|= s_3+s_4. \end{aligned}$$

By the properties of the functions \(\varphi \), under the assumption \(a>\frac{5}{2}\), for any \(\varepsilon >0\) there exists \(\Lambda =\Lambda (\varepsilon )>0\) such that \(|\varphi (\lambda )|<\frac{\varepsilon }{2}\) for \(|\lambda |>\frac{2}{3}\Lambda \). So, \(s_3\le \frac{\varepsilon }{2}\). We have also \(s_4\le \underset{|\lambda |>\frac{2}{3}\Lambda }{\sup }\,|\varphi (\lambda )|\le \frac{\varepsilon }{2}\). On the other hand,

$$\begin{aligned} s_1\le \sup \limits _{|\lambda |<2\Lambda }\,\left|\varphi (\eta \lambda )-\varphi (\lambda )\right|,\ \ |\eta \lambda -\lambda |\le 2\Lambda \delta =\delta ', \end{aligned}$$

and by the proper choice of \(\delta \)

$$\begin{aligned} s_1\le \sup \limits _{\begin{array}{c} \lambda _1,\lambda _2\in [-2\Lambda ,\,2\Lambda ]\\ \left|\lambda _1-\lambda _2\right|<\delta ' \end{array}}\, \left|\varphi (\lambda _1)-\varphi (\lambda _2)\right|<\varepsilon , \end{aligned}$$

and condition \(\mathbf{N }_4\)(i)(3) is met.

Using (37) we get for any \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),

$$\begin{aligned} \begin{aligned} \varphi _\alpha '(\lambda )&=-\frac{8\pi \alpha }{\beta }\,\lambda w(\lambda )-\frac{4\pi \alpha }{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)w'(\lambda ) =O\left(\lambda ^{-2a+1}\right);\\ \varphi _\beta '(\lambda )&=\frac{2\pi }{\beta ^2}\bigl (s'(\lambda )w(\lambda )+s(\lambda )w'(\lambda )\bigr )=O\left(\lambda ^{-2a+3}\right);\\ \varphi _\gamma '(\lambda )&=\frac{16\pi \gamma }{\beta }\,\lambda w(\lambda )+\frac{8\pi \gamma }{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)w'(\lambda ) =O\left(\lambda ^{-2a+1}\right). \end{aligned} \end{aligned}$$

Therefore for \(a>\frac{3}{2}\) these derivatives are uniformly continuous on \(\mathbb {R}\) (condition \(\mathbf{N }_4\)(i)4)). So, to satisfy condition \(\mathbf{N }_4\)(i) we can take the weight function \(w(\lambda )\) with \(a>\frac{5}{2}\).

The check of assumption \(\mathbf{N }_4\)(ii) is similar to the check of \(\mathbf{C }_4\)(i).

As \(\lambda \rightarrow \infty \), uniformly in \(\theta \in \Theta ^c\)

$$\begin{aligned} \begin{aligned} \frac{\left|f_\alpha (\lambda ,\,\theta )\right|}{f(\lambda ,\,\theta )}w(\lambda )&=\left|\varphi _\alpha (\lambda )\right|f(\lambda ,\,\theta ) =2\alpha \left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-1}(\lambda )w(\lambda )=O\left(\lambda ^{-2a-2}\right);\\ \frac{\left|f_\beta (\lambda ,\,\theta )\right|}{f(\lambda ,\,\theta )}w(\lambda )&=\varphi _\beta (\lambda )f(\lambda ,\,\theta ) =\beta ^{-1}w(\lambda )=O\left(\lambda ^{-2a}\right);\\ \frac{\left|f_\gamma (\lambda ,\,\theta )\right|}{f(\lambda ,\,\theta )}w(\lambda )&=\left|\varphi _\gamma (\lambda )\right|f(\lambda ,\,\theta ) =4\gamma \left|\lambda ^2-\alpha ^2-\gamma ^2\right|s^{-1}(\lambda )w(\lambda )=O\left(\lambda ^{-2a-2}\right). \end{aligned} \end{aligned}$$
(38)

On the other hand, for any \(\Lambda >0\) the functions (38) are bounded on the sets \([-\Lambda ,\,\Lambda ]\times \Theta ^c\).

To check \(\mathbf{N }_4\)(iii) note first of all that, uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),

$$\begin{aligned} \begin{aligned} \frac{f_\alpha ^2(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\alpha (\lambda )f_\alpha (\lambda ,\,\theta ) =8\alpha ^2\left(\lambda ^2+\alpha ^2+\gamma ^2\right)^2s^{-2}(\lambda )w(\lambda )=O\left(\lambda ^{-2a-4}\right);\\ \frac{f_\beta ^2(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\beta (\lambda )f_\beta (\lambda ,\,\theta ) =\beta ^{-2}w(\lambda )=O\left(\lambda ^{-2a}\right);\\ \frac{f_\gamma ^2(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\gamma (\lambda )f_\gamma (\lambda ,\,\theta ) =16\gamma ^2\left(\lambda ^2-\alpha ^2-\gamma ^2\right)^2s^{-2}(\lambda )w(\lambda )=O\left(\lambda ^{-2a-4}\right). \end{aligned} \end{aligned}$$
(39)

These functions are continuous on \(\mathbb {R}\times \Theta ^c\), as well as the functions

$$\begin{aligned} \begin{aligned} \frac{f_\alpha (\lambda ,\,\theta )f_\beta (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\alpha (\lambda )f_\beta (\lambda ,\,\theta ) =-\frac{2\alpha }{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-1}(\lambda )w(\lambda );\\ \frac{f_\alpha (\lambda ,\,\theta )f_\gamma (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\alpha (\lambda )f_\gamma (\lambda ,\,\theta ) =-8\alpha \gamma \left(\lambda ^4-\left(\alpha ^2+\gamma ^2\right)^2\right)s^{-2}(\lambda )w(\lambda );\\ \frac{f_\beta (\lambda ,\,\theta )f_\gamma (\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\varphi _\beta (\lambda )f_\gamma (\lambda ,\,\theta ) =\frac{4\gamma }{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)s^{-1}(\lambda )w(\lambda ). \end{aligned} \end{aligned}$$
(40)

Moreover, uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),

$$\begin{aligned} \begin{aligned} \frac{f_{\alpha \alpha }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=-4\left(\lambda ^2+3\alpha ^2+\gamma ^2\right)s^{-1}(\lambda )w(\lambda ) +8\alpha \left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-2}(\lambda )s_\alpha '(\lambda )w(\lambda )\\&=O\left(\lambda ^{-2a-2}\right);\\ \frac{f_{\beta \beta }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=0;\\ \frac{f_{\gamma \gamma }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=4\left(\lambda ^2-\alpha ^2-3\gamma ^2\right)s^{-1}(\lambda )w(\lambda ) -8\gamma \left(\lambda ^2-\alpha ^2-\gamma ^2\right)s^{-2}(\lambda )s_\gamma '(\lambda )w(\lambda )\\&=O\left(\lambda ^{-2a-2}\right);\\ \frac{f_{\alpha \beta }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=-\frac{4\alpha }{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-1}(\lambda )w(\lambda ) =O\left(\lambda ^{-2a-2}\right);\\ \frac{f_{\alpha \gamma }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=-8\alpha \gamma s^{-1}(\lambda )w(\lambda )+ 16\alpha \gamma \left(\lambda ^4-\left(\alpha ^2+\gamma ^2\right)^2\right)s^{-2}(\lambda )w(\lambda ) =O\left(\lambda ^{-2a-4}\right);\\ \frac{f_{\beta \gamma }(\lambda ,\,\theta )}{f(\lambda ,\,\theta )}w(\lambda )&=\frac{4\gamma }{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)s^{-1}(\lambda )w(\lambda ) =O\left(\lambda ^{-2a-2}\right). \end{aligned} \end{aligned}$$
(41)

Note that the functions (41) are continuous on \(\mathbb {R}\times \Theta ^c\) as well as functions (39) and (40). Therefore the condition \(\mathbf{N }_4\)(iii) is fulfilled.

Let us verify the condition \(\mathbf{N }_5\)(i). According to equation (39), uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),

$$\begin{aligned} \begin{aligned} \frac{f_\alpha ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}w(\lambda )&=\frac{16\pi \alpha ^2}{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)^2s^{-1}(\lambda )w(\lambda )=O\left(\lambda ^{-2a}\right);\\ \frac{f_\beta ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}w(\lambda )&=\frac{2\pi }{\beta ^3}s(\lambda )w(\lambda )=O\left(\lambda ^{-2a+4}\right);\\ \frac{f_\gamma ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}w(\lambda )&=\frac{32\pi \gamma ^2}{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)^2s^{-1}(\lambda )w(\lambda )=O\left(\lambda ^{-2a}\right). \end{aligned} \end{aligned}$$
(42)

Therefore the functions (42), which are continuous in \((\lambda ,\theta )\in \mathbb {R}\times \Theta ^c\), are bounded on \(\mathbb {R}\times \Theta ^c\), if \(a\ge 2\).

Using equations (40) and (41) we obtain uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),

$$\begin{aligned} \begin{aligned} \frac{f_{\alpha \alpha }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=-\frac{8\pi }{\beta }\left(\lambda ^2+3\alpha ^2+\gamma ^2\right)w(\lambda ) +\frac{16\pi \alpha }{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)s^{-1}(\lambda )s_\alpha '(\lambda )w(\lambda )\\&=O\left(\lambda ^{-2a+2}\right);\\ \frac{f_{\beta \beta }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=0;\\ \frac{f_{\gamma \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\frac{8\pi }{\beta }\left(\lambda ^2-\alpha ^2-3\gamma ^2\right)w(\lambda ) -\frac{16\pi \gamma }{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)s^{-1}(\lambda )s_\gamma '(\lambda )w(\lambda )\\&=O\left(\lambda ^{-2a+2}\right);\\ \frac{f_{\alpha \beta }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=-\frac{8\pi \alpha }{\beta ^2} \left(\lambda ^2+\alpha ^2+\gamma ^2\right)w(\lambda ) =O\left(\lambda ^{-2a+2}\right);\\ \frac{f_{\alpha \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=-\frac{16\alpha \gamma }{\beta }w(\lambda )+ \frac{32\pi \alpha \gamma }{\beta }\left(\lambda ^4-\left(\alpha ^2+\gamma ^2\right)^2\right)s^{-1}(\lambda )w(\lambda ) =O\left(\lambda ^{-2a}\right);\\ \frac{f_{\beta \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}w(\lambda )&=\frac{8\pi \gamma }{\beta ^2}\left(\lambda ^2-\alpha ^2-\gamma ^2\right)w(\lambda ) =O\left(\lambda ^{-2a+2}\right). \end{aligned} \end{aligned}$$
(43)

So the functions (43), which are continuous on \(\mathbb {R}\times \Theta ^c\), are bounded on \(\mathbb {R}\times \Theta ^c\), if \(a\ge 1\).

To check \(\mathbf{N }_5\)(ii) consider the weight function

$$\begin{aligned} v(\lambda )=\left(1+\lambda ^2\right)^{-b},\ \lambda \in \mathbb {R},\ b>0. \end{aligned}$$

If \(a\ge b\), then the function \(\frac{w(\lambda )}{v(\lambda )}\) is bounded on \(\mathbb {R}\) (condition \(\mathbf{N }_5\)(iii)). Using (42) we obtain, uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),

$$\begin{aligned} \begin{aligned} \frac{f_\alpha ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=\frac{16\pi \alpha ^2}{\beta }\left(\lambda ^2+\alpha ^2+\gamma ^2\right)^2s^{-1}(\lambda )v(\lambda )=O\left(\lambda ^{-2b}\right);\\ \frac{f_\beta ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=\frac{2\pi }{\beta ^3}s(\lambda )v(\lambda )=O\left(\lambda ^{-2b+4}\right);\\ \frac{f_\gamma ^2(\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=\frac{32\pi \gamma ^2}{\beta }\left(\lambda ^2-\alpha ^2-\gamma ^2\right)^2s^{-1}(\lambda )v(\lambda )=O\left(\lambda ^{-2b}\right). \end{aligned} \end{aligned}$$
(44)

In turn, similarly to (40), we obtain uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),

$$\begin{aligned} \begin{aligned} \frac{f_\alpha (\lambda ,\,\theta )f_\beta (\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=-\frac{4\pi \alpha }{\beta ^2}\left(\lambda ^2+\alpha ^2+\gamma ^2\right)v(\lambda )=O\left(\lambda ^{-2b+2}\right);\\ \frac{f_\alpha (\lambda ,\,\theta )f_\gamma (\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=-\frac{16\alpha \gamma }{\beta }\left(\lambda ^4-\left(\alpha ^2+\gamma ^2\right)^2\right)s^{-1}(\lambda )v(\lambda )=O\left(\lambda ^{-2b}\right);\\ \frac{f_\beta (\lambda ,\,\theta )f_\gamma (\lambda ,\,\theta )}{f^3(\lambda ,\,\theta )}v(\lambda )&=\frac{8\pi \gamma }{\beta ^2}\left(\lambda ^2-\alpha ^2-\gamma ^2\right)v(\lambda )=O\left(\lambda ^{-2b+2}\right). \end{aligned} \end{aligned}$$
(45)

The functions (44) and (45) will be uniformly continuous in \((\lambda ,\theta )\in \mathbb {R}\times \Theta ^c\), if they converge to zero, as \(\lambda \rightarrow \infty \), uniformly in \(\theta \in \Theta ^c\), that is if \(b>2\).

Similarly to (43) uniformly in \(\theta \in \Theta ^c\), as \(\lambda \rightarrow \infty \),

$$\begin{aligned} \begin{aligned}&\frac{f_{\alpha \alpha }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )=O\left(\lambda ^{-2b+2}\right);\ \ \frac{f_{\beta \beta }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )=0;\quad \ \ \frac{f_{\gamma \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )=O\left(\lambda ^{-2b+2}\right);\\&\frac{f_{\alpha \beta }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )=O\left(\lambda ^{-2b+2}\right);\ \ \frac{f_{\alpha \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda )=O\left(\lambda ^{-2b}\right);\quad \\&\frac{f_{\beta \gamma }(\lambda ,\,\theta )}{f^2(\lambda ,\,\theta )}v(\lambda ) =O\left(\lambda ^{-2b+2}\right). \end{aligned} \end{aligned}$$
(46)

Thus the functions (44)–(46) are uniformly continuous in \((\lambda ,\theta )\in \mathbb {R}\times \Theta ^c\), if \(b>2\).

Proceeding to the verification of condition \(\mathbf{N }_6\), we note that for any \(x=\left(x_\alpha ,\,x_\beta ,\,x_\gamma \right)\ne 0\)

$$\begin{aligned} \left<W_1(\theta )x,\,x\right>=\int \limits _{\mathbb {R}}\,\left(x_\alpha f_\alpha (\lambda ,\,\theta )+x_\beta f_\beta (\lambda ,\,\theta ) +x_\gamma f_\gamma (\lambda ,\,\theta )\right)^2\frac{w(\lambda )}{f^2(\lambda ,\,\theta )}d\lambda . \end{aligned}$$

From equation (36) it is seen that the positive definiteness of the matrix \(W_1(\theta )\) follows from the linear independence of the functions \(\lambda ^2+\alpha ^2+\gamma ^2\), \(s(\lambda )\), \(\lambda ^2-\alpha ^2-\gamma ^2\). Positive definiteness of the matrix \(W_2(\theta )\) is established similarly.
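The positive definiteness can also be checked numerically. The following sketch assembles \(W_1(\theta )=\int _{\mathbb {R}}\nabla _\theta \log f(\lambda ,\,\theta )\nabla _\theta '\log f(\lambda ,\,\theta )w(\lambda )d\lambda \) by quadrature for the density (35) with \(a=3\) and verifies that all eigenvalues are positive (illustrative parameter values; the gradient of \(\log f\) is computed by finite differences):

```python
# Numerical check of condition N_6: assemble
#   W_1(theta) = int grad_theta log f * grad_theta' log f * w dlambda
# by quadrature for the spectral density (35) with w(lambda) = (1 + lambda^2)^{-3}
# and verify that the matrix is positive definite (illustrative parameter values).
import numpy as np
from scipy.integrate import quad

theta = np.array([0.5, 1.3, 2.0])       # (alpha, beta, gamma), illustrative

def log_f(lam, th):                     # log of the spectral density (35)
    al, be, ga = th
    s = (lam ** 2 - al ** 2 - ga ** 2) ** 2 + 4.0 * al ** 2 * lam ** 2
    return np.log(be) - np.log(2.0 * np.pi * s)

def grad_log_f(lam, h=1e-6):            # gradient in theta by central finite differences
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (log_f(lam, theta + e) - log_f(lam, theta - e)) / (2.0 * h)
    return g

def w(lam):
    return (1.0 + lam ** 2) ** (-3.0)

W1 = np.zeros((3, 3))
for i in range(3):
    for j in range(i, 3):
        val, _ = quad(lambda l: grad_log_f(l)[i] * grad_log_f(l)[j] * w(l),
                      -np.inf, np.inf, limit=200)
        W1[i, j] = W1[j, i] = val

print(np.linalg.eigvalsh(W1))           # all eigenvalues positive, so W_1(theta) > 0
```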

In our example, to satisfy the consistency conditions \(\mathbf{C }_4\) and \(\mathbf{C }_5\), the weight functions \(w(\lambda )\) and \(v(\lambda )\) should be chosen so that \(a\ge b>2\). On the other hand, to satisfy the asymptotic normality conditions \(\mathbf{N }_4\) and \(\mathbf{N }_5\), the functions \(w(\lambda )\) and \(v(\lambda )\) should be such that \(a>\frac{5}{2}\) and \(a\ge b>2\).
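To connect the example with the estimation procedure itself, the following is a purely illustrative simulation sketch: a CAR(2) path is generated as in (31)–(32) with a Brownian driver, the continuous-time periodogram is approximated by a Riemann sum on a fine grid, and the weighted Whittle contrast \(U_T(\theta )=\int _{\mathbb {R}}\bigl (\log f(\lambda ,\,\theta )+I_T(\lambda )/f(\lambda ,\,\theta )\bigr )w(\lambda )d\lambda \), whose \(\theta \)-gradient appears in (23), is minimized numerically with \(a=3\). The regression part is taken as known (\(g\equiv 0\)), so the residual periodogram reduces to the periodogram of \(\varepsilon \); this is only a sketch under these simplifying assumptions, not the exact procedure analysed above.

```python
# Purely illustrative sketch of the weighted Whittle minimum contrast estimator
#   U_T(theta) = int ( log f(lambda, theta) + I_T(lambda) / f(lambda, theta) ) w(lambda) dlambda
# for the pendulum CAR(2) noise, under simplifying assumptions: Brownian driver,
# g = 0, and a Riemann-sum approximation of the continuous-time periodogram.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
alpha0, beta0, gamma0 = 0.5, 1.0, 2.0     # true (alpha, beta = d2, gamma = omega), illustrative
h, T = 0.01, 400.0                        # grid step and observation horizon

# simulate eps(t) on the grid by convolving the Green function (32) with Brownian increments
t_ker = np.arange(0.0, 40.0, h)
kernel = np.exp(-alpha0 * t_ker) * np.sin(gamma0 * t_ker) / gamma0
n_burn, n_obs = int(80.0 / h), int(T / h)
dL = rng.normal(0.0, np.sqrt(beta0 * h), size=n_burn + n_obs)   # variance per unit time = beta0 = d2
eps = np.convolve(dL, kernel)[:n_burn + n_obs][n_burn:]
t = np.arange(n_obs) * h

# continuous-time periodogram I_T(lambda) = (2 pi T)^{-1} | int_0^T eps(t) exp(-i lambda t) dt |^2
lam = np.linspace(-6.0, 6.0, 601)
F = np.array([np.sum(eps * np.exp(-1j * l * t)) * h for l in lam])
I_T = np.abs(F) ** 2 / (2.0 * np.pi * T)

w = (1.0 + lam ** 2) ** (-3.0)            # weight function with a = 3
dlam = lam[1] - lam[0]

def f_spec(l, th):                        # spectral density (35)
    al, be, ga = th
    s = (l ** 2 - al ** 2 - ga ** 2) ** 2 + 4.0 * al ** 2 * l ** 2
    return be / (2.0 * np.pi * s)

def U(th):                                # weighted Whittle contrast (Riemann sum in lambda)
    fl = f_spec(lam, th)
    return np.sum((np.log(fl) + I_T / fl) * w) * dlam

res = minimize(U, x0=np.array([0.4, 0.8, 1.8]), method="L-BFGS-B",
               bounds=[(0.05, 5.0)] * 3)
print("estimated (alpha, beta, gamma):", res.x)
```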

The spectral density (35) has no singularity at zero, so the function \(v(\lambda )\) in the conditions \(\mathbf{C }_5\)(i) and \(\mathbf{N }_5\)(ii) could be chosen equal to \(w(\lambda )\), for example, with \(a=b=3\). However, we prefer to keep the function \(v(\lambda )\) in the text, since it is needed when the spectral density has a singularity at zero or elsewhere, see, e.g., Example 1 in Leonenko and Sakhno (2006), where a linear process driven by Brownian motion and the regression function \(g(t,\,\alpha )\equiv 0\) were studied. Specifically, in the case of the Riesz-Bessel spectral density

$$\begin{aligned} f(\lambda ,\,\theta )=\frac{\beta }{2\pi |\lambda |^{2\alpha }(1+\lambda ^2)^\gamma },\ \lambda \in \mathbb {R}, \end{aligned}$$
(47)

where \(\theta =\left(\theta _1,\,\theta _2,\,\theta _3\right)=(\alpha ,\,\beta ,\,\gamma )\in \Theta =(\underline{\alpha },\,\overline{\alpha }) \times (\underline{\beta },\,\overline{\beta })\times (\underline{\gamma },\,\overline{\gamma })\), \(\underline{\alpha }>0\), \(\overline{\alpha }<\frac{1}{2}\), \(\underline{\beta }>0\), \(\overline{\beta }<\infty \), \(\underline{\gamma }>\frac{1}{2}\), \(\overline{\gamma }<\infty \), and the parameter \(\alpha \) signifies the long range dependence, while the parameter \(\gamma \) indicates the second-order intermittency (Anh et al. 2004; Gao et al. 2001; Lim and Teo 2008), the weight functions have been chosen in the form

$$\begin{aligned} w(\lambda )=\frac{\lambda ^{2b}}{\left(1+\lambda ^2\right)^a},\ a>b>0;\ \ \ v(\lambda )=\frac{\lambda ^{2b'}}{\left(1+\lambda ^2\right)^{a'}},\ a'>b'>0,\ \lambda \in \mathbb {R}. \end{aligned}$$

Unfortunately, our conditions do not yet cover the case of a general nonlinear regression function together with a Lévy-driven continuous-time strongly dependent linear random noise such as the Riesz-Bessel motion.