
Rate-optimal tests for jumps in diffusion processes


Abstract

Suppose one has a sample of high-frequency intraday discrete observations of a continuous time random process, such as foreign exchange rates and stock prices, and wants to test for the presence of jumps in the process. We show that the power of any test of this hypothesis depends on the frequency of observation. In particular, if the process is observed at intervals of length \(1/n\) and the instantaneous volatility of the process is given by \( \sigma _{t}\), we show that at best one can detect jumps of height no smaller than \(\sigma _{t}\sqrt{2\log (n)/n}\). We present a new test which achieves this rate for diffusion-type processes, and examine its finite-sample properties using simulations.
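
To see informally where this boundary comes from: under the null, the largest of \(n\) Gaussian increments observed at spacing \(1/n\) is itself of order \(\sigma \sqrt{2\log (n)/n}\), so any smaller jump is drowned out by ordinary diffusive noise. A minimal simulation sketch of this extreme-value fact (illustrative only; this is not the paper's test):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 100_000, 1.0
# n increments of a pure diffusion observed at intervals of length 1/n
increments = rng.normal(0.0, sigma / np.sqrt(n), n)
threshold = sigma * np.sqrt(2.0 * np.log(n) / n)  # detection boundary from the abstract
print(np.abs(increments).max(), threshold)        # same order of magnitude
```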


Notes

  1. This implies that we consider only alternatives with a bounded number of jumps.

  2. Chaboud et al. (2010) report that when the data are sampled at sufficiently high frequencies, in empirical practice most sampling intervals contain zero returns. At sufficiently high sampling frequencies, the likelihood of encountering two (or more) adjacent intervals with non-zero returns is very small. This feature of the data generating process imparts a downward bias on volatility estimates that are based on sums of products of absolute returns over adjacent time intervals.

  3. For \(\widehat{V}_{p,k}\), ASJ suggest the following two estimators (a numerical sketch of the constants below appears after these notes):

    $$\begin{aligned} \widehat{V}_{p,k}^{c}=\frac{\Delta _{n}M(p,k)\widehat{A}(2p,\Delta )_{t}}{ \widehat{A}\left( p,\Delta \right) _{t}^{2}}\quad \hbox {and}\quad \widetilde{ V}_{p,k}^{c}=\frac{\Delta _{n}M(p,k)\widetilde{A}\left( \frac{p}{p+1} ,2p+2,\Delta \right) _{t}}{\widetilde{A}\left( \frac{p}{p+1},p+1,\Delta \right) _{t}^{2}}, \end{aligned}$$

    where

    $$\begin{aligned} M(p,k)&= \frac{1}{m_{p}^{2}}\left( k^{p-2}(1+k)m_{2p}+k^{p-2}(k-1)m_{p}^{2}-2k^{p/2-1}m_{k,p}\right) , \\ m_{p}&= E\left| Z_{1}\right| ^{p}=\pi ^{-1/2}\,2^{p/2}\,\Gamma \left( (p+1)/2\right) , \\ m_{k,p}&= E\left[ \left| Z_{1}\right| ^{p}\times \left| Z_{1}+ \sqrt{k-1}\,Z_{2}\right| ^{p}\right] , \\ Z_{i}&\sim \hbox {i.i.d. }N\left( 0,1\right) ,\quad \varpi \in (0,1/2), \\ \widehat{A}\left( p,\Delta _{n}\right) _{t}&= \frac{\Delta _{n}^{1-p/2}}{ m_{p}}\sum \left| \Delta _{i}^{n}X\right| ^{p}1\left\{ \left| \Delta _{i}^{n}X\right| \le \alpha \Delta _{n}^{\varpi }\right\} , \end{aligned}$$

    and

    $$\begin{aligned} \widetilde{A}\left( r,q,\Delta _{n}\right) _{t}=\frac{\Delta _{n}^{1-qr/2}}{ m_{r}^{q}}\sum _{i=1}^{n}\prod _{j=1}^{q}\left| \Delta _{i+j-1}^{n}X\right| ^{r}. \end{aligned}$$
  4. Our test and the competing tests involve several tuning choices, so we consider multiple versions of each. LLP-4LN denotes our test with an averaging window equal to four times the log of the sample size. BNS-LIN denotes \(\hat{\tau }_{ BNS }^{ LIN }\) in Definition 1. ASJ-QV and ASJ-BPV denote the ASJ test with \(\widehat{V}_{p,k}^{c}\) and \(\widetilde{V}_{p,k}^{c}\) in Definition 2, respectively; we choose \(p=4\) and \(k=2\), as recommended. LM-SQRT denotes the LM test with an averaging window equal to the square root of the sample size.

  5. The LM test requires an averaging window whose order is at least the square root of the sample size. We nevertheless report the performance of LM-4LN and LM-2LN to illustrate the problems caused by small averaging windows, even though these versions are not covered by the theory of the LM test.

  6. We are convinced that our result also holds for larger values of \( \varepsilon \). However, imposing this condition greatly simplifies one part of the proof.
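
As a companion to note 3, the following sketch evaluates the moment constants numerically for the recommended choice \(p=4\), \(k=2\) (helper names are ours; \(m_{k,p}\) is estimated by Monte Carlo since we know of no simple closed form):

```python
import math
import numpy as np

def m_abs(p):
    """E|Z|^p for Z ~ N(0,1): pi^(-1/2) 2^(p/2) Gamma((p+1)/2)."""
    return 2.0 ** (p / 2.0) * math.gamma((p + 1.0) / 2.0) / math.sqrt(math.pi)

def m_kp(p, k, n_sim=1_000_000, seed=0):
    """Monte Carlo estimate of m_{k,p} = E[|Z1|^p |Z1 + sqrt(k-1) Z2|^p]."""
    rng = np.random.default_rng(seed)
    z1, z2 = rng.standard_normal(n_sim), rng.standard_normal(n_sim)
    return float(np.mean(np.abs(z1) ** p * np.abs(z1 + math.sqrt(k - 1.0) * z2) ** p))

def M(p, k):
    """The constant M(p,k) from note 3."""
    mp = m_abs(p)
    return (k ** (p - 2) * (1 + k) * m_abs(2 * p)
            + k ** (p - 2) * (k - 1) * mp ** 2
            - 2 * k ** (p / 2 - 1) * m_kp(p, k)) / mp ** 2

print(m_abs(4))  # = 3, the fourth absolute moment of a standard normal
print(M(4, 2))   # the constant used by ASJ-QV and ASJ-BPV with p = 4, k = 2
```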

References

  • Ait-Sahalia Y, Jacod J (2009) Testing for jumps in a discretely observed process. Ann Stat 37(1):184–222


  • Barndorff-Nielsen O, Shephard N (2002) Econometric analysis of realized volatility and its use in estimating stochastic volatility models. J R Stat Soc B 64(2):253–280


  • Barndorff-Nielsen O, Shephard N (2006) Econometrics of testing for jumps in financial economics using bipower variation. J Financial Econom 4(1):1–30


  • Barndorff-Nielsen O, Hansen P, Lunde A, Shephard N (2008) Designing realized kernels to measure the ex post variation of equity prices in the presence of noise. Econometrica 76(6):1481–1536


  • Chaboud A, Chiquoine B, Hjalmarsson E, Loretan M (2010) Frequency of observation and the estimation of integrated volatility in deep and liquid financial markets. J Empir Finance 17(2):212–240


  • Feller W (1971) An introduction to probability theory and its applications, vol 2, 2nd edn. Wiley, New York


  • Karatzas I, Shreve S (1991) Brownian motion and stochastic calculus, 2nd edn. Springer, New York

  • Lee S, Mykland P (2008) Jumps in financial markets: a new nonparametric test and jump dynamics. Rev Financial Stud 21(6):2535–2563



Author information

Correspondence to Werner Ploberger.

Appendix

1.1 Some useful properties of normal and \(\chi ^{2}\) distributions for small and large values

It is well known that for the standard normal distribution function \(\Phi (x)\)

$$\begin{aligned} \lim _{x\rightarrow \infty }\left( 1-\Phi (x)\right) \sqrt{2\pi }\,x\exp \, ( x^{2}/2) =1 \end{aligned}$$

or, equivalently (since \(2\Phi (x)-1=1-2(1-\Phi (x))\) and \(\log (1-u)\sim -u\) as \(u\rightarrow 0\)), that

$$\begin{aligned} \lim _{x\rightarrow \infty }\left( \log (2\Phi (x)-1)\right) \sqrt{2\pi } \,x\exp \, ( x^{2}/2)/2=-1. \end{aligned}$$

We may define \(M(\varepsilon )\) as the smallest positive number such that for all

$$\begin{aligned} x>M(\varepsilon ) \end{aligned}$$
(9)

one finds

$$\begin{aligned} 1-\varepsilon \le \left| \sqrt{2\pi }\,x\exp \, ( x^{2}/2) \left( 1-\Phi (x)\right) \right| \le 1+\varepsilon \end{aligned}$$
(10)

and also

$$\begin{aligned} -2(1+\varepsilon )^{2}\le \left( \log (2\Phi (x)-1)\right) \sqrt{2\pi } \,x\exp \, ( x^{2}/2) \le -2(1-\varepsilon )^{2}. \end{aligned}$$
(11)

The following lemma may be derived from these properties.

Lemma 4

Choose an arbitrary \(\varepsilon >0\) and let \(n\), \(\ell _{n} \), and \(w_{i}\) be the terms defined in Sect. 2.2. If

$$\begin{aligned} \ell _{n}\ge 2\ln n\quad \hbox {and}\quad K\rightarrow \infty , \end{aligned}$$

then

$$\begin{aligned} p_{n}=P\left\{ w_{i}\le \ell _{n}M(\varepsilon )^{2}/K\right\} =o\left( n^{-1}\right) . \end{aligned}$$

1.2 Proofs

1.2.1 Proof of Theorem 1

We assume that the variance of the Wiener process \(W\) is known; without loss of generality, we may assume that this variance is equal to 1. Let \(P_{n}\) be the probability measure of \((X_{0},X_{1/n},X_{2/n},\dots ,X_{1})\) under the null, and \(Q_{n}\) be the measure under the alternative. Let the \(z_{i}\) be defined as

$$\begin{aligned} z_{i}=\sqrt{n}\left( X_{i/n}-X_{(i-1)/n}\right) ,\quad i=1,2,\ldots ,n. \end{aligned}$$

Then the \(z_{i}\) are i.i.d. standard normal, and it is easily seen that the likelihood ratio is given by

$$\begin{aligned} \frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}}=\frac{1}{n}\sum _{i=1}^{n}\exp \left\{ (c_{n}\sqrt{n}\,)z_{i}-\frac{1}{2}(c_{n}\sqrt{n}\,)^{2}\right\} . \end{aligned}$$

Because each \(z_{i}\) is standard normal, the expectation of each \(\exp \Big \{ (c_{n}\sqrt{n}\,)z_{i}-\frac{1}{2}(c_{n}\sqrt{n}\,)^{2}\Big \} \) term equals 1. Moreover,

$$\begin{aligned} E\left[ \exp \left\{ (c_{n}\sqrt{n}\,)z_{i}-\frac{1}{2}(c_{n}\sqrt{n} \,)^{2}\right\} ^{2}\right]&= E\left[ \exp \left\{ 2(c_{n}\sqrt{n} \,)z_{i}-(c_{n}\sqrt{n}\,)^{2}\right\} \right] \\&= \exp \left\{ (c_{n}\sqrt{n} \,)^{2}\right\} . \end{aligned}$$

Hence the variance of \(\mathrm d Q_{n}/\mathrm d P_{n}\) is smaller than \(\exp \left\{ (c_{n}\sqrt{n}\,)^{2}\right\} /n\), which converges to 0 if \(c_{n}\) satisfies Eq. (3).

Since we are interested primarily in small (but positive) values of \( \varepsilon \)—the smaller we choose \(\varepsilon \), the bigger are the permissible jumps—we may impose the condition (see note 6) that

$$\begin{aligned} 0<\varepsilon <\frac{2}{3}. \end{aligned}$$
(12)

We need to establish that

$$\begin{aligned} \frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}}\rightarrow 1 \end{aligned}$$
(13)

in probability. The term \(\mathrm d Q_{n}/\mathrm d P_{n}\) is a nonnegative random variable. Hence one may establish Eq. (13) by showing that its Laplace transform obeys

$$\begin{aligned} E\exp \left( -s\frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}}\right) \rightarrow \exp (-s) \end{aligned}$$

for all positive \(s\). Equivalently, one might establish Eq. (13) by showing that the logarithm of the Laplace transform satisfies

$$\begin{aligned} \ln E\exp \left( -s\frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}}\right) \rightarrow -s. \end{aligned}$$

Recall that the sequence of local alternatives is assumed to satisfy Eq. (3), i.e., that

$$\begin{aligned} c_{n}=\sigma \sqrt{\frac{2(1-\varepsilon )\ln n}{n}}. \end{aligned}$$

This allows us to rewrite \(\mathrm{d }Q_{n}/\mathrm d P_{n}\) (with \(\sigma =1\)) as

$$\begin{aligned} \frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}}=\frac{1}{n^{2-\varepsilon }} \sum _{i=1}^{n}\exp \left( z_{i}\sqrt{2(1-\varepsilon )\ln n}\,\right) . \end{aligned}$$

Since the \(z_{i}\) are i.i.d. standard normal, we may express the logarithm of the Laplace transform of \(\mathrm d Q_{n}/\mathrm d P_{n}\) as

$$\begin{aligned} \ln E\exp \left( -s\frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}}\right) =n\ln \left\{ E\exp \left[ -\frac{s}{n^{2-\varepsilon }}\exp \left( z_{i}\sqrt{ 2(1-\varepsilon )\ln n}\,\right) \right] \right\} . \end{aligned}$$

Equivalently, then, we must show that

$$\begin{aligned} n\ln \left[ E\exp \left( -\frac{s}{n^{2-\varepsilon }}\exp \left\{ z_{i} \sqrt{2(1-\varepsilon )\ln n}\,\right\} \right) \right] \rightarrow -s. \end{aligned}$$
(14)

It is well known that

$$\begin{aligned} \lim _{x\rightarrow 1}\frac{\ln x}{x-1}=1. \end{aligned}$$

Using this result, it is straightforward to establish that Eq. (14) is equivalent to

$$\begin{aligned} \frac{n}{s}E\left[ 1-\exp \left( -\frac{s}{n^{2-\varepsilon }}\exp \left\{ z_{i}\sqrt{2(1-\varepsilon )\ln n}\,\right\} \right) \right] \rightarrow 1. \end{aligned}$$
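
For completeness, the equivalence rests on the chain (with \(E[\,\cdot \,]\) abbreviating the inner expectation, which converges to 1):

$$\begin{aligned} n\ln E\left[ \,\cdot \,\right] =n\ln \left( 1+\left( E\left[ \,\cdot \,\right] -1\right) \right) \sim n\left( E\left[ \,\cdot \,\right] -1\right) =-s\cdot \frac{n}{s}\,E\left[ 1-\exp \left( \,\cdot \,\right) \right] . \end{aligned}$$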

In the following, we will prove that this statement is correct. Let \(\Phi (\cdot )\) be the cumulative distribution function (cdf) of the standard normal. Then the cdf of \(\exp \left\{ z_{i}\sqrt{2(1-\varepsilon )\ln n} \,\right\} \) is given by

$$\begin{aligned} \Phi \left( \frac{\ln x}{\sqrt{2(1-\varepsilon )\ln n}}\right) . \end{aligned}$$

Hence

$$\begin{aligned}&E\left[ 1-\exp \left\{ -\frac{s}{n^{2-\varepsilon }}\exp \left( z_{i}\sqrt{ 2(1-\varepsilon )\ln n}\,\right) \right\} \right] \\ \quad&=\int \limits _{0}^{\infty }\left\{ 1-\exp \left( -\frac{s}{n^{2-\varepsilon }} \,x\right) \right\} \mathrm d \Phi \left( \frac{\ln x}{\sqrt{2(1-\varepsilon )\ln n}}\right) \\ \quad&=\frac{1}{\sqrt{2\pi }}\int \limits _{0}^{\infty }\left\{ 1-\exp \left( - \frac{s}{n^{2-\varepsilon }}\,x\right) \right\} \exp \left( -\frac{(\ln x)^{2}}{4(1-\varepsilon )\ln n}\right) \frac{1}{x}\,\frac{1}{\sqrt{ 2(1-\varepsilon )\ln n}}\mathrm d x. \end{aligned}$$

Re-scaling the preceding expression by \(n/s\), we define \(S_{n}\) as

$$\begin{aligned} S_{n}&= \frac{n}{s}\,E\left[ 1-\exp \left\{ -\frac{s}{n^{2-\varepsilon }} \exp \left( z_{i}\sqrt{2(1-\varepsilon )\ln n}\,\right) \right\} \right] \\&= \frac{1}{\sqrt{2\pi }}\frac{n^{-(1-\varepsilon )}}{\sqrt{2(1-\varepsilon )\ln n}}\int \limits _{0}^{\infty }\frac{1-\exp \left( -\frac{s}{n^{2-\varepsilon }} \,x\right) }{\frac{s}{n^{2-\varepsilon }}\,x}\exp \left( -\frac{(\ln x)^{2}}{ 4(1-\varepsilon )\ln n}\right) \mathrm d x. \end{aligned}$$

What we must show is that

$$\begin{aligned} S_{n}\rightarrow 1. \end{aligned}$$
(15)
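
Before proceeding, Eq. (15) can be checked numerically from the integral representation of \(S_{n}\) (an illustrative sketch; we substitute \(t=\ln x\) for numerical stability, and convergence is only logarithmic in \(n\)):

```python
import numpy as np
from scipy.integrate import quad

def S_n(n, s, eps):
    """Evaluate S_n by numerical integration after the substitution t = ln x."""
    a = 2.0 * (1.0 - eps) * np.log(n)            # = 2(1 - eps) ln n
    log_c = np.log(s) - (2.0 - eps) * np.log(n)  # c = s / n^(2 - eps)
    pref = n ** (-(1.0 - eps)) / np.sqrt(2.0 * np.pi * a)
    def integrand(t):
        u = np.minimum(t + log_c, 50.0)          # overflow guard for exp
        return -np.expm1(-np.exp(u)) * np.exp(-t * t / (2.0 * a))
    lo, hi = -20.0 * np.sqrt(a), a + 20.0 * np.sqrt(a)  # window holding essentially all mass
    val, _ = quad(integrand, lo, hi, limit=500)
    return pref * np.exp(-log_c) * val           # exp(-log_c) is the 1/c from the substitution

for n in (1e4, 1e6, 1e8):
    print(n, S_n(n, s=1.0, eps=0.3))             # values drift slowly toward 1
```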

Using the substitution

$$\begin{aligned} y=\frac{s}{n^{2-\varepsilon }}\,x,\hbox { and thus }\mathrm d x=\frac{ n^{2-\varepsilon }}{s}\,\mathrm d y\hbox { and }\ln x=\ln y+(2-\varepsilon )\ln n-\ln s \end{aligned}$$

yields

$$\begin{aligned} S_{n}&=\frac{1}{\sqrt{2\pi }}\frac{n^{-(1-\varepsilon )}}{\sqrt{2(1-\varepsilon )\ln n}}\int \limits _{0}^{\infty }\frac{1-e^{-y}}{y}\exp \left[ -\frac{\left\{ \ln y+(2-\varepsilon )\ln n-\ln s\right\} ^{2}}{4(1-\varepsilon )\ln n}\right] \frac{n^{2-\varepsilon }}{s}\mathrm d y \nonumber \\&=\frac{1}{\sqrt{2\pi }}\frac{n}{\sqrt{2(1-\varepsilon )\ln n}}\,\frac{1}{s}\int \limits _{0}^{\infty }\frac{1-e^{-y}}{y}\exp \left[ -\frac{\left\{ \ln y+(2-\varepsilon )\ln n-\ln s\right\} ^{2}}{4(1-\varepsilon )\ln n}\right] \mathrm d y. \nonumber \\ \end{aligned}$$
(16)

Hence we must now evaluate the integral

$$\begin{aligned} \int \limits _{0}^{\infty }\frac{1-e^{-y}}{y}\,\exp \left[ -\frac{\left\{ \ln y+(2-\varepsilon )\ln n-\ln s\right\} ^{2}}{4(1-\varepsilon )\ln n}\right] \mathrm d y. \end{aligned}$$
(17)

The second factor in the integrand in Eq. (17) may be split into three terms:

$$\begin{aligned}&\exp \left[ -\frac{\{\ln y+[(2-\varepsilon )\ln n-\ln s]\}^{2}}{ 4(1-\varepsilon )\ln n}\right] =\exp \left[ -\frac{(\ln y)^{2}}{4(1-\varepsilon )\ln n}\right] \\&\quad \times \exp \left[ -\frac{(\ln y)[(2-\varepsilon )\ln n-\ln s]}{2(1-\varepsilon )\ln n}\right] \exp \left[ -\frac{[(2-\varepsilon )\ln n-\ln s]^{2}}{4(1-\varepsilon )\ln n} \right] . \end{aligned}$$

Label the third term \(C_{n}\). Observe that \(C_{n}\) does not depend on \(y\); hence, when evaluating Eq. (17), it may be taken outside the integral. It is straightforward to rearrange \(C_{n}\) as

$$\begin{aligned} C_{n}=n^{-\frac{(2-\varepsilon )^{2}}{4(1-\varepsilon )}}s^{\frac{ 2-\varepsilon }{2(1-\varepsilon )}}\exp \left[ -\frac{(\ln s)^{2}}{ 4(1-\varepsilon )\ln n}\right] . \end{aligned}$$
(18)
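
For completeness, Eq. (18) follows by expanding the square in the exponent of \(C_{n}\):

$$\begin{aligned} \frac{\left[ (2-\varepsilon )\ln n-\ln s\right] ^{2}}{4(1-\varepsilon )\ln n}=\frac{(2-\varepsilon )^{2}}{4(1-\varepsilon )}\,\ln n-\frac{2-\varepsilon }{2(1-\varepsilon )}\,\ln s+\frac{(\ln s)^{2}}{4(1-\varepsilon )\ln n}, \end{aligned}$$

and exponentiating the negative of the right-hand side gives the three factors of Eq. (18).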

Next, note that the second term may also be expressed as \(y^{H_{n}}\), where

$$\begin{aligned} H_{n}=-\frac{2-\varepsilon }{2(1-\varepsilon )}+\frac{\ln s}{2(1-\varepsilon )\ln n}. \end{aligned}$$

Equation (16) may therefore be restated as

$$\begin{aligned} S_{n}=\frac{1}{\sqrt{2\pi }}\frac{n}{\sqrt{2(1-\varepsilon )\ln n}}\,\frac{1 }{s}\,C_{n}\int \limits _{0}^{\infty }y^{H_{n}}\frac{1-e^{-y}}{y}\exp \left\{ -\frac{ (\ln y)^{2}}{4(1-\varepsilon )\ln n}\right\} \mathrm d y. \end{aligned}$$
(19)

In order to evaluate the integral on the right-hand side of Eq. (19), we start by splitting it into two parts, using \(\frac{1-e^{-y}}{y}=\frac{1-e^{-y}-y}{y}+1\):

$$\begin{aligned} \int \limits _{0}^{\infty }y^{H_{n}}\frac{1-e^{-y}}{y}\exp \left\{ -\frac{(\ln y)^{2}}{4(1-\varepsilon )\ln n}\right\} \mathrm d y =\underbrace{\int \limits _{0}^{\infty }y^{H_{n}}\,\frac{1-e^{-y}-y}{y}\exp \left\{ -\frac{(\ln y)^{2}}{4(1-\varepsilon )\ln n}\right\} \mathrm d y}_{\mathcal{A }}+\underbrace{\int \limits _{0}^{\infty }y^{H_{n}}\exp \left\{ -\frac{(\ln y)^{2}}{4(1-\varepsilon )\ln n}\right\} \mathrm d y}_{\mathcal{B }}. \end{aligned}$$

We will show in the following that the integral labeled \(\mathcal{A }\) is asymptotically negligible relative to that labeled \(\mathcal{B }\). With the substitution

$$\begin{aligned} z=\frac{\ln y}{\sqrt{2(1-\varepsilon )\ln n}} \end{aligned}$$

and thus

$$\begin{aligned} y=\exp \left[ \sqrt{2(1-\varepsilon )\ln n}\,z\right] \quad \hbox {and}\quad \mathrm d y=y\sqrt{2(1-\varepsilon )\ln n}\,\mathrm d z, \end{aligned}$$

the \(\mathcal{B }\) integral may be rewritten as

$$\begin{aligned}&\int \limits _{0}^{\infty }y^{H_{n}} \exp \left\{ -\frac{(\ln y)^{2}}{ 4(1-\varepsilon )\ln n}\right\} \mathrm d y \nonumber \\&\quad =\sqrt{2(1-\varepsilon )\ln n}\int \limits _{-\infty }^{\infty }\exp \left[ zH_{n} \sqrt{2(1-\varepsilon )\ln n}\,\right] \exp \left[ z\sqrt{2(1-\varepsilon )\ln n}\,\right] \exp ( -z^{2}/2) \mathrm d z \nonumber \\&\quad =\sqrt{2(1-\varepsilon )\ln n}\int \limits _{-\infty }^{\infty }\exp \left\{ z\left( H_{n}+1\right) \sqrt{2(1-\varepsilon )\ln n}\,\right\} \exp \left( -z^{2}/2\right) \mathrm d z. \end{aligned}$$
(20)

The integral on the right-hand side of Eq. (20), viz.,

$$\begin{aligned} \int \limits _{-\infty }^{\infty }\exp \left\{ z\left( H_{n}+1\right) \sqrt{ 2(1-\varepsilon )\ln n}\,\right\} \exp \left( -z^{2}/2\right) \mathrm d z, \end{aligned}$$

may be restated as

$$\begin{aligned}&\exp \left[ \frac{1}{2}\left\{ \left( H_{n}\!+\!1\right) \sqrt{2(1-\varepsilon )\ln n}\,\right\} ^{2}\right] \times \!\int \limits _{-\infty }^{\infty }\exp \left[ - \frac{1}{2}\left\{ z-\left( H_{n}+1\right) \sqrt{2(1-\varepsilon )\ln n}\,\right\} ^{2}\right] \mathrm d z \\&\quad =\sqrt{2\pi }\exp \left\{ \left( H_{n}+1\right) ^{2}(1-\varepsilon )\ln n\right\} . \end{aligned}$$

And, because

$$\begin{aligned} \left( H_{n}+1\right) ^{2}(1-\varepsilon )&= \left\{ 1-\frac{2-\varepsilon }{ 2(1-\varepsilon )}+\frac{\ln s}{2(1-\varepsilon )\ln n}\right\} ^{2}(1-\varepsilon ) \\&= \left\{ -\frac{\varepsilon }{2(1-\varepsilon )}+\frac{\ln s}{ 2(1-\varepsilon )\ln n}\right\} ^{2}(1-\varepsilon ) \\&= \frac{\varepsilon ^{2}}{4(1-\varepsilon )}-\frac{\ln s}{\ln n}\left( \frac{\varepsilon }{2(1-\varepsilon )}\right) +\frac{\left( \ln s\right) ^{2} }{\left( \ln n\right) ^{2}}\frac{1}{4(1-\varepsilon )}, \end{aligned}$$

we have

$$\begin{aligned} \exp \left\{ \left( H_{n}+1\right) ^{2}(1-\varepsilon )\ln n\right\}&= \exp \left[ \left\{ \frac{\varepsilon ^{2}}{4(1-\varepsilon )}-\frac{\ln s}{\ln n} \left( \frac{\varepsilon }{2(1-\varepsilon )}\right) +\frac{\left( \ln s\right) ^{2}}{\left( \ln n\right) ^{2}}\frac{1}{4(1-\varepsilon )}\right\} \ln n\right] \\&= n^{\frac{\varepsilon ^{2}}{4(1-\varepsilon )}}s^{-\frac{\varepsilon }{ 2(1-\varepsilon )}}\exp \left( \frac{1}{\ln n}\frac{\left( \ln s\right) ^{2} }{4(1-\varepsilon )}\right) . \end{aligned}$$

Using Eq. (20), we therefore deduce that for all \(s>0\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\int _{0}^{\infty }y^{H_{n}}\exp \left( - \frac{(\ln y)^{2}}{4(1-\varepsilon )\ln n}\right) \mathrm d y}{\displaystyle n^{\frac{ \varepsilon ^{2}}{4(1-\varepsilon )}}s^{-\frac{\varepsilon }{2(1-\varepsilon )}}\sqrt{2\pi }\sqrt{2(1-\varepsilon )\ln n}}=1. \end{aligned}$$
(21)

As the denominator in Eq. (21) diverges to infinity as \( n\rightarrow \infty \) for all fixed \(s>0\), we have established that the integral \(\mathcal{B }\)—which is also the numerator in Eq. (21)—diverges to infinity as \(n\rightarrow \infty \).

Next, we show that the integral \(\mathcal{A }\) above is \(O(1)\). First, observe that

$$\begin{aligned} H_{n}\rightarrow -\frac{2-\varepsilon }{2(1-\varepsilon )}. \end{aligned}$$
(22)

Because \(\varepsilon >0\),

$$\begin{aligned} -\frac{2-\varepsilon }{2(1-\varepsilon )}<-1. \end{aligned}$$

The assumption stated in Eq. (12), viz., \( \varepsilon <2/3\), implies that

$$\begin{aligned} -\frac{2-\varepsilon }{2(1-\varepsilon )}>-2. \end{aligned}$$

Therefore, the limit of \(H_{n}\) lies between \(-2\) and \(-1\). Hence, there exist constants \(\alpha \) and \(\beta \), with \(-2<\alpha <\beta <-1\), such that for all but finitely many \(n\)

$$\begin{aligned} \alpha <H_{n}<\beta . \end{aligned}$$
(23)

Without loss of generality we may assume that Eq. (23) holds for all \(n\).

The second term in the integrand of integral \(\mathcal{A }\), viz.,

$$\begin{aligned} \psi (y)=\frac{1-e^{-y}-y}{y}, \end{aligned}$$

is easily seen to be uniformly bounded for all \(y\ge 1\); i.e., there exists an \(M\) such that

$$\begin{aligned} \left| \psi (y)\right| \le M \hbox { for all } y \ge 1. \end{aligned}$$
(24)

For the case \(0\le y\le 1\), we may use the power series representation for \(\exp (\cdot )\) to find that \(\psi (\cdot )\) is an analytic function as well and that \(\psi (0)=0\). Since any analytic function has derivatives that are bounded on any compact set, we deduce that, for \(0\le y\le 1\),

$$\begin{aligned} \left| \psi (y)\right| \le C y \end{aligned}$$
(25)

for some universal constant \(C\), i.e., that \(\left| \psi (y)\right| \) is bounded by a linear function in \(y\).
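
As a quick numerical confirmation of the two bounds (illustrative only; the constants \(M=1\) and \(C=1\) happen to suffice):

```python
import numpy as np

psi = lambda y: (-np.expm1(-y) - y) / y  # psi(y) = (1 - e^{-y} - y)/y, computed stably
y_small = np.linspace(1e-6, 1.0, 100_000)
y_large = np.linspace(1.0, 100.0, 100_000)
assert np.all(np.abs(psi(y_small)) <= y_small)  # |psi(y)| <= C y on [0,1], with C = 1
assert np.all(np.abs(psi(y_large)) <= 1.0)      # |psi(y)| <= M on [1, inf), with M = 1
```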

Finally, the third term in the integrand of the integral \(\mathcal{A }\) above, viz.,

$$\begin{aligned} \exp \left\{ -\frac{(\ln y)^2 }{4(1-\varepsilon )\ln n}\right\} , \end{aligned}$$

may easily be shown to be smaller than 1 in absolute value for all values of \(y\) and \(n>1\).

We now combine the results for the three terms that make up the integrand of integral \(\mathcal{A }\). To evaluate the integral, one needs to consider separately the regions \(0\le y\le 1\) and \(y\ge 1\).

For \(y\ge 1\), we find using Eq. (24) that

$$\begin{aligned} \left| y^{H_{n}}\,\frac{1-e^{-y}-y}{y}\exp \left\{ -\frac{(\ln y)^{2}}{ 4(1-\varepsilon )\ln n}\right\} \right| \le My^{\beta }, \end{aligned}$$

and, because \(\beta <-1\),

$$\begin{aligned}&\left| \int \limits _{1}^{\infty }y^{H_{n}}\,\frac{1-e^{-y}-y}{y}\exp \left\{ -\frac{(\ln y)^{2}}{4(1-\varepsilon )\ln n}\right\} \mathrm d y\right| \\&\le M\int \limits _{1}^{\infty }y^{\beta }\mathrm d y =-M\frac{1}{1+\beta }. \end{aligned}$$

We deduce that

$$\begin{aligned} \left| \int \limits _{1}^{\infty }y^{H_{n}}\,\frac{1-e^{-y}-y}{y}\exp \left\{ - \frac{(\ln y)^{2}}{4(1-\varepsilon )\ln n}\right\} \mathrm d y\right| \end{aligned}$$

is uniformly bounded in \(n\). Therefore, for all \(s\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\int _{1}^{\infty }y^{H_{n}}\,\frac{ 1-e^{-y}-y}{y}\exp \left\{ -\frac{(\ln y)^{2}}{4(1-\varepsilon )\ln n} \right\} \mathrm d y}{\displaystyle \sqrt{2(1-\varepsilon )\ln n}\sqrt{2\pi }\,n^{\frac{ \varepsilon ^{2}}{4(1-\varepsilon )}}s^{-\frac{\varepsilon }{2(1-\varepsilon )}}}=0. \end{aligned}$$
(26)

For the region \(0\le y\le 1\), we find using Eq. (25) that

$$\begin{aligned} \left| y^{H_{n}}\,\frac{1-e^{-y}-y}{y}\exp \left\{ -\frac{(\ln y)^{2}}{ 4(1-\varepsilon )\ln n}\right\} \right| \le Cy^{1+\alpha }. \end{aligned}$$

Because \(\alpha >-2\), we have

$$\begin{aligned} \Big \vert \int \limits _{0}^{1}y^{H_{n}}\,\frac{1-e^{-y}-y}{y}\exp \left\{ -\frac{ (\ln y)^{2}}{4(1-\varepsilon )\ln n}\right\} \mathrm d y\Big \vert \le C\int \limits _{0}^{1}y^{1+\alpha }\mathrm d y=C\frac{1}{2+\alpha }. \nonumber \\ \end{aligned}$$
(27)

Combining the results given in Eqs. (21), (26), and (27), we may state that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\int _{0}^{\infty }y^{H_{n}}\,\frac{1-e^{-y} }{y}\exp \left\{ -\frac{(\ln y)^{2}}{4(1-\varepsilon )\ln n}\right\} \mathrm d y}{\displaystyle \sqrt{2(1-\varepsilon )\ln n}\sqrt{2\pi }\,n^{\frac{\varepsilon ^{2}}{ 4(1-\varepsilon )}}s^{-\frac{\varepsilon }{2(1-\varepsilon )}}}=1. \end{aligned}$$
(28)

We use this result, in turn, to establish the limit of \(S_{n}\), which was defined in Eq. (19) above, as follows:

$$\begin{aligned}&\lim _{n\rightarrow \infty } \frac{S_{n}}{\frac{1}{\sqrt{2\pi }}\,\frac{n}{ \sqrt{2(1-\varepsilon )\ln n}}\,\frac{1}{s}\cdot C_{n}\left[ \sqrt{ 2(1-\varepsilon )\ln n}\sqrt{2\pi }\,n^{\frac{\varepsilon ^{2}}{ 4(1-\varepsilon )}}\,s^{-\frac{\varepsilon }{2(1-\varepsilon )}}\right] } \nonumber \\&\quad =\lim _{n\rightarrow \infty }\frac{S_{n}}{n^{(2-\varepsilon )^{2}/4(1-\varepsilon )}s^{-\varepsilon /(2(1-\varepsilon ))-1}C_{n}} \nonumber \\&\quad =1. \end{aligned}$$
(29)

Because \(C_{n}\) was shown in Eq. (18) to be given by

$$\begin{aligned} C_{n}=n^{-\frac{(2-\varepsilon )^{2}}{4(1-\varepsilon )}}s^{\frac{\left( 2-\varepsilon \right) }{2(1-\varepsilon )}}\exp \left( -\frac{(\ln s)^{2}}{ 4(1-\varepsilon )\ln n}\right) , \end{aligned}$$

we deduce that the denominator of the expression in Eq. (29) converges to 1. But this implies that

$$\begin{aligned} \lim _{n\rightarrow \infty }S_{n}=1 \end{aligned}$$

as well, which is exactly the expression in Eq. (15), the result we wanted to prove. Hence, the Laplace transform

$$\begin{aligned} E\exp \left\{ -s\frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}}\right\} \end{aligned}$$

of the density ratio converges to \(e^{-s}\), which is the Laplace transform of a measure concentrated at 1. We have thus shown that the density ratio \( \mathrm d Q_{n}/\mathrm d P_{n}\) converges in distribution to a constant, viz., 1. By Feller (1971, Theorem 2, p. 431), it also converges in probability to 1. Put differently, for any arbitrary \(\eta >0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }P_{n}\left\{ \left| \frac{\mathrm{d }Q_{n}}{ \mathrm{d }P_{n}}-1\right| >\eta \right\} =0. \end{aligned}$$

Let \(A_{n}\) be any sequence of events. Then, for any \(\eta >0\) we have

$$\begin{aligned} Q_{n}(A_{n})&= \int \limits _{A_{n}}\frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}}\mathrm{d } P_{n} \\&\ge \int \limits _{A_{n}}\frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}}\,I\left( \frac{ \mathrm{d }Q_{n}}{\mathrm{d }P_{n}}>1-\eta \right) \mathrm{d }P_{n} \\&\ge P_{n}\left[ A_{n}\cap \left( \frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}} >1-\eta \right) \right] (1-\eta ) \\&\ge \left[ P_{n}(A_{n})-P_{n}\left( \frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n} }\le 1-\eta \right) \right] (1-\eta ) \\&\ge P_{n}(A_{n})(1-\eta )-P_{n}\left[ \,\left| \frac{\mathrm{d }Q_{n}}{\mathrm{d }P_{n}}-1\right| \ge \eta \right] . \end{aligned}$$

Since \(\eta \) is arbitrary, we deduce that

$$\begin{aligned} \lim _{n\rightarrow \infty }\left[ P_{n}(A_{n})-Q_{n}(A_{n})\right] =0. \end{aligned}$$

Because \(A_{n}\) is an arbitrary sequence of events, we further deduce that the total variation between \(P_{n}\) and \(Q_{n}\) converges to 0. Hence, for all measurable functions \(\varphi _{n}\) with \(0\le \varphi _{n}\le 1\), we have

$$\begin{aligned} \int \varphi _{n}\mathrm d P_{n}-\int \varphi _{n}\mathrm d Q_{n}\rightarrow 0. \end{aligned}$$

But this is exactly what we had to show: for every sequence of tests, the rejection probability under the null, \(P_{n}\), is asymptotically equal to the rejection probability (i.e., the power) under the alternative, \(Q_{n}\). \(\square \)

1.2.2 Proof of Lemma 2

Observe that

$$\begin{aligned} P\left( \max _{i=\ell +1,\dots ,n}\tau _{i}>K_{n}^{*}\right) =1-P\left( \max _{i=\ell +1,\dots ,n}\tau _{i}\le K_{n}^{*}\right) \end{aligned}$$

and

$$\begin{aligned} P\left( \max _{i=\ell +1,\dots ,n}\tau _{i}\le K_{n}^{*}\right) =E\Big [ \prod _{i=\ell +1,\dots ,n}I\left\{ \tau _{i}\le K_{n}^{*}\right\} \Big ]. \end{aligned}$$

It may be seen immediately that the \(\tau _{i}\) are \(\mathcal{F }_{i}-\) measurable. We will repeatedly apply the optional sampling theorem for various stopping times. Let \(\varepsilon >0\) be arbitrary and let \( M(\varepsilon )\) be defined by Eqs. (9)–(11) in Appendix 1.

Let us define the stopping time \(\nu \) as the first index \(m\le n-1\) such that either

$$\begin{aligned} \sum _{j=\ell +1}^{m+1}\log E\left[ I\left\{ \tau _{j}\le K_{n}^{*}\right\} \Big |\mathcal{F }_{j-1}\right] <-c(1+\varepsilon )^{3} \end{aligned}$$

or

$$\begin{aligned} \hat{\sigma }_{m+1}^{2}<M(\varepsilon )^{2}/K_{n}^{*} \end{aligned}$$

or

$$\begin{aligned} \sum _{j=\ell +1}^{m+1}\log E\left[ I\left\{ \tau _{j}\le K_{n}^{*}\right\} \Big |\mathcal{F }_{j-1}\right] >-c(1-\varepsilon ) \end{aligned}$$

or, if no such \(m\) exists, set \(\nu =n\).

Observe that \(\nu \) is indeed a stopping time adapted to \(\mathcal{F }_{i}\): for any \(i\le m+1\), both \(E\left( I\left\{ \tau _{i}\le K_{n}^{*}\right\} \mid \mathcal{F }_{i-1}\right) \) and \(\hat{\sigma }_{m+1}^{2}\) are \(\mathcal{F }_{m}\)-measurable, and hence we also find that the event

$$\begin{aligned} \left\{ \nu =m\right\} \in \mathcal{F }_{m}. \end{aligned}$$

We contend that

$$\begin{aligned} \lim _{n\rightarrow \infty }P\left\{ \nu =n\right\} =1. \end{aligned}$$
(30)

To establish Eq. (30), it is sufficient to first show that

$$\begin{aligned} P\left\{ \inf \hat{\sigma }_{i}^{2}>M(\varepsilon )^{2}/K_{n}^{*}\right\} \rightarrow 1 \end{aligned}$$
(31)

and second, because \(\log E\left[ I\left\{ \tau _{i}\le K_{n}^{*}\right\} \Big |\mathcal{F }_{i-1}\right] \le 0\), that

$$\begin{aligned} P\left\{ \sum _{i=\ell +1}^{n}\log E\left[ I\left\{ \tau _{i}\le K_{n}^{*}\right\} \Big |\mathcal{F }_{i-1}\right] \ge -c(1+\varepsilon )^{3}\right\} \bigcap \,\left\{ \inf \hat{\sigma }_{i}^{2}>M(\varepsilon )^{2}/K_{n}^{*}\right\} \rightarrow 1. \nonumber \\ \end{aligned}$$
(32)

Equation (31) is an immediate consequence of Lemma 4, which shows that

$$\begin{aligned} P\left\{ \inf \hat{\sigma }_{i}^{2}\le M(\varepsilon )^{2}/K_{n}^{*}\right\} \le nP\left\{ \hat{\sigma }_{i}^{2}\le M(\varepsilon )^{2}/K_{n}^{*}\right\} \rightarrow 0. \end{aligned}$$

To show that the claim in Eq. (32) is valid, first observe that

$$\begin{aligned} E\left[ I\left\{ \tau _{i}\le K_{n}^{*}\right\} \Big |\mathcal{F }_{i-1} \right] =2\Phi \Big ( \sqrt{K_{n}^{*}\hat{\sigma }_{i}^{2}}\Big ) -1. \end{aligned}$$

If \(\hat{\sigma }_{i}^{2}>M(\varepsilon )^{2}/K_{n}^{*}\), we may use inequality (11) to deduce that

$$\begin{aligned} \log \left( 2\Phi \left( \sqrt{K_{n}^{*}\hat{\sigma }_{i}^{2}}\right) -1\right) \ge -2(1+\varepsilon )^{2}\exp \left( -K_{n}^{*}\hat{\sigma } _{i}^{2}/2\right) \Big /\sqrt{2\pi K_{n}^{*}\hat{\sigma }_{i}^{2}}. \end{aligned}$$

Hence

$$\begin{aligned}&\left\{ \sum _{i=\ell +1}^{n}\log E\left[ I\left\{ \tau _{i}\le K_{n}^{*}\right\} \mid \mathcal{F }_{i-1}\right] \ge -c(1+\varepsilon )^{3}\right\} \bigcap \,\left\{ \inf \hat{\sigma }_{i}^{2}>M(\varepsilon )^{2}/K_{n}^{*}\right\} \\&\supseteq \left\{ -2(1+\varepsilon )^{2}\sum _{i=\ell +1}^{n}\exp \left( -K_{n}^{*}\hat{\sigma }_{i}^{2}/2\right) \Big /\sqrt{2\pi K_{n}^{*}\hat{ \sigma }_{i}^{2}}\ge -c(1+\varepsilon )^{3}\right\} \bigcap \left\{ \inf \hat{\sigma }_{i}^{2}>M(\varepsilon )^{2}/K_{n}^{*}\right\} . \end{aligned}$$

Since we already know that \(P\left\{ \inf \hat{\sigma }_{i}^{2}>M(\varepsilon )^{2}/K_{n}^{*}\right\} \rightarrow 1\), it suffices to show that

$$\begin{aligned} P\left\{ 2\sum _{i=\ell +1}^{n}\exp \left( -K_{n}^{*}\hat{\sigma } _{i}^{2}/2\right) \Big /\sqrt{2\pi K_{n}^{*}\hat{\sigma }_{i}^{2}}\le c(1+\varepsilon )\right\} \rightarrow 1. \end{aligned}$$
(33)

Introducing the term \(Y_{i}\) as

$$\begin{aligned} Y_{i}=2\exp \left( -K_{n}^{*}\hat{\sigma }_{i}^{2}/2\right) \Big /\sqrt{2\pi K_{n}^{*}\hat{\sigma }_{i}^{2}}, \end{aligned}$$

we easily see that Eq. (33) is satisfied if

$$\begin{aligned} \sum _{i=\ell +1}^{n}Y_{i}\rightarrow c \end{aligned}$$
(34)

in probability. By the definition of \(K_{n}^{*}\), \(EY_{i}=c/n\). Moreover, we know that \(\hat{\sigma }_{i}^{2}\) is distributed according to a scaled \(\chi ^{2}\) distribution with \(\ell \) degrees of freedom. Hence it is an elementary exercise to show that \(EY_{i}^{2}=O(1/n^{2})\) and that \(Y_{j}\) and \(Y_{k}\) are independent if

$$\begin{aligned} \left| j-k\right| >\ell +1. \end{aligned}$$

As \(\ell /n\rightarrow 0\), we also see that the variance of \(\sum Y_{i}\) converges to zero. This establishes the claim given in Eq. (30).

Now it is rather easy to establish the claim stated in our lemma: We have to show that

$$\begin{aligned} E\left[ \,\prod _{i=\ell +1}^{n}I\left\{ \tau _{i}\le K_{n}^{*}\right\} \right] \rightarrow \exp (-c). \end{aligned}$$

Based on Eq. (30), it is sufficient to show

$$\begin{aligned} E\left[ \prod _{i\le \nu }I\left\{ \tau _{i}\le K_{n}^{*}\right\} \right] \rightarrow \exp (-c). \end{aligned}$$

Trivially,

$$\begin{aligned} E\,\left[ \frac{I\left\{ \tau _{i}\le K_{n}^{*}\right\} }{E\left[ I\left\{ \tau _{i}\le K_{n}^{*}\right\} \,\Big |\, \mathcal{F }_{i-1}\right] }\,\Big |\, \mathcal{F }_{i-1}\right] =1. \end{aligned}$$

A straightforward application of the optional sampling theorem establishes that

$$\begin{aligned} E\left[ \,\frac{\,\left[ \prod _{i\le \nu }I\left\{ \tau _{i}\le K_{n}^{*}\right\} \right] }{\prod _{i\le \nu }E\left[ I\left\{ \tau _{i}\le K_{n}^{*}\right\} \,\Big |\,\mathcal{F }_{i-1}\right] }\right] =1. \end{aligned}$$
(35)
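
In detail, the object underlying Eq. (35) is the nonnegative martingale

$$\begin{aligned} M_{m}=\prod _{i=\ell +1}^{m}\frac{I\left\{ \tau _{i}\le K_{n}^{*}\right\} }{E\left[ I\left\{ \tau _{i}\le K_{n}^{*}\right\} \,\Big |\,\mathcal{F }_{i-1}\right] },\qquad E\left[ M_{m}\,\Big |\,\mathcal{F }_{m-1}\right] =M_{m-1},\qquad EM_{m}=1, \end{aligned}$$

to which the optional sampling theorem is applied at the bounded stopping time \(\nu \).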

By the definition of \(\nu \),

$$\begin{aligned} -(1+\varepsilon )^{2}\sum _{j=\ell +1}^{\nu }Y_{j}\le \log \prod _{i\le \nu }E\left[ I\left\{ \tau _{i}\le K_{n}^{*}\right\} \,\Big |\,\mathcal{F }_{i-1} \right] \le -(1-\varepsilon )^{2}\sum _{j=\ell +1}^{\nu }Y_{j} \nonumber \\ \end{aligned}$$
(36)

and also

$$\begin{aligned} \log \prod _{i\le \nu }E\left[ I\left\{ \tau _{i}\le K_{n}^{*}\right\} \,\Big |\, \mathcal{F }_{i-1}\right] \ge -c(1+\varepsilon )^{3}. \end{aligned}$$
(37)

Moreover, Eq. (30) implies that \(P\left\{ \sum _{i=\ell +1}^{\nu }Y_{i}=\sum _{i=\ell +1}^{n}Y_{i}\right\} \rightarrow 1\). Therefore, \(\sum _{i=\ell +1}^{\nu }Y_{i}\rightarrow c\) as well. Hence it can be seen that Eqs. (36) and (37) allow us to deduce from Eq. (35) that

$$\begin{aligned} \exp (-(1+\varepsilon )^{2}c)&\le \lim \inf E\prod \limits _{i\le \nu }I\left\{ \tau _{i}\le K_{n}^{*}\right\} \\&\le \lim \sup E\prod \limits _{i\le \nu }I\left\{ \tau _{i}\le K_{n}^{*}\right\} \\&\le \exp \left[ -(1-\varepsilon )^{2}c\right] . \end{aligned}$$

Now it is easy to see that Eq. (30) allows us to replace \( \nu \) with \(n\) in the preceding inequalities, which proves the lemma.\(\square \)

1.2.3 Proof of Theorem 3

The proof of Theorem 3 employs the following lemma.

Lemma 5

Suppose we are given a standard-scale Wiener process \(W\), an adapted process \(f\), and a constant \(B\) such that

$$\begin{aligned} \int \limits _{a}^{b}\!\!f^{2}\mathrm d t\le B, \end{aligned}$$

where \(\int _{a}^{b}\!f\mathrm d W\) is the usual Ito integral. Then

$$\begin{aligned} P\left\{ \left| \int \limits _{a}^{b}\!\!f\mathrm d W\right| \ge C\right\} \le 2\exp \left( -\frac{C^{2}}{2B}\right) . \end{aligned}$$

Proof of Lemma 5

Novikov’s theorem implies that for all \(u\)

$$\begin{aligned} E\exp \left( u\int \limits _{a}^{b}\!\!f\mathrm d W-\frac{u^{2}}{2} \int _{a}^{b}\!\!f^{2}\mathrm d t\right) =1. \end{aligned}$$

Hence

$$\begin{aligned} E\exp \left( u\int \limits _{a}^{b}\!\!f\mathrm d W-\frac{u^{2}}{2}B\right) \le 1 \end{aligned}$$

and therefore

$$\begin{aligned} \exp \left( uC-\frac{u^{2}}{2}B\right) P\left\{ \int \limits _{a}^{b}\!\!f\mathrm d W>C\right\} \le 1. \end{aligned}$$

Put

$$\begin{aligned} u=\frac{C}{B}. \end{aligned}$$

Repeating the preceding argument with \(-\int _{a}^{b}\!f\mathrm d W\) completes the proof. \(\square \)
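
For intuition, when \(f\equiv 1\) on \([a,b]\) the integral is simply \(W_{b}-W_{a}\sim N(0,B)\) with \(B=b-a\), and the lemma reduces to a Gaussian tail bound; a quick simulation (illustrative only) shows the bound holds, though it is conservative:

```python
import numpy as np

rng = np.random.default_rng(1)
B, C, n_sim = 1.0, 2.0, 1_000_000
samples = rng.normal(0.0, np.sqrt(B), n_sim)  # W_b - W_a ~ N(0, B) when f = 1
empirical = np.mean(np.abs(samples) >= C)
bound = 2.0 * np.exp(-C ** 2 / (2.0 * B))
print(empirical, bound)  # ~0.046 versus ~0.271: the bound holds but is not tight
```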

Returning to the proof of Theorem 3, we have

$$\begin{aligned} d\mu _{t}=A_{t}\mathrm d t+B_{t}\mathrm d V_{t}^{(1)} \end{aligned}$$

and

$$\begin{aligned} d(\log \sigma _{t})=C_{t}\mathrm d t+D_{t}\mathrm d V_{t}^{(2)}, \end{aligned}$$

where \(A_{t}\), \(B_{t}\), \(C_{t}\), and \(D_{t}\) are continuous processes and \( V_{t}^{(1)}\) and \(V_{t}^{(2)}\) are standard-scale Wiener processes. We begin by demonstrating that we may assume without loss of generality that the processes \(A_{t}\), \(B_{t}\), \(C_{t}\), \(D_{t}\), \(\mu _{t}\), and \(\log \sigma _{t}\) are uniformly bounded. Because \(A_{t}\), \(B_{t}\), \(C_{t}\), \(D_{t}\), \( \mu _{t}\), and \(\ln \sigma _{t}\) are continuous, for every \(\varepsilon >0\) there exists an \(M=M(\varepsilon )\) such that

$$\begin{aligned} P\left[ \left\{ \sup \left| A_{t}\right| ,\sup \left| B_{t}\right| ,\sup \left| C_{t}\right| ,\sup \left| D_{t}\right| ,\sup \left| \mu _{t}\right| ,\sup \left| \ln \sigma _{t}\right| <M(\varepsilon )\right\} \right] >1-\varepsilon . \end{aligned}$$

We define the stopping time \(\tau ^{(\varepsilon )}\) as the first value of \( t \) when one of \(A_{t}\), \(B_{t}\), \(C_{t}\), \(D_{t}\), \(\mu _{t}\), and \(\ln \sigma _{t}\) becomes larger in absolute value than \(M(\varepsilon )\); if the absolute values of these processes remain below \(M(\varepsilon )\) all the time, we set \(\tau ^{(\varepsilon )}=1\). Then

$$\begin{aligned} P\left\{ \tau ^{(\varepsilon )}=1\right\} >1-\varepsilon . \end{aligned}$$
(38)

Put \(r_{i,n}=\left( X_{i/n}-X_{(i-1)/n}\right) \) and \(s_{i,n}=\left( W_{i/n}-W_{(i-1)/n}\right) \), and then put

$$\begin{aligned} \rho _{n}&= \sup _{i}\frac{r_{i}^{2}}{(r_{i-1}^{2}+r_{i-2}^{2}+\dots +r_{i-\ell }^{2})/\ell }, \\ \rho _{n}^{(\varepsilon )}&= \sup _{i\le \tau ^{(\varepsilon )}}\frac{ r_{i}^{2}}{(r_{i-1}^{2}+r_{i-2}^{2}+\dots +r_{i-\ell }^{2})/\ell }, \\ \xi _{n}&= \sup _{i}\frac{s_{i}^{2}}{(s_{i-1}^{2}+s_{i-2}^{2}+\dots +s_{i-\ell }^{2})/\ell }, \end{aligned}$$

and

$$\begin{aligned} \xi _{n}^{(\varepsilon )}=\sup _{i\le \tau ^{(\varepsilon )}}\frac{s_{i}^{2} }{(s_{i-1}^{2}+s_{i-2}^{2}+\dots +s_{i-\ell }^{2})/\ell }. \end{aligned}$$
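
A minimal numerical sketch of the statistic \(\rho _{n}\) just defined (window \(\ell \) and sample path chosen for illustration only):

```python
import numpy as np

def rho_n(x, ell):
    """Max over i of r_i^2 divided by the trailing average of ell squared increments."""
    r2 = np.diff(x) ** 2
    return max(r2[i] * ell / r2[i - ell:i].sum() for i in range(ell, len(r2)))

rng = np.random.default_rng(0)
n = 10_000
w = np.cumsum(np.concatenate(([0.0], rng.normal(0.0, 1.0 / np.sqrt(n), n))))  # W_{i/n}
ell = int(4 * np.log(n))  # e.g. the 4 ln n window of the LLP-4LN variant
print(rho_n(w, ell))      # this is xi_n in the notation above; rho_n uses X instead of W
```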

By definition, \(\rho _{n}\) and \(\xi _{n}\) are our test statistics applied to \(X_{i/n}\) and \(W_{i/n}\), respectively. Moreover, Eq. (38) guarantees that

$$\begin{aligned} P\left\{ \rho _{n}=\rho _{n}^{(\varepsilon )}\right\} >1-\varepsilon \end{aligned}$$

and also that

$$\begin{aligned} P\left\{ \xi _{n}=\xi _{n}^{(\varepsilon )}\right\} >1-\varepsilon . \end{aligned}$$

Hence it is sufficient to show that, for all \(\varepsilon >0\), the difference between \(\xi _{n}^{(\varepsilon )}\) and \(\rho _{n}^{(\varepsilon )}\) converges to zero. To show this, observe that

$$\begin{aligned} \min _{k\le \ell }\left( \frac{\sigma _{(i-k)/n}^{2}}{\sigma _{i/n}^{2}}\right) \le \left( \frac{s_{i}^{2}}{\left( \sum _{j=1}^{\ell }s_{i-j}^{2}\right) /\ell }\right) \Bigg /\left( \frac{\sigma _{i/n}^{2}s_{i}^{2}}{\left( \sum _{j=1}^{\ell }\sigma _{(i-j)/n}^{2}s_{i-j}^{2}\right) /\ell }\right) \le \max _{k\le \ell }\left( \frac{\sigma _{(i-k)/n}^{2}}{\sigma _{i/n}^{2}}\right) . \end{aligned}$$

To analyze the difference between the left- and right-hand sides of the above inequality, it is sufficient to consider the term

$$\begin{aligned} \sup _{k,\,k\le \ell }\left| \ln \left( \frac{\sigma _{(i-k)/n}^{2}}{ \sigma _{i/n}^{2}}\right) \right| . \end{aligned}$$

Observe that \(\ln (\sigma _{i/n}^{2})-\ln (\sigma _{(i-k)/n}^{2})=\int _{(i-k)/n}^{i/n}C_{t}\mathrm d t+D_{t}\mathrm d V_{t}^{(2)}\). For \(i<\tau ^{(\varepsilon )}\),

$$\begin{aligned} \left| \,\int \limits _{\left( i-k\right) /n}^{i/n}C_{t}\mathrm d t\right| \le kM/n. \end{aligned}$$

Moreover, we have because of Lemma 5,

$$\begin{aligned} P\left\{ \left| \,\int \limits _{(i-k)/n}^{i/n}D_{t}\mathrm d V_{t}^{(2)}\right| >2\sqrt{M\ell }\,\sqrt{\frac{\ln n}{n}}\right\} \le \frac{1}{n^{2}} \end{aligned}$$

and also

$$\begin{aligned} P\left\{ \sup _{i\le \tau ^{(\varepsilon )},\,k\le \ell }\left| \, \int \limits _{(i-k)/n}^{i/n}D_{t}\mathrm d V_{t}^{(2)}\right| >2\sqrt{M\ell }\, \sqrt{\frac{\ln n}{n}}\right\} \le \frac{\ell }{n}\rightarrow 0. \end{aligned}$$

Hence we deduce that

$$\begin{aligned} P\left\{ \sup _{k\le \ell }\left| \ln \left( \frac{\sigma _{(i-k)/n}^{2} }{\sigma _{i/n}^{2}}\right) \right| >4\sqrt{M\ell }\,\sqrt{\frac{\ln n}{n }}\right\} \rightarrow 0. \end{aligned}$$

Because

$$\begin{aligned} \sup \frac{s_{i}^{2}}{(s_{i-1}^{2}+s_{i-2}^{2}+\dots +s_{i-\ell }^{2})/\ell } =O(\ln n), \end{aligned}$$

we further deduce that the difference between

$$\begin{aligned} \sup \frac{s_{i}^{2}}{(s_{i-1}^{2}+s_{i-2}^{2}+\dots +s_{i-\ell }^{2})/\ell } \end{aligned}$$

and

$$\begin{aligned} \sup \frac{\sigma _{i/n}^{2}s_{i}^{2}}{\left( \sigma _{(i-1)/n}^{2}s_{i-1}^{2}+\sigma _{(i-2)/n}^{2}s_{i-2}^{2}+\dots +\sigma _{(i-\ell )/n}^{2}s_{i-\ell }^{2}\right) /\ell } \end{aligned}$$

converges to zero.

It remains to be shown that the differences \(\left| r_{i,n}-\sigma _{(i-1)/n}s_{i,n}\right| \) remain small. Observe that

$$\begin{aligned} \left| r_{i,n}-\sigma _{(i-1)/n}s_{i,n}\right|&= \left| \, \int \limits _{(i-1)/n}^{i/n}\left( \mu _{t}\mathrm d t+\sigma _{t}\mathrm d W_{t}-\sigma _{(i-1)/n}\mathrm d W_{t}\right) \right| \\&\le \left| \,\int \limits _{(i-1)/n}^{i/n}\mu _{t}\mathrm d t\right| +\left| \,\int \limits _{(i-1)/n}^{i/n}\left( \sigma _{t}-\sigma _{(i-1)/n}\right) \mathrm d W_{t}\right| \\&\le \max \left| \mu _{t}\right| \frac{1}{n}+\left| \,\int \limits _{(i-1)/n}^{i/n}\left( \sigma _{u}-\sigma _{(i-1)/n}\right) \mathrm d W_{u}\right| . \end{aligned}$$

For the analysis of

$$\begin{aligned} \left| \,\int \limits _{(i-1)/n}^{i/n}\left( \sigma _{u}-\sigma _{(i-1)/n}\right) \mathrm d W_{u}\right| \end{aligned}$$

we apply Lemma 5. Since \(\sigma _{u}\) is a diffusion process with drift and diffusion coefficients that are assumed to be bounded, we may deduce that for all \(\alpha >0\) there exists an \(M\) such that for all \(i\) and all \(u\in [(i-1)/n,i/n]\),

$$\begin{aligned} P\left\{ \left| \sigma _{u}-\sigma _{(i-1)/n}\right| \le M\left| u-(i-1)/n\right| ^{1/2-\alpha }\right\} \rightarrow 1. \end{aligned}$$

Hence

$$\begin{aligned} P\left\{ \int \limits _{(i-1)/n}^{i/n}\left( \sigma _{u}-\sigma _{(i-1)/n}\right) ^{2} \mathrm d u\le 2Mn^{-2+\alpha }\right\} \rightarrow 1. \end{aligned}$$

To apply Lemma 5, we need to guarantee that the integral

$$\begin{aligned} \int \limits _{(i-1)/n}^{i/n}\left( \sigma _{u}-\sigma _{(i-1)/n}\right) ^{2}\mathrm d u \end{aligned}$$

is uniformly bounded. But the existence of such a bound may be established using a stopping time argument. Define the stopping time \(S\in \left[ (i-1)/n,\,i/n\right] \) as the first time at which

$$\begin{aligned} \int \limits _{(i-1)/n}^{S}\left( \sigma _{u}-\sigma _{(i-1)/n}\right) ^{2}\mathrm d u=2Mn^{-2+\alpha }; \end{aligned}$$

otherwise, we set \(S=1\). The definition of \(M\) obviously guarantees that

$$\begin{aligned} P\left( S=1\right) \ge 1-\varepsilon . \end{aligned}$$

Hence, if we define

$$\begin{aligned} \sigma _{u}^{*}=\left\{ \begin{array}{l} \sigma _{u}\quad \hbox {for }u\le S \\ \sigma _{S}\quad \hbox {otherwise,} \end{array} \right. \end{aligned}$$

we have

$$\begin{aligned} P\left\{ \sigma _{u}^{*}=\sigma _{u} \hbox {for all} \ u \right\} \ge 1-\varepsilon . \end{aligned}$$

Hence it is sufficient to give estimates for \(\int _{(i-1)/n}^{i/n}(\sigma _{u}^{*}-\sigma _{(i-1)/n}^{*})\mathrm d W_{u}\). For this task, however, we may apply Lemma 5. We deduce that

$$\begin{aligned} P\left\{ \left| \,\int \limits _{(i-1)/n}^{i/n}\left( \sigma _{u}^{*}-\sigma _{(i-1)/n}^{*}\right) \mathrm d W_{u}\right| >\sqrt{8Mn^{-2+\alpha }\ln n}\right\} \le \frac{2}{n^{2}}. \end{aligned}$$

Since \(\alpha >0\) was arbitrary, we may conclude that for arbitrary \(\beta >0 \)

$$\begin{aligned} P\left\{ \sup \left| \,\int \limits _{(i-1)/n}^{i/n}\left( \sigma _{u}^{*}-\sigma _{(i-1)/n}^{*}\right) \mathrm d W_{u}\right| >n^{-1+\beta }\right\} \rightarrow 0, \end{aligned}$$

which demonstrates that these terms are negligible.\(\square \)

1.3 Tables

Tables 1, 2, 3, and 4 report simulated rejection probabilities at the 5 % nominal significance level.

Table 1 Simulated rejection probability of Model 1: pure diffusion
Table 2 Simulated rejection probability of Model 2: Log SV
Table 3 Simulated rejection probability of Model 3: 2 factor CIR SV (low volatility)
Table 4 Simulated rejection probability of Model 4: 2 factor CIR SV (high volatility)

About this article

Lee, T., Loretan, M. & Ploberger, W. Rate-optimal tests for jumps in diffusion processes. Stat Papers 54, 1009–1041 (2013). https://doi.org/10.1007/s00362-013-0541-y