Abstract
In this paper we derive martingale estimating functions for the dimensionality parameter of a Bessel process based on the eigenfunctions of the diffusion operator. Since a Bessel process is non-ergodic and the theory of martingale estimating functions is developed for ergodic diffusions, we use the space-time transformation of the Bessel process and formulate our results for a modified Bessel process. We deduce consistency and asymptotic normality, and discuss optimality. It turns out that the martingale estimating function based on the first eigenfunction of the modified Bessel process coincides with the linear martingale estimating function for the Cox-Ingersoll-Ross process. Furthermore, our results may also be applied to estimating the multiplicity parameter of a one-dimensional Dunkl process and some related polynomial processes.
1 Introduction
Martingale estimating functions introduced in Bibby and Sørensen (1995) provide a well-established method for inference in discretely observed diffusion processes, when the likelihood function is unknown or too complicated. The idea behind martingale estimating functions is to provide a simple approximation of the true likelihood, which forms a martingale and hence leads under suitable regularity assumptions to consistent and asymptotically normal estimators. One way of approximating the likelihood function is by Taylor expansion leading to linear and quadratic martingale estimating functions, cf. Bibby and Sørensen (1995). Another possibility is to use the eigenfunctions of the associated diffusion operator, cf. Kessler and Sørensen (1999). In this context a suitable optimality concept was introduced by Godambe and Heyde (1987) and Heyde (1988). For a general theory of asymptotic statistics for diffusion processes we refer e.g. to Höpfner (2014).
Our aim in this paper is to estimate the dimensionality or index parameter \(\vartheta \in \Theta \subset (-\frac{1}{2},\infty )\) of a classical one-dimensional Bessel process given by the stochastic differential equation

\[{\mathrm {d}}Y_t={\mathrm {d}}B_t+\frac{\vartheta +\frac{1}{2}}{Y_t}\,{\mathrm {d}}t,\]

where B denotes a standard Brownian motion. Since a Bessel process is non-ergodic, we transform it into a stationary and ergodic process by adding a mean reverting term with speed of mean reversion \(\alpha >0\) in the drift; we call the resulting process the modified Bessel process in the following. The two processes are then related by the well-known space-time transformation of a Bessel process. Since the eigenfunctions of the associated diffusion operator of the modified Bessel process are known, we base our martingale estimation function on these eigenfunctions and follow the lines of Kessler and Sørensen (1999).
For the estimating function based on the first eigenfunction we obtain an explicit formula for the estimator, which depends only quadratically on the observations. We see that the estimator coincides with that of a linear martingale estimating function for the Cox-Ingersoll-Ross process, which is the square of the modified Bessel process. We discuss optimality in the sense of Godambe and Heyde. Note that Overbeck and Rydén (1997) also established local asymptotic normality for estimators in the Cox-Ingersoll-Ross model.
Furthermore, we consider martingale estimating functions based on the first two eigenfunctions and discuss the improvement of the asymptotic variance. In this case we do not get an explicit result for the estimator anymore.
Note that our results for the Bessel process may also be used to estimate the multiplicity parameter k of a one-dimensional Dunkl process, a special jump diffusion given by the generator
By the last term in the generator we see that the associated process possesses jumps due to reflection, which lead to a sign change. Hence, the modulus of this Dunkl process is a Bessel process with dimensionality parameter \(k-\frac{1}{2}\), cf. Chybiryakov et al. (2008). For the Dunkl process the multiplicity parameter is of special interest, since it determines the jump activity: for \(k\ge \frac{1}{2}\) a Dunkl process has finite jump activity, whereas for \(k<\frac{1}{2}\) it has infinite jump activity.
Furthermore, the technique transforming a non-ergodic process to an ergodic one via a space-time transformation may also be used for larger classes of polynomial diffusion processes given by a generalization of the stochastic differential equation of a Bessel process. We introduce these processes and provide results for the martingale estimating function based on the first eigenfunction.
The paper is organised as follows: in Sect. 2 we collect the basic facts on the processes, Sect. 3 is devoted to martingale estimation functions based on the first eigenfunction for Bessel processes, while in Sect. 4 we provide an extension to a larger class of polynomial diffusions. Section 5 considers estimators based on two eigenfunctions for Bessel processes.
2 Basic results on Bessel processes and a stationary modification
In this section we introduce the basic results on the underlying diffusions, which we will need in the following for the theory of martingale estimation functions. Our aim is to estimate the parameter \(\vartheta \in \Theta \subset (-\frac{1}{2},\infty )\) of a classical one-dimensional Bessel process. Since a Bessel process is non-ergodic and most results on parameter estimation for diffusions are developed for ergodic diffusions, we start by introducing a modification of a Bessel process which is ergodic.
We consider the stochastic differential equation

\[{\mathrm {d}}X_t={\mathrm {d}}B_t+\Big (\frac{\vartheta +\frac{1}{2}}{X_t}-\alpha X_t\Big )\,{\mathrm {d}}t \qquad (2.1)\]

for a Brownian motion B, some fixed \(\alpha >0\) and the parameter of interest \(\vartheta \in \Theta \subset (-\frac{1}{2},\infty )\). The equation (2.1) is similar to the equation defining a Bessel process except for the drift term \(-\alpha X_t \,{\mathrm {d}}t\), which we add to ensure ergodicity and stationarity. We can also state the generator

\[L_\vartheta =\frac{1}{2}\frac{{\mathrm {d}}^2}{{\mathrm {d}}x^2}+\Big (\frac{\vartheta +\frac{1}{2}}{x}-\alpha x\Big )\frac{{\mathrm {d}}}{{\mathrm {d}}x}.\]
In order to determine the density of \((X_t)_{t\ge 0}\), we consider the space-time transformation

\[X_t=\exp (-\alpha t)\,Y_{\frac{\exp (2\alpha t)-1}{2\alpha }}\]

for a Bessel process \((Y_t)_{t\ge 0}\) with index \(\vartheta \), which immediately follows by Itô’s formula. For simplicity, we use the notation \(f(t):=\exp (-\alpha t)\) and \(g(t):=\frac{\exp (2\alpha t)-1}{2\alpha }\). Then Itô’s formula yields

\[{\mathrm {d}}X_t={\mathrm {d}}\big (f(t)Y_{g(t)}\big )=\Big (\frac{\vartheta +\frac{1}{2}}{X_t}-\alpha X_t\Big )\,{\mathrm {d}}t+{\mathrm {d}}W_t\]

for some Brownian motion W, as \(f(t)^2g'(t)=1\) and \(f'(t)=-\alpha f(t)\). Therefore, we derive the distribution of \((X_t)_{t\ge 0}\) by using the well-known distribution of the Bessel process \((Y_t)_{t\ge 0}\), namely

\[P_\vartheta (Y_t\in {\mathrm {d}}y\,\vert \,Y_0=x)=\frac{y}{t}\Big (\frac{y}{x}\Big )^{\vartheta }\exp \Big (-\frac{x^2+y^2}{2t}\Big )I_\vartheta \Big (\frac{xy}{t}\Big )\,{\mathrm {d}}y,\]
where

\[I_\vartheta (z)=\sum _{k=0}^{\infty }\frac{(z/2)^{2k+\vartheta }}{k!\,\Gamma (k+\vartheta +1)}\]

is the modified Bessel function of the first kind with index \(\vartheta \) [see for instance Itô and McKean (1974)]. Hence, we obtain

\[p_\vartheta (x,y,\Delta )=\frac{y}{f(\Delta )^2 g(\Delta )}\Big (\frac{y}{f(\Delta )x}\Big )^{\vartheta }\exp \Big (-\frac{x^2+y^2/f(\Delta )^2}{2g(\Delta )}\Big )I_\vartheta \Big (\frac{xy}{f(\Delta )g(\Delta )}\Big )\]

with \(f(\Delta )=\exp (-\alpha \Delta )\) and \(g(\Delta )=\frac{\exp (2\alpha \Delta )-1}{2\alpha }\).
We denote the density of \(X_{\Delta }\) with starting point x by \(p_\vartheta (x,\cdot ,\Delta )\) and the distribution of \(X_\Delta \) by \(P_{\vartheta }\). In the following, we check that \((X_t)_{t\ge 0}\) is indeed stationary and ergodic and determine the invariant measure. The density of the scale measure for a fixed \(\xi \in (0,\infty )\) is defined as
Note that, due to the singularity in the drift, we initially have to consider some positive interior point \(\xi \).
By Sørensen (2012, p. 9) and Skorokhod (1989) we may deduce that \((X_t)_{t\ge 0}\) is ergodic as we see that the conditions
are satisfied.
As the invariant measure is defined via the scale measure \(m(\,{\mathrm {d}}x):=\frac{1}{s(x)}\,{\mathrm {d}}x\), we obtain by a straightforward calculation that the density of the invariant probability measure is given by

\[\mu _\vartheta (x)=\frac{2\alpha ^{\vartheta +1}}{\Gamma (\vartheta +1)}\,x^{2\vartheta +1}\exp (-\alpha x^2)\]

on \((0,\infty )\) with respect to the Lebesgue measure (Sørensen 2012, Eq. (1.15)).
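As a numerical sanity check, the following Python sketch integrates the invariant density, assuming the Gamma-type form \(\mu _\vartheta (x)=\frac{2\alpha ^{\vartheta +1}}{\Gamma (\vartheta +1)}x^{2\vartheta +1}e^{-\alpha x^2}\), and compares the even moments with the closed-form value \(\frac{\Gamma (\eta +\vartheta +1)}{\alpha ^\eta \Gamma (\vartheta +1)}\) used later in the proofs. The function names are illustrative, not from the paper.

```python
import math

# Invariant density mu_theta(x) = 2*alpha^(theta+1)/Gamma(theta+1)
#                                 * x^(2*theta+1) * exp(-alpha*x^2)
def mu(x, theta, alpha):
    return (2.0 * alpha ** (theta + 1.0) / math.gamma(theta + 1.0)
            * x ** (2.0 * theta + 1.0) * math.exp(-alpha * x * x))

# Midpoint-rule approximation of the moment E[X^(2*eta)] under mu;
# the tail beyond `upper` is negligible for moderate alpha.
def moment_numeric(eta, theta, alpha, upper=20.0, steps=100000):
    h = upper / steps
    return sum(mu((k + 0.5) * h, theta, alpha) * ((k + 0.5) * h) ** (2 * eta)
               for k in range(steps)) * h

theta, alpha = 0.5, 1.0
for eta in range(3):
    exact = math.gamma(eta + theta + 1.0) / (alpha ** eta * math.gamma(theta + 1.0))
    print(eta, round(moment_numeric(eta, theta, alpha), 6), round(exact, 6))
```

For \(\eta =0\) this also confirms that \(\mu _\vartheta \) integrates to one.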
For the calculation of the asymptotic variance we will need the symmetric distribution \(Q_\Delta ^\vartheta \) of two consecutive observations \(X_{(i-1)\Delta }\) and \(X_{i\Delta }\) on \((0,\infty )^2\). It is given by
with
3 Martingale estimating functions based on eigenfunctions
In this section we proceed similarly to Bibby and Sørensen (1995) and Kessler and Sørensen (1999) to construct martingale estimation functions for our parameter of interest \(\vartheta \). The concepts in these papers are based on ergodic diffusions. As Bessel processes are non-ergodic, we constructed the ergodic and stationary version in (2.1). Let \(X_{\Delta },\dots ,X_{n\Delta }\) be discrete observations of the process. We consider the eigenfunctions of the generator

\[L_\vartheta =\frac{1}{2}\frac{{\mathrm {d}}^2}{{\mathrm {d}}x^2}+\Big (\frac{\vartheta +\frac{1}{2}}{x}-\alpha x\Big )\frac{{\mathrm {d}}}{{\mathrm {d}}x},\]

which are the solutions of \(L_\vartheta \phi _\eta =-\lambda _\eta \phi _\eta \) given by

\[\phi _\eta (x,\vartheta )=\sum _{k=0}^{\eta }\binom{\eta }{k}\frac{(-\alpha x^2)^k}{(\vartheta +1)_k},\qquad \lambda _\eta =2\alpha \eta ,\]

with the Pochhammer symbols \((x)_0:=1\) and \((x)_k:=\frac{\Gamma (x+k)}{\Gamma (x)}=x (x+1) \cdots (x+k-1)\) for \(k\in {\mathbb {N}}\), cf. (Rösler and Voit 2008, 2.58 Corollary (i)). According to (Kessler and Sørensen 1999, Sect. 5, Eigenfunctions and martingales), the property
for the polynomials \(\phi _\eta \) is sufficient to deduce
by Itô’s formula. Consequently, we may use the general theory on estimators based on eigenfunctions given in Kessler and Sørensen (1999). However, in our case we may calculate the involved quantities and obtain explicit results. For the first eigenfunction \(\phi _1(x,\vartheta )=1-\frac{\alpha x^2}{\vartheta +1}\) we consider the estimator based on the martingale estimating function

\[G_n(\vartheta )=\sum _{i=1}^{n}\big (\phi _1(X_{i\Delta },\vartheta )-e^{-2\alpha \Delta }\phi _1(X_{(i-1)\Delta },\vartheta )\big ).\]

The unique solution of \(G_n({\widehat{\vartheta }}_n)=0\) is

\[{\widehat{\vartheta }}_n=\frac{\alpha }{n(1-e^{-2\alpha \Delta })}\sum _{i=1}^{n}\big (X_{i\Delta }^2-e^{-2\alpha \Delta }X_{(i-1)\Delta }^2\big )-1.\]
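To illustrate the explicit estimator, the following Python sketch simulates the squared process (a Cox-Ingersoll-Ross process, see below) exactly through its noncentral chi-square transition law and evaluates the eigenfunction-based estimator. The closed-form expression for \({\widehat{\vartheta }}_n\) coded here is derived from \(\phi _1\) and the estimating function, so treat it as an illustration under that derivation rather than a verbatim reproduction of the paper's display; function names are hypothetical.

```python
import numpy as np

def simulate_squared_process(theta, alpha, delta, n, rng):
    """Exact simulation of Y_t = X_t^2 (a CIR process) on the grid 0, delta, ..., n*delta."""
    c = (1.0 - np.exp(-2.0 * alpha * delta)) / (2.0 * alpha)
    df = 2.0 * (theta + 1.0)              # degrees of freedom of the transition law
    y = np.empty(n + 1)
    y[0] = (theta + 1.0) / alpha          # start in the stationary mean
    for i in range(n):
        nc = y[i] * np.exp(-2.0 * alpha * delta) / c   # noncentrality parameter
        y[i + 1] = c * rng.noncentral_chisquare(df, nc)
    return y

def estimate_theta(y_squared, alpha, delta):
    """Estimator based on the first eigenfunction (depends on the squares only)."""
    e = np.exp(-2.0 * alpha * delta)
    n = len(y_squared) - 1
    s = np.sum(y_squared[1:] - e * y_squared[:-1])
    return alpha * s / (n * (1.0 - e)) - 1.0

rng = np.random.default_rng(42)
y = simulate_squared_process(theta=1.0, alpha=1.0, delta=0.5, n=20000, rng=rng)
print(estimate_theta(y, alpha=1.0, delta=0.5))  # close to the true value 1.0
```

Since \({\mathrm {E}}(Y_{i\Delta }\,\vert \,Y_{(i-1)\Delta })=(1-e^{-2\alpha \Delta })\frac{\vartheta +1}{\alpha }+e^{-2\alpha \Delta }Y_{(i-1)\Delta }\), the summands have mean \((1-e^{-2\alpha \Delta })\frac{\vartheta +1}{\alpha }\), which makes the consistency of the estimator visible directly.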
Now, we may deduce consistency and asymptotic normality along the same lines as for general martingale estimating functions.
Theorem 3.1
For every true value \(\vartheta _0 \in \Theta \subset (-\frac{1}{2},\infty )\), we have
(i) \({\widehat{\vartheta }}_n\rightarrow \vartheta _0\) in probability and

(ii) \(\sqrt{n}({\widehat{\vartheta }}_n-\vartheta _0)\rightarrow N(0,\sigma ^2(\vartheta _0))\) in distribution
under \(P_{\vartheta _0}\) with \(\sigma ^2(\vartheta _0):=(\vartheta _0+1)\frac{1+e^{-2\alpha \Delta }}{1-e^{-2\alpha \Delta }}.\)
Proof
We define
a continuously differentiable function with respect to \(\vartheta \). The absolute value of the derivative
is dominated by \(4\alpha (y^2+e^{-2\alpha \Delta }x^2)\), which is independent of \(\vartheta \) and square integrable with respect to \(Q_\Delta ^{\vartheta _0}\). Moreover, the symmetry in x and y of the density of \(Q_\Delta ^{\vartheta _0}\) implies
which completes the proof of (i) and (ii) according to (Kessler and Sørensen 1999, Theorem 4.3).
Due to (Kessler and Sørensen 1999, Theorem 4.3), the asymptotic variance is given by \(\sigma ^2(\vartheta _0)=\frac{v(\vartheta _0)}{f^2(\vartheta _0)}\) with
Because of the symmetry of \(Q_\Delta ^{\vartheta _0}\) and
we get
Furthermore, we can calculate
By calculating \(\,{\mathrm {E}}\,(X^2_{i\Delta } \, \vert \, X_{(i-1)\Delta }=x)\) explicitly, we conclude
Applying these formulas we establish
\(\square \)
Let us discuss the results. Looking at the asymptotic variance, we see that it decreases as \(\alpha \Delta \) increases. This seems surprising at first glance, since it implies that the asymptotic variance decreases when the distance between observations increases while the mean-reverting parameter \(\alpha \) is kept fixed. Note that we have the observation scheme \(X_\Delta , \dots , X_{n\Delta }\); hence \(n\rightarrow \infty \) and \(\Delta \rightarrow 0\) such that \(n\Delta \rightarrow \infty \) would correspond to continuous observations. However, keep in mind that equidistant observations of the stationary version of the Bessel process mean that the distance between two observations of the underlying Bessel process grows exponentially, which leads to a fast growing observation interval. This might capture the non-stationary behaviour of the original Bessel process. Furthermore, we see that the asymptotic variance tends to infinity as the mean-reverting parameter tends to zero.
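The monotone decrease can be made concrete with a few values of the variance formula from Theorem 3.1; the short sketch below (with illustrative names) evaluates \(\sigma ^2(\vartheta _0)=(\vartheta _0+1)\frac{1+e^{-2\alpha \Delta }}{1-e^{-2\alpha \Delta }}\) for \(\vartheta _0=1\) on a grid of \(\alpha \Delta \) and shows the approach to the lower bound \(\vartheta _0+1=2\).

```python
import math

def asymptotic_variance(theta0, x):
    """Theorem 3.1 variance; x stands for the product alpha * delta."""
    return (theta0 + 1.0) * (1.0 + math.exp(-2.0 * x)) / (1.0 - math.exp(-2.0 * x))

for x in [0.1, 0.5, 1.0, 2.0, 4.0]:
    print(f"alpha*delta = {x:3.1f}: sigma^2 = {asymptotic_variance(1.0, x):.4f}")
```

The blow-up as \(\alpha \Delta \rightarrow 0\) and the convergence to \(\vartheta _0+1\) as \(\alpha \Delta \rightarrow \infty \) are both visible in the printed values.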
Having a closer look at the estimator, we see that it only depends on the square of the observations; hence we could reformulate our problem and consider the squared process \(Y_t:=X_t^2\). Itô’s formula yields

\[{\mathrm {d}}Y_t=\big (2(\vartheta +1)-2\alpha Y_t\big )\,{\mathrm {d}}t+2\sqrt{Y_t}\,{\mathrm {d}}B_t,\]

an equation describing a Cox-Ingersoll-Ross process. We consider now the canonical linear martingale estimating function
For \(\vartheta >-\frac{1}{2}\) the unique solution of \({\widetilde{G}}_n({\widehat{\vartheta }}_n)=0\) is again
Hence, we see that the two estimators coincide. In Theorem 3.1 we have already established the consistency and asymptotic normality of \({\widehat{\vartheta }}_n\).
The next step is to increase the flexibility of \({\widetilde{G}}_n\) by adding a weight \(g_{i-1}\) depending on the parameter of interest and the previous observation,

where \(g_{i-1}\) is \(\sigma (X_\Delta ,\dots , X_{(i-1)\Delta })\)-measurable and continuously differentiable in order to keep the martingale property. Using the same technique, we search for the optimal estimator with the smallest asymptotic variance. Considering this second approach via linear martingale estimating functions for the squared process allows us to easily determine this optimal estimator, cf. Heyde (1988) and Godambe and Heyde (1987). By Bibby and Sørensen (1995, Eq. (2.10)) the optimal estimator is given by
where \(\varphi (X_{(i-1)\Delta },\vartheta )\) is the conditional variance of \(X_{i\Delta }^2\) given \(X_{(i-1)\Delta }\). Unfortunately, the equation defining the optimal estimator
is not explicitly solvable with respect to \(\vartheta \). However, we can nevertheless determine the improvement in the asymptotic variance. Following again the same lines as (Bibby and Sørensen 1995, Theorem 3.2), we have to establish the finiteness of
the reciprocal of the asymptotic variance, the asymptotic information. Consequently, we can deduce that a lower bound of the optimal variance is given by \(\vartheta _0 +1\).
Figure 1 shows the asymptotic information of 10,000 simulated realisations of the optimal estimator (triangles) and of \({\widehat{\vartheta }}_n\) (dots) for \(n=1000\). The solid line corresponds to the calculated asymptotic information of \({\widehat{\vartheta }}_n\) from Theorem 3.1. The dotted line represents the bound computed above. As the lines nearly touch around \(\Delta =3\), the improvement achieved by the optimal estimator quickly tends to zero. From \(\Delta =1\) onwards the simulated asymptotic information is almost the same for both estimators. Below this value the improvement is clearly visible, but in practice we need not accept such a high variance, as we can choose \(\alpha \Delta \) such that the asymptotic variance is close to the lower bound.
We take a closer look at the asymptotic variance of \({\widehat{\vartheta }}_n\), which decreases monotonically in \(\alpha \Delta \):
Due to the fast convergence to the lower bound \(\vartheta _0+1\), we can for practical purposes restrict ourselves to the estimator \({\widehat{\vartheta }}_n\) and hence have an explicit estimator.
4 An extension to some polynomial diffusion processes
Next, we aim to extend the previously developed technique to some larger class of processes. We consider some non-ergodic polynomial processes solving the stochastic differential equation
for a Brownian motion B, the parameter of interest \(\vartheta \in \Theta \subset (-\frac{1}{2},\infty )\) and the additional parameter \(p<1\). Note that for \(p=-1\), we get the Bessel process back. We briefly analyze a martingale estimator based on the first eigenfunction with the same technique as before. Using the space-time transformation
for some \(\alpha >0\), we receive by Itô’s formula an ergodic and stationary version
The corresponding generator can be stated as
With a similar calculation as for \(\mu _\vartheta \), we obtain the invariant measure
on \((0,\infty )\) with respect to the Lebesgue measure. After a brief calculation we get
as the first eigenfunction of the generator \(L_{\vartheta ,p}\) with eigenvalue \(\lambda _{1,p}=(1-p)\alpha \). Let \(X_{\Delta ,p},\dots , X_{n\Delta ,p}\) be discrete observations of (4.2). We consider the estimator based on the martingale estimating function
The unique solution of \(G_{n,p}({\widehat{\vartheta }}_{n,p})=0\) is
Next, we review how this process is related to a linear martingale estimating function. Application of Itô’s formula yields
hence we can determine the conditional mean \(f(t):=\,{\mathrm {E}}\,(X_{t,p}^{1-p}|X_{t_0,p})\) by solving the differential equation
Thus, we receive the linear martingale estimating function
and see that the unique solution of \({\widetilde{G}}_{n,p}({\widehat{\vartheta }}_{n,p})=0\) is again (4.3).
Theorem 4.1
For every true value \(\vartheta _0 \in \Theta \subset (-\frac{1}{2},\infty )\), we have
(i) \({\widehat{\vartheta }}_{n,p}\rightarrow \vartheta _0\) in probability and

(ii) \(\sqrt{n}({\widehat{\vartheta }}_{n,p}-\vartheta _0)\rightarrow N(0,\sigma ^2(\vartheta _0))\) in distribution
under \(P_{\vartheta _0}\) with \(\sigma ^2(\vartheta _0):=\frac{(1-p)( \vartheta _0+1)e^{-(1-p)\alpha \Delta }}{1-e^{-(1-p)\alpha \Delta }}+\frac{(2\vartheta _0+1-p)(1-p)}{4}.\)
Proof
Obviously, \(\sigma ^2(\vartheta _0)\in (0,\infty )\) holds. According to (Bibby and Sørensen 1995, Theorem 3.2), the convergences (i) and (ii) follow if the equation
holds, where
and \(\varphi \) is the conditional variance of \(X_{\Delta ,p}^{1-p}\) given \(X_{0,p}\) determined by
Note that this formula can also be derived via the solution of a differential equation. By establishing
we conclude
and hence the equation \(\sigma ^2(\vartheta _0)=\frac{v(\vartheta _0)}{f(\vartheta _0)^2}\) is valid. \(\square \)
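As a consistency check between the two theorems, the following sketch (with illustrative function names) evaluates the variance of Theorem 4.1 at \(p=-1\) and compares it with the Bessel-case variance of Theorem 3.1; algebraically, \(\frac{2(\vartheta _0+1)e^{-2\alpha \Delta }}{1-e^{-2\alpha \Delta }}+(\vartheta _0+1)=(\vartheta _0+1)\frac{1+e^{-2\alpha \Delta }}{1-e^{-2\alpha \Delta }}\), so the two must agree.

```python
import math

def sigma2_polynomial(theta0, p, a, d):
    """Asymptotic variance from Theorem 4.1 (a = alpha, d = delta)."""
    e = math.exp(-(1.0 - p) * a * d)
    return ((1.0 - p) * (theta0 + 1.0) * e / (1.0 - e)
            + (2.0 * theta0 + 1.0 - p) * (1.0 - p) / 4.0)

def sigma2_bessel(theta0, a, d):
    """Asymptotic variance from Theorem 3.1."""
    e = math.exp(-2.0 * a * d)
    return (theta0 + 1.0) * (1.0 + e) / (1.0 - e)

# at p = -1 the polynomial case reduces to the Bessel case
print(sigma2_polynomial(1.0, -1.0, 1.0, 0.5), sigma2_bessel(1.0, 1.0, 0.5))
```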
We want to increase the flexibility of \({\widetilde{G}}_{n,p}\) using the same scheme as for \({\widetilde{G}}_n={\widetilde{G}}_{n,-1}\). According to Heyde (1988) and Godambe and Heyde (1987), we once more obtain the optimal weight
for the estimating function
cf. Bibby and Sørensen (1995, Eq. (2.10)). As before, we cannot explicitly derive the estimator as a solution of
but we can analyze the improvement with respect to the estimator \({\widehat{\vartheta }}_{n,p}\). Following the same lines as (Bibby and Sørensen 1995, Theorem 3.2), we have to establish the finiteness of
the reciprocal of the asymptotic variance, to achieve consistency and asymptotic normality. Comparing this result to the limit
we recognize a fast convergence to the lower bound of the optimal estimator’s asymptotic variance. This result, which resembles the case of the Bessel process, justifies the restriction to the explicit estimator \({\widehat{\vartheta }}_{n,p}\) from a practical point of view.
5 Estimator based on two eigenfunctions
Now, we turn back to the Bessel process and try to improve the asymptotic variance further by considering martingale estimating functions based on two eigenfunctions. Yet, this approach suffers from the drawback that we no longer obtain explicit results for the estimators; explicit expressions remain available for the asymptotic variance, at least for weights depending only on the unknown parameter.
As in the previous sections we start with a class of martingale estimating functions with weight depending on the unknown parameter only. We consider
where \(\beta _1\) and \(\beta _2\) are continuously differentiable functions depending only on \(\vartheta \). Under suitable conditions on the interplay between the weights \(\beta _i\) and the eigenfunctions, we can easily achieve a consistent and asymptotically normal estimator.
Theorem 5.1
If for every \(\vartheta \in \Theta \)
is satisfied, then there exists a solution of \(H_n({\widehat{\vartheta }}_{n,2})=0\) with a probability tending to one as \(n\rightarrow \infty \) under \(P_{\vartheta _0}\). Furthermore, for every true value \(\vartheta _0 \in \Theta \subset (-\frac{1}{2},\infty )\) we have
(i) \({\widehat{\vartheta }}_{n,2}\rightarrow \vartheta _0\) in probability and

(ii) \(\sqrt{n}({\widehat{\vartheta }}_{n,2}-\vartheta _0)\rightarrow N\left( 0,\frac{v(\beta _1,\beta _2,\vartheta _0)}{f^2(\beta _1,\beta _2,\vartheta _0)}\right) \) in distribution
under \(P_{\vartheta _0}\) with
Proof
Since by assumption \(f(\cdot , \cdot , \vartheta )\not = 0\) for every \(\vartheta \in \Theta \), we conclude that \(\beta _1(\vartheta )\not =0\) or \(\beta _2(\vartheta )\not = 0\), and consequently \(v(\cdot , \cdot ,\vartheta )\not = 0 \) for every \(\vartheta \in \Theta \). Using again (Kessler and Sørensen 1999, Theorem 4.3), we only have to establish the formulas for f and v. In our calculations below we need the following straightforward properties:
(a) \(Q_\Delta \) symmetric,

(b) \(\int _0^\infty \phi _1(x,\vartheta )\phi _2(x,\vartheta )\mu _\vartheta (x)\,{\mathrm {d}}x=0\),

(c) \(\int _0^\infty \phi _j(x,\vartheta )\mu _\vartheta (x)\,{\mathrm {d}}x=0\),

(d) \(\int _0^\infty x^{2\eta } \mu _{\vartheta }(x)\,{\mathrm {d}}x=\frac{\Gamma (\eta +\vartheta +1)}{\alpha ^\eta \Gamma (\vartheta +1)}\) for \(\eta \in {\mathbb {N}}\).
Step 1: As in (Kessler and Sørensen 1999, Condition 4.2 (a)) we define f by
The first step is to obtain the explicit expression given in Theorem 5.1. We can easily calculate the two summands
and similarly
Step 2: According to (Kessler and Sørensen 1999, Theorem 4.3), we obtain
with
In the following we explicitly compute these integrals, starting with \(\alpha _{11}\). If we take a look at the proof of Theorem 3.1, we recognize the already calculated value
For the next term \(\alpha _{12}\), it holds
and similarly we obtain for \(\alpha _{22}\)
\(\square \)
Our aim is now to find weights \(\beta _i\) which lead to the smallest asymptotic variance as \(\alpha \Delta \rightarrow \infty \). Therefore, we define for fixed \(\vartheta \in \Theta \) the approximated functions
for which
holds. This property justifies the search for the global minimum of
To establish the minimum we first simplify the function
and determine the first derivatives
Taking into account the properties of the \(\beta _i\)s in Theorem 5.1, we get as possible minima \(\beta _1=2\beta _2\not =0\) with value
In order to check whether we indeed have minima, we consider \(\beta _1\not = 2\beta _2\) and see
Hence, these critical points are global minima. Finally, we may specify the improvement of the asymptotic variance
if we consider the asymptotic behaviour \(\alpha \Delta \rightarrow \infty \). Hence, we see that the relative improvement compared to \(\vartheta +1\), the bound of the asymptotic variance in the case of only one eigenfunction, is \(\frac{1}{2\vartheta +5}\), which decreases as \(\vartheta \) increases. However, for the boundary case \(\vartheta =-\frac{1}{2}\) we get an improvement of \(25\%\). For the case \(\vartheta =0\), which for a Dunkl process separates finite from infinite jump activity, we still get an improvement of \(20\%\).
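The stated percentages follow directly from the improvement formula; a two-line sketch (with an illustrative function name) makes the two boundary evaluations explicit.

```python
# Relative improvement 1/(2*theta + 5) of the two-eigenfunction estimator
# over the one-eigenfunction bound theta + 1, as alpha*delta -> infinity.
def relative_improvement(theta):
    return 1.0 / (2.0 * theta + 5.0)

print(relative_improvement(-0.5))  # 0.25, i.e. 25% at the boundary case
print(relative_improvement(0.0))   # 0.2,  i.e. 20% at the Dunkl jump-activity threshold
```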
As a second step we may consider weights which also depend on the observations. Note that though we may determine the optimal weights as solutions to a system of linear equations with coefficients depending on higher order conditional moments, which is theoretically feasible, we cannot provide an explicit result for the optimal asymptotic variance. Hence, we are not able to quantify the improvement compared to the simpler weights before.
If we take into account weights \(a_j^\star \) that additionally depend on the trajectories, i.e. if we consider estimating functions

the optimal weights in the sense of Godambe and Heyde (1987) are given in (Kessler and Sørensen 1999, p. 305). The weights \(a_j^\star \) are specified by the equation
with
for \(1\le i < j \le 2\) and
for \(j=1,2\). Hence,
and
References
Bibby BM, Sørensen M (1995) Martingale estimation functions for discretely observed diffusion processes. Bernoulli 1:17–39
Chybiryakov O, Gallardo L, Yor M (2008) Dunkl processes and their radial parts relative to a root system. In: Graczyk P et al (eds) Harmonic and stochastic analysis of Dunkl processes. Hermann, Paris
Godambe VP, Heyde CC (1987) Quasi-likelihood and optimal estimation. Int Stat Rev 55:231–244
Heyde CC (1988) Fixed sample and asymptotic optimality for classes of estimating functions. Contemp Math 80:241–247
Höpfner R (2014) Asymptotic statistics: with a view to stochastic processes. De Gruyter
Itô K, McKean HP (1974) Diffusion processes and their sample paths. Springer
Kessler M, Sørensen M (1999) Estimating equations based on eigenfunctions for a discretely observed diffusion process. Bernoulli 5:299–314
Overbeck L, Rydén T (1997) Estimation in the Cox-Ingersoll-Ross model. Econom Theory 13:430–461
Rösler M, Voit M (2008) Dunkl theory, convolution algebras, and related Markov processes. In: Graczyk P et al (eds) Harmonic and stochastic analysis of Dunkl processes. Hermann, Paris
Skorokhod AV (1989) Asymptotic methods in the theory of stochastic differential equations. American Mathematical Society, Providence
Sørensen M (2012) Estimating functions for diffusion-type processes. In: Kessler M, Lindner A, Sørensen M (eds) Statistical methods for stochastic differential equations, vol 124. CRC Press
Acknowledgements
The financial support of the DFG-GRK 2131 is gratefully acknowledged. We would like to thank the reviewers for their helpful comments and suggestions.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Hufnagel, N., Woerner, J.H.C. Martingale estimation functions for Bessel processes. Stat Inference Stoch Process 25, 337–353 (2022). https://doi.org/10.1007/s11203-021-09250-8