
Integral curves from noisy diffusion MRI data with closed-form uncertainty estimates


Abstract

We present a method for estimating the trajectories of axon fibers through diffusion tensor MRI (DTI) data that provides theoretically rigorous estimates of trajectory uncertainty. We develop a three-step estimation procedure: a kernel estimator of the tensor field from the raw DTI measurements, followed by a plug-in estimator for the leading eigenvectors of the tensors, and a plug-in estimator for integral curves through the resulting vector field. The integral curve estimator is asymptotically normal; the covariance of the limiting Gaussian process allows us to construct confidence ellipsoids for fixed points along the curve. Complete fiber trajectories are assembled by stopping integral curve tracing at locations with multiple viable leading eigenvector directions and tracing a new curve along each direction. Unlike probabilistic tractography approaches to this problem, we provide a rigorous, theoretically sound model of measurement uncertainty as it propagates from the raw MRI data to the tensor field, to the vector field, and to the integral curves. In addition, trajectory uncertainty is estimated in closed form, whereas probabilistic tractography relies on sampling the space of tensors, vectors, or curves. Using theory, simulation, and real DTI scans, we show that our estimator provides more realistic trajectory uncertainty estimates than a simpler prior approach to closed-form trajectory uncertainty estimation due to Koltchinskii et al. (Ann Stat 35:1576–1607, 2007) and a popular probabilistic tractography method due to Behrens et al. (Magn Reson Med 50:1077–1088, 2003).
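
To make the three-step procedure concrete, here is a minimal numerical sketch under simplifying assumptions: a synthetic tensor field sampled on a regular grid, a Gaussian kernel with Nadaraya–Watson normalization in place of the paper's exact kernel estimator, and Euler integration for the tracing step. The names kernel_tensor_field, leading_eigenvector, and trace_curve are illustrative, not from the paper.

```python
import numpy as np

def kernel_tensor_field(x, points, tensors, h):
    """Kernel smoother for a 3x3 tensor field: a weighted average of noisy
    tensor observations around the location x (Nadaraya-Watson style)."""
    w = np.exp(-0.5 * np.sum(((points - x) / h) ** 2, axis=1))
    return np.einsum("j,jkl->kl", w, tensors) / w.sum()

def leading_eigenvector(M, previous=None):
    """Unit eigenvector of the largest eigenvalue, sign-aligned with the
    previous tracing direction so the curve does not reverse."""
    _, eigvecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    v = eigvecs[:, -1]
    if previous is not None and np.dot(v, previous) < 0:
        v = -v
    return v

def trace_curve(x0, points, tensors, h, init_dir, step=0.5, n_steps=30):
    """Euler integration of the plug-in vector field of leading eigenvectors."""
    x, v = np.asarray(x0, float), np.asarray(init_dir, float)
    curve = [x.copy()]
    for _ in range(n_steps):
        M = kernel_tensor_field(x, points, tensors, h)
        v = leading_eigenvector(M, previous=v)
        x = x + step * v
        curve.append(x.copy())
    return np.array(curve)

# Toy data: noisy symmetric tensors on a grid, anisotropic along the first axis.
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(*[np.arange(0.0, 20.0)] * 3, indexing="ij"), -1).reshape(-1, 3)
noise = 0.1 * rng.standard_normal((len(grid), 3, 3))
noisy = np.diag([3.0, 1.0, 1.0]) + 0.5 * (noise + noise.transpose(0, 2, 1))
curve = trace_curve([2.0, 10.0, 10.0], grid, noisy, h=1.5, init_dir=[1.0, 0.0, 0.0])
print(curve[-1])  # should have drifted along the first coordinate axis
```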


Notes

  1. We note that while, in theory, higher-order solvers such as the Runge–Kutta method could provide more accurate integral curves, we recently showed that the Runge–Kutta method offers no tangible advantage over Euler’s method when applied to the simplified integral curve model of Koltchinskii et al. (2007) (see Sakhanenko 2012).
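
As a small illustration of the solver comparison in this note, the sketch below traces an integral curve of a noiseless toy vector field with a forward Euler step and a classical fourth-order Runge–Kutta step. The field v and the step sizes are arbitrary illustrative choices; on a clean field RK4 is more accurate per step, but that gap is dominated by the statistical error of an estimated field, which is the reason Euler's method suffices in this setting (Sakhanenko 2012).

```python
import numpy as np

def v(x):
    """A smooth toy unit vector field: rotation about the origin in the plane."""
    d = np.array([-x[1], x[0]])
    return d / np.linalg.norm(d)

def euler(x0, step, n):
    x = np.asarray(x0, float)
    out = [x.copy()]
    for _ in range(n):
        x = x + step * v(x)
        out.append(x.copy())
    return np.array(out)

def rk4(x0, step, n):
    x = np.asarray(x0, float)
    out = [x.copy()]
    for _ in range(n):
        k1 = v(x)
        k2 = v(x + 0.5 * step * k1)
        k3 = v(x + 0.5 * step * k2)
        k4 = v(x + step * k3)
        x = x + step * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        out.append(x.copy())
    return np.array(out)

# Both curves start on the unit circle; the exact integral curve stays on it.
for name, path in [("Euler", euler([1.0, 0.0], 0.05, 200)),
                   ("RK4", rk4([1.0, 0.0], 0.05, 200))]:
    drift = np.max(np.abs(np.linalg.norm(path, axis=1) - 1.0))
    print(name, "max radial drift:", drift)
```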

References

  • Assemlal H-E, Tschumperle D, Brun L, Siddiqi K (2011) Recent advances in diffusion MRI modeling: angular and radial reconstruction. Med Image Anal 15:369–396

  • Bammer R, Holdsworth S, Veldhuis W, Skare S (2009) New methods in diffusion-weighted and diffusion tensor imaging. Magn Reson Imaging Clin N Am 17:175–204

  • Basser P, Pajevic S (2000) Statistical artifacts in diffusion tensor MRI (DTI) caused by background noise. Magn Reson Med 44:41–50

  • Basser PJ, Pajevic S, Pierpaoli C, Duda J, Aldroubi A (2000) In vivo fiber tractography using DTI data. Magn Reson Med 44:625–632

  • Basser P, Pierpaoli C (1998) A simplified method to measure the diffusion tensor from seven MR images. Magn Reson Med 39:928–934

  • Beaulieu C (2002) The basis of anisotropic water diffusion in the nervous system—a technical review. NMR Biomed 15:435–455

  • Behrens T, Woolrich M, Jenkinson M, Johansen-Berg H, Nunes R, Clare S, Matthews P, Brady J, Smith S (2003) Characterization and propagation of uncertainty in diffusion-weighted MR imaging. Magn Reson Med 50:1077–1088

  • Billingsley P (1999) Convergence of probability measures, 2nd edn. Wiley, New York

  • Chanraud S, Zahr N, Sullivan E, Pfefferbaum A (2010) MR diffusion tensor imaging: a window into white matter integrity of the working brain. Neuropsychol Rev 20:209–225

  • Friman O, Farneback G, Westin C-F (2006) A Bayesian approach for stochastic white matter tractography. IEEE Trans Med Imaging 25:965–978

  • Gelman A, Carlin J, Stern H, Rubin D (2004) Bayesian data analysis. Chapman & Hall/CRC Texts in Statistical Science. Chapman & Hall/CRC, New York

  • Gudbjartsson H, Patz S (1995) The Rician distribution of noisy MRI data. Magn Reson Med 34:910–914

  • Hahn K, Prigarin S, Heim S, Hasan K (2006) Random noise in diffusion tensor imaging, its destructive impact and some corrections. In: Weickert J, Hagen H (eds) Visualization and processing of tensor fields. Springer, Berlin, pp 107–117

  • Hahn K, Prigarin S, Rodenacker K, Hasan K (2009) Denoising for diffusion tensor imaging with low signal to noise ratios: method and Monte Carlo validation. Int J Biomath Biostat 1:63–81

  • Hille E (1969) Lectures on ordinary differential equations. Addison-Wesley, Reading

  • Jones D (2003) Determining and visualizing uncertainty in estimates of fiber orientation from diffusion tensor MRI. Magn Reson Med 49:7–12

  • Kato T (1980) Perturbation theory for linear operators. Springer, New York

  • Koltchinskii V, Sakhanenko L, Cai S (2007) Integral curves of noisy vector fields and statistical problems in diffusion tensor imaging: nonparametric kernel estimation and hypotheses testing. Ann Stat 35:1576–1607

  • Koltchinskii V, Sakhanenko L (2009) Asymptotics of statistical estimators of integral curves. In: Houdre C, Koltchinskii V, Mason D, Peligrad M (eds) High dimensional probability V: The Luminy Volume. IMS Collections, pp 326–337

  • Lazar M, Alexander A (2005) Bootstrap white matter tractography (BOOT-TRAC). Neuroimage 24:524–532

  • Magnus J (1985) On differentiating eigenvalues and eigenvectors. Econom Theory 1:179–191

  • Mammen E (1992) When does Bootstrap work? Lecture Notes in Statistics. Springer, New York

  • Mukherjee P, Berman J, Chung S, Hess C, Henry R (2008) Diffusion tensor MR imaging and fiber tractography: theoretic underpinnings. Am J Neuroradiol 29:632–641

  • Mukherjee P, Chung S, Berman J, Hess C, Henry R (2008) Diffusion tensor MR imaging and fiber tractography: technical considerations. Am J Neuroradiol 29:843–852

  • Parker GJM, Alexander D (2003) Probabilistic Monte Carlo based mapping of cerebral connections utilizing whole-brain crossing fiber information. In: Proceedings of IPMI’2003, pp 684–695

  • Sakhanenko L (2010) Lower bounds for accuracy of estimation in diffusion tensor imaging. Theory Probab Appl 54:168–177

  • Sakhanenko L (2011) Global rate optimality in a model for diffusion tensor imaging. Theory Probab Appl 55:77–90

  • Sakhanenko L (2012) Numerical issues in estimation of integral curves from noisy diffusion tensor data. Stat Probab Lett 82:1136–1144

  • Yuan Y, Zhu HT, Ibrahim J, Lin WL, Peterson BG (2008) A note on bootstrapping uncertainty of diffusion tensor parameters. IEEE Trans Med Imaging 27:1506–1514

  • Zhu H, Zhang H, Ibrahim J, Peterson B (2007) Statistical analysis of diffusion tensors in diffusion-weighted magnetic resonance image data. J Am Stat Assoc 102:1081–1110

  • Zhu H, Li Y, Ibrahim I, Shi X, An H, Chen Y, Gao W, Lin W, Rowe D, Peterson B (2009) Regression models for identifying noise sources in magnetic resonance images. J Am Stat Assoc 104:623–637

Author information

Corresponding author

Correspondence to Lyudmila Sakhanenko.

Additional information

Owen Carmichael: Research was partially supported by NIH Grants K01 AG030514, P30 AG010129, R01 AG010220, R01 AG031563, and R01 AG021028.

Lyudmila Sakhanenko: Research was partially supported by NSF Grants DMS-1208238, 0806176.

Appendix: Proofs

Lemmas 1 and 2 are standard results, so their proofs are omitted.

1.1 Proof of Lemma 3

For any \(\varepsilon >0\) consider an event \(A_n(\varepsilon ):=\{\sup _{x\in \mathbb {R}^d}|\hat{M}_n(x)-M(x)|\le \varepsilon \}\). By Lemma 1, \(P(A_n(\varepsilon ))\rightarrow 1\) as \(n\rightarrow \infty \). We have for any \(t\in [0, T]\)

$$\begin{aligned} |\hat{X}_n(t)-x(t)|= & {} \bigg |\int _0^t [v(\hat{M}_n(\hat{X}_n(s)))-v(M(x(s)))] ds\bigg | \nonumber \\\le & {} \int _0^t |v(M(\hat{X}_n(s)))-v(M(x(s)))| ds + t \sup _{x\in \mathbb {R}^d}|v(\hat{M}_n(x))-v(M(x))|.\nonumber \\ \end{aligned}$$
(8)

For a fixed tensor \(M_0\) we use Theorem 1 in Magnus (1985) to observe that \(\lambda (M)\) and v(M) are infinitely continuously differentiable with respect to M in a neighborhood of \(M_0\) provided that \(\lambda (M_0)\) is a simple eigenvalue of \(M_0\), and

$$\begin{aligned} \frac{\partial \lambda (M)}{\partial M_{kl}}\bigg |_{M_0}= & {} (2-\delta _{kl})v_k(M_0)v_l(M_0),\nonumber \\ \frac{\partial v_p(M)}{\partial M_{kl}}\bigg |_{M_0}= & {} (1-\delta _{kl}/2)[(\lambda (M_0)I_d-M_0)^{+}_{pk}v_l(M_0) \nonumber \\&+\,(\lambda (M_0)I_d-M_0)^{+}_{pl}v_k(M_0)]\quad \mathrm{for}\quad 1\le k, l, p\le d, \end{aligned}$$
(9)

where \(A^{+}\) stands for the Moore–Penrose inverse of A. Note that M(x) is a continuous function of x on the compact set \(\bar{G}\) with support in G, and its maximal eigenvalue \(\lambda (M(x))\) is simple there. Then on the event \(A_n(\varepsilon )\) we can apply Theorem 1 in Magnus (1985) and obtain

$$\begin{aligned} \sup _{x\in \mathbb {R}^d}|v(\hat{M}_n(x))-v(M(x))|\le L_v\sup _{x\in \mathbb {R}^d}|\hat{M}_n(x)-M(x)| \end{aligned}$$

for some finite constant \(L_v\). Moreover, since v is locally infinitely differentiable, \(M(x), x\in G,\) is smooth, and \(\bar{G}\) is compact, we have the Lipschitz condition \( |v(M(x_1))-v(M(x_2))|\le L_{vM}|x_1-x_2| \) for all \(x_{1}, x_2\in \bar{G}\) and some finite constant \(L_{vM}\). Then we can bound the expression in (8) on the event \(A_n(\varepsilon )\) as follows

$$\begin{aligned}&|\hat{X}_n(t)-x(t)|\le TL_v\sup _{x\in {\mathbb {R}}^d}|\hat{M}_n(x)-M(x)|+L_{vM}\int _0^t|\hat{X}_n(s)-x(s)|ds \end{aligned}$$

for all \(t\in [0, T]\). By Gronwall–Bellman inequality; see, e.g., Hille (1969), we obtain for all \(t\in [0, T]\) on the event \(A_n(\varepsilon )\)

$$\begin{aligned} |\hat{X}_n(t)-x(t)|\le TL_v\sup _{x\in {\mathbb {R}}^d}|\hat{M}_n(x)-M(x)| e^{L_{vM}t}=o_P(1)\ \mathrm{as}\ n\rightarrow \infty . \end{aligned}$$

\(\square \)
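
The eigenvector derivative formula in (9), which drives the Lipschitz bounds above, can be checked numerically. Below is a minimal finite-difference sketch in Python, assuming a random symmetric matrix with simple eigenvalues; the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
M0 = A + A.T + 6.0 * np.eye(3)        # symmetric with (almost surely) simple eigenvalues

def leading(M, ref=None):
    """Leading eigenvalue and eigenvector, sign-aligned with a reference vector."""
    w, V = np.linalg.eigh(M)
    v = V[:, -1]
    if ref is not None and v @ ref < 0:
        v = -v
    return w[-1], v

lam0, v0 = leading(M0)
P = np.linalg.pinv(lam0 * np.eye(3) - M0)   # the Moore-Penrose inverse appearing in (9)

eps = 1e-6
for k in range(3):
    for l in range(k, 3):
        # Symmetric perturbation of the (k, l) entry (and (l, k) when off-diagonal).
        E = np.zeros((3, 3))
        E[k, l] += 1.0
        if k != l:
            E[l, k] += 1.0
        _, v_plus = leading(M0 + eps * E, ref=v0)
        _, v_minus = leading(M0 - eps * E, ref=v0)
        fd = (v_plus - v_minus) / (2 * eps)                                   # central difference
        analytic = (1 - (k == l) / 2) * (P[:, k] * v0[l] + P[:, l] * v0[k])   # formula (9)
        print(k, l, "max abs difference:", np.max(np.abs(fd - analytic)))
```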

1.2 Proof of Lemma 4

Since M(x) is a symmetric matrix for all \(x\in G\) by condition (A1) and \(\Gamma _j, j=1,\dots ,n,\) are symmetric matrices by definition, \(\hat{M}_n(x)\) is by construction a symmetric matrix for all \(x\in G\).

As in the proof of Lemma 3, define an event \(A_n(\varepsilon ):=\{\sup _{x\in \mathbb {R}^d}|\hat{M}_n(x)-M(x)|\le \varepsilon \}\) for every \(\varepsilon >0\). By Lemma 1, \(P(A_n(\varepsilon ))\rightarrow 1\) as \(n\rightarrow \infty \).

Let \(\mu (M_0)\) and \(u(M_0)\) denote the minimal eigenvalue and the corresponding eigenvector of a matrix \(M_0\). By Theorem 1 in Magnus (1985) the functions \(\mu \) and u are infinitely differentiable in a small neighborhood of \(M_0\), provided that \(\mu (M_0)\) is a simple eigenvalue. Furthermore, \( \frac{\partial \mu (M)}{\partial M_{kl}}\bigg |_{M_0}=(2-\delta _{kl})u_k(M_0)u_l(M_0),\) \(1\le k, l\le d. \) Since \(\mu (M(x))>0\) for all \(x\in G\) and \(\bar{G}\) is a compact set, on the event \(A_n(\varepsilon )\) the function \(\mu (\hat{M}_n(x))\) is positive for all \(x\in G\), and \(\mu (\hat{M}_n(x))\) remains the simple minimal eigenvalue of \(\hat{M}_n\) by Theorem 1 in Magnus (1985) and condition (A1), which guarantees that all eigenvalues of \(M(x), x\in G,\) are simple. \(\square \)
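
A quick Monte Carlo sketch of the content of Lemma 4: a kernel average of noisy symmetric tensors is symmetric by construction, and the probability that its smallest eigenvalue stays positive tends to one as the sample size grows. The design, kernel, and noise level below are arbitrary illustrative choices, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(2)
M_true = np.diag([3.0, 1.5, 0.5])                        # minimal eigenvalue 0.5 > 0

def min_eig_of_estimate(n, h, sigma):
    """Kernel-average n noisy symmetric tensors observed at uniform design points
    and return the smallest eigenvalue of the estimate at the origin."""
    X = rng.uniform(-1.0, 1.0, size=(n, 3))
    noise = sigma * rng.standard_normal((n, 3, 3))
    obs = M_true + 0.5 * (noise + noise.transpose(0, 2, 1))   # symmetric noise Gamma_j
    w = np.exp(-0.5 * np.sum((X / h) ** 2, axis=1))           # Gaussian kernel weights
    M_hat = np.einsum("j,jkl->kl", w, obs) / w.sum()
    assert np.allclose(M_hat, M_hat.T)                        # symmetry is preserved
    return np.linalg.eigvalsh(M_hat)[0]

for n in [100, 1000, 10000]:
    positive = np.mean([min_eig_of_estimate(n, h=0.5, sigma=1.0) > 0 for _ in range(200)])
    print(n, "fraction of runs with a positive minimal eigenvalue:", positive)
```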

1.3 Proof of Theorem 1

  • Step 1: Approximation For any \(\varepsilon >0\) consider an event \(B_n(\varepsilon ):=\{\sup _{t\in [0, T]}|\hat{X}_n(t)-x(t)|\le \varepsilon \}\). By Lemma 3, \(P(B_n(\varepsilon ))\rightarrow 1\) as \(n\rightarrow \infty \). Following the notation and ideas of the proof of Theorem 1 in Koltchinskii et al. (2007) we have

    $$\begin{aligned} \hat{X}_n(t)-x(t)= & {} \int _0^t [v(\hat{M}_n(\hat{X}_n(s)))-v(M(x(s)))]ds \\= & {} \int _0^t \nabla v(M(x(s)))\nabla M(x(s))(\hat{X}_n(s)-x(s))ds \\&+ \int _0^t \nabla v(M(x(s)))(\hat{M}_n(x(s))-M(x(s)))ds + R_n(t),\quad t\in [0, T], \end{aligned}$$

    where the remainder is defined as

    $$\begin{aligned} R_n(t)= & {} \int _0^t [v(M(\hat{X}_n(s)))-v(M(x(s)))-\nabla v(M(x(s)))\nabla M(x(s))(\hat{X}_n(s)-x(s))]ds \\&+\int _0^t [v(\hat{M}_n(\hat{X}_n(s)))-v(M(\hat{X}_n(s)))-\nabla v(M(x(s)))(\hat{M}_n(x(s))-M(x(s)))]ds \end{aligned}$$

    for all \(t\in [0, T]\). Note that due to local smoothness of v on the event \(B_n(\varepsilon )\) we have

    $$\begin{aligned}&v(M(\hat{X}_n(s)))-v(M(x(s)))-\nabla v(M(x(s)))\nabla M(x(s))(\hat{X}_n(s)-x(s)) \\&\quad =\int _0^1\int _0^1 \delta \bigg \langle \nabla (\nabla v(M(x_{n,\tau ,\delta }(s)))\nabla M(x_{n,\tau ,\delta }(s)))(\hat{X}_n(s)-x(s)), \hat{X}_n(s)-x(s)\bigg \rangle d\delta d\tau , \end{aligned}$$

    where \(x_{n,\tau ,\delta }(s)=\tau \delta \hat{X}_n(s)+(1-\tau \delta )x(s)\). Quite similarly, on the event \(A_n(\varepsilon )\cap B_n(\varepsilon )\) with \(A_n(\varepsilon )\) defined in the proof of Lemma 3, a straightforward derivation yields a similar representation for the second term in \(R_n(t)\). Thus, we can bound the remainder \(R_n\) on the event \(A_n(\varepsilon )\cap B_n(\varepsilon )\) as

    $$\begin{aligned} |R_n(t)|\le & {} C_1\int _0^t|\hat{X}_n(s)-x(s)|^2ds+C_2\sup _{x\in \mathbb {R}^d}|\hat{M}_n(x)-M(x)|\int _0^t|\hat{X}_n(s)-x(s)|ds \\&+\,C_3\sup _{x\in \mathbb {R}^d}|\hat{M}_n(x)-M(x)|^2+C_4\sup _{x\in \mathbb {R}^d}|\nabla \hat{M}_n(x)\!-\!\nabla M(x)|\int _0^t |\hat{X}_n(s)\!-\!x(s)|ds \end{aligned}$$

    for all \(t\in [0, T]\), where \(C_{1}, C_2, C_3, C_4\) are some finite positive constants. In other words, using Lemmas 1–3

    $$\begin{aligned} \sup _{t\in [0, T]}|R_n(t)|=o_P\bigg (\int _0^T|\hat{X}_n(s)-x(s)|ds+\sup _{x\in \mathbb {R}^d}|\hat{M}_n(x)-M(x)|^2\bigg ). \end{aligned}$$

    Define a d-dimensional random process \(Z_n\) by

    $$\begin{aligned} Z_n(t):= & {} \int _0^t \nabla v(M(x(s)))\nabla M(x(s))Z_n(s) ds \\&+ \int _0^t \nabla v(M(x(s)))(\hat{M}_n(x(s))-M(x(s)))ds, \quad t\in [0, T]. \end{aligned}$$

    This process can also be written as

    $$\begin{aligned} Z_n(t)=\int _0^t U(t,s)\nabla v(M(x(s)))(\hat{M}_n(x(s))-M(x(s)))ds,\quad t\in [0, T], \end{aligned}$$

    where \(U(t,s)\) is the Green’s function defined just before Theorem 1. The process \(Z_n(t), t\in [0, T],\) approximates the process \((\hat{X}_n-x)(t), t\in [0, T]\). Indeed, for any \(t\in [0, T]\)

    $$\begin{aligned}&|\hat{X}_n(t)-x(t)-Z_n(t)| \\&\quad =\bigg |\int _0^t \nabla v(M(x(s)))\nabla M(x(s))[\hat{X}_n(s)-x(s)-Z_n(s)] ds + R_n(t)\bigg | \\&\quad \le \int _0^t |\nabla v(M(x(s)))\nabla M(x(s))||\hat{X}_n(s)-x(s)-Z_n(s)| ds + \sup _{t\in [0, T]}|R_n(t)|. \end{aligned}$$

    Then by the Gronwall–Bellman inequality we obtain on the event \(A_n(\varepsilon )\cap B_n(\varepsilon )\)

    $$\begin{aligned}&\sup _{t\in [0, T]}|\hat{X}_n(t)-x(t)-Z_n(t)| \\&\quad \le \sup _{t\in [0, T]}|R_n(t)|\sup _{t\in [0, T]}\exp \bigg \{\int _0^t|\nabla v(M(x(s)))\nabla M(x(s))|ds\bigg \} \\&\quad \le C\sup _{t\in [0, T]}|R_n(t)|, \end{aligned}$$

    which yields

    $$\begin{aligned} \sup _{t\in [0, T]}|\hat{X}_n(t)-x(t)-Z_n(t)|=o_P\bigg (\int _0^T|Z_n(s)|ds+\sup _{x\in \mathbb {R}^d}|\hat{M}_n(x)-M(x)|^2\bigg ). \end{aligned}$$

    It will follow that \( \int _0^T|Z_n(s)|ds=O_P((nh_n^{d-1})^{-1/2}), \) which in combination with Lemma 1 leads to \( \sup _{t\in [0, T]}|\hat{X}_n(t)-x(t)-Z_n(t)|=o_P((nh_n^{d-1})^{-1/2}). \)

  • Step 2: Mean and covariance of \(Z_n\) Similar to Koltchinskii et al. (2007) define a \(d\times d^2\)-tensor-valued function \( f_t(s)=I_{[0, t]}(s)U(t,s)\nabla v(M(x(s))), \) for \( s\in \mathbb {R}, t\in [0, T].\) Note that it belongs to \(\mathcal L\), the set of all \(\mathbb {R}^{d^3}\)-valued bounded functions on \(\mathbb {R}\) with support in [0, T], that are a.e. continuous in \(\mathbb {R}\). Furthermore, note that \(\mathcal L\) is a linear space. Then we can rewrite for \( t\in [0, T]\)

    $$\begin{aligned} Z_n(t)= & {} \int f_t(s)(\hat{M}_n(x(s))-M(x(s)))ds \\= & {} \frac{1}{nh_n^d}\sum _{j=1}^n \int f_t(s)K((x(s)-X_j)/h_n)ds (M(X_j)+\Gamma _{j})-\int f_t(s)M(x(s))ds. \end{aligned}$$

    We have for all \(t\in [0, T]\) with o-term being uniform in t

    $$\begin{aligned} {\mathbb {E}}Z_n(t)= & {} h_n^{-d}\int f_t(s)\int K((x(s)-y)/h_n)M(y)dy ds -\int f_t(s)M(x(s))ds \\= & {} -h_n\int f_t(s)\nabla M(x(s))\int uK(u)du ds \\&+\,0.5h_n^2\int f_t(s)\int K(u)\langle \nabla ^2 M(x(s))u, u\rangle du ds +o(h_n^2). \end{aligned}$$

    Next consider the components of the covariance matrix. It is more convenient to treat \(d\times d\) matrices as \(d_0\)-vectors in the lengthy derivations below. To this end, for any \(t_{1}, t_2\in [0, T]\) we have

    $$\begin{aligned}&nh_n^{2d}\mathrm{Cov}(Z_{n}(t_1), Z_{n}(t_2))\\&\quad =\mathrm{Cov}\bigg (\int K((x(s)-X_1)/h_n)U(t_1,s)\tilde{\nabla }v(M(x(s)))H{\varvec{\Gamma }}_1ds, \\&\qquad \int K((x(u)-X_1)/h_n)U(t_2,s)\tilde{\nabla }v(M(x(u)))H{\varvec{\Gamma }}_1du\bigg ) \\&\qquad +\,\mathrm{Cov}\bigg (\int K((x(s)-X_1)/h_n)U(t_1,s)\tilde{\nabla }v(M(x(s)))H\mathbf{M}(X_1)ds \\&\qquad \quad \int K((x(u)-X_1)/h_n)U(t_2,s)\tilde{\nabla }v(M(x(u)))H\mathbf{M}(X_1)du\bigg ) \\&\quad =: nh_n^{2d}(I+II). \end{aligned}$$

    Then using two substitutions \(z=(x(s)-y)/h_n\) and \(u=s+\tau h_n\), also utilizing Lebesgue dominated convergence theorem [since \(\int |\Sigma _{\Gamma }|^2(x)dx<\infty \), \(\nabla M\) and \(\nabla v\) are uniformly bounded everywhere, \(\int K^4(z)dz<\infty \), and there exists \(\Delta >0\) such that \(|\int _0^1 v(M(x(s+\delta (t-s))))d\delta |>\Delta \) for all \(s, t\in (0, T), s\ne t\)], we can rewrite

    $$\begin{aligned} I= & {} \frac{1}{nh_n^{2d}}{\mathbb {E}}{\mathbb {E}}\bigg (\int \int K\bigg (\frac{x(s)-X_1}{h_n}\bigg )K\bigg (\frac{x(u) -X_1}{h_n}\bigg )U(t_1,s)\tilde{\nabla }v(M(x(s))) \\&\qquad \qquad \quad \times H{\varvec{\Gamma }}_1{\varvec{\Gamma }}_1^*H\tilde{\nabla }v(M(x(u)))^* U^*(t_2,u) ds du \bigg | X_1\bigg ) = \frac{1}{nh_n^{d-1}}\int \int \int K(z) \\&\times K\bigg (z+\tau \int _0^1 v(M(x(s+\tau \delta h_n)))d\delta \bigg ) U(t_1,s)\tilde{\nabla }v(M(x(s)))H\Sigma _{\Gamma }(x(s)-zh_n) \\&\times H\tilde{\nabla }v(M(x(s+\tau h_n)))^* U^*(t_2,s+\tau h_n) ds d\tau dz =\frac{1+o(1)}{nh_n^{d-1}} \\&\quad \int \psi (v(M(x(s)))) U(t_1,s)\tilde{\nabla }v(M(x(s)))H\Sigma _{\Gamma }(x(s))H\tilde{\nabla }v(M(x(s)))^* U^*(t_2,s)ds \end{aligned}$$

    with o-term being uniform in t. Quite similarly,

    $$\begin{aligned} II= & {} \frac{1}{nh_n^{d}}\int \int \int K(z)K(z+(x(u)-x(s))/h_n)U(t_1,s)\tilde{\nabla }v(M(x(s))) \\&\times \,H\mathbf{M}(x(s)-zh_n) \mathbf{M}^*(x(s)-zh_n)H\tilde{\nabla }v(M(x(u)))^*U^*(t_2,u) ds du dz \\&-\,\frac{1}{n}\bigg (\int \int K(z) U(t_1,s)\tilde{\nabla }v(M(x(s)))H\mathbf{M}(x(s)-zh_n) ds dz \bigg ) \\&\times \,\bigg ( \int \int K(z) \mathbf{M}^*(x(s)-zh_n)H\tilde{\nabla }v(M(x(u)))^*U^*(t_2,u) du dz\bigg ) \\= & {} \frac{1+o(1)}{nh_n^{d-1}}\int \psi (v(M(x(s)))) U(t_1,s)\tilde{\nabla }v(M(x(s)))H\mathbf{M}(x(s)) \mathbf{M}^*(x(s)) \\&\times \,H\tilde{\nabla }v(M(x(s)))^*U^*(t_2,s) ds \end{aligned}$$

    with o-term being uniform in t, where we again used Lebesgue dominated convergence theorem due to the same reasons as in the case of (I) and due to \(M(\cdot )\) being uniformly bounded everywhere. Thus, for all \(t_{1}, t_2\in [0, T]\) the covariance of \(Z_n\) is

    $$\begin{aligned}&\mathrm{Cov}(Z_n(t_1), Z_n(t_2))=\frac{1+o(1)}{nh_n^{d-1}}\int \psi (v(M(x(s)))) U(t_1,s)\tilde{\nabla }v(M(x(s))) \\&\quad \times H[\Sigma _{\Gamma }(x(s))+\mathbf{M}(x(s)) \mathbf{M}^*(x(s))]H\tilde{\nabla }v(M(x(s)))^*U^*(t_2,s) ds. \end{aligned}$$

  • Step 3: Lyapunov’s condition and asymptotic equicontinuity Next we show that \(\sqrt{nh_n^{d-1}}(\hat{X}_n(t)-x(t))\) is asymptotically normal for all \(t\in [0, T]\) by checking Lyapunov’s condition of the CLT. Let

    $$\begin{aligned} \eta _j=\int f_t(s)K((x(s)-X_j)/h_n)ds (M(X_j)+\Gamma _j),\quad j\ge 1,\quad t\in [0, T], \end{aligned}$$

    so that \(\sqrt{nh_n^{d-1}}(Z_n(t)-{\mathbb {E}}Z_n(t))=\frac{1}{\sqrt{nh_n^{d+1}}}\sum _{j=1}^n (\eta _j-{\mathbb {E}}\eta _j). \) Then

    $$\begin{aligned} {\mathbb {E}}|\eta _j|^4\le & {} C h_n^{d+3} \times \int \int \int \Lambda (\tau _1,\tau _2,\tau _3)\bigg (\int |f_t(s)|^4 ds\bigg )^{1/4} \bigg (\int |f_t(s+\tau _1 h_n)|^4 ds\bigg )^{1/4} \nonumber \\&\times \,\bigg (\int |f_t(s+\tau _2 h_n)|^4 ds\bigg )^{1/4}\bigg (\int |f_t(s+\tau _3 h_n)|^4 ds\bigg )^{1/4} d\tau _1 d\tau _2 d\tau _3 \nonumber \\\le & {} Ch_n^{d+3}\int |f_t(s)|^4ds\int \int \int \Lambda _4(\tau _1,\tau _2,\tau _3)d\tau _1 d\tau _2 d\tau _3, \end{aligned}$$
    (10)

    where the function \(\Lambda _4: \mathbb {R}^3\rightarrow \mathbb {R}\) is defined as

    $$\begin{aligned}&\Lambda _4(\tau _1,\tau _2,\tau _3) :=\sup _{s_{1}, s_2, s_3, s_4\in [0, T]}\int K(z)K(z+\tau _1(x(s_1)-x(s_4))/(s_1-s_4)) \\&\quad \times \, K(z+\tau _1(x(s_2)-x(s_4))/(s_2-s_4))K(z+\tau _1(x(s_3) -x(s_4))/(s_3-s_4))dz. \end{aligned}$$

    It is integrable, since it is bounded and has a bounded support due to conditions (A3) and (A7). Then \( {\mathbb {E}}(\sqrt{nh_n^{d-1}}(Z_n(t)-{\mathbb {E}}Z_n(t)))^4\le \frac{Cnh_n^{d+3}}{n^2h_n^{2(d+1)}}=\frac{C}{nh_n^{d-1}}\rightarrow 0\) as \(n\rightarrow \infty . \) By Lyapunov’s condition of the CLT we have the f.d.d. convergence of the stochastic processes \(\sqrt{nh_n^{d-1}}Z_n(t), t\in [0, T],\) and \(\sqrt{nh_n^{d-1}}(\hat{X}_n(t)-x(t)), t\in [0, T],\) to the Gaussian process \(\mathcal{G}(t), t\in [0, T]\). Finally, we need to check the asymptotic equicontinuity condition for the sequence of processes \(\sqrt{nh_n^{d-1}}(Z_n(t)-{\mathbb {E}}Z_n(t)),\ t\in [0, T]\). As in Koltchinskii et al. (2007) we use the formula

    $$\begin{aligned} {\mathbb {E}}|\sqrt{nh_n^{d-1}}(Z_n(t)-{\mathbb {E}}Z_n(t))|^4 =\frac{1}{n^4h_n^{4d}}\bigg [0.5n(n-1)\bigg ({\mathbb {E}}|\eta -{\mathbb {E}}\eta |^2\bigg )^2+n{\mathbb {E}}|\eta -{\mathbb {E}}\eta |^4\bigg ]. \end{aligned}$$

    Similar to (10) we get \( {\mathbb {E}}|\eta _j|^2\le Ch_n^{d+1}\int |f_t(s)|^2ds\int \Lambda _2(\tau )d\tau , \) where

    $$\begin{aligned} \Lambda _2(\tau ):=\sup _{s_{1}, s_2\in [0, T]}\int K(z)K(z+\tau (x(s_1)-x(s_2))/(s_1-s_2))dz,\quad \tau \in \mathbb {R}. \end{aligned}$$

    This function is integrable since it is bounded and has a bounded support due to conditions (A3) and (A7). Then with a finite constant C

    $$\begin{aligned} {\mathbb {E}}|\sqrt{nh_n^{d-1}}(Z_n(t)-{\mathbb {E}}Z_n(t))|^4\le C\bigg [\bigg (\int |f_t(s)|^2 ds\bigg )^2+\frac{1}{nh_n^{d-1}}\int |f_t(s)|^4 ds\bigg ], \end{aligned}$$

    which we will apply to \(f_{t_1}-f_{t_2}\) instead of \(f_t\). Due to continuity of U, \(\nabla v\) and M we have \( \int |f_{t_1}(s)-f_{t_2}(s)|^2 ds\le C|t_1-t_2|\) and \(\int |f_{t_1}(s)-f_{t_2}(s)|^4 ds\le C|t_1-t_2| \) for all \(t_{1,2}\in [0, T]\), which yields

    $$\begin{aligned} {\mathbb {E}}|\sqrt{nh_n^{d-1}}[(Z_n(t_1)-{\mathbb {E}}Z_n(t_1))-(Z_n(t_2)-{\mathbb {E}}Z_n(t_2))]|^4\le C\bigg [|t_1-t_2|^2+\frac{|t_1-t_2|}{nh_n^{d-1}}\bigg ]. \end{aligned}$$

    The asymptotic equicontinuity follows by arguments in Billingsley (1999) as \(nh_n^{d-1}\rightarrow \infty \). For more details see Koltchinskii et al. (2007). \(\square \)
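
Once a covariance matrix for \(\hat{X}_n(t)\) at a fixed t has been estimated, for instance by plugging estimates into the covariance formula derived in Step 2 above, a confidence ellipsoid follows from its eigendecomposition and a chi-square quantile. The sketch below is a generic illustration of that last step; x_hat and Sigma_hat are placeholder values, not quantities computed from data.

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipsoid(x_hat, Sigma_hat, level=0.95):
    """Semi-axes of {x : (x - x_hat)' Sigma_hat^{-1} (x - x_hat) <= q}, where q is
    the chi-square quantile with d degrees of freedom."""
    q = chi2.ppf(level, df=len(x_hat))
    eigvals, eigvecs = np.linalg.eigh(Sigma_hat)
    return np.sqrt(q * eigvals), eigvecs      # semi-axis lengths and directions

# Placeholder estimates for a single point on the curve.
x_hat = np.array([10.0, 12.0, 8.0])
Sigma_hat = np.array([[0.40, 0.05, 0.00],
                      [0.05, 0.25, 0.02],
                      [0.00, 0.02, 0.10]])

axes, dirs = confidence_ellipsoid(x_hat, Sigma_hat)
print("semi-axis lengths:", axes)

# Empirical coverage check: sample from the limiting Gaussian and count hits.
rng = np.random.default_rng(3)
samples = rng.multivariate_normal(x_hat, Sigma_hat, size=20000)
diff = samples - x_hat
mahalanobis_sq = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma_hat), diff)
print("empirical coverage:", np.mean(mahalanobis_sq <= chi2.ppf(0.95, df=3)))  # close to 0.95
```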

1.4 Proof of Proposition

It follows from a more general result on the asymptotic joint distribution of the eigenvalues of \(\hat{M}_n(x)\) at a fixed point \(x\in G\). To formulate it we write \(\lambda _1(A)>\lambda _2(A)>\cdots >\lambda _d(A)\) for the ordered eigenvalues of a symmetric positive definite matrix A and let \(v^{(i)}(A), i=1,\dots ,d,\) be the corresponding eigenvectors of A. With slight abuse of notation, let \(\lambda (A)\) be the vector of all ordered eigenvalues of A.

Proposition 2

Suppose conditions (A1)–(A8) hold. Then for any fixed \(x\in G\) the difference \(\sqrt{nh_n^d}(\mathbf{\lambda }(\hat{M}_n(x))-\mathbf{\lambda }(M(x)))\) is asymptotically normal with mean 0 and covariance matrix whose ij-th entries are

$$\begin{aligned} \sigma _{\lambda _i\lambda _j}(x)=\int K^2(u)du {{\varvec{\Delta }}}^*_{v_i}(x)H[M(x)M^*(x)+\Sigma _{\Gamma }(x)]H {{\varvec{\Delta }}}_{v_j}(x),\quad 1\le i, \;\, j\le d, \end{aligned}$$

where \({\varvec{\Delta }}_{v_i}\) is defined in Proposition 1.

The proof is based on the standard asymptotic normality result for the kernel regression estimator \(\hat{M}_n(x)\) at a fixed point \(x\in G\), since \({\varvec{\Gamma }}_j, j=1,\dots ,n,\) are centered i.i.d. random variables. We then linearize the eigenvalues with respect to the tensor using the first-order Taylor expansion (9) from the proof of Lemma 3. A straightforward calculation of the asymptotic covariances completes the proof. \(\square \)
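
The linearization step used in this proof can be illustrated numerically: for a kernel estimate at a fixed point, the eigenvalue errors \(\lambda _i(\hat{M}_n(x))-\lambda _i(M(x))\) are compared with their first-order expansions \(v^{(i)*}(\hat{M}_n(x)-M(x))v^{(i)}\). A minimal sketch with arbitrary toy choices of kernel, design, and noise follows.

```python
import numpy as np

rng = np.random.default_rng(4)
M_true = np.diag([3.0, 1.5, 0.5])                   # simple eigenvalues
eigvals_true, eigvecs_true = np.linalg.eigh(M_true)

def kernel_estimate(n, h, sigma=0.5):
    """Kernel estimate of the tensor at the origin from noisy observations."""
    X = rng.uniform(-1.0, 1.0, size=(n, 3))
    noise = sigma * rng.standard_normal((n, 3, 3))
    obs = M_true + 0.5 * (noise + noise.transpose(0, 2, 1))
    w = np.exp(-0.5 * np.sum((X / h) ** 2, axis=1))
    return np.einsum("j,jkl->kl", w, obs) / w.sum()

n, h, reps = 2000, 0.4, 500
exact, linear = [], []
for _ in range(reps):
    M_hat = kernel_estimate(n, h)
    exact.append(np.linalg.eigvalsh(M_hat) - eigvals_true)
    # First-order expansion: d(lambda_i) = v_i' (M_hat - M) v_i for simple eigenvalues.
    linear.append(np.einsum("ki,kl,li->i", eigvecs_true, M_hat - M_true, eigvecs_true))
exact, linear = np.array(exact), np.array(linear)
print("max |exact - linearized|:", np.max(np.abs(exact - linear)))   # small: linearization is accurate
print("Monte Carlo std of eigenvalue errors:", exact.std(axis=0))    # approximately Gaussian spread
```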

Cite this article

Carmichael, O., Sakhanenko, L. Integral curves from noisy diffusion MRI data with closed-form uncertainty estimates. Stat Inference Stoch Process 19, 289–319 (2016). https://doi.org/10.1007/s11203-015-9126-9
