
Diffusion Tensor Imaging with Deterministic Error Bounds


Errors in the data and the forward operator of an inverse problem can be handily modelled using partial order in Banach lattices. We present some existing results of the theory of regularisation in this novel framework, where errors are represented as bounds by means of the appropriate partial order. We apply the theory to diffusion tensor imaging, where correct noise modelling is challenging: it involves the Rician distribution and the non-linear Stejskal–Tanner equation. Linearisation of the latter in the statistical framework would complicate the noise model even further. We avoid this using the error bounds approach, which preserves simple error structure under monotone transformations.


Often in inverse problems, we have only very rough knowledge of noise models, or the exact model is too difficult to realise in a numerical reconstruction method. The data may also contain process artefacts from black-box devices [44]. Partial order in Banach lattices has therefore recently been investigated in [32–34] as a less-assuming error modelling approach for inverse problems. This framework allows the representation of errors in the data as well as in the forward operator of an inverse problem by means of order intervals (i.e. lower and upper bounds with respect to appropriate partial orders). An important advantage of this approach over statistical noise modelling is that deterministic error bounds preserve their simple structure under monotone transformations.

Fig. 1

The noise in the absolute values of complex MRI data should be Rician. Here we have taken a 50-bin histogram of the noise in real data, dividing the pixels into bins of 50 different noise levels. However, only approximately 10 of these noise levels have a non-zero pixel count. As the Rician distribution is continuous, the empty bins of the 50-bin histogram show that the noise cannot be Rician. The measurement setup of the data used here is described in Sect. 5.3. a Slice of a real MRI measurement. b 50-bin histogram of noise estimated from the background. c 50-bin histogram after eddy-current correction with FSL

We apply partial order in Banach lattices to diffusion tensor imaging (DTI). We will in due course explain the diffusion tensor imaging process, as well as the theory of inverse problems in Banach lattices, but start by introducing our model

$$\begin{aligned} \min _u~ R(u) \quad \text {s.t.} \quad u&\geqslant 0,\\ g_j^l&\leqslant A_j u \leqslant g_j^u,\quad (j=1,\ldots ,N). \end{aligned}$$

That is, we want to find a field of symmetric 2-tensors \(u: {\varOmega }\rightarrow {{\mathrm{Sym}}}^2(\mathbb {R}^3)\) on the domain \({\varOmega }\subset \mathbb {R}^3\), minimising the value of the regulariser R on the feasible set. The tensor field u is our unknown image. It is subject to a positivity constraint, as well as partial order constraints imposed through the operators \([A_j u](x) :=-\langle b_j,u(x)b_j\rangle \), and the upper and lower bounds \(g_j^l :=\log ({\hat{s}}_j^l/{\hat{s}}_0^u)\) and \(g_j^u :=\log ({\hat{s}}_j^u/{\hat{s}}_0^l)\). These arise from the linearisation (via the monotone logarithmic transformation) of the Stejskal–Tanner equation

$$\begin{aligned} s_j(x)=s_0(x) \exp (-\langle b_j,u(x)b_j\rangle ), \quad (j=1,\ldots ,N), \end{aligned}$$

central to the diffusion tensor imaging process.
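To illustrate how the monotone logarithmic transformation carries pointwise bounds on the measured signals over to bounds on the linearised data, here is a small numerical sketch; all signal values and the error radius are invented for illustration:

```python
import numpy as np

# Sketch: deterministic bounds survive the monotone log transform used to
# linearise the Stejskal-Tanner equation. All numbers are illustrative.
rng = np.random.default_rng(0)

s0 = 1000.0 * np.ones(5)                        # unweighted signal s_0
s_j = s0 * np.exp(-rng.uniform(0.2, 1.0, 5))    # diffusion-weighted signal s_j

eps = 30.0                            # assumed pointwise error bound on the data
s_j_l, s_j_u = s_j - eps, s_j + eps   # lower/upper bounds on s_j
s0_l, s0_u = s0 - eps, s0 + eps       # lower/upper bounds on s_0

# g = log(s_j / s_0) is increasing in s_j and decreasing in s_0, hence
g_l = np.log(s_j_l / s0_u)            # g_j^l
g_u = np.log(s_j_u / s0_l)            # g_j^u

g_true = np.log(s_j / s0)
assert np.all(g_l <= g_true) and np.all(g_true <= g_u)
```

The point is that the transformed bounds are obtained in closed form, with no distributional computation: monotonicity alone does the work.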

To shed more light on u and the equation (1), let us briefly outline the diffusion tensor imaging process. As a first step towards DTI, diffusion-weighted magnetic resonance imaging (DWI) is performed. This process measures the anisotropic diffusion of water molecules. To capture the diffusion information, the magnetic resonance images have to be measured with diffusion-sensitising gradients in multiple directions. These are the different \(b_j\)’s in (1). Eventually, multiple DWI images \(\{s_j\}\) are related through the Stejskal–Tanner equation (1) to the symmetric positive-definite diffusion tensor field \(u: {\varOmega }\rightarrow {{\mathrm{Sym}}}^2(\mathbb {R}^3)\) [4, 29]. At each point \(x \in {\varOmega }\), the tensor u(x) is the covariance matrix of a normal distribution for the probability of water diffusing in different spatial directions.

The fact that multiple \(b_j\)’s are needed to recover u leads to very long acquisition times, even with ultra-fast sequences like echo planar imaging (EPI). Therefore, DTI is inherently a low-resolution and low-SNR method. In theory, the amplitude DWI images exhibit Rician noise [24]. However, as the histogram of an in vivo measurement in Fig. 1 illustrates, this may not be the case for practical datasets from black-box devices. Moreover, the DWI process is prone to eddy-current distortions [53], and due to its slowness, it is very sensitive to patient motion [1, 27]. We therefore have to use techniques that remove these artefacts when solving for u(x). We also need to ensure the positivity of u, as non-positive-definite diffusion tensors are non-physical. One proposed approach for the satisfaction of this constraint is that of log-Euclidean metrics [3]. This approach has several theoretically desirable aspects, but some practical shortcomings [57]. Special Perona–Malik-type constructions on Riemannian manifolds can also be used to maintain the structure of the tensor field [14, 54]. Such anisotropic diffusion is, however, severely ill-posed [60]. Recently manifold-valued discrete-domain total variation models have also been applied to diffusion tensor imaging [6].

Our approach is also in the total variation family, first considered for diffusion tensor imaging in [48]. Namely, we follow up on the work in [55, 57–59] on the application of total generalised variation regularisation [9] to DTI. We should note that in all of these works, the fidelity function was the ROF-type [45] \(L^2\) fidelity. Under the assumption that the noise of MRI measurements is Gaussian, this would only be correct if we had access to the original complex k-space MRI data. The noise of the inverse-Fourier-transformed magnitude data \(s_j\), to which we have access in practice, is however Rician under the Gaussian assumption on the original complex data [24]. This is not modelled by the \(L^2\) fidelity.

Numerical implementation of Rician noise modelling has been studied in [22, 40]. As already discussed, in this work we take the opposite direction: instead of modelling the errors in a statistically accurate fashion, we do not assume an exact noise model and represent the errors by means of pointwise bounds. The details of the model are presented in Sect. 3. We study the practical performance in Sect. 5 using the numerical method presented in Sect. 4. However, we first present the general error modelling theory in Sect. 2. Readers who are not familiar with the notation for Banach lattices or symmetric tensor fields are advised to start with Appendix 2, where we introduce our mathematical notation and techniques.

Deterministic Error Modelling

Mathematical Basis

We now briefly outline the theoretical framework [32] that is the basis for our approach. Consider a linear operator equation

$$\begin{aligned} Au = f, \quad u \in U, \,\, f \in F, \end{aligned}$$

where U and F are Banach lattices, and \(A :U \rightarrow F\) is a regular injective operator. The inaccuracies in the right-hand side f and the operator A are represented as bounds by means of appropriate partial orders, i.e.

$$\begin{aligned} \begin{aligned}&f^l, f^u :&\,&f^l \leqslant _F f \leqslant _F f^u, \\&A^l, A^u :&\,&A^l \leqslant _{L^\sim (U,F)} A \leqslant _{L^\sim (U,F)} A^u, \end{aligned} \end{aligned}$$

where the symbol \(\leqslant _F\) stands for the partial order in F and \(\leqslant _{L^\sim (U,F)}\) for the partial order for regular operators induced by the partial orders in U and F. In what follows, we drop the subscripts on the inequality signs where this will not cause confusion.

The exact right-hand side f and operator A are not available. Given the approximate data \((f^l,f^u,A^l,A^u)\), we need to find an approximate solution u that converges to the exact solution \({\bar{u}}\) as the inaccuracies in the data diminish. This statement needs to be formalised. We consider monotone convergent sequences of lower and upper bounds

$$\begin{aligned} \begin{aligned}&f^l_n :f^l_{n+1} \geqslant f^l_n,&\quad&A^l_n :A^l_{n+1} \geqslant A^l_n, \\&f^u_n :f^u_{n+1} \leqslant f^u_n,&\quad&A^u_n :A^u_{n+1} \leqslant A^u_n, \\&f^l_n \leqslant f \leqslant f^u_n,&\quad&A^l_n \leqslant A \leqslant A^u_n \quad \forall n \in {\mathbb {N}},\\&\Vert f^l_n - f^u_n \Vert \rightarrow 0,&\quad&\Vert A^l_n - A^u_n \Vert \rightarrow 0 \quad \text {as } n \rightarrow \infty . \end{aligned} \end{aligned}$$

We are looking for an approximate solution \(u_n\) such that \(\Vert u_n-{\bar{u}}\Vert _U \rightarrow 0\) as \(n \rightarrow \infty \).

Let us ask the following question. What are the elements \(u \in U\) that could have produced data within the tolerances (4)? Obviously, the exact solution is one such element. Let us call the set of all such elements the feasible set \(U_n \subset U\).

Suppose that we know a priori that the exact solution is positive (by means of the appropriate partial order in U). Then it is easy to verify that the following inequalities hold for all \(n \in {\mathbb {N}}\):

$$\begin{aligned} {\bar{u}} \geqslant _U 0, \quad A^u_n {\bar{u}} \geqslant _F f^l_n, \quad A^l_n {\bar{u}} \leqslant _F f^u_n. \end{aligned}$$

This observation motivates our choice of the feasible set:

$$\begin{aligned} U_n = \left\{ u \in U :u \geqslant _U 0, \quad A^u_n u \geqslant _F f^l_n, \quad A^l_n u \leqslant _F f^u_n \right\} . \end{aligned}$$

It is clear that the exact solution \({\bar{u}}\) belongs to the sets \(U_n\) for all \(n \in {\mathbb {N}}\). Our goal is to define a rule that will choose for any n an element \(u_n\) of the set \(U_n\) such that the sequence \(u_n \in U_n\) will strongly converge to the exact solution \({\bar{u}}\). We do so by minimising an appropriate regularisation functional R(u) on \(U_n\):

$$\begin{aligned} u_n = \mathop {\hbox {arg min}}\limits _{u \in U_n} R(u). \end{aligned}$$

This method, in fact, is a lattice analogue of the well-known residual method [23, 52]. The convergence result is as follows [32].
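In a discrete setting, the method (5) can be sketched directly. Below is a minimal example with scipy, assuming \(U=F=\mathbb {R}^n\) with the componentwise order, A the identity (so no operator error), and discrete total variation as the regulariser R; the standard LP reformulation of the \(\ell ^1\) objective is used. The data values and error radius are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the lattice residual method (5): minimise discrete TV over the
# feasible set {u : u >= 0, f^l <= u <= f^u} (A = identity, no operator error).
n = 8
f = np.array([1., 1., 1., 4., 4., 4., 2., 2.])    # "exact" data (illustrative)
delta = 0.3
f_l, f_u = f - delta, f + delta                    # order-interval bounds

# LP reformulation: min sum(t) s.t. -t <= u_{i+1}-u_i <= t, u in [f^l, f^u]+.
D = np.diff(np.eye(n), axis=0)                     # forward-difference matrix
c = np.concatenate([np.zeros(n), np.ones(n - 1)])
A_ub = np.block([[D, -np.eye(n - 1)], [-D, -np.eye(n - 1)]])
b_ub = np.zeros(2 * (n - 1))
bounds = [(max(0.0, lo), hi) for lo, hi in zip(f_l, f_u)] + [(0, None)] * (n - 1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
u = res.x[:n]
assert res.success
assert np.all(u >= f_l - 1e-7) and np.all(u <= f_u + 1e-7)
# the TV-minimal feasible u has no more variation than the exact data itself
assert res.fun <= np.sum(np.abs(np.diff(f))) + 1e-7
```

The reconstruction flattens the data as much as the order interval allows, which is exactly the residual-method behaviour: among all candidates consistent with the bounds, the most regular one is selected.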

Theorem 1

Suppose that

  1. R(u) is bounded from below on U,

  2. R(u) is lower semi-continuous,

  3. level sets \(\{u :R(u) \leqslant C\}\) (\(C=\mathrm{const}\)) are sequentially compact in U (in the strong topology induced by the norm).

Then the sequence defined in (5) strongly converges to the exact solution \({\bar{u}}\) and \(R(u_n) \rightarrow R({\bar{u}})\).

Examples of regularisation functionals that satisfy the conditions of Theorem 1 are as follows. Total variation in \(L^1({\varOmega })\), where \({\varOmega }\) is a subset of \({\mathbb {R}}^n\), ensures strong convergence in \(L^1\), given that the \(L^1\)-norm of the solution is bounded. The Sobolev norm \(\Vert u \Vert _{W^{1,q}({\varOmega })}\) yields strong convergence in the spaces \(L^p({\varOmega })\), where \(p \geqslant 1\), \(q > \frac{np}{p+n}\). The latter fact follows from the compact embedding of the corresponding Sobolev space \(W^{1,q}({\varOmega })\) into \(L^p({\varOmega })\) [16].

However, the assumption that the sets \(\{u :R(u) \leqslant C\}\) are strongly compact in U is quite strong. It can be replaced by the assumption of weak compactness, provided that the regularisation functional possesses the so-called Radon–Riesz property.

Definition 1

A functional \(F :U \rightarrow {\mathbb {R}}\) has the Radon–Riesz property (sometimes called the H-property), if for any sequence \(u_n \in U\) weak convergence \(u_n \rightharpoonup u_0\) and simultaneous convergence of the values \(F(u_n) \rightarrow F(u_0)\) imply strong convergence \(u_n \rightarrow u_0\).

Theorem 2

Suppose that

  1. R(u) is bounded from below on U,

  2. R(u) is weakly lower semi-continuous,

  3. level sets \(\{u :R(u) \leqslant C\}\) (\(C=\mathrm{const}\)) are weakly sequentially compact in U,

  4. R(u) possesses the Radon–Riesz property.

Then the sequence defined in (5) strongly converges to the exact solution \({\bar{u}}\) and \(R(u_n) \rightarrow R({\bar{u}})\).

It is easy to verify that the norm in any Hilbert space possesses the Radon–Riesz property. Moreover, this holds for the norm in any reflexive Banach space [16].

As we explain in Appendix 2, \(L^p({\varOmega }; {{\mathrm{Sym}}}^{2}(\mathbb {R}^m))\) is not a Banach lattice. Therefore, Theorems 1 and 2 cannot be applied directly. Further theoretical work will be undertaken to extend the framework to the non-lattice case. For the moment, however, we will prove that if there are no errors in the operator A in (2), the requirement that the solution space U is a lattice can be dropped.

Theorem 3

Let U be a Banach space, and F be a Banach lattice. Let the operator A in (2) be a linear, continuous and injective operator. Let \(f^l_n\) and \(f^u_n\) be sequences of lower and upper bounds for the right-hand side defined in (4), and suppose that there are no errors in the operator A. Let us redefine the feasible set in the following way

$$\begin{aligned} U_n = \left\{ u \in U :\quad f^l_n \leqslant _F A u \leqslant _F f^u_n \right\} . \end{aligned}$$

Suppose also that the regulariser R(u) satisfies the conditions of either Theorem 1 or Theorem 2. Then the sequence defined in (5) strongly converges to the exact solution \({\bar{u}}\) and \(R(u_n) \rightarrow R({\bar{u}})\).


Let us define an approximate right-hand side and its approximation error in the following way

$$\begin{aligned} f_{\delta _n} = \frac{f^u_n + f^l_n}{2}, \quad \delta _n = \frac{\Vert f^u_n-f^l_n\Vert }{2}. \end{aligned}$$

One can easily verify that the inequality \(\Vert f-f_{\delta _n}\Vert \leqslant \delta _n\) holds. Indeed, we have

$$\begin{aligned} \begin{aligned}&f - f_{\delta _n} \leqslant f^u_n - f_{\delta _n} = \frac{f^u_n-f^l_n}{2}, \\&-(f - f_{\delta _n}) \leqslant f_{\delta _n} - f^l_n = \frac{f^u_n-f^l_n}{2},\\&|f - f_{\delta _n}| = (f - f_{\delta _n}) \vee (-(f - f_{\delta _n})) \leqslant \frac{f^u_n-f^l_n}{2},\\&\Vert f - f_{\delta _n}\Vert \leqslant \frac{\Vert f^u_n-f^l_n\Vert }{2}. \end{aligned} \end{aligned}$$

The first two inequalities are consequences of the conditions (4), the third one holds by the definition of supremum and the equality \(|f| = f \vee (-f)\) that holds for all \(f \in F\), and the last inequality is due to the monotonicity of the norm in a Banach lattice.
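The midpoint estimate above can be checked numerically in the finite-dimensional lattice \(\mathbb {R}^n\) with the componentwise order and the sup-norm; the error radii below are invented for illustration:

```python
import numpy as np

# Numerical check of the midpoint estimate in the proof: with
# f_delta = (f^u + f^l)/2 and delta = ||f^u - f^l||/2, every f in the order
# interval [f^l, f^u] satisfies ||f - f_delta|| <= delta. We use R^n with the
# componentwise order and the sup-norm (a Banach lattice with monotone norm).
rng = np.random.default_rng(1)
f = rng.normal(size=100)
err_lo = rng.uniform(0.05, 0.5, size=100)   # asymmetric pointwise error radii
err_hi = rng.uniform(0.05, 0.5, size=100)
f_l, f_u = f - err_lo, f + err_hi

f_delta = (f_u + f_l) / 2
delta = np.linalg.norm(f_u - f_l, ord=np.inf) / 2

assert np.linalg.norm(f - f_delta, ord=np.inf) <= delta + 1e-12
```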

Similarly, one can show that for any \(u \in U_n\), we have

$$\begin{aligned} \Vert Au - f_{\delta _n}\Vert \leqslant \delta _n. \end{aligned}$$

Therefore, the inclusion \(U_n \subset \{u :\Vert Au-f_{\delta _n}\Vert \leqslant \delta _n\}\) holds.

Now we proceed with the proof of convergence \(\Vert u_n-{\bar{u}}\Vert \rightarrow 0\). We prove it for the case where the regulariser R(u) satisfies the conditions of Theorem 1. Suppose that the sequence \(u_n\) does not converge to the exact solution \({\bar{u}}\); then it contains a subsequence \(u_{n_k}\) such that \(\Vert u_{n_k} - {\bar{u}}\Vert \geqslant \varepsilon \) for all \(k \in {\mathbb {N}}\) and some fixed \(\varepsilon >0\).

Since the inclusion \({\bar{u}} \in U_n\) holds for all \(n \in {\mathbb {N}}\), we have \(R(u_n) \leqslant R({\bar{u}})\) for all \(n \in {\mathbb {N}}\). Since the level set \(\{u :R(u) \leqslant R({\bar{u}})\}\) is a compact set by assumptions of the theorem, the sequence \(u_{n_k}\) contains a strongly convergent subsequence. With no loss of generality, let us assume that \(u_{n_k} \rightarrow u_0\). We will now show that \(u_0 = {\bar{u}}\). Indeed, we have

$$\begin{aligned} \Vert A u_{n_k} - A {\bar{u}}\Vert \leqslant \Vert A u_{n_k} - f_{\delta _{n_k}}\Vert + \Vert f-f_{\delta _{n_k}}\Vert \leqslant 2\delta _{n_k} \rightarrow 0. \end{aligned}$$

On the other hand, we have

$$\begin{aligned} \Vert A u_{n_k} - A {\bar{u}}\Vert \rightarrow \Vert A u_0 - A {\bar{u}} \Vert \end{aligned}$$

due to continuity of A and \(\Vert \cdot \Vert \). Therefore, \(A u_0 = A {\bar{u}}\) and \(u_0 = {\bar{u}}\), since A is an injective operator. By contradiction, we get \(\Vert u_n - {\bar{u}}\Vert \rightarrow 0\).

Finally, since the regulariser R(u) is lower semi-continuous, we get \(\liminf R(u_n) \geqslant R({\bar{u}})\). However, for any n we have \(R(u_n) \leqslant R({\bar{u}})\); therefore, we get the convergence \(R(u_n) \rightarrow R({\bar{u}})\) as \(n \rightarrow \infty \). \(\square \)

Philosophical Discussion and Statistical Interpretation

In practice, our data are discrete. So let us momentarily switch to measurements \({\hat{f}} = ({\hat{f}}^1, \ldots , {\hat{f}}^n) \in \mathbb {R}^n\) of the true data \(f \in \mathbb {R}^n\). If we actually knew the pointwise noise model of the data, then one way to obtain potentially useful upper and lower bounds for the deterministic model is by means of statistical interval estimates: confidence intervals. Roughly, the idea is to find for each true signal f individual random lower and upper bounds \({\hat{f}}^l\) and \({\hat{f}}^u\) such that

$$\begin{aligned} P\left( {\hat{f}}^l \le f \le {\hat{f}}^u\right) = 1-\theta . \end{aligned}$$

If \({\hat{f}}^l\) and \({\hat{f}}^u\) are computed based on multiple experiments (i.e. multiple noisy samples \({\hat{f}}_1,\dots ,{\hat{f}}_m\) of the true data f), the interval \([{\hat{f}}^{l,i}, {\hat{f}}^{u,i}]\) will converge in probability to the true data \(f^i\), as the number of experiments m increases. Thus we obtain a probabilistic version of the convergences in (4).

Let us try to see how such intervals might work in practice. For the purpose of the present discussion, assume that the noise is additive and normally distributed with standard deviation \(\sigma \) and zero mean—an assumption that does not hold in practice, as we have already seen in Fig. 1, but will suffice for the next thought experiments. That is, \({\hat{f}}_j=f+\nu _j\) for the noise \(\nu _j\). Let the sample mean of \(\{{\hat{f}}_j\}_{j=1}^m\) be \({\bar{f}}_m = ({\bar{f}}^1_m, \ldots , {\bar{f}}^n_m)\), and the pointwise sample standard deviation \(\sigma =(\sigma ^1,\ldots ,\sigma ^n)\). The product of the pointwise confidence intervals \(I_i\) with confidence \(1-\theta \) is [15, 49]

$$\begin{aligned} \prod _{i=1}^n I_i = \left[ {\bar{f}}_m - k^*_{1-\theta /2}\frac{\sigma }{\sqrt{m}}, {\bar{f}}_m +k^*_{1-\theta /2}\frac{\sigma }{\sqrt{m}}\right] , \end{aligned}$$

where \(k^*_t :={\varPhi }^{-1}(t)\) with \({\varPhi }\) the cumulative normal distribution function. For \(\theta =0.05\), i.e. the \(95\,\%\) confidence interval, \(k^*_{1-\theta /2}={\varPhi }^{-1}(0.975)=1.96\). Now, the probability that \(I_i\) covers the true \(f^i\) is \(1-\theta \), e.g. \(95\,\%\). If we have only a single sample \(m=1\), the intervals stay large, but the joint probability \((1-\theta )^n\) goes to zero as n increases. As an example, for a rather typical single \(128 \times 128\) slice of a DTI measurement, the probability that exactly \(\phi =5\,\%\) (to the closest discrete value possible) of the \(1-\theta =95\,\%\) confidence intervals do not cover the true parameter would be about \(1.4\,\%\), or

$$\begin{aligned}&1.4\,\% \approx {n \atopwithdelims ()m} \theta ^m {(1-\theta )}^{n-m}, \\&\text {where } n=128^2 \text { and } m=\lceil \phi n \rceil . \end{aligned}$$

The probability of at least \(\phi =5\,\%\) of the pointwise \(95\,\%\) confidence intervals not covering the true parameter is in this setting approximately \(49\,\%\). This can be verified by summing the above estimates over \(m=\lceil \phi n \rceil ,\ldots ,n\).
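The stated probabilities can be reproduced in a few lines of Python; the binomial computation below uses the log-gamma function to avoid overflow for \(n=128^2\):

```python
from math import ceil, exp, lgamma, log

# Reproducing the stated probabilities for a 128x128 slice: the chance that
# exactly (and at least) 5% of the pointwise 95% confidence intervals miss
# the true parameter, via the binomial distribution.
n = 128 ** 2           # number of pixels
theta = 0.05           # pointwise miss probability
m0 = ceil(0.05 * n)    # 5% of the pixels, rounded up

def binom_pmf(m):
    # binomial probability C(n, m) * theta^m * (1-theta)^(n-m), via log-gamma
    return exp(lgamma(n + 1) - lgamma(m + 1) - lgamma(n - m + 1)
               + m * log(theta) + (n - m) * log(1 - theta))

p_exact = binom_pmf(m0)                                     # about 1.4%
p_at_least = sum(binom_pmf(m) for m in range(m0, n + 1))    # about 49%

assert abs(p_exact - 0.014) < 0.002
assert abs(p_at_least - 0.49) < 0.02
```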

In summary, unless \(\theta \) simultaneously goes to 0, the product intervals are very unlikely to cover the true parameter. Based on a single experiment, the deterministic approach as interpreted statistically through confidence intervals is therefore very likely to fail to discover the true solution as the data size n increases, unless the pointwise confidence is very high. But, if we let the pointwise confidences be arbitrarily high, such that the intervals are very large, the discovered solution in our applications of interest would be just a constant!

Asymptotically, the situation is more encouraging. Indeed, if we could perform more experiments to compute the confidence intervals, then for any fixed n and \(\theta \), it is easy to see that the solution of the “deterministic” error model is an asymptotically consistent and hence asymptotically unbiased estimator of the true f. That is, the estimates converge in probability to f as the experiment count m increases. Indeed, the error bounds-based estimator \({\tilde{f}}_m\), based on m experiments, by definition satisfies \({\tilde{f}}_m \in \prod _{i=1}^n I_i\). Therefore, we have

$$\begin{aligned}&P \left( |{\tilde{f}}^i_m-{\bar{f}}^i_m|>\epsilon \text { for some } i\right) = 0\\&\text {whenever} \quad m \ge (k^*_{1-\theta /2}\sigma /\epsilon )^2. \end{aligned}$$

Thus \({\tilde{f}}_m - {\bar{f}}_m \overset{P}{\rightarrow } 0\) in probability. Since by the law of large numbers also \({\bar{f}}_m \overset{P}{\rightarrow } f\), this proves the claim, and to some extent justifies our approach from the statistical viewpoint.

It should be noted that this is roughly the most that has previously been known of the maximum a posteriori estimate (MAP), corresponding to the Tikhonov models

$$\begin{aligned} \min \frac{1}{2}\Vert {\hat{f}}-u\Vert ^2 + \alpha R(u). \end{aligned}$$

In particular, the MAP is not the Bayes estimator for the typical squared cost functional. This means that it does not minimise \({\tilde{f}} \mapsto \mathbb {E}[\Vert f-{\tilde{f}}\Vert ^2]\). The minimiser in this case is the conditional mean (CM) estimate, which is why it has been preferred by Bayesian statisticians despite its increased computational cost. The MAP estimate is merely an asymptotic Bayes estimator for the uniform cost function. In a very recent work [12], it has, however, been proved that the MAP estimate is the Bayes estimator for certain Bregman distances. One possible critique of the result is that these distances are not universal and do depend on the regulariser R, unlike the squared distance for CM. The CM estimate, however, has other problems in the setting of total variation and its discretisation [35, 36].

Application to Diffusion Tensor Imaging

We now build our model for applying the deterministic error modelling theory to diffusion tensor imaging. We start by building our forward model based on the Stejskal–Tanner equation, and then briefly introduce the regularisers we use.

The Forward Model

For \(u: {\varOmega }\rightarrow {{\mathrm{Sym}}}^2(\mathbb {R}^3)\), \({\varOmega }\subset \mathbb {R}^3\), a mapping from \({\varOmega }\) to symmetric second-order tensors, let us introduce non-linear operators \(T_j\), defined by

$$\begin{aligned}{}[T_j(u)](x) :=s_0(x) \exp (-\langle b_j,u(x)b_j\rangle ), \quad (j=1,\ldots ,N). \end{aligned}$$

Their role is to model the so-called Stejskal–Tanner equation [4]

$$\begin{aligned} s_j(x)=s_0(x) \exp (-\langle b_j,u(x)b_j\rangle ), \quad (j=1,\ldots ,N). \end{aligned}$$

Each tensor u(x) models the covariance of a Gaussian probability distribution at x for the diffusion of water molecules. The data \(s_j \in L^2({\varOmega })\), (\(j=1,\ldots ,N\)), are the diffusion-weighted MRI images. Each of them is obtained by performing the MRI scan with a different non-zero diffusion-sensitising gradient \(b_j\), while \(s_0\) is obtained with a zero gradient. After correcting the original k-space data for coil sensitivities, each \(s_j\) is assumed real. As a consequence, any measurement \({\hat{s}}_j\) of \(s_j\) has—in theory—Rician noise distribution [24].
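A toy sketch of the forward operators \(T_j\) and of the Rician noise mechanism (amplitude data as magnitudes of complex Gaussian-corrupted signals); the tensor, gradient set, and noise level below are all invented for illustration:

```python
import numpy as np

# Sketch of the forward map T_j and the Rician noise mechanism: amplitude
# data are magnitudes of complex Gaussian-corrupted signals. Toy values only.
rng = np.random.default_rng(2)

u = np.array([[1.5, 0.2, 0.0],       # a single symmetric diffusion tensor u(x)
              [0.2, 0.9, 0.1],
              [0.0, 0.1, 0.4]])
b = np.array([[1., 0., 0.],          # diffusion-sensitising gradients b_j
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 0.]])
s0 = 1000.0

# Stejskal-Tanner: s_j = s_0 exp(-<b_j, u b_j>)
s = s0 * np.exp(-np.einsum('ji,ik,jk->j', b, u, b))

# Rician measurement: magnitude of the real signal plus complex Gaussian noise
sigma = 20.0
noise = rng.normal(0, sigma, (len(s), 2))
s_hat = np.abs(s + noise[:, 0] + 1j * noise[:, 1])

assert s_hat.shape == s.shape and np.all(s_hat >= 0)
```

Note that the magnitude operation makes the noise on \({\hat{s}}_j\) non-additive and non-Gaussian, which is precisely why the \(L^2\) fidelity in (7) is only an approximation.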

Our goal is to reconstruct u with simultaneous denoising. Following [31, 55], we consider, with a suitable regulariser R, the Tikhonov model

$$\begin{aligned} \min _{u \geqq 0}~ \sum _{j=1}^N \frac{1}{2} \Vert {\hat{s}}_j-T_j(u)\Vert ^2 + \alpha R(u). \end{aligned}$$

The constraint \(u\geqq 0\) is to be understood in the sense that u(x) is positive semidefinite for \(\mathcal {L}^n\)-a.e. \(x \in {\varOmega }\) (see Appendix 2 for more details). Due to the Rician noise of \({\hat{s}}_j\), the Gaussian noise model implied by the \(L^2\)-norm in (7) is not entirely correct. However, in some cases the \(L^2\) model may be accurate enough, as for suitable parameters the Rician distribution is not too far from a Gaussian distribution. If one were to model the problem correctly, one should either modify the fidelity term to model Rician noise or include the (unit-magnitude complex number) coil sensitivities in the model. The Rician noise model is highly non-linear due to the logarithms of Bessel functions involved. Its approximations have been studied in [5, 22, 40] for single MR images and DTI. Coil sensitivities could be included either by knowing them in advance or by simultaneous estimation as in [30]. Either way, significant complexity is introduced into the model, and for the present work, we are content with the simple \(L^2\) model.

We may also consider, as is often the case, and as was done with TGV in [57], the linearised model

$$\begin{aligned} \min _{u \geqq 0}~ \Vert f-u\Vert ^2 + \alpha R(u), \end{aligned}$$

where, for each \(x \in {\varOmega }\), f(x) is obtained by solving for u(x) by regression from the system of equations (6) with \(s_j(x)={\hat{s}}_j(x)\). Further, as in [58], we may also consider

$$\begin{aligned} \min _{u \geqq 0}~ \sum _{j=1}^N \frac{1}{2} \Vert g_j-A_j u\Vert ^2 + \alpha R(u), \end{aligned}$$


where

$$\begin{aligned}{}[A_j u](x)&:=-\langle b_j,u(x)b_j\rangle , \quad \text {and} \nonumber \\ g_j(x)&:=\log ({\hat{s}}_j(x)/{\hat{s}}_0(x)). \end{aligned}$$

In both of these linearised models, the assumption of Gaussian noise is in principle even more remote from the truth than in the non-linear model (7). We will employ (8) and (7) as benchmark models.
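The pointwise regression behind the linearised models can be sketched as follows: the six unique entries of u(x) are recovered by least squares from \(g_j = \log ({\hat{s}}_j/{\hat{s}}_0)\). The gradient set and tensor values are illustrative, and we use noiseless data so that the fit is exact:

```python
import numpy as np

# Sketch of the pointwise log-linear regression: recover the six unique
# entries of u(x) from log-transformed DWI data by least squares.
rng = np.random.default_rng(3)

u_true = np.array([1.2, 0.8, 0.5, 0.1, 0.0, 0.05])  # (u11,u22,u33,u12,u13,u23)
B = rng.normal(size=(12, 3))                         # 12 gradient directions b_j
B /= np.linalg.norm(B, axis=1, keepdims=True)

# design matrix: <b, u b> = b1^2 u11 + b2^2 u22 + b3^2 u33
#                           + 2 b1 b2 u12 + 2 b1 b3 u13 + 2 b2 b3 u23
M = np.column_stack([B[:, 0]**2, B[:, 1]**2, B[:, 2]**2,
                     2*B[:, 0]*B[:, 1], 2*B[:, 0]*B[:, 2], 2*B[:, 1]*B[:, 2]])

s0 = 1000.0
s_hat = s0 * np.exp(-M @ u_true)     # noiseless Stejskal-Tanner data
g = np.log(s_hat / s0)               # linearised data, g_j = -<b_j, u(x) b_j>

u_fit, *_ = np.linalg.lstsq(M, -g, rcond=None)
assert np.allclose(u_fit, u_true, atol=1e-8)
```

With noisy \({\hat{s}}_j\) the same least-squares fit yields the f(x) used in the linearised model above, and the log transform distorts the noise distribution, which is the motivation for replacing the fidelity term by bounds.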

We want to further simplify the model and forgo accurate noise modelling. After all, we often do not know the real noise model for the data available in practice. The data can be corrupted by process artefacts from black-box algorithms in the MRI devices. This problem of black-box devices has been discussed extensively in [44], in the context of computed tomography. Moreover, as we have discussed above, even without such artefacts, the correct model may be difficult to realise numerically. So we might be best off choosing the least assuming model of all, that of error bounds. This is what we propose in the reconstruction model

$$\begin{aligned} \min _u~ R(u) \quad \text {s.t.} \quad&u \geqq 0, \nonumber \\&g_j^l \leqslant A_j u \leqslant g_j^u, \nonumber \\&\quad \mathcal {L}^n\text {-a.e.},\ (j=1,\ldots ,N). \end{aligned}$$

Here \(g_j^l :=\log ({\hat{s}}_j^l/{\hat{s}}_0^u)\) and \(g_j^u :=\log ({\hat{s}}_j^u/{\hat{s}}_0^l)\), \(g_j^l, g_j^u \in L^2({\varOmega })\), are our upper and lower bounds on \(g_j\) that we derive from the data.

Choice of the Regulariser R

A prototypical regulariser in image processing is the total variation, first studied in this context in [45]. It can be defined for a \(u \in L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) as

$$\begin{aligned} \begin{aligned} {{\mathrm{TV}}}(u)&:=\Vert Eu\Vert _{\mathcal {M}({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m))} \\&:=\sup \left\{ \int _{\varOmega }\langle {{\mathrm{div}}}\phi (x),u(x)\rangle {{\mathrm{d}}}x \right. \\&\left. \qquad \qquad \qquad \Bigm | \begin{array}{l} \phi \in C_c^\infty ({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m)) \\ \sup _x \Vert \phi (x)\Vert _F \le 1 \end{array} \right\} . \end{aligned} \end{aligned}$$

Observe that for scalar or vector fields, i.e. the cases \(k=0,1\), we have \({{\mathrm{Sym}}}^0(\mathbb {R}^m)=\mathcal {T}^0(\mathbb {R}^m)=\mathbb {R}\), and \({{\mathrm{Sym}}}^1(\mathbb {R}^m)=\mathcal {T}^1(\mathbb {R}^m)=\mathbb {R}^m\). Therefore, for scalars in particular, this gives the usual isotropic total variation

$$\begin{aligned} {{\mathrm{TV}}}(u) = \Vert Du\Vert _{\mathcal {M}({\varOmega }; \mathbb {R}^m)}. \end{aligned}$$
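For a discrete grey-scale image, the scalar case of the definition reduces to summing the Euclidean norms of forward-difference gradients. A minimal sketch (the discretisation with replicated boundary is one choice among several, and the exact value it produces depends on that choice):

```python
import numpy as np

# A minimal discrete isotropic TV, the scalar case k=0 of the definition
# above: sum over pixels of the Euclidean norm of the forward-difference
# gradient, with replicated boundary.
def tv_isotropic(img, h=1.0):
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    return h * np.sum(np.sqrt(dx**2 + dy**2))

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0   # indicator of a 4x4 square
# For an indicator, TV approximates the length of the jump set (here the
# perimeter 16); this forward-difference scheme gives 14 + sqrt(2), since the
# two differences coincide at one inside corner pixel.
assert abs(tv_isotropic(img) - (14.0 + np.sqrt(2.0))) < 1e-9
```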

Total generalised variation was introduced in [9] as a higher-order extension of \({{\mathrm{TV}}}\). Following [11, 57], the second-order variant may be defined with the differentiation cascade formulation for symmetric tensor fields \(u \in L^1({\varOmega }; {{\mathrm{Sym}}}^{k}(\mathbb {R}^m))\) as the marginal

$$\begin{aligned} {{\mathrm{TGV}}}^2_{(\beta ,\alpha )}(u) :=\min \Big \{ {\varPhi }_{(\beta ,\alpha )}(u, w) \mid w \in L^1({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m)) \Big \} \end{aligned}$$


where

$$\begin{aligned} \begin{aligned} {\varPhi }_{(\beta ,\alpha )}(u, w)&:=\alpha \Vert E u - w\Vert _{F,\mathcal {M}({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m))}\\&\phantom {:=} +\beta \Vert E w\Vert _{F,\mathcal {M}({\varOmega }; {{\mathrm{Sym}}}^{k+2}(\mathbb {R}^m))}. \end{aligned} \end{aligned}$$

It turns out that the standard BV-norm

$$\begin{aligned} \Vert u\Vert _{{{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))} :=\Vert u\Vert _{L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))}+{{\mathrm{TV}}}(u) \end{aligned}$$

and the “BGV norm” [9]

$$\begin{aligned} \Vert u\Vert ' :=\Vert u\Vert _{L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))}+{{\mathrm{TGV}}}^2_{(\beta ,\alpha )}(u) \end{aligned}$$

are topologically equivalent norms [10, 11] on the space \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), yielding the same convergence results for TGV regularisation as for TV regularisation. The geometrical regularisation behaviour is, however, different, and TGV tends to avoid the staircasing observed in TV regularisation.
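The different geometric behaviour can be traced to the kernel of \({{\mathrm{TGV}}}^2\), which contains affine functions. In a 1D discrete sketch this is easy to verify numerically by solving the defining minimisation over w as a linear program (all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

# Checking the kernel property of TGV^2 in a 1D discrete sketch:
# TGV^2(u) = min_w alpha*||Du - w||_1 + beta*||Dw||_1 vanishes on affine u,
# while TV(u) = ||Du||_1 does not.
def tgv2(u, alpha=1.0, beta=2.0):
    n = len(u)
    Du = np.diff(u)                          # length n-1
    D2 = np.diff(np.eye(n - 1), axis=0)      # difference operator acting on w
    nw, ns = n - 1, n - 2
    # variables x = (w, t, s); minimise alpha*sum(t) + beta*sum(s)
    c = np.concatenate([np.zeros(nw), alpha * np.ones(nw), beta * np.ones(ns)])
    A = np.block([
        [-np.eye(nw), -np.eye(nw), np.zeros((nw, ns))],   #  Du - w  <= t
        [ np.eye(nw), -np.eye(nw), np.zeros((nw, ns))],   # -(Du - w) <= t
        [ D2, np.zeros((ns, nw)), -np.eye(ns)],           #  Dw <= s
        [-D2, np.zeros((ns, nw)), -np.eye(ns)],           # -Dw <= s
    ])
    b = np.concatenate([-Du, Du, np.zeros(2 * ns)])
    bounds = [(None, None)] * nw + [(0, None)] * (nw + ns)
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    return res.fun

x = np.linspace(0.0, 1.0, 16)
ramp = 3.0 * x + 1.0                       # affine: in the kernel of TGV^2
step = np.sign(x - 0.5)                    # genuine jump: positive TGV^2
assert tgv2(ramp) < 1e-8                   # TGV^2 vanishes on affine signals
assert np.sum(np.abs(np.diff(ramp))) > 1.0 # but TV(ramp) = 3 > 0
assert tgv2(step) > 0.5                    # jumps still cost
```

The affine ramp costs nothing because the auxiliary field w can absorb the constant slope, which is the discrete counterpart of TGV avoiding staircasing.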

Regarding topologies, we say that a sequence \(\{u^i\}\) in \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) converges weakly* to u, if \(u^i \rightarrow u\) strongly in \(L^1\), and weakly* as Radon measures [2, 51, 57]. The latter means that \(\int _{\varOmega }\langle {{\mathrm{div}}}\phi (x),u^i(x)\rangle {{\mathrm{d}}}x \rightarrow \int _{\varOmega }\langle {{\mathrm{div}}}\phi (x),u(x)\rangle {{\mathrm{d}}}x\) for all \(\phi \in C_c^\infty ({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m))\).

Compact Subspaces

Now, for a weak* lower semi-continuous seminorm R on \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), let us set

$$\begin{aligned} {{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m)) :={{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m)) / \ker R. \end{aligned}$$

That is, we identify elements \(u, {\tilde{u}} \in {{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), such that \(R(u - {\tilde{u}})=0\). Now R is a norm on the space \({{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\); compare, e.g. [42] for the case of \(R={{\mathrm{TV}}}\).


Suppose that

$$\begin{aligned} \Vert u\Vert ' :=\Vert u\Vert _{L^1({\varOmega })} + R(u) \end{aligned}$$

is a norm on \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), equivalent to the standard norm. If also the \(R\)-Sobolev–Korn–Poincaré inequality

$$\begin{aligned} \inf _{R(v)=0} \Vert u-v\Vert _{L^1({\varOmega })} \le C R(u) \end{aligned}$$

holds, we may then bound

$$\begin{aligned}&\inf _{R(v)=0} \Vert u-v\Vert _{{{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))} \le \inf _{R(v)=0} C' \Vert u-v\Vert ' \\&\quad = \inf _{R(v)=0} C' \bigl ( \Vert u-v\Vert _{L^1({\varOmega })} + R(u-v)\bigr ) \\&\quad \le C' (1+C)R(u). \end{aligned}$$

Now, using the weak* lower semicontinuity of the BV-norm, and the weak* compactness of the unit ball in \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\)—we refer to [2] for these and other basic properties of BV spaces—we may thus find a representative \({\tilde{u}}\) in the \({{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) equivalence class of u, satisfying

$$\begin{aligned} \Vert {\tilde{u}}\Vert _{{{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))} \le C' (1+C)R(u). \end{aligned}$$

Again using the weak* compactness of the unit ball in \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), and the weak* lower semicontinuity of R, it follows that the sets

$$\begin{aligned} {{\mathrm{lev}}}_a R:= & {} \left\{ u \in {{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m)) \mid R(u) \le a \right\} , \nonumber \\&\quad (a>0), \end{aligned}$$

are weak* compact in \({{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), in the topology inherited from \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\). Consequently, they are strongly compact subsets of the space \(L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\). This feature is crucial for the application of the regularisation theory in Banach lattices above.

On a connected domain \({\varOmega }\), in particular

$$\begin{aligned} {{\mathrm{BV}}}_{0,{{\mathrm{TV}}}}({\varOmega }) \simeq \left\{ u \in {{\mathrm{BV}}}({\varOmega }) \big | \int _{\varOmega }u {{\mathrm{d}}}x=0 \right\} . \end{aligned}$$

That is, the space consists of zero-mean functions. Then \(u \mapsto \Vert Du\Vert _{\mathcal {M}({\varOmega }; \mathbb {R}^m)}\) is a norm on \({{\mathrm{BV}}}_{0,{{\mathrm{TV}}}}({\varOmega })\) [42], and this space is weak* compact. In particular, the sets \({{\mathrm{lev}}}_a {{\mathrm{TV}}}\) are compact in \(L^1({\varOmega })\).

More generally, we know from [8] that on a connected domain \({\varOmega }\), \(\ker {{\mathrm{TV}}}\) consists of \({{\mathrm{Sym}}}^k(\mathbb {R}^m)\)-valued polynomials of maximal degree k. By extension, the kernel of \({{\mathrm{TGV}}}^2\) consists of \({{\mathrm{Sym}}}^k(\mathbb {R}^m)\)-valued polynomials of maximal degree \(k+1\). In both cases, (13), weak* lower semicontinuity of R and the equivalence of \(\Vert \,\varvec{\cdot }\,\Vert '\) to \(\Vert \,\varvec{\cdot }\,\Vert _{{{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))}\) hold by the results in [8, 11, 51]. Therefore, we have proved the following.

Lemma 1

Let \({\varOmega }\subset \mathbb {R}^m\) and \(k \ge 0\). Then the sets \({{\mathrm{lev}}}_a {{\mathrm{TV}}}\) and \({{\mathrm{lev}}}_a {{\mathrm{TGV}}}^2\) are weak* compact in \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) and strongly compact in \(L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\).

Now, in the above cases, \(\ker R\) is finite-dimensional, and we may write

$$\begin{aligned} {{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m)) \simeq {{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m)) \oplus \ker R. \end{aligned}$$

Denoting by

$$\begin{aligned} B_X(r) :=\{x \in X \mid \Vert x\Vert \le r\}, \end{aligned}$$

the closed ball of radius r in a normed space X, we obtain by the finite-dimensionality of \(\ker R\) the following result.

Proposition 1

Let \({\varOmega }\subset \mathbb {R}^m\) and \(k \ge 0\). Pick \(a>0\). Then the sets

$$\begin{aligned} V :={{\mathrm{lev}}}_a R \oplus B_{\ker R}(a) \end{aligned}$$

for both regularisers \(R={{\mathrm{TV}}}\) and \(R={{\mathrm{TGV}}}^2\), are weak* compact in \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) and strongly compact in \(L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\).

The next result summarises Theorem 3 and Proposition 1.

Theorem 4

With \(U=L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), let the operator \(A: U \rightarrow F\) be linear, continuous and injective. Let \(f^l_n\) and \(f^u_n\) be sequences of lower and upper bounds for the right-hand side such that

$$\begin{aligned} \begin{aligned}&f^l_n :f^l_{n+1} \geqslant f^l_n,&\quad&f^u_n :f^u_{n+1} \leqslant f^u_n, \\&f^l_n \leqslant f \leqslant f^u_n,&\quad&\Vert f^l_n - f^u_n \Vert \rightarrow 0 \quad \text {as } n \rightarrow \infty . \end{aligned} \end{aligned}$$

Supposing that there are no errors in the operator A and that the exact solution \({\bar{u}}\) exists, define the feasible sets as follows:

$$\begin{aligned} U_n = \left\{ u \in U :\quad f^l_n \leqslant _F A u \leqslant _F f^u_n \right\} . \end{aligned}$$

Decomposing \(u \in U\) as \(u=u_0+u^\perp \) with \(u^\perp \in \ker R\), suppose

$$\begin{aligned} u \in U_n \implies \Vert u^\perp \Vert \le a \end{aligned}$$

for some constant \(a>0\). Then, for \(R={{\mathrm{TV}}}\) and \(R={{\mathrm{TGV}}}^2\), the sequence

$$\begin{aligned} u_n = \mathop {\hbox {arg min}}\limits _{u \in U_n} R(u) \end{aligned}$$

converges strongly in \(L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) to the exact solution \({\bar{u}}\) and \(R(u_n) \rightarrow R({\bar{u}})\).


Proof

With the decomposition \(u_n = u_{0,n} + u_n^\perp \), where \(u_n^\perp \in \ker R\), we have \(u_{0,n} \in {{\mathrm{lev}}}_a R\) for suitably large \(a>0\), since

$$\begin{aligned} R(u_{0,n}) = R(u_n) = \min _{u' \in U_n} R(u') \le R({\bar{u}}). \end{aligned}$$

The assumption (15) bounds \(\Vert u_n^\perp \Vert \le a\). Thus \(u_n \in V\) for V as in Proposition 1. The proposition thus implies the necessary compactness in \(U=L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) for the application of Theorem 3.

Remark 1

The condition (15) simply says for \(R={{\mathrm{TV}}}\) that the data have to bound the solution in mean. This is very reasonable to expect for practical data; anything else would be very degenerate. For \(R={{\mathrm{TGV}}}^2\) we also need the data to bound the entire affine part of the solution. Again, this is very likely for real data. Indeed, in DTI practice, with at least 6 independent diffusion-sensitising gradients, A is an invertible or even over-determined linear operator. In that typical case, the bounds \(f^l_n\) and \(f^u_n\) translate into \(U_n\) being a bounded set.

Solving the Optimisation Problem

The Chambolle–Pock Method

The Chambolle–Pock algorithm is an inertial primal-dual backward-backward splitting method, classified in [18] as the modified primal-dual hybrid gradient method (PDHGM). It solves the minimax problem

$$\begin{aligned} \min _x \max _y\ G(x) + \langle Kx,y\rangle - F^*(y), \end{aligned}$$

where \(G: X \rightarrow \overline{\mathbb {R}}\) and \(F^*: Y \rightarrow \overline{\mathbb {R}}\) are convex, proper, lower semi-continuous functionals on (finite-dimensional) Hilbert spaces X and Y. The operator \(K:X \rightarrow Y\) is linear, although an extension of the method to non-linear K has recently been derived [55]. The PDHGM can also be seen as a preconditioned ADMM (alternating directions method of multipliers); we refer to [18, 47, 56] for reviews of optimisation methods popular in image processing. For step sizes \(\tau ,\sigma >0\), and an over-relaxation parameter \(\omega >0\), each iteration of the algorithm consists of the updates

$$\begin{aligned} u_{i+1}&:=(I+\tau \partial G)^{-1}(u_i - \tau K^* y_{i}), \end{aligned}$$
$$\begin{aligned} {\bar{u}}_{i+1}&:=u_{i+1} + \omega (u_{i+1}-u_i), \end{aligned}$$
$$\begin{aligned} y_{i+1}&:=(I+\sigma \partial F^*)^{-1}(y_i + \sigma K {\bar{u}}_{i+1}). \end{aligned}$$

We should remark that the order of the primal (u) and dual (y) updates here is reversed from the original presentation in [13]. The reason is that when reordered, the updates can, as discovered in [26], be easily written in a proximal point form.

The first and last updates are the backward (proximal) steps for the primal (u) and dual (y) variables, respectively, keeping the other variable fixed. The dual step, however, includes some “inertia” or over-relaxation, as specified by the parameter \(\omega \). Usually \(\omega =1\), which is required for the convergence proofs of the method. If G or \(F^*\) is uniformly convex, then by choosing the step length parameters \(\tau ,\sigma \) and the inertia \(\omega \) adaptively at each iteration, the method can be shown to have convergence rate \(O(1/N^2)\), similarly to Nesterov's optimal gradient method [43]. In the general case the rate is \(O(1/N)\). In practice, the method produces visually pleasing solutions in rather few iterations when applied to image processing problems.
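To make the structure of the updates concrete, the following minimal Python sketch implements the generic iteration with placeholder resolvents. The function and parameter names are ours, not from any particular library, and the step sizes are assumed to satisfy the usual condition \(\tau \sigma \Vert K\Vert ^2 < 1\):

```python
import numpy as np

def pdhgm(K, Kt, prox_G, prox_Fstar, x0, y0, tau, sigma, omega=1.0, iters=500):
    """Generic Chambolle-Pock (PDHGM) iteration, primal update first.

    K, Kt      -- the linear operator and its adjoint, as callables
    prox_G     -- resolvent (I + tau dG)^{-1}, called as prox_G(v, tau)
    prox_Fstar -- resolvent (I + sigma dF*)^{-1}, called as prox_Fstar(v, sigma)
    """
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x_new = prox_G(x - tau * Kt(y), tau)         # primal backward step
        x_bar = x_new + omega * (x_new - x)          # over-relaxation / inertia
        y = prox_Fstar(y + sigma * K(x_bar), sigma)  # dual backward step
        x = x_new
    return x, y

# Toy saddle-point problem: min_x 0.5*||x - b||^2 + ||x||_1 with K = I,
# so F = ||.||_1 and F* is the indicator of the unit sup-norm ball.
b = np.array([3.0, -0.2, 1.0])
x, _ = pdhgm(K=lambda v: v, Kt=lambda v: v,
             prox_G=lambda v, t: (v + t * b) / (1.0 + t),
             prox_Fstar=lambda v, s: np.clip(v, -1.0, 1.0),
             x0=np.zeros(3), y0=np.zeros(3), tau=0.5, sigma=0.5)
# x approaches the soft-thresholding of b, i.e. [2, 0, 0].
```

The toy problem is chosen so that the fixed point can be verified by hand; in the DTI setting the resolvents are the projections and shrinkage operations discussed below.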

When implementing the method, it is crucial that the resolvents \((I+\tau \partial G)^{-1}\) and \((I+\sigma \partial F^*)^{-1}\) can be computed quickly. We recall that they may be written as

$$\begin{aligned} (I+\tau \partial G)^{-1}(u) = \mathop {\hbox {arg min}}\limits _{u'}\left\{ \frac{\Vert u'-u\Vert ^2}{2\tau } + G(u')\right\} . \end{aligned}$$

Usually in applications, these computations turn out to be simple projections or linear operations—or the soft-thresholding operation for the \(L_1\)-norm.
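For instance, the soft-thresholding operation just mentioned is exactly the resolvent of \(\tau \partial \Vert \,\varvec{\cdot }\,\Vert _1\); a minimal sketch:

```python
import numpy as np

def prox_l1(u, tau):
    """Resolvent (I + tau d||.||_1)^{-1}(u): componentwise soft-thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)
```

Components with magnitude below \(\tau \) are set to zero, the rest are shrunk towards zero by \(\tau \).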

As a further implementation note: since the algorithm (17) is formulated in Hilbert spaces (see, however, [28]), while our problems are posed in the Banach space \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^2(\mathbb {R}^3))\), we have to discretise the problems before applying the algorithm. We do this by simple forward-differences discretisation of the operator E with cell width \(h=1\) on a regular rectangular grid corresponding to the image voxels.

Implementation of Deterministic Constraints

We now reformulate the problem (11) of DTI with deterministic error bounds in the form (16). Suppose we have some lower and upper bounds \(s_j^l\leqslant s_j\leqslant s_j^u\) on the DWI signals \(s_j\), (\(j=0,\ldots ,N\)). Then the bounds for \(g_j=\log (s_j/s_0)\) are

$$\begin{aligned} g_j^l=\log \left( s_j^l/s_0^u\right) ;\quad g_j^u=\log \left( s_j^u/s_0^l\right) , \quad (j=1,\ldots ,N), \end{aligned}$$

because \(g_j\) is monotone with respect to \(s_j\) (increasing) and to \(s_0\) (decreasing). We are thus trying to solve

$$\begin{aligned} u=\mathop {\hbox {arg min}}\limits _{u'\in U=\cap U^j} R(u'), \end{aligned}$$


where

$$\begin{aligned} U^j=\left\{ u:g_j^l\leqslant A_ju\leqslant g_j^u\right\} . \end{aligned}$$
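As a small numerical check of the transformed bounds (18), consider the following sketch; the signal values are hypothetical illustration numbers:

```python
import numpy as np

# Hypothetical DWI signal bounds; s0 is the non-diffusion-weighted reference.
s0_l, s0_u = 90.0, 110.0
sj_l = np.array([40.0, 55.0])
sj_u = np.array([60.0, 70.0])

# g_j = log(s_j / s_0) is increasing in s_j and decreasing in s_0, so the
# worst cases pair the smallest numerator with the largest denominator:
gj_l = np.log(sj_l / s0_u)
gj_u = np.log(sj_u / s0_l)

# Any admissible signals stay within the transformed bounds:
s0 = 100.0
sj = np.array([50.0, 60.0])
gj = np.log(sj / s0)
```

This is the sense in which the simple interval structure of the errors is preserved under the monotone (componentwise) logarithmic transformation.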

For ease of notation, we write

$$\begin{aligned} g&=\begin{pmatrix}g_1,&\ldots ,&g_N\end{pmatrix}, \quad \text {and} \nonumber \\ Au&= \begin{pmatrix} A_1 u,&\ldots ,&A_N u\end{pmatrix}, \end{aligned}$$

so that the Stejskal–Tanner equation is satisfied when

$$\begin{aligned} Au=g. \end{aligned}$$

The problem (19) may be rewritten as

$$\begin{aligned} \min _{u'}~ F_0(Au')+R(u'), \end{aligned}$$


where

$$\begin{aligned} F_0(y) = \delta (g^l \leqslant y\leqslant g^u) \end{aligned}$$

with \(\delta (g^l \leqslant y\leqslant g^u)\) denoting the indicator function of the convex set \(\{y :g^l \leqslant y\leqslant g^u\}\). Computing the conjugate, which acts componentwise as

$$\begin{aligned} F_0^*(y)={\left\{ \begin{array}{ll} \langle g^l,y\rangle , &{} y<0 \\ \langle g^u,y\rangle , &{} y\ge 0. \end{array}\right. }, \end{aligned}$$

and also writing

$$\begin{aligned} R(u)=R_0(K_0 u) \end{aligned}$$

for some \(R_0\) and \(K_0\), the problem can further be written in the saddle point form (16) with

$$\begin{aligned} \begin{aligned} G(u)&= 0, \\ K&= \begin{pmatrix} A \\ K_0 \end{pmatrix}, \\ F^*(y,\psi )&= F_0^*(y) + R_0^*(\psi ). \end{aligned} \end{aligned}$$

To apply algorithm (17), we need to compute the resolvents of \(F_0^*\) and \(R_0^*\). For details regarding \(R_0^*\) for \(R={{\mathrm{TGV}}}_{(\beta ,\alpha )}^2\) and \(R=\alpha {{\mathrm{TV}}}\) in the discretised setting, we refer to [57, 58]; here it suffices to note that for \(R=\alpha {{\mathrm{TV}}}\), we have \(K_0=\alpha E\) and \(R_0^*(\phi )\) is the indicator function of the dual ball \(\{\phi \mid \sup _x \Vert \phi (x)\Vert _F \le 1\}\). Thus the resolvent \((I+\sigma \partial R_0^*)^{-1}\) becomes a projection onto the dual ball. The case of \(R={{\mathrm{TGV}}}_{(\beta ,\alpha )}^2\) is similar. For \(F_0^*\) we have

$$\begin{aligned} (I+\sigma \partial F_0^*)^{-1}(y)&= \mathop {\hbox {arg min}}\limits _{y'} F_0^*(y')+\frac{|y-y'|^2}{2\sigma }, \end{aligned}$$

which resolves pointwise at each \(\xi \in {\varOmega }\) into the expression

$$\begin{aligned}{}[(I+\sigma \partial F_0^*)^{-1}(y)](\xi )=S(y(\xi )) \end{aligned}$$


where

$$\begin{aligned} S(y(\xi ))={\left\{ \begin{array}{ll} y(\xi )-g^u(\xi )\sigma , &{} y(\xi )\ge g^u(\xi )\sigma , \\ 0, &{} g^l(\xi )\sigma \le y(\xi )\le g^u(\xi )\sigma , \\ y(\xi )-g^l(\xi )\sigma , &{} y(\xi )\le g^l(\xi )\sigma . \end{array}\right. } \end{aligned}$$

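The three cases of S amount to subtracting from y its pointwise projection onto the band \([g^l\sigma , g^u\sigma ]\). A minimal Python sketch (the function name is ours):

```python
import numpy as np

def prox_F0_star(y, gl, gu, sigma):
    """Resolvent of sigma*dF0*: y minus its projection onto [gl*sigma, gu*sigma].

    Equivalent to the three-case formula for S: for y >= gu*sigma it returns
    y - gu*sigma, for y <= gl*sigma it returns y - gl*sigma, else zero.
    """
    return y - np.clip(y, gl * sigma, gu * sigma)
```

Writing it as the complement of a clipping operation makes the vectorised implementation a one-liner.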
Finally, we note that the saddle point system (16) has to have a solution for the Chambolle–Pock algorithm to converge. In our setting, in particular, we need to find error bounds \(g^l\) and \(g^u\), for which there exists a solution u to

$$\begin{aligned} g^l \leqslant Au \leqslant g^u. \end{aligned}$$

If one uses at most six independent diffusion directions (\(N=6\)), as we will, then for any g there in fact exists a solution to \(g=Au\). The condition (21) thus reduces to \(g^l \leqslant g^u\), which is immediately guaranteed by the monotonicity of (18) and the trivial conditions \(s_j^l \leqslant s_j^u\).

We are thus ready to apply the algorithm (17) to diffusion tensor imaging with deterministic error bounds. For the realisation of (17) for models (8), (9), and (7), we refer to [55, 5759].

Fig. 2

Helix vector field for the principal eigenvector of the ground-truth tensor field

Table 1 Numerical results for the synthetic data
Fig. 3

Synthetic test data results. a Ground-truth plot. b Regression result plot. c–h Plot of a slice of the solution for \(L^2\), non-linear \(L^2\) and constrained models. The legend on the right indicates the colour-coding of directions of the principal eigenvector plotted

Experimental Results

We now study the efficiency of the proposed reconstruction model in comparison to the different \(L^2\)-squared models, i.e. those with a Gaussian error assumption. This is based on synthetic data, for which a ground-truth is available, as well as on a real in vivo DTI data set. First, however, we have to describe in detail the procedure for obtaining upper and lower bounds for real data, when we do not know the true noise model and are unable to perform multiple experiments as required by the theory of Sect. 2.2.

Fig. 4

Fractional anisotropy in greyscale superimposed by principal eigenvector. Legend on left indicates the greyscale intensities of the fractional anisotropy. a Ground-truth. b Regression result. c Linear \(L^2\), discrepancy principle. d Linear \(L^2\), error-optimal. e Non-linear \(L^2\), discrepancy principle. f Non-linear \(L^2\), error-optimal. g Constrained, 95 % confidence intervals. h Constrained, 90 % confidence intervals

Fig. 5

Colour-coded errors of fractional anisotropy and principal eigenvector for the computations on the synthetic test data. Legend on the right indicates the colour-coding of errors between u and \(g_0\) as functions of the principal eigenvector angle error \(\theta =\cos ^{-1}(\langle {\hat{v}}_u,{\hat{v}}_{g_0}\rangle )\) in terms of the hue, and the fractional anisotropy error \(e_{FA} =|FA_u -FA_{g_0}|\) in terms of whiteness. a Regression result. b Linear \(L^2\), discrepancy principle. c Linear \(L^2\), error-optimal. d Non-linear \(L^2\), discrepancy principle. e Non-linear \(L^2\), error-optimal. f Constrained, 95 % confidence intervals. g Constrained, 90 % confidence intervals

Table 2 Numerical results for the in vivo brain data. For the \(L^2\) and non-linear \(L^2\) reconstruction models the free parameter chosen by the parameter choice criterion is the regularisation parameter \(\alpha \), and for the constrained problem it is the confidence interval
Fig. 6

Reconstruction results on the in vivo brain data. a Pseudo-ground-truth plot. b Regression result. c–g Plot of a slice of the solution for \(L^2\), non-linear \(L^2\) and constrained problem approaches. The legend on the bottom-right indicates the colour-coding of directions of the principal eigenvector plotted

Fig. 7

Fractional anisotropy of the corpus callosum in greyscale superimposed by principal eigenvector. Legend on the right indicates the greyscale intensities of the fractional anisotropy. a Pseudo-ground-truth. b Linear \(L^2\), discrepancy principle. c Linear \(L^2\), error-optimal. d Non-linear \(L^2\), discrepancy principle. e Non-linear \(L^2\), error-optimal. f Constrained, 95 % confidence intervals

Estimating Lower and Upper Bounds from Real Data

As we have already discussed, in practice the noise in the measurement signals \({\hat{s}}_j\) is neither Gaussian nor Rician; in fact, we do not know the true noise distribution or the other corruptions. Therefore, we have to estimate the noise distribution from the image background. To do this, we require a known correspondence between the measurement, the noise and the true value. As we have no better assumption available, we use the standard one of additive noise. Continuing in the statistical setting of Sect. 2.2, we now describe the procedure, working on discrete images expressed as vectors \({\hat{f}}={\hat{s}}_j \in \mathbb {R}^n\) for some fixed \(j \in \{0,1,\dots ,N\}\). We use superscripts to denote the voxel indices, that is, \({\hat{f}}=({\hat{f}}^1,\ldots ,{\hat{f}}^n)\).

In the i-th voxel, the measured value \({\hat{f}}^i\) is the sum of the true value \(f^i\) and additive noise \(\nu ^i\):

$$\begin{aligned} {\hat{f}}^i = f^i + \nu ^i. \end{aligned}$$

All \(\nu ^i\) are assumed independent and identically distributed (i.i.d.), but their distribution is unknown. If we knew the true underlying cumulative distribution function F of the noise, we could choose a confidence parameter \(\theta \in (0,1)\) and use F to calculate \(\nu _{\theta /2}, \nu _{1-\theta /2}\) such that (see Footnote 1)

$$\begin{aligned} P(\nu _{\theta /2} \leqslant \nu ^i \leqslant \nu _{1-\theta /2}) = 1-\theta . \end{aligned}$$

Let us instead proceed non-parametrically, and divide the whole image into two groups of voxels: the ones belonging to the background region and the rest. For simplicity, let the indices \(i=1,\ldots ,k\), (\(k < n\)), stand for the background voxels. In this region, we have \(f^i = 0\) and \({\hat{f}}^i = \nu ^i\). Therefore, the background region provides us with a number of independent samples from the unknown distribution of \(\nu \). The Dvoretzky–Kiefer–Wolfowitz inequality [17, 37, 41] states that

$$\begin{aligned} P\bigl (\sup _t |F(t)-F_k(t)| > \epsilon \bigr ) \le 2e^{-2k\epsilon ^2}, \end{aligned}$$

for the empirical cumulative distribution function

$$\begin{aligned} F_k(t) :=\frac{1}{k} \sum _{i=1}^k \chi _{(-\infty , {\hat{f}}^i]}(t). \end{aligned}$$

Therefore, computing \(\nu _{\theta /2}\) and \(\nu _{1-\theta /2}\) such that

$$\begin{aligned} P_k(\nu _{\theta /2} \leqslant \nu \leqslant \nu _{1-\theta /2}) = 1-\theta , \end{aligned}$$

we also have

$$\begin{aligned} P_k\left( f^i + \nu _{\theta /2} \leqslant {\hat{f}}^i \leqslant f^i + \nu _{1-\theta /2}\right) = 1-\theta . \end{aligned}$$

We may therefore use the values

$$\begin{aligned} {\hat{f}}^{l,i} = {\hat{f}}^i - \nu _{1-\theta /2}, \quad {\hat{f}}^{u,i} = {\hat{f}}^i - \nu _{\theta /2}, \end{aligned}$$

as lower and upper bounds for the true values \(f^i\) outside the background region.

The Dvoretzky–Kiefer–Wolfowitz inequality implies that the interval estimates converge to the true intervals, determined by (22), as the number of background pixels k increases with the image size n. This procedure, with large k, will therefore provide an estimate of a single-experiment (\(m=1\)) confidence interval for \(f^i\). We note that this procedure will, however, not yield the convergence of the interval estimate \([{\hat{f}}^{l,i}, {\hat{f}}^{u,i}]\) to the true data; for that we would need multiple experiments, i.e. multiple sample images (\(m>1\)), not just agglomeration of the background voxels into a single noise distribution estimate. In practice, however, we can only afford a single experiment (\(m=1\)), and cannot go to the limit.
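The estimation procedure above can be sketched as follows. The noise distribution and the foreground measurement values below are synthetic stand-ins for illustration only; in practice one only has the k background samples, from an unknown distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the k background voxels: i.i.d. samples fhat^i = nu^i of the
# noise distribution (here synthetic Gaussian with sigma = 2 for illustration).
k = 6000
background = rng.normal(0.0, 2.0, size=k)

theta = 0.05                                 # 95% confidence level
# Empirical quantiles of the background noise samples:
nu_lo, nu_hi = np.quantile(background, [theta / 2, 1 - theta / 2])

# Interval estimate for the true value f^i of a foreground measurement
# fhat^i = f^i + nu^i (the measurement values here are hypothetical):
fhat = np.array([10.0, 12.5])
f_lower = fhat - nu_hi
f_upper = fhat - nu_lo
```

By the Dvoretzky–Kiefer–Wolfowitz inequality, the empirical quantiles `nu_lo` and `nu_hi` approach the true quantiles as k grows, which is what justifies using the background region as a surrogate for the unknown noise distribution.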

Verification of the Approach with Synthetic Data

To verify the effectiveness of the considered approach and to compare it to the other models, we use synthetic data. For the ground-truth tensor field \(u_{g.t.}\) we take a helix region in a 3D box of \(100 \times 100 \times 30\) voxels, and choose the tensor at each point inside the helix in such a way that the principal eigenvector coincides with the helix direction (Fig. 2). The helix region is described by the following equations:

$$\begin{aligned}&x=(R+r\cos (\theta ))\cos (\phi ),\\&y=(R+r\cos (\theta ))\sin (\phi ),\\&z=r\sin (\theta )+\phi /\phi _{\max },\\&\phi \in [0,\phi _{\max }],\quad r \in [0,r_{\max }], \quad \theta \in [0,2\pi ]. \end{aligned}$$

The vector direction at every point coincides with the helix direction:

$$\begin{aligned} \mathbf {r}=\begin{pmatrix}-R\sin (\phi )\\ R\cos (\phi )\\ 1/\phi _{\max }\end{pmatrix}. \end{aligned}$$

We take the parameters \(R=0.3, \phi _{\max }=4\pi , r_{\max }=0.07\) in this numerical experiment.
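The construction of the ground-truth field can be sketched as follows; the eigenvalues 2.0 and 0.5 used to build the tensor are hypothetical illustration values, not taken from the experiment:

```python
import numpy as np

# Parameters used in the experiment.
R, phi_max, r_max = 0.3, 4 * np.pi, 0.07

def helix_point(phi, r, theta):
    """A point of the helix tube, phi in [0, phi_max], r in [0, r_max]."""
    x = (R + r * np.cos(theta)) * np.cos(phi)
    y = (R + r * np.cos(theta)) * np.sin(phi)
    z = r * np.sin(theta) + phi / phi_max
    return np.array([x, y, z])

def helix_direction(phi):
    """Unit vector along the helix tangent r = (-R sin(phi), R cos(phi), 1/phi_max)."""
    t = np.array([-R * np.sin(phi), R * np.cos(phi), 1.0 / phi_max])
    return t / np.linalg.norm(t)

def helix_tensor(phi, lam1=2.0, lam2=0.5):
    """Symmetric 2-tensor whose principal eigenvector is the helix direction;
    the eigenvalues lam1 > lam2 are hypothetical illustration values."""
    v = helix_direction(phi)
    return lam1 * np.outer(v, v) + lam2 * (np.eye(3) - np.outer(v, v))
```

Building the tensor as \(\lambda _1 v v^T + \lambda _2 (I - v v^T)\) guarantees by construction that v is the principal eigenvector with eigenvalue \(\lambda _1\).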

We apply the forward operators \(T_j(u_{g.t.})\) for each \(j=0, \ldots , 6\) to obtain the data \(s_j(x)\). We then add Rician noise to these data, \(\bar{s}_j=s_j+\delta \), with \(\sigma =2\); this corresponds to \(PSNR \approx 27\,\hbox {dB}\).

We apply several models for solving the inverse problem of reconstructing u: the linear and non-linear \(L^2\) approaches (8) and (7), and the constrained problem (11). As the regulariser we use \(R={{\mathrm{TGV}}}^2_{(0.9\alpha ,\alpha )}\), where the choice \(\beta =0.9\alpha \) was made somewhat arbitrarily; it nevertheless yielded good results for all the models. This is slightly lower than the range \([1, 1.5]\alpha \) discovered in comprehensive experiments for other imaging modalities [7, 38].

For the linear and non-linear \(L^2\) models (8) and (7), respectively, the regularisation parameter \(\alpha \) is chosen either by a version of the discrepancy principle for inconsistent problems [52], or optimally with respect to the \(\Vert \,\varvec{\cdot }\,\Vert _{F,2}\) distance between the solution and the ground-truth. In the case of the discrepancy principle, \(\alpha \) was chosen such that the following equality holds:

$$\begin{aligned} {\varDelta }\rho (\alpha ) :=\sum _j||T_j(u)-\bar{s_j}||^2-\tau \sum _j||\bar{s_j}-s_j||^2=0. \end{aligned}$$
Fig. 8

Colour-coded errors of fractional anisotropy and principal eigenvector for the computations on the in vivo brain data. Legend on the right indicates the colour-coding of errors between u and \(g_0\) as functions of the principal eigenvector angle error \(\theta =\cos ^{-1}(\langle {\hat{v}}_u,{\hat{v}}_{g_0}\rangle )\) in terms of the hue, and the fractional anisotropy error \(e_{FA} =|FA_u -FA_{g_0}|\) in terms of whiteness. a Linear \(L^2\), discrepancy principle. b Linear \(L^2\), error-optimal. c Non-linear \(L^2\), discrepancy principle. d Non-linear \(L^2\), error-optimal. e Constrained, 95 % confidence intervals

Fig. 9

Tractography visualisation results on the in vivo brain data. a Pseudo-ground-truth plot. b Regression result. c–f Plot of a slice of the solution for \(L^2\), non-linear \(L^2\) and constrained problem approaches

We find \(\alpha \) by solving this equation numerically using the bisection method. We start by finding \(\alpha _1,\alpha _2\) such that \({\varDelta }\rho (\alpha _1)>0\) and \({\varDelta }\rho (\alpha _2)<0\). We then calculate \({\varDelta }\rho (\alpha _3)\) for \(\alpha _3=\frac{\alpha _1+\alpha _2}{2}\) and, depending on its sign, replace either \(\alpha _1\) or \(\alpha _2\) with \(\alpha _3\). We repeat this procedure until the stopping criterion is reached.

As the stopping criterion we use \(|{\varDelta }\rho (\alpha )|<\epsilon \). We use \(\tau =1.05\), \(\epsilon =0.01\) for the linear and \(\tau =1.2\), \(\epsilon =0.0001\) for the non-linear \(L^2\) solution. A value of \(\tau \) yielding a reasonable degree of smoothness was chosen by trial and error; it differs for the non-linear model, reflecting the different non-linear objective in the discrepancy principle. For the constrained problem we calculate \(\theta =90\), 95 and \(99\,\%\) confidence intervals to generate the upper and lower bounds. We deviate slightly from the approach of Sect. 2.2, however: since we do not know the true underlying distribution, which fails to be Rician as illustrated in Fig. 1, we do not use it to calculate the confidence intervals, but instead use the estimation procedure described in Sect. 5.1. We stress that we only have a single sample of each signal \(s_j\), so we are unable to verify any asymptotic estimation properties.
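The bisection loop for the discrepancy principle can be sketched generically; `delta_rho` stands for any function with a sign change, such as \({\varDelta }\rho \) above, and the surrogate used in the example is hypothetical:

```python
def bisect_discrepancy(delta_rho, a1, a2, eps=0.01, max_iter=100):
    """Bisection for |delta_rho(alpha)| < eps, assuming
    delta_rho(a1) > 0 > delta_rho(a2)."""
    for _ in range(max_iter):
        a3 = 0.5 * (a1 + a2)
        v = delta_rho(a3)
        if abs(v) < eps:
            return a3
        if v > 0:
            a1 = a3       # keep the sign change bracketed
        else:
            a2 = a3
    return 0.5 * (a1 + a2)

# Example with a simple monotone surrogate whose root is alpha = 1:
alpha = bisect_discrepancy(lambda a: 1.0 - a, a1=0.0, a2=3.0, eps=1e-3)
```

Each iteration halves the bracketing interval, so the tolerance is reached in a logarithmic number of evaluations of the (expensive) discrepancy functional.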

The numerical results are in Table 1 and Figs. 3, 4, and 5; the first of these figures shows the colour-coded principal eigenvector of the reconstruction, the second the fractional anisotropy and principal eigenvectors, and the last the errors in the latter two, in a colour-coded manner. All plots are masked to represent only the non-zero region. The fractional anisotropy of a field u of 2-tensors on \({\varOmega }\subset \mathbb {R}^m\) is defined as

$$\begin{aligned} FA _u(x) =\left( \frac{m}{m-1}\right) ^{1/2} \frac{\Bigl (\sum _{i=1}^m (\lambda _i-{\bar{\lambda }})^2\Bigr )^{1/2}}{\Bigl ({\sum _{i=1}^m \lambda _i^2}\Bigr )^{1/2}} \in [0, 1], \end{aligned}$$

with \(\lambda _1,\ldots ,\lambda _m\) denoting the eigenvalues of u(x) at a point \(x \in {\varOmega }\). It measures how far the ellipsoid prescribed by the eigenvalues and eigenvectors is from a sphere, with \(FA _u(x)=0\) corresponding to a full sphere, and \(FA _u(x)=1\) corresponding to a degenerate object not having full dimension.
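A direct implementation of this quantity from the eigenvalues (with the normalisation factor \(\sqrt{m/(m-1)}\) that yields the range [0, 1]) can be sketched as:

```python
import numpy as np

def fractional_anisotropy(u):
    """FA of a symmetric 2-tensor u, computed from its eigenvalues; the
    factor sqrt(m/(m-1)) normalises the range to [0, 1]."""
    lam = np.linalg.eigvalsh(u)
    m = lam.size
    den = np.linalg.norm(lam)
    if den == 0.0:
        return 0.0
    return np.sqrt(m / (m - 1)) * np.linalg.norm(lam - lam.mean()) / den

# An isotropic tensor (a sphere) gives FA = 0; a rank-one tensor gives FA = 1.
```

The two extreme cases serve as a sanity check of the normalisation.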

As we can see, the non-linear approach (7) performs overall the best by a wide margin in terms of the pointwise Frobenius error, i.e. the error in \(\Vert \cdot \Vert _{F,2}\), expressed as a PSNR in Table 1. Interestingly, however, the constraint-based approach (11) reconstructs the principal eigenvector angle much better, with a comparable reconstruction of its magnitude. Indeed, the 95 % confidence interval in Figs. 3(g) and 4(g) suggests a nearly perfect reconstruction in terms of smoothness. However, the Frobenius PSNR in Table 1 for this approach is worse than that of simple unregularised inversion by regression. The problem is revealed by Fig. 5(f): the large white cloudy areas indicate huge fractional anisotropy errors, while at the same time the principal eigenvector angle errors, expressed in colour, are much lower than for the other approaches. Good reconstruction of the principal eigenvector is important for tractography, i.e. the reconstruction of neural pathways in the brain. One explanation for our good results is that the regulariser completely governs the solution in areas where the error bounds are inactive due to generally low errors. This results in very smooth reconstructions, which is desirable in the present case, as our synthetic tensor field is also smooth within the helix.

Results with In Vivo Brain Imaging Data

We now wish to study the proposed regularisation model on a real in vivo diffusion tensor image. Our data are those of a human brain; the measurements of a volunteer were performed on a clinical 3T system (Siemens Magnetom TIM Trio, Erlangen, Germany) with a 32-channel head coil. A 2D diffusion-weighted single-shot EPI sequence was used, with diffusion-sensitising gradients applied in 12 independent directions (\(b = 1000\,\hbox {s}/\hbox {mm}^2\)), together with an additional reference scan without diffusion weighting; the sequence parameters were \(\hbox {TR}=7900\,\hbox {ms}\), \(\hbox {TE}=94\,\hbox {ms}\), flip angle \(90^\circ \). Each slice of the 3D dataset has in-plane resolution \(1.95\,\hbox {mm} \times 1.95\,\hbox {mm}\), with a total of \(128 \times 128\) pixels. The total number of slices is 60, with a slice thickness of 2 mm. The dataset consists of 4 repeated measurements. The GRAPPA acceleration factor is 2. Prior to the reconstruction of the diffusion tensor, eddy-current correction was performed with FSL [50]. Written informed consent was obtained from the volunteer before the examination.

For error bounds calculation according to the procedure of Sect. 5.1, to avoid systematic bias near the brain, we only use about 0.6 % of the total volume near the borders, or roughly \(k \approx 6000\) voxels.

To estimate the errors for all the considered reconstruction models, for each gradient direction \(b_i\) we use only one of the four duplicate measurements. We then calculate the errors using a somewhat less than ideal pseudo-ground-truth: the linear regression reconstruction from all the available measurements.

The results are in Table 2 and Figs. 6, 7, and 8, again with the first of the figures showing the colour-coded principal eigenvector of the reconstruction, the second showing the fractional anisotropy and principal eigenvectors and the last one the errors in the latter two, in a colour-coded manner. Again, all plots are masked to represent only the non-zero region. In the figures, we concentrate on error bounds based on 95 % confidence intervals, as the results for the 90 and 99 % cases do not differ significantly according to Table 2.

This time, the linear \(L^2\) approach (8) gives the best overall reconstruction (Frobenius PSNR), while the non-linear \(L^2\) approach (7) clearly has the best principal eigenvector angle reconstruction apart from the regression, whose apparent performance should not be over-interpreted given our regression-based pseudo-ground-truth. The constraints-based approach (11) with 95 % confidence intervals is, however, not far behind in terms of the numbers. A more detailed study of the corpus callosum in Fig. 8 (small picture-in-picture) and Fig. 7, however, indicates a better reconstruction of this important region by the non-linear approach; the constrained approach has some very short vectors there in the white region. Naturally, these results on the in vivo data should be taken with a grain of salt, as we only have a somewhat unreliable pseudo-ground-truth available for comparison purposes.

Conclusions from the Numerical Experiments

Our conclusion is that the error bounds-based approach is a feasible alternative to standard modelling with incorrect Gaussian assumptions. It can produce good reconstructions, although the non-linear \(L^2\) approach of [55] is possibly slightly more reliable. The latter does, however, in principle depend on a good initialisation of the optimisation method, unlike the convex bounds-based approach.

Further theoretical work will be undertaken to extend the partial-order-based approach to modelling errors in linear operators to the non-lattice case of the semidefinite partial order for symmetric matrices, which will allow us to consider problems of diffusion MRI with errors in the forward operator.

It also remains to be investigated whether the error bounds approach should be combined with an alternative, novel regulariser that would ameliorate the fractional anisotropy errors the approach exhibits. From the practical point of view of using the reconstructed tensor field for basic tractography methods based solely on principal eigenvectors, these errors are, however, not that critical. As pointed out by one of the reviewers, the situation could differ with more recent geodesic tractography methods [20, 21, 25] employing the full tensor. We provide basic principal eigenvector tractography results for reference in Fig. 9, without attempting to interpret them extensively; it suffices to say that the results look comparable. With this in mind, the error bounds approach produces a very good reconstruction of the direction of the principal eigenvectors, although we saw some problems with the magnitude within the corpus callosum.


  1. Recall that for a random variable X with a cumulative distribution function F, the quantile function \(F^{-1}\) returns a number \(x_\theta =F^{-1}(\theta )\) such that \(P(X \leqslant x_\theta ) = \theta \).

  2. Recall that an indexed subset \(\{ x_\tau :\tau \in \{\tau \} \}\) of an ordered vector space X is called directed upwards if for any pair \(\tau _1,\,\tau _2 \in \{\tau \}\) there exists \(\tau _3 \in \{\tau \}\) such that \(x_{\tau _3} \geqslant x_{\tau _1}\) and \(x_{\tau _3} \geqslant x_{\tau _2}\).


  1. Aksoy, M., Forman, C., Straka, M., Skare, S., Holdsworth, S., Hornegger, J., Bammer, R.: Real-time optical motion correction for diffusion tensor imaging. Magn. Reson. Med. 66(2), 366–378 (2011). doi:10.1002/mrm.22787

  2. Ambrosio, L., Fusco, N., Pallara, D.: Functions of Bounded Variation and Free Discontinuity Problems. Oxford University Press, Oxford (2000)

  3. Arsigny, V., Fillard, P., Pennec, X., Ayache, N.: Log-Euclidean metrics for fast and simple calculus on diffusion tensors. Magn. Reson. Med. 56(2), 411–421 (2006)

  4. Basser, P.J., Jones, D.K.: Diffusion-tensor MRI: theory, experimental design and data analysis—a technical review. NMR Biomed. 15(7–8), 456–467 (2002). doi:10.1002/nbm.783

  5. Basu, S., Fletcher, T., Whitaker, R.: Rician noise removal in diffusion tensor MRI. In: Larsen, R., Nielsen, M., Sporring, J. (eds.) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006, Lecture Notes in Computer Science, vol. 4190, pp. 117–125. Springer, Berlin (2006). doi:10.1007/11866565_15

  6. Bačák, M., Bergmann, R., Steidl, G., Weinmann, A.: A second order non-smooth variational model for restoring manifold-valued images (2015)

  7. Benning, M., Gladden, L., Holland, D., Schönlieb, C.B., Valkonen, T.: Phase reconstruction from velocity-encoded MRI measurements—a survey of sparsity-promoting variational approaches. J. Magn. Reson. 238, 26–43 (2014). doi:10.1016/j.jmr.2013.10.003

  8. Bredies, K.: Symmetric tensor fields of bounded deformation. Annali di Matematica Pura ed Applicata 192(5), 815–851 (2013). doi:10.1007/s10231-011-0248-4

  9. Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3, 492–526 (2011). doi:10.1137/090769521

  10. Bredies, K., Kunisch, K., Valkonen, T.: Properties of \(L^1\)-\(\mathrm{TGV}^2\): the one-dimensional case. J. Math. Anal. Appl. 398, 438–454 (2013). doi:10.1016/j.jmaa.2012.08.053

  11. Bredies, K., Valkonen, T.: Inverse problems with second-order total generalized variation constraints. In: Proceedings of the 9th International Conference on Sampling Theory and Applications (SampTA) 2011, Singapore (2011)

  12. Burger, M., Lucka, F.: Maximum a posteriori estimates in linear inverse problems with log-concave priors are proper Bayes estimators. Inverse Prob. 30(11), 114004 (2014)

  13. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011). doi:10.1007/s10851-010-0251-1

  14. Chefd’hotel, C., Tschumperlé, D., Deriche, R., Faugeras, O.: Constrained flows of matrix-valued functions: application to diffusion tensor regularization. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) Computer Vision–ECCV 2002, Lecture Notes in Computer Science, vol. 2350, pp. 251–265. Springer, Berlin (2002). doi:10.1007/3-540-47969-4_17

  15. Cox, D., Hinkley, D.: Theoretical Statistics. Taylor & Francis, London (1979)

  16. Dunford, N., Schwartz, J.T.: Linear Operators, Part I General Theory. Interscience Publishers, Hoboken (1958)

  17. Dvoretzky, A., Kiefer, J., Wolfowitz, J.: Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Ann. Math. Stat. 27(3), 642–669 (1956). doi:10.1214/aoms/1177728174

  18. Esser, E., Zhang, X., Chan, T.F.: A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sci. 3(4), 1015–1046 (2010). doi:10.1137/09076934X

  19. Federer, H.: Geometric Measure Theory. Springer, New York (1969)

  20. Fuster, A., Dela Haije, T., Tristán-Vega, A., Plantinga, B., Westin, C.F., Florack, L.: Adjugate diffusion tensors for geodesic tractography in white matter. J. Math. Imaging Vis. 54(1), 1–14 (2016). doi:10.1007/s10851-015-0586-8

  21. Fuster, A., Tristan-Vega, A., Haije, T., Westin, C.F., Florack, L.: A novel Riemannian metric for geodesic tractography in DTI. In: Schultz, T., Nedjati-Gilani, G., Venkataraman, A., O’Donnell, L., Panagiotaki, E. (eds.) Computational Diffusion MRI and Brain Connectivity, Mathematics and Visualization, pp. 97–104. Springer, New York (2014). doi:10.1007/978-3-319-02475-2_9

  22. Getreuer, P., Tong, M., Vese, L.A.: A variational model for the restoration of MR images corrupted by blur and Rician noise. In: Advances in Visual Computing, Lecture Notes in Computer Science, vol. 6938, pp. 686–698. Springer, Berlin (2011). doi:10.1007/978-3-642-24028-7_63

  23. Grasmair, M., Haltmeier, M., Scherzer, O.: The residual method for regularizing ill-posed problems. Appl. Math. Comp. 218(6), 2693–2710 (2011). doi:10.1016/j.amc.2011.08.009

  24. Gudbjartsson, H., Patz, S.: The Rician distribution of noisy MRI data. Magn. Reson. Med. 34(6), 910–914 (1995)

  25. Hao, X., Whitaker, R., Fletcher, P.: Adaptive Riemannian metrics for improved geodesic tracking of white matter. In: Székely, G., Hahn, H.K. (eds.) Information Processing in Medical Imaging, Lecture Notes in Computer Science, vol. 6801, pp. 13–24. Springer, Berlin (2011). doi:10.1007/978-3-642-22092-0_2

  26. He, B., Yuan, X.: Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective. SIAM J. Imaging Sci. 5(1), 119–149 (2012). doi:10.1137/100814494

  27. Herbst, M., Maclaren, J., Weigel, M., Korvink, J., Hennig, J., Zaitsev, M.: Prospective motion correction with continuous gradient updates in diffusion weighted imaging. Magn. Reson. Med. (2011). doi:10.1002/mrm.23230

  28. Hohage, T., Homann, C.: A generalization of the Chambolle-Pock algorithm to Banach spaces with applications to inverse problems (2014)

  29. Kingsley, P.: Introduction to diffusion tensor imaging mathematics: Parts I-III. Concepts Magn. Reson. A 28(2), 101–179 (2006). doi:10.1002/cmr.a.20048

  30. Knoll, F., Clason, C., Bredies, K., Uecker, M., Stollberger, R.: Parallel imaging with nonlinear reconstruction using variational penalties. Magn. Reson. Med. 67(1), 34–41 (2012)

  31. Knoll, F., Raya, J.G., Halloran, R.O., Baete, S., Sigmund, E., Bammer, R., Block, T., Otazo, R., Sodickson, D.K.: A model-based reconstruction for undersampled radial spin-echo DTI with variational penalties on the diffusion tensor. NMR Biomed. 28(3), 353–366 (2015). doi:10.1002/nbm.3258

  32. Korolev, Y.: Making use of a partial order in solving inverse problems: II. Inverse Prob. 30(8), 085003 (2014)

  33. Korolev, Y., Yagola, A.: On inverse problems in partially ordered spaces with a priori information. J. Inverse Ill-Posed Prob. 20(4), 567–573 (2012)

  34. Korolev, Y., Yagola, A.: Making use of a partial order in solving inverse problems. Inverse Prob. 29(9), 095012 (2013)

  35. Lassas, M., Saksman, E., Siltanen, S.: Discretization-invariant Bayesian inversion and Besov space priors. Inverse Prob. Imaging 3(1), 87–122 (2009). doi:10.3934/ipi.2009.3.87

  36. Lassas, M., Siltanen, S.: Can one use total variation prior for edge-preserving Bayesian inversion? Inverse Prob. 20(5), 1537 (2004). doi:10.1088/0266-5611/20/5/013

  37. Lehmann, E., Romano, J.: Testing Statistical Hypotheses. Springer Texts in Statistics. Springer, New York (2008)

  38. de Los Reyes, J.C., Schönlieb, C.B., Valkonen, T.: Bilevel parameter learning for higher-order total variation regularisation models. arXiv:1508.07243 (2015)

  39. Luxemburg, W., Zaanen, A.: Riesz Spaces. North-Holland Publishing Company, Amsterdam (1971)

  40. Martín, A., Schiavi, E.: Automatic total generalized variation-based DTI Rician denoising. In: Image Analysis and Recognition, Lecture Notes in Computer Science, vol. 7950, pp. 581–588. Springer, Berlin (2013). doi:10.1007/978-3-642-39094-4_66

  41. Massart, P.: The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. Ann. Prob. 18(3), 1269–1283 (1990). doi:10.1214/aop/1176990746

  42. Meyer, Y.: Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. American Mathematical Society, Boston (2001)

  43. Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^2)\). Sov. Math. Dokl. 27(2), 372–376 (1983)

  44. Pan, X., Sidky, E.Y., Vannier, M.: Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? Inverse Prob. 25(12), 123009 (2009). doi:10.1088/0266-5611/25/12/123009

  45. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992)

  46. Schaefer, H.: Banach Lattices and Positive Operators. Springer, New York (1974)

  47. Setzer, S.: Operator splittings, Bregman methods and frame shrinkage in image processing. Int. J. Comput. Vis. 92(3), 265–280 (2011). doi:10.1007/s11263-010-0357-3

  48. Setzer, S., Steidl, G., Popilka, B., Burgeth, B.: Variational methods for denoising matrix fields. In: Weickert, J., Hagen, H. (eds.) Visualization and Processing of Tensor Fields, pp. 341–360. Springer, New York (2009)

  49. Shiryaev, A.N.: Probability. Graduate Texts in Mathematics. Springer, New York (1996)

  50. Smith, S.M., Jenkinson, M., Woolrich, M.W., Beckmann, C.F., Behrens, T.E.J., Johansen-Berg, H., Bannister, P.R., Luca, M.D., Drobnjak, I., Flitney, D.E., Niazy, R.K., Saunders, J., Vickers, J., Zhang, Y., Stefano, N.D., Brady, J.M., Matthews, P.M.: Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23(Suppl 1), S208–S219 (2004). doi:10.1016/j.neuroimage.2004.07.051

  51. Temam, R.: Mathematical Problems in Plasticity. Gauthier-Villars, Paris (1985)

  52. Tikhonov, A.N., Goncharsky, A.V., Stepanov, V.V., Yagola, A.G.: Numerical Methods for the Solution of Ill-Posed Problems. Kluwer, Dordrecht (1995)

  53. Tournier, J.D., Mori, S., Leemans, A.: Diffusion tensor imaging and beyond. Magn. Reson. Med. 65(6), 1532–1556 (2011). doi:10.1002/mrm.22924

  54. Tschumperlé, D., Deriche, R.: Diffusion tensor regularization with constraints preservation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 948–953 (2001)

  55. Valkonen, T.: A primal-dual hybrid gradient method for non-linear operators with applications to MRI. Inverse Prob. 30(5), 055012 (2014). doi:10.1088/0266-5611/30/5/055012

  56. Valkonen, T.: Big images. In: Emrouznejad, A. (ed.) Big Data Optimization: Recent Developments and Challenges, Studies in Big Data. Springer, New York (2015). Accepted

  57. Valkonen, T., Bredies, K., Knoll, F.: Total generalised variation in diffusion tensor imaging. SIAM J. Imaging Sci. 6(1), 487–525 (2013). doi:10.1137/120867172

  58. Valkonen, T., Knoll, F., Bredies, K.: TGV for diffusion tensors: a comparison of fidelity functions. J. Inverse Ill-Posed Prob. 21, 355–377 (2013). doi:10.1515/jip-2013-0005. Special issue for IP:M&S 2012, Antalya, Turkey

  59. Valkonen, T., Liebmann, M.: GPU-accelerated regularisation of large diffusion tensor volumes. Computing 95, 771–784 (2013). doi:10.1007/s00607-012-0277-x. Special issue for ESCO2012, Pilsen, Czech Republic

  60. Weickert, J.: Anisotropic Diffusion in Image Processing, vol. 1. Teubner, Stuttgart (1998)



While at the Center for Mathematical Modelling of the Escuela Politécnica Nacional in Quito, Ecuador, T. Valkonen has been supported by a Prometeo scholarship of the Senescyt (Ecuadorian Ministry of Science, Technology, Education, and Innovation). In Cambridge, T. Valkonen has been supported by the EPSRC Grants No. EP/J009539/1 “Sparse & Higher-order Image Restoration” and No. EP/M00483X/1 “Efficient computational tools for inverse imaging problems”. A. Gorokh and Y. Korolev are grateful to the RFBR (Russian Foundation for Basic Research) for partial financial support (Projects 14-01-31173 and 14-01-91151). The authors would also like to thank Karl Koschutnig for the in vivo dataset, Kristian Bredies for scripts used to generate the tractography images and Florian Knoll for many inspiring discussions.


Corresponding author

Correspondence to Tuomo Valkonen.


Appendix 1: A Data Statement for the EPSRC

The MRI scans used for this publication are used by courtesy of the producer. Data that are, for all intents and purposes, similar are widespread and can easily be produced by measuring a human subject with an MRI scanner. Our source codes are archived at

Appendix 2: Notation and Techniques

We recall some basic, not completely standard, mathematical notation and concepts in this appendix. We begin with partially ordered vector spaces, following the book [46]. Then we proceed to tensor calculus and tensor fields of bounded variation and of bounded deformation. These are also covered in more detail for the diffusion tensor imaging application in [57].

Banach Lattices

A linear space X endowed with a partial order relation \(\leqslant \) is called an ordered vector space if the partial order agrees with linear operations in the following way:

$$\begin{aligned} \begin{aligned}&x \leqslant y&\implies&x + z \leqslant y+z \quad&\forall x,y,z \in X, \\&x \leqslant y&\implies&\lambda x \leqslant \lambda y \quad&\forall x,y \in X \text { and } \lambda \in {\mathbb {R}}_+. \end{aligned} \end{aligned}$$

An ordered vector space is called a vector lattice if each pair of elements \(x,y \in X\) has a supremum \(x \vee y \in X\) and an infimum \(x \wedge y \in X\). The supremum of two elements \(x, y\) of a vector lattice X is the element \(z = x \vee y\) with the following properties: \(z \geqslant x\), \(z \geqslant y\), and for every \({\tilde{z}} \in X\) such that \({\tilde{z}} \geqslant x\) and \({\tilde{z}} \geqslant y\) we have \({\tilde{z}} \geqslant z\).

For any \(x \in X\), the element \(x_+ = x \vee 0\) is called its positive part, the element \(x_- = (-x) \vee 0 = (-x)_+\) is called its negative part and the element \(|x| = x_+ + x_-\) is its absolute value. The equalities \(x = x_+ - x_-\) and \(|x| = x \vee (-x)\) hold for any \(x \in X\).
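In a concrete lattice such as \(\mathbb{R}^n\) with the componentwise order, these identities can be verified elementwise. The following NumPy sketch (an illustration, not from the paper, with an arbitrary test vector) checks \(x = x_+ - x_-\) and \(|x| = x \vee (-x)\).

```python
import numpy as np

# An arbitrary element of R^4 with the componentwise (lattice) order.
x = np.array([3.0, -1.5, 0.0, -2.0])

x_pos = np.maximum(x, 0.0)     # positive part x_+ = x v 0
x_neg = np.maximum(-x, 0.0)    # negative part x_- = (-x) v 0
abs_x = x_pos + x_neg          # absolute value |x| = x_+ + x_-

# The identities x = x_+ - x_-  and  |x| = x v (-x) hold elementwise.
assert np.allclose(x, x_pos - x_neg)
assert np.allclose(abs_x, np.maximum(x, -x))
assert np.allclose(abs_x, np.abs(x))
```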

It is obvious that suprema and infima exist for any finite number of elements of a vector lattice. A vector lattice X is said to be order complete if any subset of X that is bounded from above has a supremum.

Let X and Y be ordered vector spaces. A linear operator \(U :X \rightarrow Y\) is called positive, if \(x \geqslant _X 0\) implies \(Ux \geqslant _Y 0\). An operator U is called regular, if it can be written as \(U = U_1 - U_2\), where \(U_1\) and \(U_2\) are positive operators.

Denote the linear space of all regular operators \(X \rightarrow Y\) by \(L^\sim (X,Y)\). A partial order can be introduced in \(L^\sim (X,Y)\) in the following way: \(U_1 \geqslant U_2\), if \(U_1 - U_2\) is a positive operator. If X and Y are vector lattices and Y is order complete, then \(L^\sim (X,Y)\) is also an order complete vector lattice.

A norm \(\Vert \cdot \Vert \) defined in a vector lattice X is called monotone if \(|x| \leqslant |y|\) implies \(\Vert x\Vert \leqslant \Vert y\Vert \). A vector lattice endowed with a monotone norm is called a Banach lattice if it is norm complete. If X and Y are Banach lattices, then all operators in \(L^\sim (X,Y)\) are continuous.

Let us list some examples of Banach lattices. The space of continuous functions \(C({\varOmega })\), where \({\varOmega }\subset {\mathbb {R}}^n\), is a Banach lattice under the natural pointwise ordering: \(f \geqslant _C g\) if and only if \(f(x) \geqslant g(x)\) for all \(x \in {\varOmega }\). The spaces \(L^p({\varOmega })\), \(1 \leqslant p \leqslant \infty \), are also Banach lattices under the following partial ordering: \(f \geqslant _{L^p} g\) if and only if \(f(x) \geqslant g(x)\) almost everywhere in \({\varOmega }\). With this partial order, \(L^p({\varOmega })\), \(1 \leqslant p \leqslant \infty \), are order complete Banach lattices. The Banach lattice of continuous functions \(C({\varOmega })\) is not order complete.
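On a discretised domain, the pointwise lattice operations and the monotonicity of the \(L^p\) norms can be illustrated directly; the following sketch (not from the paper, with illustrative test functions on \([0,1]\)) uses a midpoint-free Riemann sum for the \(L^2\) norm.

```python
import numpy as np

# Two discretised functions on Omega = [0, 1].
t = np.linspace(0.0, 1.0, 101)
f = np.sin(2 * np.pi * t)
g = 2.0 * np.ones_like(t)

sup_fg = np.maximum(f, g)   # f v g in the pointwise (a.e.) order
inf_fg = np.minimum(f, g)   # f ^ g

# Monotonicity of the norm: |f| <= |g| pointwise implies ||f|| <= ||g||.
assert np.all(np.abs(f) <= np.abs(g))
h = 1.0 / (len(t) - 1)
l2_norm = lambda u: np.sqrt(np.sum(u**2) * h)   # discrete L^2 norm
assert l2_norm(f) <= l2_norm(g)
```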

Tensors in the Euclidean Setting

We call a k-linear mapping \(A: \mathbb {R}^m \times \cdots \times \mathbb {R}^m \rightarrow \mathbb {R}\) a k-tensor, denoted \(A \in \mathcal {T}^k(\mathbb {R}^m)\). This is a simplification from the full differential-geometric definition, sufficient for our finite-dimensional setting. We say that A is symmetric, denoted \(A \in {{\mathrm{Sym}}}^k(\mathbb {R}^m)\), if it satisfies for any permutation \(\pi \) of \(\{1,\ldots ,k\}\) that

$$\begin{aligned} A(c_{\pi 1},\ldots ,c_{\pi k})=A(c_1,\ldots ,c_k). \end{aligned}$$

With \(e_1,\ldots ,e_m\) the standard basis of \(\mathbb {R}^m\), we define on \(\mathcal {T}^k(\mathbb {R}^m)\) the inner product

$$\begin{aligned} \langle A,B\rangle :=\sum _{p \in \{1,\ldots ,m\}^k} A(e_{p_1},\ldots ,e_{p_k}) B(e_{p_1},\ldots ,e_{p_k}), \end{aligned}$$

and the Frobenius norm

$$\begin{aligned} \Vert A\Vert _F :=\sqrt{\langle A,A\rangle }. \end{aligned}$$

The Frobenius norm is rotationally invariant in a sense crucial for DTI. We refer to [57] for a detailed discussion of this, as well as of alternative rotationally invariant norms.

Example 1

(Vectors) Vectors \(A \in \mathbb {R}^m\) are of course symmetric 1-tensors, the inner product is the usual inner product in \(\mathbb {R}^m\) and the Frobenius norm is the two-norm, \(\Vert A\Vert _F=\Vert A\Vert _2\).

Example 2

(Matrices) Matrices are 2-tensors: \(A(x, y) = \langle A x,y\rangle \), while symmetric matrices \(A=A^T\) are symmetric 2-tensors. The inner product is \(\langle A,B\rangle =\sum _{i,j} A_{ij}B_{ij}\) and \(\Vert A\Vert _F\) is the matrix Frobenius norm.
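For 2-tensors, the inner product and Frobenius norm above, and the rotational invariance just mentioned, can be checked numerically. A NumPy sketch with illustrative matrices (not from the paper):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # a symmetric 2-tensor
B = np.array([[0.0, 1.0], [1.0, 1.0]])

inner = np.sum(A * B)           # <A, B> = sum_ij A_ij B_ij
fro = np.sqrt(np.sum(A * A))    # ||A||_F = sqrt(<A, A>)

# Agrees with NumPy's built-in Frobenius norm.
assert np.isclose(fro, np.linalg.norm(A, 'fro'))

# Rotational invariance: ||Q A Q^T||_F = ||A||_F for a rotation Q.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.isclose(np.linalg.norm(Q @ A @ Q.T, 'fro'), fro)
```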

We use the notation \(A \ge 0\) for positive-semidefinite matrices A. One can verify that this relation indeed defines a partial order in the space of symmetric matrices:

$$\begin{aligned} A \ge B \quad \text { iff } A-B\text { is positive semidefinite}. \end{aligned}$$

With this partial order, the space of all symmetric matrices becomes an ordered vector space, but not a vector lattice. However, it enjoys some properties similar to those of vector lattices: for example, any directed upwards subsetFootnote 2 has a supremum [39, Ch.8].
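The failure of the semidefinite order to be a lattice already shows up for diagonal matrices: two matrices can share an upper bound while being mutually incomparable. A NumPy sketch (illustrative matrices and a hypothetical helper `psd_geq`, not from the paper):

```python
import numpy as np

def psd_geq(A, B, tol=1e-12):
    """Check A >= B in the positive-semidefinite order via eigenvalues."""
    return float(np.min(np.linalg.eigvalsh(A - B))) >= -tol

A = np.diag([1.0, 0.0])
B = np.diag([0.0, 1.0])
I = np.eye(2)

# I is an upper bound of both A and B ...
assert psd_geq(I, A) and psd_geq(I, B)
# ... but A and B themselves are incomparable: the order is only partial.
assert not psd_geq(A, B) and not psd_geq(B, A)
```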

Symmetric Tensor Fields of Bounded Deformation

Let \(u: {\varOmega }\rightarrow {{\mathrm{Sym}}}^k(\mathbb {R}^m)\) for a domain \({\varOmega }\subset \mathbb {R}^m\). We set

$$\begin{aligned} \Vert u\Vert _{F,p} :=\Bigl (\int _{\varOmega }\Vert u(x)\Vert _F^p {{\mathrm{d}}}x\Bigr )^{1/p}, \quad (p \in [1,\infty )), \end{aligned}$$

and

$$\begin{aligned} \Vert u\Vert _{F,\infty } :={{\mathrm{ess\,sup}}}_{x \in {\varOmega }} \Vert u(x)\Vert _F. \end{aligned}$$

The spaces \(L^p({\varOmega }; {{\mathrm{Sym}}}^{k}(\mathbb {R}^m))\) are defined in the natural way using these norms, and clearly satisfy all the usual properties of \(L^p\) spaces.

In the particular case of matrices (\(k=2\)), partial order can be introduced in the space \(L^p({\varOmega }; {{\mathrm{Sym}}}^{2}(\mathbb {R}^m))\) in the following way:

$$\begin{aligned} u \geqq v \quad \text { iff } u(x) \ge v(x) \text { almost everywhere in } {\varOmega }. \end{aligned}$$

Recall that the symbol \(\ge \) stands for the positive-semidefinite order in the space of symmetric matrices introduced above. Since the positive-semidefinite order is not a lattice order, neither is this pointwise order.

If \(u \in C^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), \(k\ge 1\), we define by contraction the divergence \({{\mathrm{div}}}u \in C({\varOmega }; {{\mathrm{Sym}}}^{k-1}(\mathbb {R}^m))\) as

$$\begin{aligned}{}[{{\mathrm{div}}}u(\cdot )](e_{i_2},\ldots ,e_{i_k}) :=\sum _{i_1=1}^m \langle e_{i_1},\nabla u(\cdot )(e_{i_1},\ldots ,e_{i_k})\rangle . \end{aligned}$$
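The contraction formula can be checked against a smooth test field with a known divergence. The following finite-difference sketch (an illustration under stated assumptions, not from the paper) takes \(k=2\), \(m=2\) and the field \(u(x,y)=\begin{pmatrix}x^2 & xy\\ xy & y^2\end{pmatrix}\), for which \(({{\mathrm{div}}}u)_1 = 3x\) and \(({{\mathrm{div}}}u)_2 = 3y\).

```python
import numpy as np

# Finite-difference check of the contracted divergence for a symmetric
# 2-tensor field on a 2D grid; the test field is illustrative.
n = 64
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing='ij')   # axis 0 = x, axis 1 = y

# u(x,y) = [[x^2, x*y], [x*y, y^2]]; then (div u)_i = sum_j d_j u_{ji}:
#   (div u)_1 = d_x(x^2) + d_y(x*y) = 2x + x = 3x
#   (div u)_2 = d_x(x*y) + d_y(y^2) = y + 2y = 3y
u = np.empty((2, 2, n, n))
u[0, 0], u[0, 1] = X**2, X*Y
u[1, 0], u[1, 1] = X*Y, Y**2

h = x[1] - x[0]
div_u = np.empty((2, n, n))
for i in range(2):
    # Contract over j: d_x u_{0i} + d_y u_{1i}.  edge_order=2 makes the
    # differences exact (up to roundoff) for this quadratic field.
    div_u[i] = (np.gradient(u[0, i], h, axis=0, edge_order=2)
                + np.gradient(u[1, i], h, axis=1, edge_order=2))

assert np.allclose(div_u[0], 3*X, atol=1e-8)
assert np.allclose(div_u[1], 3*Y, atol=1e-8)
```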

It is easily verified that \({{\mathrm{div}}}u(x)\) is indeed symmetric. Given a tensor field \(u \in L^1({\varOmega }; \mathcal {T}^{k}(\mathbb {R}^m))\) we then define the symmetrised distributional gradient \(E u \in [C_c^\infty ({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m))]^*\) by

$$\begin{aligned} E u(\varphi ):= & {} -\int _{\varOmega }\langle u(x),{{\mathrm{div}}}\varphi (x)\rangle {{\mathrm{d}}}x, \nonumber \\&(\varphi \in C_c^\infty ({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m))). \end{aligned}$$

With these notions at hand, we now define the spaces of symmetric tensor fields of bounded deformation as (see also [8, 57])

$$\begin{aligned}&{{\mathrm{BD}}}({\varOmega }; {{\mathrm{Sym}}}^{k}(\mathbb {R}^m)) \\&\quad :=\Bigl \{u \in L^1({\varOmega }; {{\mathrm{Sym}}}^{k}(\mathbb {R}^m)) \Bigm | \sup \nolimits _{\varphi \in V^{k+1}_{F,s }} Eu(\varphi ) < \infty \Bigr \}, \end{aligned}$$


$$\begin{aligned} V^{k}_{F,s } :=\{ \varphi \in C_c^\infty ({\varOmega }; {{\mathrm{Sym}}}^{k}(\mathbb {R}^m)) \mid \Vert \varphi \Vert _{F,\infty } \le 1 \}. \end{aligned}$$

For \(u \in {{\mathrm{BD}}}({\varOmega }; {{\mathrm{Sym}}}^{k}(\mathbb {R}^m))\), the symmetrised gradient Eu is a Radon measure, \(Eu \in \mathcal {M}({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m))\). For the proof of this fact we refer to [19, §4.1.5].

Example 3

The space \({{\mathrm{BD}}}({\varOmega }; {{\mathrm{Sym}}}^{0}(\mathbb {R}^m))\) is the space \({{\mathrm{BV}}}({\varOmega })\) of functions of bounded variation while \({{\mathrm{BD}}}({\varOmega }; {{\mathrm{Sym}}}^{1}(\mathbb {R}^m))\) is the space \({{\mathrm{BD}}}({\varOmega })\) of functions of bounded deformation of [51].

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Gorokh, A., Korolev, Y. & Valkonen, T. Diffusion Tensor Imaging with Deterministic Error Bounds. J Math Imaging Vis 56, 137–157 (2016).




Keywords

  • Diffusion tensor imaging
  • Noise modelling
  • Total generalised variation
  • Error bounds
  • Deterministic

Mathematics Subject Classification

  • 92C55
  • 94A08