1 Introduction

Often in inverse problems, we have only very rough knowledge of noise models, or the exact model is too difficult to realise in a numerical reconstruction method. The data may also contain process artefacts from black-box devices [44]. Partial order in Banach lattices has therefore recently been investigated in [32–34] as a less-assuming approach to error modelling for inverse problems. This framework allows the representation of errors in the data as well as in the forward operator of an inverse problem by means of order intervals (i.e. lower and upper bounds with respect to appropriate partial orders). An important advantage of this approach over statistical noise modelling is that deterministic error bounds preserve their simple structure under monotone transformations.

Fig. 1

The noise in the absolute values of complex MRI data should be Rician. Here we have taken a 50-bin histogram of the noise in real data, dividing the pixels into bins of 50 different noise levels. However, only approximately 10 of these noise levels have a non-zero pixel count. As the Rician distribution is continuous, the noise cannot be Rician, since some bins of the 50-bin histogram are empty. The measurement setup of the data used here is described in Sect. 5.3. a Slice of a real MRI measurement. b 50-bin histogram of noise estimated from background. c 50-bin histogram after eddy-current correction with FSL

We apply partial order in Banach lattices to diffusion tensor imaging (DTI). We will in due course explain the diffusion tensor imaging process, as well as the theory of inverse problems in Banach lattices, but start by introducing our model

$$\begin{aligned} \min _u~ R(u) \quad \text {s.t.} \quad&u \geqslant 0,\\&g_j^l \leqslant A_j u \leqslant g_j^u,\quad (j=1,\ldots ,N). \end{aligned}$$

That is, we want to find a field of symmetric 2-tensors \(u: {\varOmega }\rightarrow {{\mathrm{Sym}}}^2(\mathbb {R}^3)\) on the domain \({\varOmega }\subset \mathbb {R}^3\), minimising the value of the regulariser R on the feasible set. The tensor field u is our unknown image. It is subject to a positivity constraint, as well as partial order constraints imposed through the operators \([A_j u](x) :=-\langle b_j,u(x)b_j\rangle \), and the lower and upper bounds \(g_j^l :=\log ({\hat{s}}_j^l/{\hat{s}}_0^u)\) and \(g_j^u :=\log ({\hat{s}}_j^u/{\hat{s}}_0^l)\). These arise from the linearisation (via the monotone logarithmic transformation) of the Stejskal–Tanner equation

$$\begin{aligned} s_j(x)=s_0(x) \exp (-\langle b_j,u(x)b_j\rangle ), \quad (j=1,\ldots ,N), \end{aligned}$$
(1)

central to the diffusion tensor imaging process.

To shed more light on u and the equation (1), let us briefly outline the diffusion tensor imaging process. As a first step towards DTI, diffusion-weighted magnetic resonance imaging (DWI) is performed. This process measures the anisotropic diffusion of water molecules. To capture the diffusion information, the magnetic resonance images have to be measured with diffusion-sensitising gradients in multiple directions. These are the different \(b_j\)’s in (1). Eventually, multiple DWI images \(\{s_j\}\) are related through the Stejskal–Tanner equation (1) to the symmetric positive-definite diffusion tensor field \(u: {\varOmega }\rightarrow {{\mathrm{Sym}}}^2(\mathbb {R}^3)\) [4, 29]. At each point \(x \in {\varOmega }\), the tensor u(x) is the covariance matrix of a normal distribution for the probability of water diffusing in different spatial directions.

The fact that multiple \(b_j\)’s are needed to recover u leads to very long acquisition times, even with ultra-fast sequences like echo planar imaging (EPI). Therefore, DTI is inherently a low-resolution and low-SNR method. In theory, the amplitude DWI images exhibit Rician noise [24]. However, as the histogram of an in vivo measurement in Fig. 1 illustrates, this may not be the case for practical datasets from black-box devices. Moreover, the DWI process is prone to eddy-current distortions [53], and due to its slowness, it is very sensitive to patient motion [1, 27]. We therefore have to use techniques that remove these artefacts in solving for u(x). We also need to ensure the positivity of u, as non-positive-definite diffusion tensors are non-physical. One proposed approach for the satisfaction of this constraint is that of log-Euclidean metrics [3]. This approach has several theoretically desirable aspects, but some practical shortcomings [57]. Special Perona–Malik-type constructions on Riemannian manifolds can also be used to maintain the structure of the tensor field [14, 54]. Such anisotropic diffusion is, however, severely ill-posed [60]. Recently manifold-valued discrete-domain total variation models have also been applied to diffusion tensor imaging [6].

Our approach is also in the total variation family, first considered for diffusion tensor imaging in [48]. Namely, we follow up on the work in [55, 57–59] on the application of total generalised variation regularisation [9] to DTI. We should note that in all of these works, the fidelity function was the ROF-type [45] \(L^2\) fidelity. This would only be correct, under the assumption that the noise of MRI measurements is Gaussian, if we had access to the original complex k-space MRI data. The noise of the inverse Fourier-transformed magnitude data \(s_j\), to which we have access in practice, is however Rician under the Gaussian assumption on the original complex data [24]. This is not modelled by the \(L^2\) fidelity.

Numerical implementation of Rician noise modelling has been studied in [22, 40]. As already discussed, in this work we take the opposite direction: instead of modelling the errors in a statistically accurate fashion, we do not assume an exact noise model and represent the errors by means of pointwise bounds. The details of the model are presented in Sect. 3. We study the practical performance in Sect. 5 using the numerical method presented in Sect. 4. First, however, we start with the general error modelling theory in Sect. 2. Readers who are not familiar with the notation for Banach lattices or symmetric tensor fields are advised to start with Appendix 2, where we introduce our mathematical notation and techniques.

2 Deterministic Error Modelling

2.1 Mathematical Basis

We now briefly outline the theoretical framework [32] that is the basis for our approach. Consider a linear operator equation

$$\begin{aligned} Au = f, \quad u \in U, \,\, f \in F, \end{aligned}$$
(2)

where U and F are Banach lattices, and \(A :U \rightarrow F\) is a regular injective operator. The inaccuracies in the right-hand side f and the operator A are represented as bounds by means of appropriate partial orders, i.e.

$$\begin{aligned} \begin{aligned}&f^l, f^u :&\,&f^l \leqslant _F f \leqslant _F f^u, \\&A^l, A^u :&\,&A^l \leqslant _{L^\sim (U,F)} A \leqslant _{L^\sim (U,F)} A^u, \end{aligned} \end{aligned}$$
(3)

where the symbol \(\leqslant _F\) stands for the partial order in F and \(\leqslant _{L^\sim (U,F)}\) for the partial order for regular operators induced by the partial orders in U and F. In what follows, we will drop the subscripts on the inequality signs where this does not cause confusion.

The exact right-hand side f and operator A are not available. Given the approximate data \((f^l,f^u,A^l,A^u)\), we need to find an approximate solution u that converges to the exact solution \({\bar{u}}\) as the inaccuracies in the data diminish. This statement needs to be formalised. We consider monotone convergent sequences of lower and upper bounds

$$\begin{aligned} \begin{aligned}&f^l_n :f^l_{n+1} \geqslant f^l_n,&\quad&A^l_n :A^l_{n+1} \geqslant A^l_n, \\&f^u_n :f^u_{n+1} \leqslant f^u_n,&\quad&A^u_n :A^u_{n+1} \leqslant A^u_n, \\&f^l_n \leqslant f \leqslant f^u_n,&\quad&A^l_n \leqslant A \leqslant A^u_n \quad \forall n \in {\mathbb {N}},\\&\Vert f^l_n - f^u_n \Vert \rightarrow 0,&\quad&\Vert A^l_n - A^u_n \Vert \rightarrow 0 \quad \text {as } n \rightarrow \infty . \end{aligned} \end{aligned}$$
(4)

We are looking for an approximate solution \(u_n\) such that \(\Vert u_n-{\bar{u}}\Vert _U \rightarrow 0\) as \(n \rightarrow \infty \).

Let us ask the following question: what are the elements \(u \in U\) that could have produced data within the tolerances (4)? Obviously, the exact solution is one such element. Let us call the set containing all such elements the feasible set \(U_n \subset U\).

Suppose that we know a priori that the exact solution is positive (with respect to the appropriate partial order in U). Then it is easy to verify that the following inequalities hold for all \(n \in {\mathbb {N}}\):

$$\begin{aligned} {\bar{u}} \geqslant _U 0, \quad A^u_n {\bar{u}} \geqslant _F f^l_n, \quad A^l_n {\bar{u}} \leqslant _F f^u_n. \end{aligned}$$

This observation motivates our choice of the feasible set:

$$\begin{aligned} U_n = \left\{ u \in U :u \geqslant _U 0, \quad A^u_n u \geqslant _F f^l_n, \quad A^l_n u \leqslant _F f^u_n \right\} . \end{aligned}$$

It is clear that the exact solution \({\bar{u}}\) belongs to the sets \(U_n\) for all \(n \in {\mathbb {N}}\). Our goal is to define a rule that will choose for any n an element \(u_n\) of the set \(U_n\) such that the sequence \(u_n \in U_n\) will strongly converge to the exact solution \({\bar{u}}\). We do so by minimising an appropriate regularisation functional R(u) on \(U_n\):

$$\begin{aligned} u_n = \mathop {\hbox {arg min}}\limits _{u \in U_n} R(u). \end{aligned}$$
(5)

This method, in fact, is a lattice analogue of the well-known residual method [23, 52]. The convergence result is as follows [32].

Theorem 1

Suppose that

  1. R(u) is bounded from below on U,

  2. R(u) is lower semi-continuous,

  3. level sets \(\{u :R(u) \leqslant C\}\) (\(C=\mathrm{const}\)) are sequentially compact in U (in the strong topology induced by the norm).

Then the sequence defined in (5) strongly converges to the exact solution \({\bar{u}}\) and \(R(u_n) \rightarrow R({\bar{u}})\).

Examples of regularisation functionals that satisfy the conditions of Theorem 1 are as follows. Total Variation in \(L^1({\varOmega })\), where \({\varOmega }\) is a subset of \({\mathbb {R}}^n\), assures strong convergence in \(L^1\), given that the \(L^1\)-norm of the solution is bounded. The Sobolev norm \(\Vert u \Vert _{W^{1,q}({\varOmega })}\) yields strong convergence in the spaces \(L^p({\varOmega })\), where \(p \geqslant 1\), \(q > \frac{np}{p+n}\). The latter fact follows from the compact embedding of the corresponding Sobolev \(W^{1,q}({\varOmega })\) space into \(L^p({\varOmega })\) [16].

However, the assumption that the sets \(\{u :R(u) \leqslant C\}\) are strong compacts in U is quite strong. It can be replaced by the assumption of weak compactness, provided that the regularisation functional possesses the so-called Radon–Riesz property.

Definition 1

A functional \(F :U \rightarrow {\mathbb {R}}\) has the Radon–Riesz property (sometimes called the H-property), if for any sequence \(u_n \in U\) weak convergence \(u_n \rightharpoonup u_0\) and simultaneous convergence of the values \(F(u_n) \rightarrow F(u_0)\) imply strong convergence \(u_n \rightarrow u_0\).

Theorem 2

Suppose that

  1. R(u) is bounded from below on U,

  2. R(u) is weakly lower semi-continuous,

  3. level sets \(\{u :R(u) \leqslant C\}\) (\(C=\mathrm{const}\)) are weakly sequentially compact in U,

  4. R(u) possesses the Radon–Riesz property.

Then the sequence defined in (5) strongly converges to the exact solution \({\bar{u}}\) and \(R(u_n) \rightarrow R({\bar{u}})\).

It is easy to verify that the norm in any Hilbert space possesses the Radon–Riesz property. Moreover, this holds for the norm in any reflexive Banach space [16].

As we explain in Appendix 2, \(L^p({\varOmega }; {{\mathrm{Sym}}}^{2}(\mathbb {R}^m))\) is not a Banach lattice. Therefore, Theorems 1 and 2 cannot be applied directly. Further theoretical work will be undertaken to extend the framework to the non-lattice case. For the moment, however, we will prove that if there are no errors in the operator A in (2), the requirement that the solution space U is a lattice can be dropped.

Theorem 3

Let U be a Banach space, and F be a Banach lattice. Let the operator A in (2) be a linear, continuous and injective operator. Let \(f^l_n\) and \(f^u_n\) be sequences of lower and upper bounds for the right-hand side defined in (4), and suppose that there are no errors in the operator A. Let us redefine the feasible set in the following way

$$\begin{aligned} U_n = \left\{ u \in U :\quad f^l_n \leqslant _F A u \leqslant _F f^u_n \right\} . \end{aligned}$$

Suppose also that the regulariser R(u) satisfies the conditions of either Theorem 1 or Theorem 2. Then the sequence defined in (5) strongly converges to the exact solution \({\bar{u}}\) and \(R(u_n) \rightarrow R({\bar{u}})\).

Proof

Let us define an approximate right-hand side and its approximation error in the following way

$$\begin{aligned} f_{\delta _n} = \frac{f^u_n + f^l_n}{2}, \quad \delta _n = \frac{\Vert f^u_n-f^l_n\Vert }{2}. \end{aligned}$$

One can easily verify that the inequality \(\Vert f-f_{\delta _n}\Vert \leqslant \delta _n\) holds. Indeed, we have

$$\begin{aligned} \begin{aligned}&f - f_{\delta _n} \leqslant f^u_n - f_{\delta _n} = \frac{f^u_n-f^l_n}{2}, \\&-(f - f_{\delta _n}) \leqslant f_{\delta _n} - f^l_n = \frac{f^u_n-f^l_n}{2},\\&|f - f_{\delta _n}| = (f - f_{\delta _n}) \vee (-(f - f_{\delta _n})) \leqslant \frac{f^u_n-f^l_n}{2},\\&\Vert f - f_{\delta _n}\Vert \leqslant \frac{\Vert f^u_n-f^l_n\Vert }{2}. \end{aligned} \end{aligned}$$

The first two inequalities are consequences of the conditions (4), the third one holds by the definition of supremum and the equality \(|f| = f \vee (-f)\) that holds for all \(f \in F\), and the last inequality is due to the monotonicity of the norm in a Banach lattice.

Similarly, one can show that for any \(u \in U_n\), we have

$$\begin{aligned} \Vert Au - f_{\delta _n}\Vert \leqslant \delta _n. \end{aligned}$$

Therefore, the inclusion \(U_n \subset \{u :\Vert Au-f_{\delta _n}\Vert \leqslant \delta _n\}\) holds.

Now we will proceed with the proof of the convergence \(\Vert u_n-{\bar{u}}\Vert \rightarrow 0\). We will prove it for the case when the regulariser R(u) satisfies the conditions of Theorem 1. Suppose that the sequence \(u_n\) does not converge to the exact solution \({\bar{u}}\); then it contains a subsequence \(u_{n_k}\) such that \(\Vert u_{n_k} - {\bar{u}}\Vert \geqslant \varepsilon \) for any \(k \in {\mathbb {N}}\) and some fixed \(\varepsilon >0\).

Since the inclusion \({\bar{u}} \in U_n\) holds for all \(n \in {\mathbb {N}}\), we have \(R(u_n) \leqslant R({\bar{u}})\) for all \(n \in {\mathbb {N}}\). Since the level set \(\{u :R(u) \leqslant R({\bar{u}})\}\) is a compact set by assumptions of the theorem, the sequence \(u_{n_k}\) contains a strongly convergent subsequence. With no loss of generality, let us assume that \(u_{n_k} \rightarrow u_0\). We will now show that \(u_0 = {\bar{u}}\). Indeed, we have

$$\begin{aligned} \Vert A u_{n_k} - A {\bar{u}}\Vert \leqslant \Vert A u_{n_k} - f_{\delta _{n_k}}\Vert + \Vert f-f_{\delta _{n_k}}\Vert \leqslant 2\delta _{n_k} \rightarrow 0. \end{aligned}$$

On the other hand, we have

$$\begin{aligned} \Vert A u_{n_k} - A {\bar{u}}\Vert \rightarrow \Vert A u_0 - A {\bar{u}} \Vert \end{aligned}$$

due to continuity of A and \(\Vert \cdot \Vert \). Therefore, \(A u_0 = A {\bar{u}}\) and \(u_0 = {\bar{u}}\), since A is an injective operator. By contradiction, we get \(\Vert u_n - {\bar{u}}\Vert \rightarrow 0\).

Finally, since the regulariser R(u) is lower semi-continuous, we get that \(\liminf R(u_n) \geqslant R({\bar{u}})\). However, for any n we have \(R(u_n) \leqslant R({\bar{u}})\); therefore, we get the convergence \(R(u_n) \rightarrow R({\bar{u}})\) as \(n \rightarrow \infty \). \(\square \)

2.2 Philosophical Discussion and Statistical Interpretation

In practice, our data are discrete. So let us momentarily switch to measurements \({\hat{f}} = ({\hat{f}}^1, \ldots , {\hat{f}}^n) \in \mathbb {R}^n\) of true data \(f \in \mathbb {R}^n\). If we actually knew the pointwise noise model of the data, then one way to obtain potentially useful upper and lower bounds for the deterministic model is by means of statistical interval estimates: confidence intervals. Roughly, the idea is to find for each true signal f individual random lower and upper bounds \({\hat{f}}^l\) and \({\hat{f}}^u\) such that

$$\begin{aligned} P\left( {\hat{f}}^l \le f \le {\hat{f}}^u\right) = 1-\theta . \end{aligned}$$

If \({\hat{f}}^l\) and \({\hat{f}}^u\) are computed based on multiple experiments (i.e. multiple noisy samples \({\hat{f}}_1,\dots ,{\hat{f}}_m\) of the true data f), the interval \([{\hat{f}}^{l,i}, {\hat{f}}^{u,i}]\) will converge in probability to the true data \(f^i\), as the number of experiments m increases. Thus we obtain a probabilistic version of the convergences in (4).

Let us try to see how such intervals might work in practice. For the purpose of the present discussion, assume that the noise is additive and normally distributed with standard deviation \(\sigma \) and zero mean—an assumption that does not hold in practice, as we have already seen in Fig. 1, but will suffice for the next thought experiments. That is, \({\hat{f}}_j=f+\nu _j\) for the noise \(\nu _j\). Let the sample mean of \(\{{\hat{f}}_j\}_{j=1}^m\) be \({\bar{f}}_m = ({\bar{f}}^1_m, \ldots , {\bar{f}}^n_m)\), and the pointwise sample standard deviation \(\sigma =(\sigma ^1,\ldots ,\sigma ^n)\). The product of the pointwise confidence intervals \(I_i\) with confidence \(1-\theta \) is [15, 49]

$$\begin{aligned} \prod _{i=1}^n I_i = \left[ {\bar{f}}_m - k^*_{1-\theta /2}\frac{\sigma }{\sqrt{m}},\ {\bar{f}}_m +k^*_{1-\theta /2}\frac{\sigma }{\sqrt{m}}\right] , \end{aligned}$$

where \(k^*_{1-t} :={\varPhi }^{-1}(1-t)\) with \({\varPhi }\) the cumulative distribution function of the standard normal distribution. For \(\theta =0.05\), i.e. the \(95\,\%\) confidence interval, \(k^*_{1-\theta /2}={\varPhi }^{-1}(0.975)\approx 1.96\). Now, the probability that \(I_i\) covers the true \(f^i\) is \(1-\theta \), e.g. \(95\,\%\). If we have only a single sample \(m=1\), the intervals stay large, but the joint probability \((1-\theta )^n\) goes to zero as n increases. As an example, for a rather typical single \(128 \times 128\) slice of a DTI measurement, the probability that exactly \(\phi =5\,\%\) (to the closest discrete value possible) of the \(1-\theta =95\,\%\) confidence intervals do not cover the true parameter would be about \(1.4\,\%\), or

$$\begin{aligned}&1.4\,\% \approx {n \atopwithdelims ()m} \theta ^m {(1-\theta )}^{n-m}, \\&\text {where } n=128^2 \text { and } m=\lceil \phi n \rceil . \end{aligned}$$

The probability of at least \(\phi =5\,\%\) of the pointwise \(95\,\%\) confidence intervals not covering the true parameter is in this setting approximately \(49\,\%\). This can be verified by summing the above estimates over \(m=\lceil \phi n \rceil ,\ldots ,n\).
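These two figures are straightforward to check numerically. The following is a minimal sketch, assuming (as in the expression above) that the misses of the independent pointwise confidence intervals are binomially distributed; the variable names are illustrative only.

```python
# Minimal sketch: probability that a given fraction of independent pointwise
# 95% confidence intervals fails to cover the truth on a 128x128 slice.
from math import ceil
from scipy.stats import binom

n = 128 ** 2            # voxels in the slice
theta = 0.05            # pointwise miss probability (1 - confidence)
m = ceil(0.05 * n)      # miss count corresponding to phi = 5%

p_exact = binom.pmf(m, n, theta)        # P(exactly m intervals miss), approx 1.4%
p_at_least = binom.sf(m - 1, n, theta)  # P(at least m intervals miss), approx 49%

print(f"exactly {m} misses: {p_exact:.3%}, at least {m} misses: {p_at_least:.1%}")
```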

In summary, unless \(\theta \) simultaneously goes to 0, the product intervals are very unlikely to cover the true parameter. Based on a single experiment, the deterministic approach as interpreted statistically through confidence intervals is therefore very likely to fail to discover the true solution as the data size n increases unless the pointwise confidence is very high. But, if we let the pointwise confidences be arbitrarily high, such that the intervals are very large, the discovered solution in our applications of interest would be just a constant!

Asymptotically, the situation is more encouraging. Indeed, if we could perform more experiments to compute the confidence intervals, then for any fixed n and \(\theta \), it is easy to see that the solution of the “deterministic” error model is an asymptotically consistent and hence asymptotically unbiased estimator of the true f. That is, the estimates converge in probability to f as the experiment count m increases. Indeed, the error bounds-based estimator \({\tilde{f}}_m\), based on m experiments, by definition satisfies \({\tilde{f}}_m \in \prod _{i=1}^n I_i\). Therefore, we have

$$\begin{aligned}&P \left( |{\tilde{f}}^i_m-{\bar{f}}^i_m|>\epsilon \text { for some } i\right) = 0\\&\text {whenever} \quad m \ge (k^*_{1-\theta /2}\sigma /\epsilon )^2. \end{aligned}$$

Thus \({\tilde{f}}_m \overset{P}{\rightarrow } {\bar{f}}_m\) in probability. Since by the law of large numbers also \({\bar{f}}_m \overset{P}{\rightarrow } f\), this proves the claim, and to some extent justifies our approach from the statistical viewpoint.

It should be noted that this is roughly the most that has previously been known of the maximum a posteriori estimate (MAP), corresponding to the Tikhonov models

$$\begin{aligned} \min \frac{1}{2}\Vert {\hat{f}}-u\Vert ^2 + \alpha R(u). \end{aligned}$$

In particular, the MAP is not the Bayes estimator for the typical squared cost functional. This means that it does not minimise \({\tilde{f}} \mapsto \mathbb {E}[\Vert f-{\tilde{f}}\Vert ^2]\). The minimiser in this case is the conditional mean (CM) estimate, which is why it has been preferred by Bayesian statisticians despite its increased computational cost. The MAP estimate is merely an asymptotic Bayes estimator for the uniform cost function. In a very recent work [12], it has, however, been proved that the MAP estimate is the Bayes estimator for certain Bregman distances. One possible critique of the result is that these distances are not universal and do depend on the regulariser R, unlike the squared distance for CM. The CM estimate, however, has other problems in the setting of total variation and its discretisation [35, 36].

3 Application to Diffusion Tensor Imaging

We now build our model for applying the deterministic error modelling theory to diffusion tensor imaging. We start by building our forward model based on the Stejskal–Tanner equation, and then briefly introduce the regularisers we use.

3.1 The Forward Model

For \(u: {\varOmega }\rightarrow {{\mathrm{Sym}}}^2(\mathbb {R}^3)\), \({\varOmega }\subset \mathbb {R}^3\), a mapping from \({\varOmega }\) to symmetric second-order tensors, let us introduce non-linear operators \(T_j\), defined by

$$\begin{aligned}{}[T_j(u)](x) :=s_0(x) \exp (-\langle b_j,u(x)b_j\rangle ), \quad (j=1,\ldots ,N). \end{aligned}$$

Their role is to model the so-called Stejskal–Tanner equation [4]

$$\begin{aligned} s_j(x)=s_0(x) \exp (-\langle b_j,u(x)b_j\rangle ), \quad (j=1,\ldots ,N). \end{aligned}$$
(6)

Each tensor u(x) models the covariance of a Gaussian probability distribution at x for the diffusion of water molecules. The data \(s_j \in L^2({\varOmega })\), (\(j=1,\ldots ,N\)), are the diffusion-weighted MRI images. Each of them is obtained by performing the MRI scan with a different non-zero diffusion-sensitising gradient \(b_j\), while \(s_0\) is obtained with a zero gradient. After correcting the original k-space data for coil sensitivities, each \(s_j\) is assumed real. As a consequence, any measurement \({\hat{s}}_j\) of \(s_j\) has—in theory—Rician noise distribution [24].
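To make the forward model concrete, the following is a minimal sketch of a discretised version of the operators \(T_j\), assuming the tensor field and images are stored as NumPy arrays; the array shapes and the function name are illustrative assumptions.

```python
# Minimal sketch of the Stejskal-Tanner forward operator on a voxel grid.
import numpy as np

def stejskal_tanner(u, s0, b):
    """[T_j(u)](x) = s0(x) * exp(-<b_j, u(x) b_j>) for all gradients b_j.

    u  : (X, Y, Z, 3, 3) array of symmetric diffusion tensors
    s0 : (X, Y, Z) image measured with zero diffusion-sensitising gradient
    b  : (N, 3) array of gradients b_1, ..., b_N
    returns an (N, X, Y, Z) array of simulated DWI images s_1, ..., s_N
    """
    quad = np.einsum('ja,xyzab,jb->jxyz', b, u, b)  # <b_j, u(x) b_j> voxelwise
    return s0[None] * np.exp(-quad)
```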

Our goal is to reconstruct u with simultaneous denoising. Following [31, 55], we consider, for a suitable regulariser R, the Tikhonov model

$$\begin{aligned} \min _{u \geqq 0}~ \sum _{j=1}^N \frac{1}{2} \Vert {\hat{s}}_j-T_j(u)\Vert ^2 + \alpha R(u). \end{aligned}$$
(7)

The constraint \(u\geqq 0\) is to be understood in the sense that u(x) is positive semidefinite for \(\mathcal {L}^n\)-a.e. \(x \in {\varOmega }\) (see Appendix 2 for more details). Due to the Rician noise of \({\hat{s}}_j\), the Gaussian noise model implied by the \(L^2\)-norm in (7) is not entirely correct. However, in some cases the \(L^2\) model may be accurate enough, as for suitable parameters the Rician distribution is not too far from a Gaussian distribution. If one were to model the problem correctly, one should either modify the fidelity term to model Rician noise or include the (unit magnitude complex number) coil sensitivities in the model. The Rician noise model is highly non-linear due to the logarithms of Bessel functions involved. Its approximations have been studied in [5, 22, 40] for single MR images and DTI. Coil sensitivities could be included either by knowing them in advance or by simultaneous estimation as in [30]. Either way, significant complexity is introduced into the model, and for the present work, we are content with the simple \(L^2\) model.

We may also consider, as is often the case, and as was done with TGV in [57], the linearised model

$$\begin{aligned} \min _{u \geqq 0}~ \Vert f-u\Vert ^2 + \alpha R(u), \end{aligned}$$
(8)

where, for each \(x \in {\varOmega }\), f(x) is obtained by solving the system of equations (6) with \(s_j(x)={\hat{s}}_j(x)\) for u(x) by regression. Further, as in [58], we may also consider

$$\begin{aligned} \min _{u \geqq 0}~ \sum _{j=1}^N \frac{1}{2} \Vert g_j-A_j u\Vert ^2 + \alpha R(u), \end{aligned}$$
(9)

with

$$\begin{aligned}{}[A_j u](x)&:=\langle b_j,u(x)b_j\rangle , \quad \text {and} \nonumber \\ g_j(x)&:=\log ({\hat{s}}_j(x)/{\hat{s}}_0(x)). \end{aligned}$$
(10)

In both of these linearised models, the assumption of Gaussian noise is in principle even more remote from the truth than in the non-linear model (7). We will employ (8) and (7) as benchmark models.

We want to further simplify the model and forgo accurate noise modelling. After all, we often do not know the real noise model for the data available in practice: the data can be corrupted by process artefacts from black-box algorithms in the MRI devices. This problem of black-box devices has been discussed extensively in [44], in the context of computed tomography. Moreover, as we have discussed above, even without such artefacts, the correct model may be difficult to realise numerically. So we might be best off choosing the least assuming model of all—that of error bounds. This is what we propose in the reconstruction model

$$\begin{aligned} \min _u~ R(u) \quad \text {s.t.} \quad&u \geqq 0, \nonumber \\&g_j^l \leqslant A_j u \leqslant g_j^u, \nonumber \\&\quad \mathcal {L}^n\text {-a.e.},\ (j=1,\ldots ,N). \end{aligned}$$
(11)

Here \(g_j^l :=\log ({\hat{s}}_j^l/{\hat{s}}_0^u)\) and \(g_j^u :=\log ({\hat{s}}_j^u/{\hat{s}}_0^l)\), \(g_j^l, g_j^u \in L^2({\varOmega })\), are our upper and lower bounds on \(g_j\) that we derive from the data.

3.2 Choice of the Regulariser R

A prototypical regulariser in image processing is the total variation, first studied in this context in [45]. It can be defined for a \(u \in L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) as

$$\begin{aligned} \begin{aligned} {{\mathrm{TV}}}(u)&:=\Vert Eu\Vert _{\mathcal {M}({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m))} \\&:=\sup \left\{ \int _{\varOmega }\langle {{\mathrm{div}}}\phi (x),u(x)\rangle {{\mathrm{d}}}x \right. \\&\left. \qquad \qquad \qquad \Bigm | \begin{array}{l} \phi \in C_c^\infty ({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m)) \\ \sup _x \Vert \phi (x)\Vert _F \le 1 \end{array} \right\} . \end{aligned} \end{aligned}$$

Observe that for scalar or vector fields, i.e. the cases \(k=0,1\), we have \({{\mathrm{Sym}}}^0(\mathbb {R}^m)=\mathcal {T}^0(\mathbb {R}^m)=\mathbb {R}\), and \({{\mathrm{Sym}}}^1(\mathbb {R}^m)=\mathcal {T}^1(\mathbb {R}^m)=\mathbb {R}^m\). Therefore, for scalars in particular, this gives the usual isotropic total variation

$$\begin{aligned} {{\mathrm{TV}}}(u) = \Vert Du\Vert _{\mathcal {M}({\varOmega }; \mathbb {R}^m)}. \end{aligned}$$

Total generalised variation was introduced in [9] as a higher-order extension of \({{\mathrm{TV}}}\). Following [11, 57], the second-order variant may be defined with the differentiation cascade formulation for symmetric tensor fields \(u \in L^1({\varOmega }; {{\mathrm{Sym}}}^{k}(\mathbb {R}^m))\) as the marginal

$$\begin{aligned} {{\mathrm{TGV}}}^2_{(\beta ,\alpha )}(u) :=\min \Big \{ {\varPhi }_{(\beta ,\alpha )}(u, w) \mid w \in L^1({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m)) \Big \} \end{aligned}$$
(12)

of

$$\begin{aligned} \begin{aligned} {\varPhi }_{(\beta ,\alpha )}(u, w)&:=\alpha \Vert E u - w\Vert _{F,\mathcal {M}({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m))}\\&\phantom {:=} +\beta \Vert E w\Vert _{F,\mathcal {M}({\varOmega }; {{\mathrm{Sym}}}^{k+2}(\mathbb {R}^m))}. \end{aligned} \end{aligned}$$

It turns out that the standard BV-norm

$$\begin{aligned} \Vert u\Vert _{{{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))} :=\Vert u\Vert _{L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))}+{{\mathrm{TV}}}(u) \end{aligned}$$

and the “BGV norm” [9]

$$\begin{aligned} \Vert u\Vert ' :=\Vert u\Vert _{L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))}+{{\mathrm{TGV}}}^2_{(\beta ,\alpha )}(u) \end{aligned}$$

are topologically equivalent norms [10, 11] on the space \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), yielding the same convergence results for TGV regularisation as for TV regularisation. The geometrical regularisation behaviour is, however, different, and TGV tends to avoid the staircasing observed in TV regularisation.

Regarding topologies, we say that a sequence \(\{u^i\}\) in \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) converges weakly* to u, if \(u^i \rightarrow u\) strongly in \(L^1\), and weakly* as Radon measures [2, 51, 57]. The latter means that \(\int _{\varOmega }\langle {{\mathrm{div}}}\phi (x),u^i(x)\rangle {{\mathrm{d}}}x \rightarrow \int _{\varOmega }\langle {{\mathrm{div}}}\phi (x),u(x)\rangle {{\mathrm{d}}}x\) for all \(\phi \in C_c^\infty ({\varOmega }; {{\mathrm{Sym}}}^{k+1}(\mathbb {R}^m))\).

3.3 Compact Subspaces

Now, for a weak* lower semi-continuous seminorm R on \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), let us set

$$\begin{aligned} {{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m)) :={{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m)) / \ker R. \end{aligned}$$

That is, we identify elements \(u, {\tilde{u}} \in {{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), such that \(R(u - {\tilde{u}})=0\). Now R is a norm on the space \({{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\); compare, e.g. [42] for the case of \(R={{\mathrm{TV}}}\).

Suppose

$$\begin{aligned} \Vert u\Vert ' :=\Vert u\Vert _{L^1({\varOmega })} + R(u) \end{aligned}$$

is a norm on \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), equivalent to the standard norm. If also the R-Sobolev–Korn–Poincaré inequality

$$\begin{aligned} \inf _{R(v)=0} \Vert u-v\Vert _{L^1({\varOmega })} \le C R(u) \end{aligned}$$
(13)

holds, we may then bound

$$\begin{aligned}&\inf _{R(v)=0} \Vert u-v\Vert _{{{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))} \le \inf _{R(v)=0} C' \Vert u-v\Vert ' \\&\quad = \inf _{R(v)=0} C' \bigl ( \Vert u-v\Vert _{L^1({\varOmega })} + R(u-v)\bigr ) \\&\quad \le C' (1+C)R(u). \end{aligned}$$

Now, using the weak* lower semicontinuity of the BV-norm, and the weak* compactness of the unit ball in \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\)—we refer to [2] for these and other basic properties of BV spaces—we may thus find a representative \({\tilde{u}}\) in the \({{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) equivalence class of u, satisfying

$$\begin{aligned} \Vert {\tilde{u}}\Vert _{{{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))} \le C' (1+C)R(u). \end{aligned}$$

Again using the weak* compactness of the unit ball in \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), and the weak* lower semicontinuity of R, it follows that the sets

$$\begin{aligned} {{\mathrm{lev}}}_a R:= & {} \left\{ u \in {{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m)) \mid R(u) \le a \right\} , \nonumber \\&\quad (a>0), \end{aligned}$$
(14)

are weak* compact in \({{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), in the topology inherited from \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\). Consequently, they are strongly compact subsets of the space \(L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\). This feature is crucial for the application of the regularisation theory in Banach lattices above.

On a connected domain \({\varOmega }\), in particular

$$\begin{aligned} {{\mathrm{BV}}}_{0,{{\mathrm{TV}}}}({\varOmega }) \simeq \left\{ u \in {{\mathrm{BV}}}({\varOmega }) \big | \int _{\varOmega }u {{\mathrm{d}}}x=0 \right\} . \end{aligned}$$

That is, the space consists of zero-mean functions. Then \(u \mapsto \Vert Du\Vert _{\mathcal {M}({\varOmega }; \mathbb {R}^m)}\) is a norm on \({{\mathrm{BV}}}_{0,{{\mathrm{TV}}}}({\varOmega })\) [42], and closed balls in this space are weak* compact. In particular, the sets \({{\mathrm{lev}}}_a {{\mathrm{TV}}}\) are compact in \(L^1({\varOmega })\).

More generally, we know from [8] that on a connected domain \({\varOmega }\), \(\ker {{\mathrm{TV}}}\) consists of \({{\mathrm{Sym}}}^k(\mathbb {R}^m)\)-valued polynomials of maximal degree k. By extension, the kernel of \({{\mathrm{TGV}}}^2\) consists of \({{\mathrm{Sym}}}^k(\mathbb {R}^m)\)-valued polynomials of maximal degree \(k+1\). In both cases, (13), weak* lower semicontinuity of R and the equivalence of \(\Vert \,\varvec{\cdot }\,\Vert '\) to \(\Vert \,\varvec{\cdot }\,\Vert _{{{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))}\) hold by the results in [8, 11, 51]. Therefore, we have proved the following.

Lemma 1

Let \({\varOmega }\subset \mathbb {R}^m\) and \(k \ge 0\). Then the sets \({{\mathrm{lev}}}_a {{\mathrm{TV}}}\) and \({{\mathrm{lev}}}_a {{\mathrm{TGV}}}^2\) are weak* compact in \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) and strongly compact in \(L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\).

Now, in the above cases, \(\ker R\) is finite-dimensional, and we may write

$$\begin{aligned} {{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m)) \simeq {{\mathrm{BV}}}_{0,R}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m)) \oplus \ker R. \end{aligned}$$

Denoting by

$$\begin{aligned} B_X(r) :=\{x \in X \mid \Vert x\Vert \le r\}, \end{aligned}$$

the closed ball of radius r in a normed space X, we obtain by the finite-dimensionality of \(\ker R\) the following result.

Proposition 1

Let \({\varOmega }\subset \mathbb {R}^m\) and \(k \ge 0\). Pick \(a>0\). Then the sets

$$\begin{aligned} V :={{\mathrm{lev}}}_a R \oplus B_{\ker R}(a) \end{aligned}$$

for both regularisers \(R={{\mathrm{TV}}}\) and \(R={{\mathrm{TGV}}}^2\), are weak* compact in \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) and strongly compact in \(L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\).

The next result summarises Theorem 3 and Proposition 1.

Theorem 4

With \(U=L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\), let the operator \(A: U \rightarrow F\) be linear, continuous and injective. Let \(f^l_n\) and \(f^u_n\) be sequences of lower and upper bounds for the right-hand side such that

$$\begin{aligned} \begin{aligned}&f^l_n :f^l_{n+1} \geqslant f^l_n,&\quad&f^u_n :f^u_{n+1} \leqslant f^u_n, \\&f^l_n \leqslant f \leqslant f^u_n,&\quad&\Vert f^l_n - f^u_n \Vert \rightarrow 0 \quad \text {as } n \rightarrow \infty . \end{aligned} \end{aligned}$$

Supposing that there are no errors in the operator A and the exact solution \({\bar{u}}\) exists, define the feasible set as follows

$$\begin{aligned} U_n = \left\{ u \in U :\quad f^l_n \leqslant _F A u \leqslant _F f^u_n \right\} . \end{aligned}$$

Decomposing \(u \in U\) as \(u=u_0+u^\perp \) with \(u^\perp \in \ker R\), suppose

$$\begin{aligned} u \in U_n \implies \Vert u^\perp \Vert \le a \end{aligned}$$
(15)

for some constant \(a>0\), then for \(R={{\mathrm{TV}}}\) and \(R={{\mathrm{TGV}}}^2\), the sequence

$$\begin{aligned} u_n = \mathop {\hbox {arg min}}\limits _{u \in U_n} R(u) \end{aligned}$$

converges strongly in \(L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) to the exact solution \({\bar{u}}\) and \(R(u_n) \rightarrow R({\bar{u}})\).

Proof

With the decomposition \(u_n = u_{0,n} + u_n^\perp \), where \(u_n^\perp \in \ker R\), we have \(u_{0,n} \in {{\mathrm{lev}}}_a R\) for suitably large \(a>0\) through

$$\begin{aligned} R(u_{0,n}) = R(u_n) = \min _{u' \in U_n} R(u') \le R({\bar{u}}). \end{aligned}$$

The assumption (15) bounds \(\Vert u_n^\perp \Vert \le a\). Thus \(u_n \in V\) for V as in Proposition 1. The proposition thus implies the necessary compactness in \(U=L^1({\varOmega }; {{\mathrm{Sym}}}^k(\mathbb {R}^m))\) for the application of Theorem 3.

Remark 1

The condition (15) simply says for \(R={{\mathrm{TV}}}\) that the data have to bound the solution in mean. This is very reasonable to expect for practical data; anything else would be very degenerate. For \(R={{\mathrm{TGV}}}^2\) we also need the data to bound the entire affine part of the solution. Again, this is very likely for real data. Indeed, in DTI practice, with at least 6 independent diffusion-sensitising gradients, A is an invertible or even over-determined linear operator. In that typical case, the bounds \(f^l_n\) and \(f^u_n\) translate into \(U_n\) being a bounded set.

4 Solving the Optimisation Problem

4.1 The Chambolle–Pock Method

The Chambolle–Pock algorithm is an inertial primal-dual backward-backward splitting method, classified in [18] as the modified primal-dual hybrid gradient method (PDHGM). It solves the minimax problem

$$\begin{aligned} \min _x \max _y\ G(x) + \langle Kx,y\rangle - F^*(y), \end{aligned}$$
(16)

where \(G: X \rightarrow \overline{\mathbb {R}}\) and \(F^*: Y \rightarrow \overline{\mathbb {R}}\) are convex, proper, lower semi-continuous functionals on (finite-dimensional) Hilbert spaces X and Y. The operator \(K:X \rightarrow Y\) is linear, although an extension of the method to non-linear K has recently been derived [55]. The PDHGM can also be seen as a preconditioned ADMM (alternating directions method of multipliers); we refer to [18, 47, 56] for reviews of optimisation methods popular in image processing. For step sizes \(\tau ,\sigma >0\), and an over-relaxation parameter \(\omega >0\), each iteration of the algorithm consists of the updates

$$\begin{aligned} u_{i+1}&:=(I+\tau \partial G)^{-1}(u_i - \tau K^* y_{i}), \end{aligned}$$
(17a)
$$\begin{aligned} {\bar{u}}_{i+1}&:=u_{i+1} + \omega (u_{i+1}-u_i), \end{aligned}$$
(17b)
$$\begin{aligned} y_{i+1}&:=(I+\sigma \partial F^*)^{-1}(y_i + \sigma K {\bar{u}}_{i+1}). \end{aligned}$$
(17c)

We should remark that the order of the primal (u) and dual (y) updates here is reversed from the original presentation in [13]. The reason is that when reordered, the updates can, as discovered in [26], be easily written in a proximal point form.

The first and last updates are the backward (proximal) steps for the primal (u) and dual (y) variables, respectively, keeping the other fixed. However, the dual step includes some “inertia” or over-relaxation, as specified by the parameter \(\omega \). Usually \(\omega =1\), which is required for convergence proofs of the method. If G or \(F^*\) is uniformly convex, by smartly choosing for each iteration the step length parameters \(\tau ,\sigma \), and the inertia \(\omega \), the method can be shown to have convergence rate \(O(1/N^2)\). This is similar to Nesterov’s optimal gradient method [43]. In the general case the rate is \(O(1/N)\). In practice the method produces visually pleasing solutions in rather few iterations, when applied to image processing problems.

In implementation of the method, it is crucial that the resolvents \((I+\tau \partial G)^{-1}\) and \((I+\sigma \partial F^*)^{-1}\) can be computed quickly. We recall that they may be written as

$$\begin{aligned} (I+\tau \partial G)^{-1}(u) = \mathop {\hbox {arg min}}\limits _{u'}\left\{ \frac{\Vert u'-u\Vert ^2}{2\tau } + G(u')\right\} . \end{aligned}$$

Usually in applications, these computations turn out to be simple projections or linear operations—or the soft-thresholding operation for the \(L_1\)-norm.
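For reference, the following is a minimal sketch of the iterations (17) in generic form, assuming that the resolvents of G and \(F^*\) are supplied as functions; it is illustrative only and not tied to the particular discretisation used later.

```python
# Minimal sketch of the PDHGM / Chambolle-Pock iterations (17a)-(17c).
def pdhgm(K, Kt, prox_G, prox_Fstar, u0, y0, tau, sigma, omega=1.0, iters=500):
    """K, Kt: forward operator and its adjoint (as callables);
    prox_G(u, tau) and prox_Fstar(y, sigma): the resolvents
    (I + tau dG)^{-1} and (I + sigma dF*)^{-1}."""
    u, y = u0.copy(), y0.copy()
    for _ in range(iters):
        u_new = prox_G(u - tau * Kt(y), tau)          # (17a) primal proximal step
        u_bar = u_new + omega * (u_new - u)           # (17b) over-relaxation
        y = prox_Fstar(y + sigma * K(u_bar), sigma)   # (17c) dual proximal step
        u = u_new
    return u, y
```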

As a further implementation note, since the algorithm (17) is formulated in Hilbert spaces (see however [28]) while our problems are formulated in the Banach space \({{\mathrm{BV}}}({\varOmega }; {{\mathrm{Sym}}}^2(\mathbb {R}^3))\), we have to discretise our problems before application of the algorithm. We do this by simple forward-differences discretisation of the operator E with cell width \(h=1\) on a regular rectangular grid corresponding to the image voxels.
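As a building block of such a discretisation, a forward-difference derivative with cell width \(h=1\) can be sketched as follows; this is only the componentwise derivative along one axis, not the full symmetrised gradient E, and the zero value on the last slice is an assumed boundary convention.

```python
# Minimal sketch: forward differences with h = 1 along one axis of a voxel grid,
# with the derivative set to zero on the final slice along that axis.
import numpy as np

def forward_diff(u, axis):
    d = np.zeros_like(u)
    sl = [slice(None)] * u.ndim
    sl[axis] = slice(0, u.shape[axis] - 1)
    d[tuple(sl)] = np.diff(u, axis=axis)
    return d
```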

4.2 Implementation of Deterministic Constraints

We now reformulate the problem (11) of DTI imaging with deterministic error bounds in the form (16). Suppose we have some upper and lower bounds \(s_j^l\leqslant s_j\leqslant s_j^u\) on the DWI signals \(s_j\), (\(j=0,\ldots ,N\)). Then the bounds for \(g_j=\log (s_j/s_0)\) are

$$\begin{aligned} g_j^l=\log \left( s_j^l/s_0^u\right) ;\quad g_j^u=\log \left( s_j^u/s_0^l\right) , \quad (j=1,\ldots ,N), \end{aligned}$$
(18)

because \(g_j=\log (s_j/s_0)\) is increasing with respect to \(s_j\) and decreasing with respect to \(s_0\). We are thus trying to solve

$$\begin{aligned} u=\mathop {\hbox {arg min}}\limits _{u'\in U=\cap U^j} R(u'), \end{aligned}$$
(19)

where

$$\begin{aligned} U^j=\left\{ u:g_j^l\leqslant A_ju\leqslant g_j^u\right\} . \end{aligned}$$

For the ease of notation, we write

$$\begin{aligned} g&=\begin{pmatrix}g_1,&\ldots ,&g_N\end{pmatrix}, \quad \text {and} \nonumber \\ Au&= \begin{pmatrix} A_1 u,&\ldots ,&A_N u\end{pmatrix}, \end{aligned}$$
(20)

so that the Stejskal–Tanner equation is satisfied when

$$\begin{aligned} Au=g. \end{aligned}$$

The problem (19) may be rewritten as

$$\begin{aligned} \min _{u'}~ F_0(Au')+R(u'), \end{aligned}$$

for

$$\begin{aligned} F_0(y) = \delta (g^l \leqslant y\leqslant g^u) \end{aligned}$$

with \(\delta (g^l \leqslant y\leqslant g^u)\) denoting the indicator function of the convex set \(\{y :g^l \leqslant y\leqslant g^u\}\). Computing the conjugate (pointwise in each component of y)

$$\begin{aligned} F_0^*(y)={\left\{ \begin{array}{ll} \langle g^l,y\rangle , &{} y<0, \\ \langle g^u,y\rangle , &{} y\ge 0, \end{array}\right. } \end{aligned}$$

and also writing

$$\begin{aligned} R(u)=R_0(K_0 u) \end{aligned}$$

for some \(R_0\) and \(K_0\), the problem can further be written in the saddle point form (16) with

$$\begin{aligned} \begin{aligned} G(u)&= 0, \\ K&= \begin{pmatrix} A \\ K_0 \end{pmatrix}, \\ F^*(y,\psi )&= F_0^*(y) + R_0^*(\psi ). \end{aligned} \end{aligned}$$

To apply algorithm (17), we need to compute the resolvents of \(F_0^*\) and \(R_0^*\). For details regarding \(R_0^*\) for \(R={{\mathrm{TGV}}}_{(\beta ,\alpha )}^2\) and \(R=\alpha {{\mathrm{TV}}}\) in the discretised setting, we refer to [57, 58]; here it suffices to note that for \(R=\alpha {{\mathrm{TV}}}\), we have \(K_0=\alpha E\) and \(R_0^*(\phi )\) is the indicator function of the dual ball \(\{\phi \mid \sup _x \Vert \phi (x)\Vert _F \le 1\}\). Thus the resolvent \((I+\tau \partial R_0^*)^{-1}\) becomes a projection to the dual ball. The case of \(R={{\mathrm{TGV}}}_{(\beta ,\alpha )}^2\) is similar. For \(F_0^*\) we have

$$\begin{aligned} (I+\sigma \partial F_0^*)^{-1}(y)&= \mathop {\hbox {arg min}}\limits _{y'} F_0^*(y')+\frac{|y-y'|^2}{2\sigma }, \end{aligned}$$

which resolves pointwise at each \(\xi \in {\varOmega }\) into the expression

$$\begin{aligned}{}[(I+\sigma \partial F_0^*)^{-1}(y)](\xi )=S(y(\xi )) \end{aligned}$$

for

$$\begin{aligned} S(y(\xi ))={\left\{ \begin{array}{ll} y(\xi )-g^u(\xi )\sigma , &{} y(\xi )\ge g^u(\xi )\sigma , \\ 0, &{} g^l(\xi )\sigma \le y(\xi )\le g^u(\xi )\sigma , \\ y(\xi )-g^l(\xi )\sigma , &{} y(\xi )\le g^l(\xi )\sigma . \end{array}\right. } \end{aligned}$$
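The following is a minimal sketch of this pointwise resolvent as a vectorised operation; the array names are illustrative assumptions.

```python
# Minimal sketch of the map S above: the resolvent of sigma*dF_0^* applied
# voxelwise, shrinking y by the scaled bounds and clamping to zero in between.
import numpy as np

def prox_F0_star(y, g_l, g_u, sigma):
    lo, hi = sigma * g_l, sigma * g_u
    return np.where(y >= hi, y - hi, np.where(y <= lo, y - lo, 0.0))
```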

Finally, we note that the saddle point system (16) has to have a solution for the Chambolle–Pock algorithm to converge. In our setting, in particular, we need to find error bounds \(g^l\) and \(g^u\), for which there exists a solution u to

$$\begin{aligned} g^l \leqslant Au \leqslant g^u. \end{aligned}$$
(21)

If one uses at most six independent diffusion directions (\(N=6\)), as we will, then, for any g, there in fact exists a solution to \(g=Au\). The condition (21) becomes \(g^l \leqslant g^u\), immediately guaranteed through the monotonicity of (18), and the trivial conditions \(s_j^l \leqslant s_j^u\).

We are thus ready to apply the algorithm (17) to diffusion tensor imaging with deterministic error bounds. For the realisation of (17) for models (8), (9), and (7), we refer to [55, 5759].

Fig. 2

Helix vector field for the principal eigenvector of the ground-truth tensor field

Table 1 Numerical results for the synthetic data
Fig. 3

Synthetic test data results. a Ground-truth plot. b Regression result plot. c–h Plot of a slice of the solution for the \(L^2\), non-linear \(L^2\) and constrained models. The legend on the right indicates the colour-coding of directions of the principal eigenvector plotted

5 Experimental Results

We now study the efficiency of the proposed reconstruction model in comparison to the different \(L^2\)-squared models, i.e. ones with a Gaussian error assumption. This is based on synthetic data, for which a ground-truth is available, as well as a real in vivo DTI data set. First, however, we have to describe in detail the procedure for obtaining upper and lower bounds for real data, when we do not know the true noise model, and are unable to perform multiple experiments as required by the theory of Sect. 2.2.

Fig. 4

Fractional anisotropy in greyscale superimposed by principal eigenvector. Legend on left indicates the greyscale intensities of the fractional anisotropy. a Ground-truth. b Regression result. c Linear \(L^2\), discrepancy principle. d Linear \(L^2\), error-optimal. e Non-linear \(L^2\), discrepancy principle. f Non-linear \(L^2\), error-optimal. g Constrained, 95 % confidence intervals. h Constrained, 90 % confidence intervals

Fig. 5

Colour-coded errors of fractional anisotropy and principal eigenvector for the computations on the synthetic test data. Legend on the right indicates the colour-coding of errors between u and \(g_0\) as functions of the principal eigenvector angle error \(\theta =\cos ^{-1}(\langle {\hat{v}}_u,{\hat{v}}_{g_0}\rangle )\) in terms of the hue, and the fractional anisotropy error \(e_{\mathrm {FA}} =|\mathrm {FA}_u -\mathrm {FA}_{g_0}|\) in terms of whiteness. a Regression result. b Linear \(L^2\), discrepancy principle. c Linear \(L^2\), error-optimal. d Non-linear \(L^2\), discrepancy principle. e Non-linear \(L^2\), error-optimal. f Constrained, 95 % confidence intervals. g Constrained, 90 % confidence intervals

Table 2 Numerical results for the in vivo brain data. For the \(L^2\) and non-linear \(L^2\) reconstruction models the free parameter chosen by the parameter choice criterion is the regularisation parameter \(\alpha \), and for the constrained problem it is the confidence interval
Fig. 6

Reconstruction results on the in vivo brain data. a Pseudo-ground-truth plot. b Regression result. c–g Plot of a slice of the solution for the \(L^2\), non-linear \(L^2\) and constrained problem approaches. The legend on the bottom-right indicates the colour-coding of directions of the principal eigenvector plotted

Fig. 7

Fractional anisotropy of the corpus callosum in greyscale superimposed by principal eigenvector. Legend on the right indicates the greyscale intensities of the fractional anisotropy. a Pseudo-ground-truth. b Linear \(L^2\), discrepancy principle. c Linear \(L^2\), error-optimal. d Non-linear \(L^2\), discrepancy principle. e Non-linear \(L^2\), error-optimal. f Constrained, 95 % confidence intervals

5.1 Estimating Lower and Upper Bounds from Real Data

As we have already discussed, in practice the noise in the measurement signals \({\hat{s}}_j\) is not Gaussian or Rician; in fact we do not know the true noise distribution and other corruptions. Therefore, we have to estimate the noise distribution from the image background. To do this, we require a known correspondence between the measurement, the noise and the true value. As we have no better assumptions available, the standard one that we use is that of additive noise. Continuing in the statistical setting of Sect. 2.2, we now describe the procedure, working on discrete images expressed as vectors \({\hat{f}}={\hat{s}}_j \in \mathbb {R}^n\) for some fixed \(j \in \{0,1,\dots ,N\}\). We use superscripts to denote the voxel indices, that is \({\hat{f}}=({\hat{f}}^1,\ldots ,{\hat{f}}^n)\).

In the i-th voxel, the measured value \({\hat{f}}^i\) is the sum of the true value \(f^i\) and additive noise \(\nu ^i\):

$$\begin{aligned} {\hat{f}}^i = f^i + \nu ^i. \end{aligned}$$

All \(\nu ^i\) are assumed independent and identically distributed (i.i.d.), but their distribution is unknown. If we did know the true underlying cumulative distribution function F of the noise, we could choose a confidence parameter \(\theta \in (0,1)\) and use the cumulative distribution function to calculate \(\nu _{\theta /2}, \nu _{1-\theta /2}\) such that

$$\begin{aligned} P(\nu _{\theta /2} \leqslant \nu ^i \leqslant \nu _{1-\theta /2}) = 1-\theta . \end{aligned}$$
(22)

Let us instead proceed non-parametrically, and divide the whole image into two groups of voxels: the ones belonging to the background region and the rest. For simplicity, let the indices \(i=1,\ldots ,k\), (\(k < n\)), stand for the background voxels. In this region, we have \(f^i = 0\) and \({\hat{f}}^i = \nu ^i\). Therefore, the background region provides us with a number of independent samples from the unknown distribution of \(\nu \). The Dvoretzky–Kiefer–Wolfowitz inequality [17, 37, 41] states that

$$\begin{aligned} P\bigl (\sup _t |F(t)-F_k(t)| > \epsilon \bigr ) \le 2e^{-2k\epsilon ^2}, \end{aligned}$$

for the empirical cumulative distribution function

$$\begin{aligned} F_k(t) :=\frac{1}{k} \sum _{i=1}^k \chi _{(-\infty , {\hat{f}}^i]}(t). \end{aligned}$$

Therefore, computing \(\nu _{\theta /2}\) and \(\nu _{1-\theta /2}\) such that

$$\begin{aligned} P_k(\nu _{\theta /2} \leqslant \nu \leqslant \nu _{1-\theta /2}) = 1-\theta , \end{aligned}$$

we also have

$$\begin{aligned} P_k\left( f^i + \nu _{\theta /2} \leqslant {\hat{f}}^i \leqslant f^i + \nu _{1-\theta /2}\right) = 1-\theta . \end{aligned}$$

We may therefore use the values

$$\begin{aligned} {\hat{f}}^{l,i} = {\hat{f}}^i - \nu _{1-\theta /2}, \quad {\hat{f}}^{u,i} = {\hat{f}}^i - \nu _{\theta /2}, \end{aligned}$$
(23)

as lower and upper bounds for the true values \(f^i\) outside the background region.

The Dvoretzky–Kiefer–Wolfowitz inequality implies that the interval estimates converge to the true intervals, determined by (22), as the number of background pixels k increases with the image size n. This procedure, with large k, will therefore provide an estimate of a single-experiment (\(m=1\)) confidence interval for \(f^i\). We note that this procedure will, however, not yield the convergence of the interval estimate \([{\hat{f}}^{l,i}, {\hat{f}}^{u,i}]\) to the true data; for that we would need multiple experiments, i.e. multiple sample images (\(m>1\)), not just agglomeration of the background voxels into a single noise distribution estimate. In practice, however, we can only afford a single experiment (\(m=1\)), and cannot go to the limit.
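As an illustration, the following is a minimal sketch of this bound-estimation procedure using empirical quantiles of the background voxels; the array names and the background mask are assumptions made for the illustration.

```python
# Minimal sketch: pointwise bounds (23) from empirical noise quantiles
# estimated on the image background.
import numpy as np

def estimate_bounds(f_hat, background_mask, theta=0.05):
    """f_hat: measured image; background_mask: boolean array of the same shape
    marking pure-noise voxels (f^i = 0 there). Returns lower/upper bounds."""
    noise = f_hat[background_mask]                       # samples of nu
    nu_lo, nu_hi = np.quantile(noise, [theta / 2, 1 - theta / 2])
    f_l = f_hat - nu_hi      # \hat f^{l,i} = \hat f^i - nu_{1-theta/2}
    f_u = f_hat - nu_lo      # \hat f^{u,i} = \hat f^i - nu_{theta/2}
    return f_l, f_u
```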

5.2 Verification of the Approach with Synthetic Data

To verify the effectiveness of the considered approach and to compare it to the other models, we use synthetic data. For the ground-truth tensor field \(u_{g.t.}\) we take a helix region in a 3D box \(100 \times 100 \times 30\), and choose the tensor at each point inside the helix in such a way that the principal eigenvector coincides with the helix direction (Fig. 2). The helix region is described by the following equations:

$$\begin{aligned}&x=(R+r\cos (\theta ))\cos (\phi ),\\&y=(R+r\cos (\theta ))\sin (\phi ),\\&z=r\sin (\theta )+\phi /\phi _{\max },\\&\phi \in [0,\phi _{\max }],\quad r \in [0,r_{\max }], \quad \theta \in [0,2\pi ]. \end{aligned}$$

The vector direction at every point coincides with the helix direction:

$$\begin{aligned} \mathbf {r}=\begin{pmatrix}-R\sin (\phi )\\ R\cos (\phi )\\ 1/\phi _{\max }\end{pmatrix}. \end{aligned}$$

We take the parameters \(R=0.3, \phi _{\max }=4\pi , r_{\max }=0.07\) in this numerical experiment.

We apply the forward operators \(T_j(u_{g.t.})\) for each \(j=0, \ldots , 6\) to obtain the data \(s_j(x)\). We then add Rician noise to this data, \(\bar{s}_j=s_j+\delta \), with \(\sigma =2\), which corresponds to \(\mathrm {PSNR} \approx 27\,\hbox {dB}\).
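One common way to realise such Rician noise numerically is to perturb the signal with complex Gaussian noise and take the magnitude; the following minimal sketch assumes this construction and is illustrative only.

```python
# Minimal sketch: Rician-distributed corruption of a real-valued signal s,
# obtained as the magnitude of s plus complex Gaussian noise of std sigma.
import numpy as np

def add_rician_noise(s, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    real = s + rng.normal(0.0, sigma, s.shape)
    imag = rng.normal(0.0, sigma, s.shape)
    return np.hypot(real, imag)   # |s + noise|
```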

We apply several models for solving the inverse problem of reconstructing u: the linear and non-linear \(L^2\) approaches (8) and (7), and the constrained problem (11). As the regulariser we use \(R={{\mathrm{TGV}}}^2_{(0.9\alpha ,\alpha )}\), where the choice \(\beta =0.9\alpha \) was made somewhat arbitrarily, but yields good results for all the models. This is slightly lower than the range \([1, 1.5]\alpha \) discovered in comprehensive experiments for other imaging modalities [7, 38].

For the linear and non-linear \(L^2\) models (8) and (7), respectively, the regularisation parameter \(\alpha \) is chosen either by a version of the discrepancy principle for inconsistent problems [52] or optimally with regard to the \(\Vert \,\varvec{\cdot }\,\Vert _{F,2}\) distance between the solution and the ground-truth. In the case of the discrepancy principle, \(\alpha \) was chosen such that the following equality holds:

$$\begin{aligned} {\varDelta }\rho (\alpha ) :=\sum _j||T_j(u)-\bar{s_j}||^2-\tau \sum _j||\bar{s_j}-s_j||^2=0. \end{aligned}$$
(24)
Fig. 8

Colour-coded errors of fractional anisotropy and principal eigenvector for the computations on the in vivo brain data. Legend on the right indicates the colour-coding of errors between u and \(g_0\) as functions of the principal eigenvector angle error \(\theta =\cos ^{-1}(\langle {\hat{v}}_u,{\hat{v}}_{g_0}\rangle )\) in terms of the hue, and the fractional anisotropy error \(e_{\mathrm {FA}} =|\mathrm {FA}_u -\mathrm {FA}_{g_0}|\) in terms of whiteness. a Linear \(L^2\), discrepancy principle. b Linear \(L^2\), error-optimal. c Non-linear \(L^2\), discrepancy principle. d Non-linear \(L^2\), error-optimal. e Constrained, 95 % confidence intervals

Fig. 9

Tractography visualisation results on the in vivo brain data. a Pseudo-ground-truth plot. b Regression result. c–f Plot of a slice of the solution for the \(L^2\), non-linear \(L^2\) and constrained problem approaches

We find \(\alpha \) by solving this equation numerically using the bisection method. We start by finding \(\alpha _1,\alpha _2\) such that \({\varDelta }\rho (\alpha _1)>0\) and \({\varDelta }\rho (\alpha _2)<0\). We calculate \({\varDelta }\rho (\alpha _3)\) for \(\alpha _3=\frac{\alpha _1+\alpha _2}{2}\) and, depending on its sign, replace either \(\alpha _1\) or \(\alpha _2\) with \(\alpha _3\). We repeat this procedure until the stopping criterion is reached.

As the stopping criterion we use \(|{\varDelta }\rho (\alpha )|<\epsilon \). We use \(\tau =1.05\), \(\epsilon =0.01\) for the linear and \(\tau =1.2\), \(\epsilon =0.0001\) for the non-linear \(L^2\) solution. A value of \(\tau \) yielding a reasonable degree of smoothness has been chosen by trial and error, and is different for the non-linear model, reflecting a different non-linear objective in the discrepancy principle. For the constrained problem we calculate \(1-\theta =90\), 95 and \(99\,\%\) confidence intervals to generate the upper and lower bounds. We, however, digress a little from the approach of Sect. 2.2. Since we do not know the true underlying distribution, which fails to be Rician as illustrated in Fig. 1, we do not use it to calculate the confidence intervals, but use the estimation procedure described in Sect. 5.1. We stress that we only have a single sample of each signal \(s_j\), so we are unable to verify any asymptotic estimation properties.
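For concreteness, the bisection just described can be sketched as follows, assuming \({\varDelta }\rho \) is available as a function of \(\alpha \) (each evaluation requiring a full reconstruction); the variable names are illustrative.

```python
# Minimal sketch of the bisection used to solve the discrepancy equation (24).
def bisect_alpha(delta_rho, alpha1, alpha2, eps=0.01, max_iter=100):
    """Assumes delta_rho(alpha1) > 0 and delta_rho(alpha2) < 0, as in the text."""
    for _ in range(max_iter):
        alpha3 = 0.5 * (alpha1 + alpha2)
        val = delta_rho(alpha3)
        if abs(val) < eps:         # stopping criterion |delta_rho(alpha)| < eps
            return alpha3
        if val > 0:
            alpha1 = alpha3        # keep the bracket's sign pattern
        else:
            alpha2 = alpha3
    return 0.5 * (alpha1 + alpha2)
```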

The numerical results are in Table 1 and Figs. 3, 4, and 5, with the first of the figures showing the colour-coded principal eigenvector of the reconstruction, the second showing the fractional anisotropy and principal eigenvectors and the last one the errors in the latter two, in a colour-coded manner. All plots are masked to represent only the non-zero region. The field of fractional anisotropy is defined for a field u of 2-tensors on \({\varOmega }\subset \mathbb {R}^m\) as

$$\begin{aligned} \mathrm {FA}_u(x) =\frac{\Bigl (\sum _{i=1}^m (\lambda _i-{\bar{\lambda }})^2\Bigr )^{1/2}}{\Bigl ({\sum _{i=1}^m \lambda _i^2}\Bigr )^{1/2}} \in [0, 1], \end{aligned}$$

with \(\lambda _1,\ldots ,\lambda _m\) denoting the eigenvalues of u(x) at a point \(x \in {\varOmega }\). It measures how far the ellipsoid prescribed by the eigenvalues and eigenvectors is from a sphere, with \(\mathrm {FA}_u(x)=0\) corresponding to a full sphere, and values close to 1 corresponding to a degenerate object not having full dimension.
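The following minimal sketch computes this map voxelwise from the eigenvalues of a discretised tensor field, following the formula above; the small \(\epsilon \) guarding against division by zero is an added assumption.

```python
# Minimal sketch: fractional anisotropy of a (..., 3, 3) symmetric tensor field.
import numpy as np

def fractional_anisotropy(u, eps=1e-12):
    lam = np.linalg.eigvalsh(u)                      # eigenvalues, shape (..., 3)
    lam_bar = lam.mean(axis=-1, keepdims=True)       # mean eigenvalue
    num = np.sqrt(((lam - lam_bar) ** 2).sum(axis=-1))
    den = np.sqrt((lam ** 2).sum(axis=-1)) + eps
    return num / den
```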

As we can see, the non-linear approach (7) performs overall the best by a wide margin, in terms of the pointwise Frobenius error, i.e. error in \(\Vert \cdot \Vert _{F,2}\). This is expressed as a PSNR in Table 1. What is interesting, however, is that the constraint-based approach (11) has a much better reconstruction of the principal eigenvector angle, and a comparable reconstruction of its magnitude. Indeed, the 95 % confidence interval in Figs. 3(g) and 4(g) suggests a nearly perfect reconstruction in terms of smoothness. But the Frobenius PSNR in Table 1 for this approach is worse than the simple unregularised inversion by regression. The problem is revealed by Fig. 5(f): the large white cloudy areas indicate huge fractional anisotropy errors, while at the same time, the principal eigenvector angle errors expressed in colour are much lower than for the other approaches. Good reconstruction of the principal eigenvector is important for the process of tractography, i.e. the reconstruction of neural pathways in the brain. One explanation for our good results is that the regulariser completely governs the solution in areas where the error bounds are inactive due to generally low errors. This results in very smooth reconstructions, which is in the present case desirable as our synthetic tensor field is also smooth within the helix.

5.3 Results with In Vivo Brain Imaging Data

We now wish to study the proposed regularisation model on a real in vivo diffusion tensor image. Our data are that of a human brain, with the measurements of a volunteer performed on a clinical 3T system (Siemens Magnetom TIM Trio, Erlangen, Germany) with a 32-channel head coil. A 2D diffusion-weighted single-shot EPI sequence was used, with diffusion-sensitising gradients applied in 12 independent directions (\(b = 1000\,\hbox {s}/\hbox {mm}^2\)). An additional reference scan without diffusion weighting was used, with the parameters: \(\hbox {TR}=7900\,\hbox {ms}\), \(\hbox {TE}=94\,\hbox {ms}\), flip angle \(90^\circ \). Each slice of the 3D dataset has plane resolution \(1.95\,\hbox {mm} \times 1.95\,\hbox {mm}\), with a total of \(128 \times 128\) pixels. The total number of slices is 60 with a slice thickness of 2 mm. The dataset consists of 4 repeated measurements. The GRAPPA acceleration factor is 2. Prior to the reconstruction of the diffusion tensor, eddy-current correction was performed with FSL [50]. Written informed consent was obtained from the volunteer before the examination.

For error bounds calculation according to the procedure of Sect. 5.1, to avoid systematic bias near the brain, we only use about 0.6 % of the total volume near the borders, or roughly \(k \approx 6000\) voxels.

To estimate errors for all the considered reconstruction models, for each gradient direction \(b_j\) we use only one out of the four duplicate measurements. We then calculate the errors using a somewhat less than ideal pseudo-ground-truth, which is the linear regression reconstruction from all the available measurements.

The results are in Table 2 and Figs. 6, 7, and 8, again with the first of the figures showing the colour-coded principal eigenvector of the reconstruction, the second showing the fractional anisotropy and principal eigenvectors and the last one the errors in the latter two, in a colour-coded manner. Again, all plots are masked to represent only the non-zero region. In the figures, we concentrate on error bounds based on 95 % confidence intervals, as the results for the 90 and 99 % cases do not differ significantly according to Table 2.

This time, the linear \(L^2\) approach (8) has the best overall reconstruction (Frobenius PSNR), while the non-linear \(L^2\) approach (7) clearly has the best principal eigenvector angle reconstruction besides the regression, which does not seem entirely reliable given our regression-based pseudo-ground-truth. The constraints-based approach (11), with 95 % confidence intervals, is, however, not far behind in terms of numbers. More detailed study of the corpus callosum in Fig. 8 (small picture in picture) and Fig. 7, however, indicates a better reconstruction of this important region by the non-linear approach. The constrained approach has some very short vectors there in the white region. Naturally, however, these results on the in vivo data should be taken with a grain of salt, as we have only a somewhat unreliable pseudo-ground-truth available for comparison purposes.

5.4 Conclusions from the Numerical Experiments

Our conclusion is that the error bounds-based approach is a feasible alternative to standard modelling with incorrect Gaussian assumptions. It can produce good reconstructions, although the non-linear \(L^2\) approach of [55] is possibly slightly more reliable. The latter does, however, in principle depend on a good initialisation of the optimisation method, unlike the convex bounds-based approach.

Further theoretical work will be undertaken to extend the partial-order-based approach to modelling errors in linear operators to the non-lattice case of the semidefinite partial order for symmetric matrices, which will allow us to consider problems of diffusion MRI with errors in the forward operator.

It also needs to be investigated whether the error bounds approach needs to be combined with an alternative, novel regulariser that would ameliorate the fractional anisotropy errors that the approach exhibits. It is important to note, however, that from the practical point of view, of using the reconstructed tensor field for basic tractography methods based solely on principal eigenvectors, these are not that critical. As pointed out by one of the reviewers, the situation could differ with more recent geodesic tractography methods [20, 21, 25] employing the full tensor. We provide basic principal eigenvector tractography results for reference in Fig. 9, without attempting to extensively interpret the results. It suffices to say that the results look comparable. With this in mind, the error bounds approach produces a very good reconstruction of the direction of the principal eigenvectors, although we saw some problems with the magnitude within the corpus callosum.