Abstract
We propose a variational regularisation approach for the problem of template-based image reconstruction from indirect, noisy measurements as given, for instance, in X-ray computed tomography. An image is reconstructed from such measurements by deforming a given template image. The image registration is directly incorporated into the variational regularisation approach in the form of a partial differential equation that models the registration as either mass- or intensity-preserving transport from the template to the unknown reconstruction. We provide theoretical results for the proposed variational regularisation for both cases. In particular, we prove existence of a minimiser, stability with respect to the data, and convergence for vanishing noise when either of the above-mentioned equations is imposed and more general distance functions are used. Numerically, we solve the problem by extending existing Lagrangian methods and propose a multilevel approach that is applicable whenever a suitable downsampling procedure for the operator and the measured data can be provided. Finally, we demonstrate the performance of our method for template-based image reconstruction from highly undersampled and noisy Radon transform data. We compare results for mass- and intensity-preserving image registration, various regularisation functionals, and different distance functions. Our results show that very reasonable reconstructions can be obtained when only a few measurements are available and demonstrate that the use of a normalised cross-correlation-based distance is advantageous when the image intensities between the template and the unknown image differ substantially.
Introduction
In medical imaging, an image can typically not be observed directly but only through indirect and potentially noisy measurements, as is the case, for example, in computed tomography (CT) [41]. Due to the severe ill-posedness of the problem, reconstructing an image from measurements is rendered particularly challenging when only a few or partial measurements are available. This is, for instance, the case in limited-angle CT [22, 41], where limited-angle data is acquired in order to minimise the exposure time of organisms to X-ray radiation. Therefore, it can be beneficial to impose a priori information on the reconstruction, for instance, in the form of a template image. However, typically neither its exact position nor its exact shape is known.
In image registration, the goal is to find a reasonable deformation of a given template image so that it matches a given target image as closely as possible according to a predefined similarity measure; see [39, 40] for an introduction. When the target image is unknown and given only through indirect measurements, the problem is referred to as indirect image registration and has been explored only recently [13, 24, 31, 45]. As a result, a deformation together with a transformed template can be computed from tomographic data. The prescribed template acts as a prior for the reconstruction and, when chosen reasonably close in a deformation sense, gives outstanding reconstructions in situations where only a few measurements are available and competing methods such as filtered backprojection [41] or total variation regularisation [47] fail; see [13, Sect. 10].
In our setting, deformations are maps from the image domain \(\Omega \subset \mathbb {R}^{n},\) \(n \in \mathbb {N},\) to itself together with an action that specifies exactly how such a map deforms elements in the shape space, which in this work is the space \(L^{2}(\Omega , \mathbb {R})\) of greyscale images supported in the image domain. Natural problems are to characterise admissible deformations and to compute these numerically in an efficient manner.
One possible approach is diffeomorphic image registration, where the set of admissible deformations is restricted to diffeomorphisms in order to preserve the topology of structures within an image [58]. One can, for instance, consider the group of diffeomorphisms together with the composition as group operation. Elements in this group act on greyscale images by means of the group action and thereby allow for a rich set of non-rigid deformations, as required in many applications. For instance, the geometric group action transforms greyscale images in such a way that their intensity values are preserved, whereas the mass-preserving group action ensures that, when the image is regarded as a density, the integral over the density is preserved.
A computational challenge in using the above group formalism is that it lacks a natural vector space structure, which is typically desired for the numerical realisation of the scheme. Hence, it is convenient to further restrict the set of admissible deformations. One way to obtain diffeomorphic deformations is to perturb the identity map with a displacement vector field. Provided that the vector field is reasonably small and sufficiently regular, the resulting map is invertible [58, Proposition 8.6]. For indirect image registration this idea was pursued in [45].
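To make the small-deformation idea concrete, the following sketch (a hypothetical Python/NumPy illustration, not the implementation from [45]) checks numerically that perturbing the identity by a small, smooth displacement field yields a positive Jacobian determinant everywhere, which indicates local invertibility; the displacement field and its scaling are illustrative choices.

```python
import numpy as np

def jacobian_det_2d(u, h):
    """Jacobian determinant of phi(x) = x + u(x) on a regular grid.

    u has shape (2, n, n); component 0 is the x-displacement and
    component 1 the y-displacement, h is the grid spacing."""
    du0_dy, du0_dx = np.gradient(u[0], h, h)  # derivatives along y (axis 0) and x (axis 1)
    du1_dy, du1_dx = np.gradient(u[1], h, h)
    # det(I + Du) for 2x2 Jacobians, evaluated pointwise.
    return (1 + du0_dx) * (1 + du1_dy) - du0_dy * du1_dx

n, h = 64, 1.0 / 63
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
# A smooth displacement that vanishes on the boundary, scaled to be "small".
u = 0.02 * np.stack([np.sin(np.pi * x) * np.sin(np.pi * y),
                     np.sin(2 * np.pi * x) * np.sin(np.pi * y)])
det = jacobian_det_2d(u, h)
print(det.min() > 0)  # positive determinant: phi is locally invertible
```

Increasing the amplitude of u far beyond the chosen 0.02 eventually produces negative determinants, illustrating why the displacement must be reasonably small.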
The basic idea of the large deformation diffeomorphic metric mapping (LDDMM) [4, 18, 37, 38, 50, 53, 58] framework is to generate large deformations by considering flows of diffeomorphisms that arise as the solution of an ordinary differential equation (ODE), the so-called flow equation, with velocity fields that stem from a reproducing kernel Hilbert space. In order to ensure that the flow equation admits a unique solution, one typically chooses this vector space so that it can be continuously embedded into \(C^1(\Omega , \mathbb {R}^{n}),\) allowing the application of existence and uniqueness results from Cauchy–Lipschitz theory for ODEs, see [15, Chap. 1] for a brief introduction. In [13], the LDDMM framework is adapted for indirect image registration and the authors prove existence, stability, and convergence of solutions for their variational formulation. Numerically, the problem is solved by gradient descent.
The variational problem associated with LDDMM is typically formulated as an ODE-constrained optimisation problem. As the flow equation can be directly related to hyperbolic partial differential equations (PDEs) via the method of characteristics [21, Chap. 3.2], the problem can equivalently be rephrased as a PDE-constrained optimisation problem [33]. The resulting PDE is determined by the chosen group action, see [13, Sect. 6.1.1] for a brief discussion. For instance, the geometric group action is associated with the transport (or advection) equation, while the mass-preserving group action is associated with the continuity equation. It is important to highlight that the PDE constraint implements both the flow equation and the chosen diffeomorphic group action.
Such an optimal control approach was also pursued for motion estimation and image interpolation [2, 6, 7, 9, 12, 29, 44]. In the terminology of optimal control, the PDE represents the state equation, the velocity field the control, and the transformed image the resulting state. We refer to the books [5, 16, 26, 32] and to the article [30] for a general introduction to PDE-constrained optimisation and suitable numerical methods. Let us mention that other methods, such as geodesic shooting [3, 37, 49, 56], exist and constitute particularly efficient numerical approaches. In particular, this direction has recently been combined with machine learning methods [57].
A particularly challenging scenario for diffeomorphic image registration occurs when the target image is not contained in the orbit of the template image under the above-mentioned group action of diffeomorphisms. For instance, this could happen in the case of the geometric group action due to the appearance of new structures in the target image or due to a discrepancy between the image intensities of the template and the target image. A possible solution is provided by the metamorphosis framework [36, 46, 51, 52], which is an extension to LDDMM that allows for modulations of the image intensities along characteristics of the flow. The image intensities change according to an additional flow equation with an unknown source. See [58, Chap. 13] for a general introduction and, for instance, [33] for an application to magnetic resonance imaging. Let us also mention [43], which adopts a discrete geodesic path model for the purpose of image reconstruction, and [34], in which the metamorphosis model is combined with optimal transport.
In [24], the metamorphosis framework is adapted for indirect image registration. The authors prove that their formulation constitutes a well-defined regularisation method by showing existence, stability, and convergence of solutions. However, in the setting where only a few measurements (e.g. a few projection directions in CT) are available, reconstructing appearing or disappearing structures seems very challenging.
Therefore, in order to obtain robustness with respect to differences in the intensities between the transformed template and the sought target image, we follow a different approach. We consider not only the standard sum-of-squared-differences (SSD) distance but also a distance based on the normalised cross correlation (NCC) [40, Chap. 7.2], as the latter is invariant with respect to a scaling of the image intensities.
While image registration itself is already an ill-posed inverse problem that requires regularisation [20], the indirect setting as described above is intrinsically more challenging. It can be phrased as an inverse problem, where measurements (or observations) \(g \in Y\) are related to an unknown quantity \(f \in X\) via the operator equation
\(g = K(f) + n^{\delta }. \qquad (1)\)
Here, \(K:X \rightarrow Y\) is a (not necessarily linear) operator that models the data acquisition, often by means of a physical process, \(n^{\delta }\) are measurement errors such as noise, and X and Y are Banach spaces. When f constitutes an image and g are tomographic measurements, solving (1) is often referred to as image reconstruction.
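The structure of the operator equation can be illustrated with a small, self-contained NumPy sketch. Here a simple Gaussian blur matrix stands in for K (in this article K is, e.g., the Radon transform); the phantom, blur width, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth signal f and a linear forward operator K.
# A 1D Gaussian blur matrix stands in for e.g. the Radon transform;
# the structure g = K f + noise is the same.
m = 100
x = np.linspace(0, 1, m)
f = (np.abs(x - 0.5) < 0.2).astype(float)   # a "box" phantom

K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.03) ** 2)
K /= K.sum(axis=1, keepdims=True)           # row-normalised blur matrix

delta = 0.01
g = K @ f + delta * rng.standard_normal(m)  # noisy indirect measurements

residual = np.linalg.norm(g - K @ f)
print(residual <= delta * 3 * np.sqrt(m))   # noise of the expected magnitude
```

Recovering f from g by naive inversion of K amplifies the noise, which is the usual motivation for the variational regularisation discussed next.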
We use a variational scheme [48] to solve the inverse problem of indirect image registration, which can be formulated as a PDE-constrained optimisation problem [13, Sect. 6.1.1]. It is given by
\(\min _{v \in V} J_{\gamma , g}(v), \qquad (2)\)
where \(J_{\gamma , g}:V \rightarrow [0, + \infty ]\) is the functional
\(J_{\gamma , g}(v) = D\bigl (K(f_{v}(T, \cdot )), g\bigr ) + \gamma \Vert v \Vert _{V}^{2}. \qquad (3)\)
Here, V is an admissible vector space with norm \(\Vert \cdot \Vert _{V},\) \(D:Y \times Y \rightarrow \mathbb {R}_{\ge 0}\) is a data fidelity term that quantifies the misfit of the solution against the measurements, and \(\gamma > 0\) is a regularisation parameter. Moreover, \(f_{v}(T, \cdot ):\Omega \rightarrow \mathbb {R}\) denotes the evaluation at time \(T > 0\) of the (weak) solution of C(v), which is either the Cauchy problem
\(\partial _{t} f + v \cdot \nabla _{x} f = 0, \quad f(0, \cdot ) = f_{0},\)
governed by the transport equation, or
\(\partial _{t} f + {{\,\mathrm{div}\,}}_{x}(f v) = 0, \quad f(0, \cdot ) = f_{0},\)
involving the continuity equation. Here, \(f_{0} \in L^2(\Omega ,\mathbb {R})\) denotes an initial condition, which in our case is the template image.
The main goals of this article are the following. First, to study variational and regularising properties of problem (2), and to develop efficient numerical methods for solving it. Second, to investigate alternative choices of distance functions D, such as the above-mentioned NCC-based distance. Third, to demonstrate experimentally that excellent reconstructions can be computed from highly undersampled and noisy Radon transform data.
Our numerical approach is based on the Lagrangian methods developed in [35], called LagLDDMM. In contrast to most existing approaches, which are mainly first-order methods (see [35] for a brief classification and discussion), LagLDDMM uses a Gauss–Newton–Krylov method paired with Lagrangian solvers for the hyperbolic PDEs listed above. The characteristics associated with these PDEs are computed with an explicit Runge–Kutta method. One of the main advantages of this approach is that Lagrangian methods are unconditionally stable with regard to the admissible step size. Furthermore, the approach limits numerical diffusion and, in order to evaluate the gradient or the Hessian required for optimisation, does not require the storage of multiple space-time vector fields or images at intermediate time instants. The scheme can also be implemented matrix-free.
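As a minimal illustration of the Lagrangian idea, the sketch below (hypothetical Python, not LagLDDMM itself) integrates the characteristic ODE backward in time with the classical fourth-order Runge–Kutta scheme and then evaluates the transported template in semi-Lagrangian fashion. For a constant velocity the characteristics are straight lines, so the result can be checked exactly.

```python
import numpy as np

def characteristics_back(v, x, T, steps):
    """Integrate dX/dt = v(t, X) from t = T back to t = 0 with classical RK4.

    Returns X(0, T, x) for each grid point x (1D)."""
    X = x.copy()
    dt = -T / steps                      # negative step: backward in time
    t = T
    for _ in range(steps):
        k1 = v(t, X)
        k2 = v(t + dt / 2, X + dt / 2 * k1)
        k3 = v(t + dt / 2, X + dt / 2 * k2)
        k4 = v(t + dt, X + dt * k3)
        X = X + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return X

# Constant velocity: characteristics are straight lines, X(0,T,x) = x - T*v.
v = lambda t, X: 0.25 * np.ones_like(X)
x = np.linspace(0, 1, 101)
X0 = characteristics_back(v, x, T=1.0, steps=10)

f0 = lambda y: np.exp(-50 * (y - 0.3) ** 2)
fT = f0(X0)                              # semi-Lagrangian evaluation at t = T
print(np.allclose(X0, x - 0.25))         # exact for constant velocity
```

Because the template is evaluated once at the foot of each characteristic, no intermediate images need to be stored, which mirrors the storage advantage mentioned above.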
In comparison to the above-mentioned existing methods for indirect image registration, such as [13, 24, 31, 45], our method is conceptually different in several ways. The first difference concerns the discretisation. While [13, 24, 45] are mainly based on small deformations and use reproducing kernel Hilbert spaces, our method relies on non-parametric registration. The main advantages are that it directly allows for a multilevel approach and that no kernel parameters need to be chosen. Moreover, due to the flexibility of the underlying framework, it is straightforward to extend our method to parametric registration. Second, our approach relies on second-order methods for optimisation by using a Gauss–Newton method paired with line search, while the other methods mainly rely on gradient descent. This allows for a fast decrease of the objective within only a few iterations. Third, our method makes it easy to exchange the underlying PDE solver. Essentially, any solver can be used as long as it can be differentiated efficiently. The explicit Runge–Kutta method used here has the advantage that it does not require the storage of multiple images or repeated interpolation of the template, which can potentially lead to a blurred solution. Finally, let us mention that [31] is conceptually different since both a deformation and a template image are computed. Our main focus, however, is on applications where only very few and noisy measurements are available, and the problem of estimating an additional template seems highly underdetermined in such situations.
Contributions
The contributions of this article are as follows. First, we provide the necessary theoretical background on (weak) solutions of the continuity and the transport equation, and recapitulate existence and uniqueness theory for the characteristic curves of the associated ODE. In contrast to the results derived in [13], where the template image is assumed to be contained in the space \(SBV(\Omega , \mathbb {R}) \cap L^{\infty }(\Omega , \mathbb {R})\) of essentially bounded functions with special bounded variation, our results only require \(L^{2}(\Omega , \mathbb {R})\) regularity. In addition, by using results from [17], we are able to consider the transport equation in the setting with \(H^1\) regularity of vector fields in space as well as in time and with bounded divergence. Moreover, we show the existence of a minimiser of the problem (2), stability with respect to the data, and convergence for vanishing noise.
Second, in order to solve the problem numerically, we follow a discretise-then-optimise approach and extend the LagLDDMM framework [35] to the indirect setting. The library itself is an extension of FAIR [40] and, as a result, our implementation provides great flexibility regarding the selected PDE, and can easily be extended to other distances as well as to other regularisation functionals. The source code of our MATLAB implementation is available online.^{1}
Finally, we present numerical results for the above-mentioned distances and PDEs. To the best of our knowledge, the results obtained for indirect image reconstruction based on the continuity equation are entirely novel. Moreover, we propose to use the NCC-based distance instead of SSD whenever the image intensities of the template and the unknown target are far apart, and show its numerical feasibility.
Theoretical Results on the Transport and Continuity Equation
In this section, we review the necessary theoretical background, and state results on the existence and stability of weak solutions of the transport and the continuity equation. Compared to [13], our results are stronger since we do not require space regularity of the template image.
Continuity Equation
In what follows, we consider well-posedness of the continuity equation that arises in the LDDMM framework using the mass-preserving group action via the method of characteristics. The regularity assumptions on v are such that we can apply the theory from [51].
Let \(\Omega \subset \mathbb {R}^n\) be a bounded, open, convex domain with Lipschitz boundary and let \(T>0.\) In the following, we examine the continuity equation
\(\partial _{t} f + {{\,\mathrm{div}\,}}_{x}(f v) = 0 \ \text { in } (0,T) \times \Omega , \quad f(0, \cdot ) = f_{0}, \qquad (4)\)
with coefficients \(v \in L^2([0,T],{\mathcal {V}})\) and initial condition \(f_0 \in L^2(\Omega , \mathbb {R}),\) where \({\mathcal {V}}\) is a Banach space which is continuously embedded into \(C^{1,\alpha }_0(\Omega , \mathbb {R}^n)\) for some \(\alpha >0.\) Here \(C^{1,\alpha }_0(\Omega , \mathbb {R}^n)\) denotes the closure of \(C^\infty _c(\Omega , \mathbb {R}^n)\) under the \(C^{1,\alpha }\) norm. Note that such velocity fields can be continuously extended to the boundary. Clearly, Eq. (4) has to be understood in a weak sense, i.e. a function \(f \in C^0([0,T], L^2(\Omega , \mathbb {R}))\) is said to be a weak solution of (4) if
\(\int _{0}^{T} \int _{\Omega } f \left( \partial _{t} \eta + v \cdot \nabla _{x} \eta \right) \mathrm {d}x \, \mathrm {d}t + \int _{\Omega } f_{0} \, \eta (0, \cdot ) \, \mathrm {d}x = 0 \qquad (5)\)
holds for all \(\eta \in C^\infty _c([0,T) \times \Omega ).\) The corresponding characteristic ODE is
\(\partial _{t} X(t, s, x) = v(t, X(t, s, x)), \quad X(s, s, x) = x. \qquad (6)\)
In this notation, the first argument of X is the time variable, the second the initial time, and the third the initial spatial coordinate. The following theorem is a reformulation of [51, Theorems 1 and 9] and characterises solutions of (6).
Theorem 2.1
Let \(v \in L^2([0, T], {\mathcal {V}})\) and \(s \in [0,T]\) be given. There exists a unique global solution \(X(\cdot ,s,\cdot ) \in C^0([0,T],C^1({\overline{\Omega }}, \mathbb {R}^n))\) such that \(X(s,s, x) = x\) for all \(x \in \Omega \) and
\(\partial _{t} X(t, s, x) = v(t, X(t, s, x))\)
in a weak sense (i.e. as absolutely continuous solutions). The solution operator \(X_v :L^2([0,T], {\mathcal {V}}) \rightarrow C^0([0,T] \times {\overline{\Omega }}, \mathbb {R}^n)\) assigning a flow \(X_v\) to every velocity field v is continuous with respect to the weak topology in \(L^2([0,T],{\mathcal {V}}).\)
Since \(X(0,t,X(t,0,x)) = x,\) we can directly conclude that \(X(t,0,\cdot )\) is a diffeomorphism for every \(t \in [0,T].\) Now, the diffeomorphism \(X(0, t, \cdot )\) can be used to characterise solutions of (4) as follows.
Proposition 2.2
If \(v \in L^2([0, T], {\mathcal {V}}),\) then the unique weak solution of (4), as defined in (5), is given by \(f(t,x) = \det (\mathcal {D}_x X(0,t,x)) f_0(X(0,t,x)),\) where \(\mathcal {D}_{x} X\) denotes the Jacobian of X.
Proof
The proof is divided into three steps. First, we show that f satisfies the regularity conditions of weak solutions. For this purpose, we start by showing \(X(0,\cdot ,\cdot ) \in C^0([0,T],C^0({\overline{\Omega }}, \mathbb {R}^n)),\) i.e. that the flow is continuous in the initial values. Clearly, \(X(0,t,\cdot )\in C^0({\overline{\Omega }}, \mathbb {R}^n)\) for every \(t \in [0,T].\) For an arbitrary sequence \(t_i \rightarrow t\) we get
where the first factor is bounded due to [51, Lemma 9]. Next, using the sequence \(X_i(\cdot ) = X(0,t_i,\cdot ),\) it follows that \(f_0(X(0,\cdot ,\cdot )) \in C^0([0,T], L^2(\Omega , \mathbb {R})),\) where the continuity in time follows from [42, Corollary 3]. Then, by differentiating \(X(0,t,X(t,0,x)) = x\) and rearranging the terms, we obtain
since all involved expressions are continuous. Finally, we conclude \(f \in C^0([0,T], L^2(\Omega , \mathbb {R})),\) which follows from
since both summands converge to zero.
The second step is to show that (5) is satisfied. Note that \(X(\cdot ,0,x)\) is differentiable in t for a.e. \(t \in [0,T],\) since it is absolutely continuous by definition. By inserting f into (5) and using the transformation formula, we get
For the last equality we used that \(\eta (t,X(t,0,x))\) is absolutely continuous.
The last step is to prove uniqueness of weak solutions, i.e. that every solution has the given form. Let \(f_1,f_2\) be two different solutions. Then we can find a t such that \(\Vert f_1(t,\cdot ) - f_2(t,\cdot ) \Vert _{L^2(\Omega )} > 0.\) By continuity in time we can find an interval I of length \(\delta > 0\) that contains t, and a constant \(c>0\) such that
for all \(s \in I.\) However, weak solutions are unique in \(L^\infty ([0,T], L^2(\Omega , \mathbb {R})),\) see [17, Corollary II.1], where we used the embedding of \({\mathcal {V}}\) into \(C^1_0(\Omega , \mathbb {R}^n).\) This yields a contradiction. \(\square \)
Additionally, we can state and prove the following stability result for solutions of (4).
Proposition 2.3
(Stability) Let \(v_i \rightharpoonup v\) in \(L^2([0, T],{\mathcal {V}})\) and let \(f_i\) denote the weak solution of (4) corresponding to \(v_i.\) Then for every \(t \in [0,T],\) there exists a subsequence, also denoted by \(f_i,\) such that \(f_i(t,\cdot ) \rightarrow f(t,\cdot )\) in \(L^2(\Omega , \mathbb {R}).\)
Proof
The solution of (6) corresponding to \(v_i\) is denoted by \(X_i\). Fix an arbitrary \(t\in [0,T]\). From Theorem 2.1 we conclude \(\Vert X_i(0,t,\cdot )  X(0,t,\cdot )\Vert _{C^0({\overline{\Omega }})} \rightarrow 0.\) Further, [19, Theorem 3.1.10] implies that \(X_i(0,t,\cdot )\) is uniformly bounded for all \(i\in \mathbb {N}\) in \(C^{1,\alpha }({\overline{\Omega }}),\) which implies \(f_0(X_i(0,t,\cdot )) \rightarrow f_0(X(0,t,\cdot ))\) in \(L^2(\Omega , \mathbb {R})\) by [42, Corollary 3].
It is left to show that a subsequence, also denoted by \(X_i,\) exists such that \(X_i(0,t,\cdot ) \rightarrow X(0,t,\cdot )\) in \(C^1({\overline{\Omega }},\mathbb {R}^n).\) This concludes the proof since it also implies the convergence of \(\det (\mathcal {D}_x X_i(0,t,\cdot )) \rightarrow \det (\mathcal {D}_x X(0,t,\cdot ))\) in \(C^0({\overline{\Omega }}).\) However, \(X_i(0,t,\cdot )\) is uniformly bounded in \(C^{1,\alpha }({\overline{\Omega }}, \mathbb {R}^n)\) and it follows that \(\mathcal {D}_x X_i(0,t,\cdot )\) is uniformly bounded in \(C^{0,\alpha }({\overline{\Omega }}, \mathbb {R}^{n\times n}).\) By using the compact embedding of \(C^{0,\alpha }({\overline{\Omega }}, \mathbb {R}^{n\times n})\) into \(C^0({\overline{\Omega }}, \mathbb {R}^{n\times n})\) [23, Lemma 6.33], there exists a subsequence of \(X_i(0,t,\cdot )\) that converges to \(X(0,t,\cdot )\) in \(C^{1}({\overline{\Omega }}, \mathbb {R}^n).\) \(\square \)
Transport Equation with \(H^1\) Regularity
Here, we prove well-posedness of the transport equation that arises in the LDDMM framework using the geometric group action. Compared to the previous section, the space regularity assumptions on v are weaker and fit the setting in [17].
The transport equation reads as
\(\partial _{t} f + v \cdot \nabla _{x} f = 0 \ \text { in } (0,T) \times \Omega , \quad f(0, \cdot ) = f_{0}, \qquad (9)\)
with coefficients
\(v \in A = \bigl \{ v \in H^1([0,T] \times \Omega )^n : v = 0 \text { on } [0,T] \times \partial \Omega , \ \Vert {{\,\mathrm{div}\,}}_x v \Vert _{L^\infty ([0,T] \times \Omega )} \le C \bigr \}\)
for some fixed constant C and initial value \(f_0 \in L^2(\Omega , \mathbb {R}).\) The admissible set A consists of all \(H^1\) functions that are zero on the boundary of the spatial domain and have bounded divergence in the \(L^\infty \) norm.
Note that the set A is a subset of \(H^1([0,T] \times \Omega )^n,\) and it is closed and convex, so that it is a weakly closed subset of a reflexive Banach space. In the following, we only check that A is closed. Let \(v_i\) be a convergent sequence in A with limit v. Since the two involved spaces are Banach spaces, we only have to check that v satisfies the divergence constraint. Assume that \(\Vert {{\,\mathrm{div}\,}}_x v \Vert _{L^\infty ([0,T]\times \Omega )} > C.\) Then there exists a set B with positive measure \(\mu (B)\) and an \(\epsilon >0\) such that for all \(x\in B\) we have \(\vert {{\,\mathrm{div}\,}}_x v(x) \vert \ge C + \epsilon .\) Hence, we get \(\Vert {{\,\mathrm{div}\,}}_x v_i - {{\,\mathrm{div}\,}}_x v\Vert _{L^2([0,T]\times \Omega )}\ge \sqrt{\mu (B)}\,\epsilon ,\) which contradicts the convergence in \(H^1.\)
Again, Eq. (9) has to be understood in a weak sense, so that \(f \in C^0([0,T], L^2(\Omega , \mathbb {R}))\) is said to be a solution of (9) if it satisfies
\(\int _{0}^{T} \int _{\Omega } f \left( \partial _{t} \eta + {{\,\mathrm{div}\,}}_{x} (v \eta ) \right) \mathrm {d}x \, \mathrm {d}t + \int _{\Omega } f_{0} \, \eta (0, \cdot ) \, \mathrm {d}x = 0\)
for all \(\eta \in C^\infty _c([0,T) \times \Omega ).\) The next theorem is an existence and stability result, see [17, Corollaries II.1 and II.2, Theorem II.5].
Theorem 2.4
(Existence and Stability) For every \(v \in A\) there exists a unique weak solution \(f \in C^0([0,T], L^2(\Omega , \mathbb {R}))\) of (9). If \(v_i \in A\) converges to \(v\in A\) in the norm of \(L^2([0,T]\times \Omega , \mathbb {R}^n),\) then the corresponding sequence of weak solutions \(f_i \in C^0([0,T], L^2(\Omega , \mathbb {R}))\) converges to f in \(C^0([0,T], L^2(\Omega , \mathbb {R})).\)
Proof
The existence and uniqueness of weak solutions follows from [17, Corollaries II.1 and II.2]. Note that these solutions are also renormalised due to [17, Theorem II.3].
We recast the second part of the theorem so that it has the exact form of [17, Theorem II.5]. First, note that both the velocity fields and the initial condition can be extended to \(\mathbb {R}^n\) by zero outside of \(\Omega \) due to the boundary condition in A. Due to the conditions on v, the weak formulation is equivalent to the one for the extension in the \(\mathbb {R}^n\) setting. The uniform boundedness condition on \(f_i\) is satisfied since \(\Omega \) is bounded. \(\square \)
Corollary 2.5
Let \(v_i \rightharpoonup v\in A\) with respect to the inner product of \(H^1([0,T] \times \Omega )^n.\) Then, \(f_i\) converges to f in \(C^0([0,T], L^2(\Omega , \mathbb {R})).\)
Proof
Combine the previous theorem with the compact embedding of \(H^1([0,T] \times \Omega )^n\) into \(L^2([0,T] \times \Omega )^n\) (Rellich embedding theorem [1, A6.4]). \(\square \)
Remark 2.6
Note that the same arguments can be used if we assume higher spatial regularity, such as \(H^2,\) in this section. From a numerical point of view, the bound on the divergence is always satisfied for C large enough if we use linear interpolation for the velocities on a fixed grid. Here we use that all norms are equivalent on finite-dimensional spaces.
Regularising Properties of Template-Based Image Reconstruction
In this section, we prove regularising properties of template-based reconstruction as defined in (2). Recall that the problem reads
\(\min _{v} J_{\gamma , g}(v) = \min _{v} \, D\bigl (K(f_{v}(T, \cdot )), g\bigr ) + \gamma \Vert v \Vert _{V}^{2},\)
where C(v) is the Cauchy problem with either the transport or the continuity equation. The admissible set \({\mathcal {V}}\) is chosen such that the regularity requirements stated in the previous section are satisfied. For the following considerations we require these assumptions on K and D:

(1) The operator K is continuous, \(D(\cdot , g)\) is lower semicontinuous for each \(g \in Y,\) and \(D(g, \cdot )\) is continuous for each \(g \in Y.\)
(2) If \(f_n,g_n\) are two convergent sequences with limits f and g, respectively, then D must satisfy \(\liminf _{n \rightarrow \infty } D(f_n,g) \le \liminf _{n \rightarrow \infty } D(f_n,g_n).\)
(3) If \(D(f,g)=0,\) then \(f=g.\)
Note that the requirements on D are satisfied if D is a metric. The obtained results are along the lines of [13] but are adapted to our setting and to our notation. For simplicity we stick to the notation of the continuity equation, but want to mention that the same derivations hold for the transport equation with coefficients in the set A. First, we prove that a minimiser of the problem exists.
Proposition 3.1
(Existence) For every \(f_0 \in L^2(\Omega , \mathbb {R}),\) the functional \(J_{\gamma , g}\) defined in (3) has a minimiser.
Proof
The idea of the proof is to construct a minimising sequence which is weakly convergent and then use that the functional is weakly lower semicontinuous. Let us consider a sequence \(v_n\) such that \(J_{\gamma , g} (v_n)\) converges to \(\inf _v J_{\gamma , g}(v).\) By construction of the functional, \(v_n\) is bounded in \(L^2([0,T], {\mathcal {V}})\) and hence there exists a subsequence, also denoted by \(v_n,\) such that \(v_n \rightharpoonup v_\infty .\) By Proposition 2.3, there exists a further subsequence, also denoted by \(v_n,\) such that \(f_{v_n}(T,\cdot ) \rightarrow f_{v_\infty }(T,\cdot )\) in \(L^2(\Omega , \mathbb {R}).\) With this at hand, we are able to prove weak lower semicontinuity of the data term. Indeed, as K is continuous, from \(f_{v_n}(T,\cdot ) \rightarrow f_{v_\infty }(T,\cdot )\) we get \(K(f_{v_n}(T,\cdot )) \rightarrow K(f_{v_\infty }(T,\cdot )).\) Since \(D(\cdot , g)\) is lower semicontinuous, we obtain that \(D(K(f_{v_\infty }(T,\cdot )),g) \le \liminf _{n \rightarrow \infty } D(K(f_{v_n}(T,\cdot )),g).\) This concludes the proof, since the whole functional is (weakly) lower semicontinuous, and hence \(J_{\gamma ,g}(v_\infty ) \le \inf _v J_{\gamma , g}(v).\) \(\square \)
Next, we state a stability result.
Proposition 3.2
(Stability) Let \(f_0 \in L^2(\Omega , \mathbb {R})\) and \(\gamma >0.\) Let \(g_n\) be a sequence in Y converging to \(g \in Y.\) For each n, we choose \(v_n\) as minimiser of \(J_{\gamma , g_n}.\) Then, there exists a subsequence of \(v_n\) which weakly converges towards a minimiser v of \(J_{\gamma , g}.\)
Proof
By the properties of D it holds, for every n, that
Hence, \(v_n\) is bounded in \(L^2([0,T], {\mathcal {V}})\) and there exists a subsequence, also denoted by \(v_n,\) such that \(v_n \rightharpoonup v.\) From the weak convergence we obtain \(\gamma \Vert v \Vert _V^2 \le \gamma \liminf _{n \rightarrow \infty } \Vert v_n \Vert _V^2.\)
By passing to a subsequence and by using Proposition 2.3, we deduce that \(f_{v_n}(T,\cdot ) \rightarrow f_{v}(T,\cdot ).\) Together with the convergence of \(g_n\) and the convergence property of D this implies
Thus, for any \({\tilde{v}},\) it holds that
because \(v_n\) minimises \(J_{\gamma , g_n}.\) Then, as \(J_{\gamma , g_n} ({\tilde{v}})\) converges to \(J_{\gamma ,g} ({\tilde{v}})\) by the assumptions on D, we deduce \(J_{\gamma , g} (v) \le J_{\gamma ,g} ({\tilde{v}})\) and hence that v minimises \(J_{\gamma , g}.\) \(\square \)
Finally, we state a convergence result for the method.
Proposition 3.3
(Convergence) Let \(f_0 \in L^2(\Omega , \mathbb {R})\) and \(g \in Y,\) and suppose that there exists \(\hat{v} \in L^2([0,T], {\mathcal {V}})\) such that \(K(f_{{\hat{v}}}(T,\cdot )) = g.\) Further, assume that \(\gamma :\mathbb {R}_{>0} \rightarrow \mathbb {R}_{>0}\) satisfies \(\gamma (\delta ) \rightarrow 0\) and \(\frac{\delta }{\gamma (\delta )} \rightarrow 0\) as \(\delta \rightarrow 0.\) Now let \(\delta _n\) be a sequence of positive numbers converging to 0 and assume that \(g_n\) is a data sequence satisfying \(D(g,g_n) \le \delta _n\) for each n. Let \(v_n\) be a minimiser of \(J_{\gamma _n, g_n},\) where \(\gamma _n = \gamma (\delta _n).\) Then, there exists a subsequence of \(v_n\) which converges weakly to an element v such that \(K(f_{v}(T,\cdot )) = g.\)
Proof
For every n, it holds that
From the requirements on \(\gamma \) and \(\delta \) we deduce that \(v_n\) is bounded in \(L^2 ([0,T], {\mathcal {V}})\) and then that up to an extraction, \(v_n\) weakly converges to some v in \(L^2 ([0,T], {\mathcal {V}}).\)
Further, it holds \(D(K(f_{v}(T,\cdot )), g) \le \liminf _{n \rightarrow \infty } D(K(f_{v_n}(T,\cdot )), g_n)\) with the same arguments as in the previous proposition. Finally, for every n, it holds that
where the two rightmost terms both converge to zero. Thus, \(K(f_{v}(T,\cdot )) = g\) by the assumptions on D. \(\square \)
We conclude with a remark on data discrepancy functionals that satisfy the conditions and will be used in our numerical experiments in Sect. 5.
Remark 3.4
We now assume that the data space Y is a real Hilbert space. Clearly, the conditions are satisfied if \(D_{\mathrm {SSD}}(f,g) = \Vert f - g\Vert _{Y}^2.\) We will only check the convergence condition. It holds
where the last two terms converge to zero since convergent sequences are bounded.
Another function that satisfies the conditions is \(D_{\mathrm {NCC}} :Y\setminus \{0\} \times Y\setminus \{0\} \rightarrow [0,1]\) with
\(D_{\mathrm {NCC}}(f,g) = 1 - \frac{\langle f, g \rangle _{Y}^{2}}{\Vert f \Vert _{Y}^{2} \, \Vert g \Vert _{Y}^{2}},\)
which is based on the NCC. First, note that \({\tilde{D}}(\cdot ,g) = \frac{\langle \cdot ,g \rangle ^2}{\Vert g \Vert _{Y}^2}\) and the function \(\Vert \cdot \Vert _{Y}^{2}\) are continuous. Thus, we get that \(D_{\mathrm {NCC}}(\cdot ,g)\) is continuous. By symmetry, this also holds for \(D_{\mathrm {NCC}}(g, \cdot ).\) It remains to check the convergence property:
From this we conclude \(\liminf _{n \rightarrow \infty } D_{\mathrm {NCC}}(f_n,g) = \liminf _{n \rightarrow \infty } D_{\mathrm {NCC}}(f_n,g_n).\) Unfortunately, \(D_{\mathrm {NCC}}(f,g) = 0\) only implies \(f = c g\) for some \(c \in \mathbb {R}\setminus \{0\},\) so condition (3) holds only up to a scaling of the intensities.
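Both distances discussed in this remark can be written down in a few lines. The following NumPy sketch (illustrative only, with a finite-dimensional stand-in for Y) also verifies numerically that the NCC-based distance is invariant under intensity scaling while SSD is not.

```python
import numpy as np

def d_ssd(f, g):
    """Sum-of-squared-differences distance ||f - g||^2."""
    return np.sum((f - g) ** 2)

def d_ncc(f, g):
    """NCC-based distance 1 - <f,g>^2 / (||f||^2 ||g||^2), in [0, 1]."""
    return 1.0 - np.dot(f, g) ** 2 / (np.dot(f, f) * np.dot(g, g))

rng = np.random.default_rng(0)
f = rng.random(50)
g = 3.0 * f                        # the same image up to intensity scaling

print(d_ssd(f, g) > 0)             # SSD penalises the intensity mismatch
print(abs(d_ncc(f, g)) < 1e-12)    # the NCC distance is scale invariant
```

This is exactly the behaviour exploited in the experiments: when template and target intensities differ by a scaling, the NCC-based distance still vanishes for a perfect geometric match.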
Numerical Solution
The focus of this section is to approximately solve problem (2). Our approach is based on the Lagrangian methods developed in [35] and the inexact multilevel Gauss–Newton method used in [40]. Both methods and their necessary modifications are briefly outlined here.
As customary in PDE-constrained optimisation [16, Chap. 3], we eliminate the state equation by defining a control-to-state operator, which parametrises the final state \(f_{v}(T, \cdot )\) in terms of the unknown velocities v. With a slight abuse of notation, we define this solution map as
Here, \(f_{v}\) denotes the unique solution to either the transport or the continuity equation, as defined in Sect. 2. As a result, we obtain the reduced form of (2):
Here, \(R:V \rightarrow \mathbb {R}_{\ge 0}\) is a regularisation functional that can be written as
with B denoting a linear (vectorial) differential operator.
In this work, we consider the operators \(B = \nabla _{x}\) and \(B = \Delta _{x},\) which are also used in [35]. We refer to the resulting functionals R as diffusion and curvature regularisation functionals, respectively. Note that B can as well be chosen to incorporate derivatives with respect to time.
In addition to the operators above, we also consider a regularisation functional that resembles the norm of the space \(V = L^{2}([0, T], H_{0}^{3}(\Omega , \mathbb {R}^{n})).\) This particular choice is motivated by the fact that, for \(n \in \{2, 3\},\) the space \(H_{0}^{3}(\Omega , \mathbb {R}^{n})\) can be continuously embedded in \(C_{0}^{1, \alpha }(\Omega , \mathbb {R}^{n}),\) for some \(\alpha > 0,\) so that the results in Sect. 2 hold. The norm of V is given by
Here, \(|\cdot |_{H^{k}(\Omega , \mathbb {R}^{n})}\) denotes the usual \(H^{k}\)-seminorm including only the highest-order partial derivatives. By the Gagliardo–Nirenberg inequality, (17) is equivalent to the usual norm of \(L^{2}([0, T], H_{0}^{3}(\Omega , \mathbb {R}^{n})).\) To simplify numerical optimisation, we omit the requirement that v is compactly supported in \(\Omega \) and minimise over \(L^{2}([0, T], H^{3}(\Omega , \mathbb {R}^{n})).\)
In order to solve problem (15), we follow a discretise-then-optimise strategy. Without loss of generality, we assume that the domain is \(\Omega = (0, 1)^{n}.\) We partition it into a regular grid consisting of \(m^{n}\) equally sized cells of edge length \(h_{X} = 1 / m\) in every coordinate direction.
The template image \(f_{0} \in L^2(\Omega , \mathbb {R})\) is assumed to be sampled at cell-centred locations \({\mathbf {x}}_{c} \in \mathbb {R}^{m^{n}},\) giving rise to its discrete version \({\mathbf {f}}_{0}({\mathbf {x}}_{c}) \in \mathbb {R}^{m^{n}}.\) The template image is interpolated on the cell-centred grid by means of cubic B-spline interpolation as outlined in [40, Chap. 3.4].
Similarly, the time domain is assumed to be [0, 1] and is partitioned into \(m_{t}\) equally sized cells of length \(h_{t} = 1 / m_{t}.\) We assume that the unknown velocities \(v:[0, 1] \times \Omega \rightarrow \mathbb {R}^{n}\) are sampled at cell-centred locations in space as well as at cell-centred locations in time, leading to a vector of unknowns \({\mathbf {v}} \in \mathbb {R}^{N},\) where \(N = (m_{t} + 1) \cdot n \cdot m^{n}\) is the total number of unknowns of the finite-dimensional minimisation problem.
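The cell-centred sampling described above can be sketched as follows; this is a minimal illustration and the function name is ours.

```python
import numpy as np

def cell_centred_grid(m):
    """Cell-centred sample points of a regular partition of (0, 1) into m cells."""
    h = 1.0 / m
    return (np.arange(m) + 0.5) * h

m, m_t = 4, 2
x_c = cell_centred_grid(m)      # spatial cell centres: [0.125, 0.375, 0.625, 0.875]
t_c = cell_centred_grid(m_t)    # temporal cell centres: [0.25, 0.75]

# For n = 2, the spatial sample points form an m x m grid of cell centres.
X, Y = np.meshgrid(x_c, x_c, indexing="ij")
assert X.shape == (m, m)
assert np.isclose(x_c[0], 0.5 / m) and np.isclose(x_c[-1], 1.0 - 0.5 / m)
```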
Lagrangian Solver
In order to compute the solution map f(v) numerically, i.e. to solve the hyperbolic PDEs (4) and (9), the Lagrangian solver in [35] follows a two-step approach. First, given a vector \({\mathbf {v}} \in \mathbb {R}^{N}\) of velocities, the ODE (6) is solved approximately using a fourth-order Runge–Kutta (RK4) method with \(N_{t}\) equally spaced time steps of size \(\Delta t.\) For simplicity, we follow the presentation in [35] based on an explicit first-order Euler method and refer to [35, Sect. 3.1] for the full details.
Given initial points \({\mathbf {x}} \in \mathbb {R}^{m^{n}}\) and velocities \({\mathbf {v}} \in \mathbb {R}^{N},\) an approximation \({\mathbf {X}}_{{\mathbf {v}}}:[0, 1]^{2} \times \mathbb {R}^{m^{n}} \rightarrow \mathbb {R}^{m^{n}}\) of the solution \(X_{v}\) is given recursively by
for all \(k = 0, 1, \ldots , N_{t} - 1,\) with initial condition \({\mathbf {X}}_{{\mathbf {v}}}(0, 0, {\mathbf {x}}) = {\mathbf {x}}.\) Here, \({\mathbf {I}}({\mathbf {v}}, t_{k}, {\mathbf {X}}_{{\mathbf {v}}}(0, t_{k}, {\mathbf {x}}))\) denotes a componentwise interpolation of \({\mathbf {v}}\) at time \(t_{k} = k \Delta t\) and at the points \({\mathbf {X}}_{{\mathbf {v}}}(0, t_{k}, {\mathbf {x}}).\) Note that, since the characteristic curves for both PDEs coincide, this step is identical regardless of which PDE we impose.
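A minimal sketch of the Euler recursion (18), restricted to one spatial dimension and a time-independent velocity field, with `np.interp` standing in for the interpolation \({\mathbf {I}}\):

```python
import numpy as np

def euler_characteristics(v_grid, x0, n_steps, dt):
    """Approximate the characteristics by the explicit Euler recursion
    X_{k+1} = X_k + dt * I(v, X_k) for a stationary 1D velocity field
    sampled at the cell centres of (0, 1)."""
    m = v_grid.size
    x_c = (np.arange(m) + 0.5) / m          # cell-centred grid
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        v = np.interp(x, x_c, v_grid)       # componentwise interpolation I
        x = x + dt * v
    return x

# Constant velocity: points are simply translated by v * T = 0.1.
m, n_t = 32, 5
v_grid = np.full(m, 0.1)
x0 = np.array([0.25, 0.5])
x_end = euler_characteristics(v_grid, x0, n_t, 1.0 / n_t)
assert np.allclose(x_end, x0 + 0.1)
```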
The second step computes approximate intensities of the final state \(f_{v}(1, \cdot ).\) This step depends on the particular PDE. For the transport equation, in order to compute the intensities at the grid points \({\mathbf {x}}_{c},\) we follow characteristic curves backwards in time, which is achieved by setting \(\Delta t = -1 / N_{t}\) in (18). The deformed template is then given by
where \({\mathbf {f}}_{0} \in \mathbb {R}^{m^{n}}\) is the interpolation of the discrete template image.
For the continuity equation, [35] proposes to use a particle-in-cell (PIC) method, see [14] for details. The density of particles which are initially located at grid points \({\mathbf {x}}_{c}\) is represented by a linear combination of basis functions, which are then shifted by following the characteristics computed in the first step. To determine the final density at grid points, exact integration over the grid cells is performed. By setting \(\Delta t = 1 / N_{t}\) in (18), the transformed template can be computed as
where \({\mathbf {F}} \in \mathbb {R}^{N \times N}\) is the push-forward matrix that computes the integrals over the shifted basis functions. See [35, Sect. 3.1] for its detailed specification using linear, compactly supported basis functions. By design, the method is mass-preserving at the discrete level.
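The discrete mass-preservation property can be illustrated with the following 1D particle-in-cell sketch. It is a simplification of the scheme in [35]: linear hat weights are used for the deposition and boundary handling is crude (indices are merely clamped).

```python
import numpy as np

def pic_pushforward(f0, x_end):
    """1D PIC sketch: particles start at the cell centres with weights f0,
    are moved to x_end along the characteristics, and their weights are
    deposited back onto the grid with linear (hat) weights. Since the two
    weights of each particle sum to one, sum(f1) == sum(f0) by construction."""
    m = f0.size
    h = 1.0 / m
    f1 = np.zeros(m)
    s = x_end / h - 0.5                          # fractional cell-centre index
    i0 = np.clip(np.floor(s).astype(int), 0, m - 2)
    w = s - i0                                   # linear weight of the right node
    np.add.at(f1, i0, (1.0 - w) * f0)
    np.add.at(f1, i0 + 1, w * f0)
    return f1

m = 64
x_c = (np.arange(m) + 0.5) / m
f0 = np.exp(-100 * (x_c - 0.4) ** 2)             # blob centred at 0.4
f1 = pic_pushforward(f0, x_c + 0.2)              # shift particles to the right

assert np.isclose(f1.sum(), f0.sum())            # discrete mass is preserved
assert abs(x_c[np.argmax(f1)] - 0.6) < 0.05      # blob ends up near 0.6
```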
Numerical Optimisation
Let us denote by \({\mathbf {K}}:\mathbb {R}^{N} \rightarrow \mathbb {R}^{M},\) \(M \in \mathbb {N},\) a finite-dimensional, Fréchet differentiable approximation of the (not necessarily linear) operator \(K:L^2(\Omega , \mathbb {R}) \rightarrow Y.\) With the application to CT in mind, we will outline a discretisation of (15) suitable for the n-dimensional Radon transform, which maps a function on \(\mathbb {R}^{n}\) into the set of its integrals over the hyperplanes in \(\mathbb {R}^{n}\) [41, Chap. 2].
An element \(K(f(v)) \in Y\) is a function on the unit cylinder \(S^{n - 1} \times \mathbb {R}\) of \(\mathbb {R}^{n + 1},\) where \(S^{n - 1}\) is the \((n - 1)\)-dimensional unit sphere. We discretise this unit cylinder as follows. First, we sample \(p \in \mathbb {N}\) directions from \(S^{n - 1}.\) When \(n = 2,\) as is the case in our experiments in Sect. 5, directions are parametrised by angles from the interval [0, 180] degrees. For simplicity, we say (slightly imprecisely) that we take one measurement in each direction. Second, similarly to the sampling of \(\Omega ,\) we use an interval (0, 1) instead of \(\mathbb {R}\) and partition it into q equally sized cells of length \(h_{Y} = 1 / q.\) Depending on n and the diameter of \(\Omega ,\) the interval length may require adjustment. Each measurement i is then sampled at cell-centred points \({\mathbf {y}}_{c} \in \mathbb {R}^{q}\) and denoted by \({\mathbf {g}}_{i}({\mathbf {y}}_{c}) \in \mathbb {R}^{q}.\) All measurements are then concatenated into a vector \({\mathbf {g}} :={\mathbf {g}}({\mathbf {y}}_{c}) \in \mathbb {R}^{M},\) where \(M = p \cdot q.\)
The finitedimensional optimisation problem in abstract form is then given by
where D and R are chosen to be discretisations of a distance and of (16), respectively.
Next, we approximate integrals using a midpoint quadrature rule. As we are mainly interested in the setting where only a few directions are given, we disregard integration over the unit sphere. For vectors \({\mathbf {x}}, {\mathbf {y}} \in \mathbb {R}^{M},\) the corresponding approximations of the distance based on SSDs and of the NCC-based distance are then
respectively. See [40, Chaps. 6.2 and 7.2] for details. Note that, due to cancellation, no (spatial) discretisation parameter occurs in the approximation of the NCC above.
Moreover, we approximate the regularisation functional in (16) with
where \({\mathbf {B}} \in \mathbb {R}^{N \times N}\) is a finite-difference discretisation of the differential operator in (16), analogous to [39, Chap. 8.5]. In our implementation we use zero Neumann boundary conditions and pad the spatial domain to mitigate boundary effects arising from the discretisation of the operator.
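For \(B = \nabla _{x}\) in one spatial dimension, a minimal finite-difference sketch of the resulting diffusion regulariser could look as follows; the quadrature weight and the treatment of the boundary row are our simplifying assumptions.

```python
import numpy as np

def forward_diff_matrix(m, h):
    """1D forward-difference discretisation of grad_x with a zero last row,
    mimicking a zero Neumann boundary condition (our simplification)."""
    B = np.zeros((m, m))
    idx = np.arange(m - 1)
    B[idx, idx] = -1.0 / h
    B[idx, idx + 1] = 1.0 / h
    return B

def diffusion_regulariser(v, h):
    """Midpoint-rule approximation of the diffusion functional ||B v||^2."""
    B = forward_diff_matrix(v.size, h)
    return h * np.sum((B @ v) ** 2)

m = 50
h = 1.0 / m
x_c = (np.arange(m) + 0.5) * h

assert diffusion_regulariser(np.ones(m), h) == 0.0   # constant fields cost nothing
r_lin = diffusion_regulariser(x_c, h)                # v(x) = x, so |v'|^2 = 1
assert abs(r_lin - 1.0) < 0.05                       # approximates the integral 1
```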
In order to apply (inexact) Gauss–Newton optimisation to problem (21), we require first- and (approximate) second-order derivatives of \(J_{\gamma , {\mathbf {g}}}({\mathbf {v}}).\) By application of the chain rule, we obtain
where \(\partial {\mathbf {K}} / \partial {\mathbf {f}}\) is the Fréchet derivative of \({\mathbf {K}}\) and \(\partial {\mathbf {f}}({\mathbf {v}}) / \partial {\mathbf {v}}\) is the derivative of the solution map (14) with respect to the velocities, which is given below.
The partial derivatives of the distance functions (22) with respect to their first argument are given by
where \({\mathbf {I}}_{N} \in \mathbb {R}^{N \times N}\) is the identity matrix of size N, and
respectively. Moreover, the derivatives of (23) are given by
In order to obtain an efficient iterative second-order method for solving (21), one requires an approximation of the Hessian \({\mathbf {H}} \in \mathbb {R}^{N \times N}\) that balances the following trade-off. Ideally, it is reasonably efficient to compute, consumes limited memory (sparsity is desired), and has sufficient structure so that preconditioning can be used. However, each iteration of the Gauss–Newton method should also provide a suitable descent direction. For these reasons, we approximate the Hessian by
where \(\epsilon > 0\) ensures positive definiteness. For simplicity, the term involving \(\partial ^{2} {\mathbf {f}}({\mathbf {v}}) / \partial {\mathbf {v}}^{2}\) is omitted and, regardless of the chosen distance, we use the second derivative in (24) as an approximation of \(\partial ^{2} D({\mathbf {x}}, {\mathbf {y}}) / \partial {\mathbf {x}}^{2}.\) In our numerical experiments we found that this choice works well for the problem considered in Sect. 5.
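In matrix form, the resulting approximation can be sketched as follows; the shapes and the concrete \(H_D\) are illustrative assumptions, not the actual discretisation.

```python
import numpy as np

def gauss_newton_hessian(J, H_D, B, gamma, eps):
    """Gauss-Newton Hessian approximation H = J^T H_D J + gamma B^T B + eps I,
    where J plays the role of the chain-rule Jacobian of K(f(v)), H_D
    approximates the second derivative of the distance, and B is the
    discretised regularisation operator (all shapes illustrative)."""
    return J.T @ H_D @ J + gamma * (B.T @ B) + eps * np.eye(J.shape[1])

rng = np.random.default_rng(2)
J = rng.standard_normal((30, 10))
H_D = 2.0 * np.eye(30)          # e.g. an SSD-type distance, up to quadrature weights
B = rng.standard_normal((10, 10))
H = gauss_newton_hessian(J, H_D, B, gamma=0.1, eps=1e-6)

# Symmetric positive definite, so every Gauss-Newton step yields a
# descent direction.
assert np.allclose(H, H.T)
assert np.min(np.linalg.eigvalsh(H)) > 0.0
```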
It remains to discuss the derivative of the solution map. For the transport equation, the application of the chain rule to (19) yields
where \(\nabla _{x} {\mathbf {f}}_{0}\) denotes the gradient of the interpolation of the template image and \(\partial {\mathbf {X}}_{{\mathbf {v}}} / \partial {\mathbf {v}}\) is the derivative of the endpoints of the characteristic curves with respect to the velocities, see below. Similarly, for the solution map (20) that corresponds to the continuity equation, we obtain
Here, \(\partial {\mathbf {F}} / \partial {\mathbf {X}}_{{\mathbf {v}}}\) is the derivative of the pushforward matrix with respect to the endpoints of the characteristics, again see [35, Sect. 3.1].
If explicit time stepping methods are used to solve the ODE (6), the partial derivative \(\partial {\mathbf {X}}_{{\mathbf {v}}} / \partial {\mathbf {v}}\) can be computed recursively. For example, for the forward Euler approach in (18) it is given by
for all \(k = 0, 1, \ldots , N_{t} - 1,\) with \(\partial {\mathbf {I}} / \partial {\mathbf {v}}\) and \(\partial {\mathbf {I}} / \partial {\mathbf {X}}_{{\mathbf {v}}}\) being the derivatives of the interpolation schemes with respect to the velocities and with respect to the endpoints of the characteristics, respectively. We refer to [40, Chap. 3.5] for details. The case where characteristics are computed backwards in time can be handled similarly.
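The recursion can be sketched for a single 1D characteristic, with the accumulated derivative verified against a central finite difference. This is a simplification of the scheme above: stationary velocity, linear interpolation, and a scalar position.

```python
import numpy as np

def endpoint_and_jacobian(v, x0, n_steps):
    """Forward Euler for a single 1D characteristic and the recursion for its
    derivative with respect to the velocity coefficients v (stationary field,
    linear interpolation on a cell-centred grid of (0, 1))."""
    m = v.size
    h, dt = 1.0 / m, 1.0 / n_steps
    x_c = (np.arange(m) + 0.5) * h
    x, J = float(x0), np.zeros(m)
    for _ in range(n_steps):
        i = int(np.clip((x - x_c[0]) / h, 0, m - 2))   # interpolation interval
        w = (x - x_c[i]) / h                           # linear weight
        dI_dv = np.zeros(m)
        dI_dv[i], dI_dv[i + 1] = 1.0 - w, w            # dI/dv: interpolation weights
        dI_dx = (v[i + 1] - v[i]) / h                  # dI/dX: slope of interpolant
        v_x = (1.0 - w) * v[i] + w * v[i + 1]
        J = J + dt * (dI_dv + dI_dx * J)               # chain rule, as in the text
        x = x + dt * v_x
    return x, J

rng = np.random.default_rng(3)
v = 0.1 * rng.standard_normal(16)
x1, J = endpoint_and_jacobian(v, 0.5, 5)

# Check one entry of the Jacobian against a central finite difference.
k, e = 7, 1e-6
vp, vm = v.copy(), v.copy()
vp[k] += e
vm[k] -= e
fd = (endpoint_and_jacobian(vp, 0.5, 5)[0] - endpoint_and_jacobian(vm, 0.5, 5)[0]) / (2 * e)
assert abs(J[k] - fd) < 1e-6
```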
In order to solve the finite-dimensional minimisation problem (21), we apply an inexact Gauss–Newton–Krylov method, which proceeds as follows. Given an initial guess \({\mathbf {v}}^{(0)} = {\mathbf {0}},\) we update the velocities in each iteration \(i = 0, 1, \ldots \) by \({\mathbf {v}}^{(i + 1)} = {\mathbf {v}}^{(i)} + \mu \delta {\mathbf {v}}\) until a termination criterion is satisfied. Here, \(\mu \in \mathbb {R}\) denotes a step size that is determined via Armijo line search and \(\delta {\mathbf {v}} \in \mathbb {R}^{N}\) is the solution to the linear system
For details on the stopping criteria and the line search we refer to [40, Chap. 6.3.3]. We solve the system (25) approximately by means of a preconditioned conjugate gradient method, which can be implemented matrixfree whenever the derivative of \({\mathbf {K}}\) and its adjoint can be computed matrixfree. See [35, Sect. 3.2] for further details on the preconditioning.
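The overall iteration can be sketched generically as follows. The test problem, the inner tolerances, and the Armijo constants are our choices for illustration; the actual implementation uses preconditioning and the stopping rules from [40].

```python
import numpy as np

def cg(Hmv, g, n_iter=50, tol=1e-12):
    """Plain conjugate gradient method for H x = g with H given matrix-free."""
    x, r = np.zeros_like(g), g.copy()
    p, rs = r.copy(), r @ r
    for _ in range(n_iter):
        Hp = Hmv(p)
        a = rs / (p @ Hp)
        x, r = x + a * p, r - a * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p, rs = r + (rs_new / rs) * p, rs_new
    return x

def gauss_newton(res, jac, v0, n_outer=20, eps=1e-8):
    """Inexact Gauss-Newton: solve (J^T J + eps I) dv = -J^T r with CG and
    update v <- v + mu * dv, with mu from Armijo backtracking."""
    v = v0.copy()
    for _ in range(n_outer):
        r, J = res(v), jac(v)
        grad = J.T @ r
        dv = cg(lambda p: J.T @ (J @ p) + eps * p, -grad)
        mu, obj = 1.0, 0.5 * (r @ r)
        while (0.5 * np.sum(res(v + mu * dv) ** 2)
               > obj + 1e-4 * mu * (grad @ dv)) and mu > 1e-8:
            mu *= 0.5                                  # Armijo backtracking
        v = v + mu * dv
    return v

# Small, mildly nonlinear least-squares test problem (ours, for illustration).
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
v_true = rng.standard_normal(6)
b = A @ v_true + 0.01 * v_true ** 3
res = lambda v: A @ v + 0.01 * v ** 3 - b
jac = lambda v: A + np.diag(0.03 * v ** 2)
v = gauss_newton(res, jac, np.zeros(6))
assert np.linalg.norm(res(v)) < 1e-6
```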
Due to the nonconvexity of (15), and in order to speed up computation, we use a multilevel strategy that also reduces the risk of ending up in a poor local minimum, see [27]. On each level, we use a subsampled version of the velocities that were computed on the previous, coarser discretisation as initial guess.
While standard image registration typically uses resampling of the template and the target image [40, Chap. 3.7], the approach described here requires multilevel versions of the operator \({\mathbf {K}}\) together with a suitable method for resampling the measurements \({\mathbf {g}}.\) We stress that, if these are not available, optimisation can simply be performed on the finest discretisation level.
In the following, we assume that \({\mathbf {K}}\) is a discretisation of the Radon transform [41], which is a linear operator, and outline a suitable procedure for creating multilevel versions of the operator and the measured data. The former is easily achieved with a computational backend such as ASTRA [54, 55], which allows one to explicitly specify the number of grid cells used to discretise the measurement geometry. For the sake of simplicity, we restrict the presentation here to the case where \(n = 2,\) i.e. \(\Omega \subset \mathbb {R}^{2},\) and \({\mathbf {K}}\) is linear.
Let us assume that the number of grid cells used to discretise \(\Omega \) at the finest level is \(m = 2^{\ell },\) \(\ell \in \mathbb {N}.\) In our experiments, we set the number of grid cells of the one-dimensional measurement domain (0, 1) at the current level \(k \le \ell \) to \(q^{(k)} = 1.5 \cdot 2^{k}\) and set the length of each cell to \(h_{Y}^{(k)} = 1 / q^{(k)}.\) Then, a multilevel representation of each measurement \({\mathbf {g}}_{i},\) \(i \le p,\) at cell-centred grid points \({\mathbf {y}}_{j} = (j - 1/2)\, h_{Y}^{(k - 1)}\) is given by
where the denominator arises from averaging over two neighbouring grid points and from halving the edge length of the imaging domain \(\Omega \) in each coordinate direction. The approach can be extended to higher dimensions.
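Under our reading that the denominator equals 4 (a factor of 2 from the averaging and another factor of 2 because halving the image domain halves every line integral), one multilevel step for a single measurement can be sketched as:

```python
import numpy as np

def downsample_measurement(g_fine):
    """One multilevel step for a 2D Radon measurement: average pairs of
    neighbouring detector bins and divide by 2 again, since line integrals
    scale with the edge length of the image domain. The overall factor of 4
    is our reading of the construction above."""
    return (g_fine[0::2] + g_fine[1::2]) / 4.0

# Number of detector bins per level, q^(k) = 1.5 * 2^k.
q = lambda k: int(1.5 * 2 ** k)
assert q(7) == 192 and q(6) == 96          # finest level and next coarser level

g_fine = np.ones(q(7))                     # constant sinogram row
g_coarse = downsample_measurement(g_fine)
assert g_coarse.size == q(6)
assert np.allclose(g_coarse, 0.5)          # line integrals are halved
```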
Numerical Examples
In our numerical experiments we use the Radon transform [41] as operator. Other choices are possible and, assuming that one has access to a suitable resampling procedure for the measured data, the multilevel strategy can be applied as well. The aim here is to investigate the reconstruction quality with different regularisation functionals, distances, and noise levels for both PDE constraints. We show synthetic examples for the settings \(n = 2\) and 3, and non-synthetic examples for \(n = 2\) using real X-ray tomography data. In the synthetic case, all shown reconstructions were computed from measurements taken from at most 10 directions (i.e. angles) sampled from intervals within [0, 180] degrees.
All computations were performed using an Intel Xeon E52630 v4 \(2.2 \,\mathrm {GHz}\) server equipped with \(128 \, \mathrm {GB}\) RAM and an NVIDIA Quadro P6000 GPU featuring \(24 \, \mathrm {GB}\) of memory. The GPU was only used for computing the Radon transform of 3D volumes.
Before we proceed, we give a brief idea of suitable parameter choices. For the multilevel approach we used, in each synthetic 2D example, \(32\times 32\) pixels at the coarsest level and \(128\times 128\) pixels at the finest level, i.e. \(\ell = 7.\) The size of the reconstructed images in the non-synthetic examples was \(128\times 128.\) Again, three levels were used. In the synthetic 3D example the reconstructed volume was \(32 \times 32 \times 32\) and the coarsest level was \(8 \times 8 \times 8.\)
We used time-dependent velocity fields with only one time step, i.e. \(m_{t} = 1,\) since this keeps the computational cost reasonable and sufficed for our examples. The characteristics were computed using five Runge–Kutta steps, i.e. \(N_{t} = 5.\)
The spatial regularisation parameter depends on the chosen regularisation functional and the noise level, and was chosen in the order of \(10^{3},\) \(10^{0},\) and \(10^{3}\) for third-order, curvature, and diffusion regularisation, respectively, in the noise-free case and using the NCC-based distance. The temporal regularisation parameter is less sensitive and was chosen in the order of \(10^2.\) Furthermore, the parameter corresponding to the norm of \(L^{2}(\Omega , \mathbb {R}^{n})\) in (17) was set to \(10^{6}.\)
In our first example, we investigate different regularisation functionals with different noise levels together with the transport equation. The target is 2D Radon transform data based on a digital brain image and the template is a deformed version thereof, see Fig. 1. Since we want to focus on the behaviour of the regularisation functionals, we do not treat the continuity equation here. The data was generated using parallel beam tomography with only six equally distributed angles from the interval [0, 60] degrees and was corrupted with Gaussian white noise of different levels.
Figure 2 shows results obtained from the generated noise-free measurements using four existing methods. In Fig. 2a filtered backprojection was used. In Fig. 2b, c, the following two total variation regularisation-based models, see e.g. [10],
with \({\mathcal {R}}_1({\mathbf {u}}) :=\mathrm {TV}({\mathbf {u}}),\) \({\mathcal {R}}_2({\mathbf {u}}) :=\mathrm {TV}({\mathbf {u}} - {\mathbf {f}}_{0}),\) and \(\gamma > 0\) were used. Here, \({\mathcal {R}}_2({\mathbf {u}})\) incorporates template information. Approximate minimisers of both functionals were computed using the primal–dual hybrid gradient method [11]. For the case of filtered backprojection, the standard MATLAB implementation was used. The results in Fig. 2a–c highlight why more sophisticated methods, such as the proposed template-based approach, are necessary to obtain satisfying reconstructions in this setting, and illustrate the challenges when dealing with very sparse data.
As outlined in Sect. 1, one possibility is the metamorphosis approach [24]. In Fig. 2d we show a result obtained with this method using the recommended parameters. However, 200 iterations of gradient descent were performed, and the regularisation parameters were set to \(\gamma = 10^{5}\) and \(\tau = 1.\) Observe the change in image intensities compared to Fig. 1a and the blur in the heavily deformed regions.
In Fig. 3, we show results for the different noise levels and different regularisation functionals computed with our approach. All results were obtained using the NCC-based distance. As expected, the quality of the reconstruction deteriorates for higher noise levels and, consequently, larger regularisation parameters were necessary. Since data is acquired from only six directions, the influence of the noise is very strong. Especially for the diffusion regularisation we needed to choose large regularisation parameters for higher noise levels, see Fig. 3a. Since diffusion corresponds to first-order regularisation, it is much easier to reconstruct the noise with “rough” deformations. Overall, we found that second- and third-order regularisation performed similarly when appropriate regularisation parameters were chosen. Even though some theoretical results only hold for higher-order regularity, second-order regularisation seems sufficient for our use case. The computation time for the results in Fig. 3 was between 200 and 700 s.
In the second example, see Fig. 4, we compare the behaviour of the SSD and the NCC-based distance. The example consists of two different hands which, in addition, are rotated relative to each other. Here, the deformation is much larger than in the previous example, but still fairly regular. The data was generated similarly to the previous example, but with only five angles from the interval [0, 75] degrees. Note also that the intensities of the template and target image differ (roughly by a factor of two). First, we discuss the transport equation. The intensity difference is a serious issue if we use the SSD distance, as can be seen in Fig. 4a. The hand is deformed into a smaller version in order to compensate for the differences. If we use the NCC-based distance instead, which can deal with such discrepancies, the result is much better from a visual point of view. The shapes are well-aligned. The resulting SSIM value is still low, which is not surprising since SSIM is not invariant with respect to intensity differences between perfectly aligned images. However, neither of the two approaches is able to remove or create any of the additional (noise) structures in the images. For the combination of SSD with the continuity equation, no satisfactory results could be obtained. Since no change of intensity is possible by changing the size of the hand, part of it is moved outside of the image. This behaviour could potentially be corrected if other boundary conditions were used in the implementation. Therefore, we do not provide an example image for this case. Using the NCC-based distance, the results look similar to those for the transport equation, with a slightly worse SSIM value. These results suggest that the NCC-based distance is a more robust choice that avoids the unnatural deformations which would be required in the case of SSD to compensate for intensity differences. In this example, the computation time was between 50 and 325 s.
In the next example, see Fig. 5, we compare the continuity equation with the transport equation as constraint, together with the NCC-based distance. The continuity equation allows for a limited change of mass along the deformation path. Since the intensity change scales with the determinant of the Jacobian, larger changes are only possible if areas are compressed or expanded substantially. In the presented example this occurs only to a mild extent. For this example, the continuity equation and the transport equation yield visually similar results with minor differences in the SSIM value. As in the previous examples, higher-order regularisation is beneficial and artefacts occur for the diffusion regularisation. The computation time amounted to roughly 64 to 360 s in this example.
In Fig. 6, we created an artificial pair of images, each showing a disk, in order to demonstrate the intensity changes that are possible when using the continuity equation as a constraint. Both the template and the unknown image were constructed so that their total mass is equal. The measurements were generated as before using only five angles uniformly distributed in the interval [0, 90] degrees. Furthermore, we used curvature regularisation. For the transport equation we observe that the shape is matched, but the intensity is not correct, see Fig. 6a. If we use the continuity equation instead, intensity changes are possible, which can be observed in Fig. 6b. The computation times for the two results were 90 and 500 s.
In order to demonstrate the practicality of our method, we computed results from non-synthetic X-ray tomography data [8, 28], which are available online. See Fig. 7 for these two examples (‘lotus’ and ‘walnut’). The template was generated by applying filtered backprojection to the full measurements and by subsequently deforming it. Then, this deformed template was used in our method to compute a reconstruction from only a few measurement directions. The computation time amounted to roughly 80 and 600 s in these examples. In both non-synthetic examples the use of the NCC-based distance proved crucial and no satisfactory result could be obtained using SSD.
In Fig. 8, we demonstrate that our framework is also capable of reconstructing 3D volumes. Here, we used the SSD distance together with curvature regularisation and the transport equation. We applied the 3D Radon transform to obtain ten measurements from angles within [0, 180] degrees. The total computation time was roughly 800 s.
All in all, our results demonstrate that, given a suitable template image, very reasonable reconstructions can efficiently be obtained from only a few measurements, even in the presence of noise. Moreover, our examples show that the NCC-based distance adds robustness to the approach with regard to discrepancies in the image intensities.
Conclusions
Overall, our numerical examples show that our implementation yields good results as long as the deformation between template and target is fairly regular. By using the NCC-based distance, robustness with respect to intensity differences between the template and the target image can be achieved. As already mentioned in the introduction, we do not follow the metamorphosis approach, since there is too much flexibility in the model and the source term is very likely to reproduce noise and artefacts if the data is too limited. It is left for future research to investigate possible adaptations of the model that allow for the appearance of new objects or structures in the reconstruction without reproducing noise or artefacts. Possibly, the results of our method can be used as a better template for other algorithms that require template information. Finally, note that, due to the great flexibility of the FAIR library, it is also possible to use a wide variety of regularisation functionals for the velocities and other distances, see [40, Chaps. 7 and 8]. Additionally, our implementation is not necessarily restricted to the Radon transform and essentially every (continuous) operator can be used. The multilevel approach can be applied as long as a meaningful resampling procedure for the operator and the measured data can be provided.
References
Alt, H.W.: Linear Functional Analysis: An Application-Oriented Introduction, p. xii+435. Universitext. Springer, London (2016)
Andreev, R., Scherzer, O., Zulehner, W.: Simultaneous optical flow and source estimation: space–time discretization and preconditioning. Appl. Numer. Math. 96, 72–81 (2015)
Ashburner, J.: A fast diffeomorphic image registration algorithm. NeuroImage 38(1), 95–113 (2007)
Beg, M.F., Miller, M.I., Trouvé, A., Younes, L.: Computing large deformation metric mappings via geodesic flows of diffeomorphisms. Int. J. Comput. Vis. 61(2), 139–157 (2005)
Borzì, A., Schulz, V.: Computational Optimization of Systems Governed by Partial Differential Equations. Computational Science and Engineering, vol. 8, p. xx+282. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2012)
Borzì, A., Ito, K., Kunisch, K.: An optimal control approach to optical flow computation. Int. J. Numer. Methods Fluids 40(1–2), 231–240 (2002)
Borzì, A., Ito, K., Kunisch, K.: Optimal control formulation for determining optical flow. SIAM J. Sci. Comput. 24(3), 818–847 (2003)
Bubba, T.A., Hauptmann, A., Huotari, S., Rimpeläinen, J., Siltanen, S.: Tomographic X-ray data of a lotus root filled with attenuating objects (2016). arXiv:1609.07299
Burger, M., Dirks, H., Schönlieb, C.B.: A variational model for joint motion estimation and image reconstruction. SIAM J. Imaging Sci. 11(1), 94–128 (2018)
Candès, E.J., Romberg, J.K., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)
Chambolle, A., Pock, T.: A first-order primal–dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
Chen, K., Lorenz, D.A.: Image sequence interpolation using optimal control. J. Math. Imaging Vis. 41(3), 222–238 (2011)
Chen, C., Öktem, O.: Indirect image registration with large diffeomorphic deformations. SIAM J. Imaging Sci. 11(1), 575–617 (2018)
Chertock, A., Kurganov, A.: On a practical implementation of particle methods. Appl. Numer. Math. 56, 1418–1431 (2006)
Crippa, G.: The flow associated to weakly differentiable vector fields. PhD Thesis, Classe di Scienze Matematiche, Fisiche e Naturali, Scuola Normale Superiore di Pisa/Institut für Mathematik, Universität Zürich (2007)
De los Reyes, J.C.: Numerical PDE-Constrained Optimization. Springer, Cham (2015)
DiPerna, R.J., Lions, P.L.: Ordinary differential equations, transport theory and Sobolev spaces. Invent. Math. 98(3), 511–547 (1989)
Dupuis, P., Grenander, U., Miller, M.: Variational problems on flows of diffeomorphisms for image matching. Q. Appl. Math. 56, 587–600 (1998)
Effland, A.: Discrete Riemannian calculus and a posteriori error control on shape spaces. Dissertation, University of Bonn (2017)
Engl, H.W., Hanke, M., Neubauer, A.: Regularization of Inverse Problems. Mathematics and Its Applications, p. viii+321. Kluwer Academic Publishers Group, Dordrecht (1996)
Evans, L.C.: Partial Differential Equations. Graduate Studies in Mathematics, vol 19, 2nd edn. American Mathematical Society, Providence (2010)
Frikel, J.: Sparse regularization in limited angle tomography. Appl. Comput. Harmon. Anal. 34(1), 117–141 (2013)
Gilbarg, D., Trudinger, N.S.: Elliptic Partial Differential Equations of Second Order. Classics in Mathematics, p. xiv+517. Springer, Berlin (2001). Reprint of the 1998 edition
Gris, B., Chen, C., Öktem, O.: Image Reconstruction Through Metamorphosis. Technical report (2018). arXiv:1806.01225 [cs.CV]
GuerquinKern, M., Lejeune, L., Pruessmann, K.P., Unser, M.: Realistic analytical phantoms for parallel magnetic resonance imaging. IEEE Trans. Med. Imaging 31(3), 626–636 (2012)
Gunzburger, M.D.: Perspectives in Flow Control and Optimization. Advances in Design and Control, p. xiv+261. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2003)
Haber, E., Modersitzki, J.: A multilevel method for image registration. SIAM J. Sci. Comput. 27(5), 1594–1607 (2006)
Hämäläinen, K., Harhanen, L., Kallonen, A., Kujanpää, A., Niemi, E., Siltanen, S.: Tomographic X-ray data of walnut (2015). arXiv:1502.04064
Hart, G.L., Zach, C., Niethammer, M.: An optimal control approach for deformable registration. In: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, vol. 2, pp. 1223–1227. IEEE (2009)
Herzog, R., Kunisch, K.: Algorithms for PDE-constrained optimization. GAMM-Mitt. 33(2), 163–176 (2010)
Hinkle, J., Szegedi, M., Wang, B., Salter, B., Joshi, S.: 4D CT image reconstruction with diffeomorphic motion model. Med. Image Anal. 16(6), 1307–1316 (2012)
Hinze, M., Pinnau, R., Ulbrich, M., Ulbrich, S.: Optimization with PDE Constraints, Mathematical Modelling: Theory and Applications, vol. 23, p. xxi+270. Springer, New York (2009)
Hong, Y., Joshi, S., Sanchez, M., Styner, M., Niethammer, M.: Metamorphic geodesic regression. In: Medical Image Computing and Computer-Assisted Intervention—MICCAI 2012. Lecture Notes in Computer Science, vol. 7512, pp. 197–205. Springer, Berlin (2012)
Maas, J., Rumpf, M., Schönlieb, C.B., Simon, S.: A generalized model for optimal transport of images including dissipation and density modulation. ESAIM: M2NA 49(6), 1745–1769 (2015)
Mang, A., Ruthotto, L.: A Lagrangian Gauss–Newton–Krylov solver for mass- and intensity-preserving diffeomorphic image registration. SIAM J. Sci. Comput. 39(5), B860–B885 (2017)
Miller, M.I., Younes, L.: Group actions, homeomorphisms, and matching: a general framework. Int. J. Comput. Vis. 41(1/2), 61–84 (2001)
Miller, M.I., Trouvé, A., Younes, L.: Geodesic shooting for computational anatomy. J. Math. Imaging Vis. 24(2), 209–228 (2006)
Miller, M.I., Trouvé, A., Younes, L.: Hamiltonian systems and optimal control in computational anatomy: 100 years since D’Arcy Thompson. Annu. Rev. Biomed. Eng. 17(1), 447–509 (2015)
Modersitzki, J.: Numerical Methods for Image Registration. Oxford University Press, New York (2003)
Modersitzki, J.: FAIR: Flexible Algorithms for Image Registration. Fundamentals of Algorithms, vol. 6. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2009)
Natterer, F.: The Mathematics of Computerized Tomography. Classics in Applied Mathematics, vol. 32, p. xviii+222. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2001)
Neumayer, S., Persch, J., Steidl, G.: Morphing of manifold-valued images inspired by discrete geodesics in image spaces. SIAM J. Imaging Sci. 11(3), 1898–1930 (2018)
Neumayer, S., Persch, J., Steidl, G.: Regularization of Inverse Problems via Time Discrete Geodesics in Image Spaces. Technical report (2018). arXiv:1805.06362 [math.NA]
Niethammer, M., Hart, G.L., Zach, C.: An optimal control approach for the registration of image time-series. In: Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with 2009 28th Chinese Control Conference, pp. 262–270. IEEE (2009)
Öktem, O., Chen, C., Domaniç, N.O., Ravikumar, P., Bajaj, C.: Shape-based image reconstruction using linearized deformations. Inverse Probl. 33(3), 035004 (2017)
Richardson, C.L., Younes, L.: Metamorphosis of images in reproducing kernel Hilbert spaces. Adv. Comput. Math. 42(3), 573–603 (2015)
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60(1–4), 259–268 (1992)
Scherzer, O., Grasmair, M., Grossauer, H., Haltmeier, M., Lenzen, F.: Variational Methods in Imaging. Applied Mathematical Sciences, vol. 167. Springer, New York (2009)
Singh, N., Hinkle, J., Joshi, S., Fletcher, P.T.: A vector momenta formulation of diffeomorphisms for improved geodesic regression and atlas construction. In: Proceedings of the 10th International Symposium on Biomedical Imaging, pp. 127–142. IEEE (2013)
Trouvé, A.: Diffeomorphisms groups and pattern matching in image analysis. Int. J. Comput. Vis. 28(3), 213–221 (1998)
Trouvé, A., Younes, L.: Local geometry of deformable templates. SIAM J. Math. Anal. 37(1), 17–59 (2005)
Trouvé, A., Younes, L.: Metamorphoses through Lie group action. Found. Comput. Math. 5(2), 173–198 (2005)
Trouvé, A., Younes, L.: Shape spaces. In: Handbook of Mathematical Methods in Imaging, pp. 1759–1817. Springer, New York (2015)
van Aarle, W., Palenstijn, W.J., De Beenhouwer, J., Altantzis, T., Bals, S., Batenburg, K.J., Sijbers, J.: The ASTRA toolbox: a platform for advanced algorithm development in electron tomography. Ultramicroscopy 157, 35–47 (2015)
van Aarle, W., Palenstijn, W.J., Cant, J., Janssens, E., Bleichrodt, F., Dabravolski, A., De Beenhouwer, J., Batenburg, K.J., Sijbers, J.: Fast and flexible X-ray tomography using the ASTRA toolbox. Opt. Express 24(22), 25129 (2016)
Vialard, F.X., Risser, L., Rueckert, D., Cotter, C.J.: Diffeomorphic 3D image registration via geodesic shooting using an efficient adjoint calculation. Int. J. Comput. Vis. 97(2), 229–241 (2011)
Yang, X., Kwitt, R., Styner, M., Niethammer, M.: Quicksilver: fast predictive image registration–a deep learning approach. NeuroImage 158, 378–396 (2017)
Younes, L.: Shapes and Diffeomorphisms. Applied Mathematical Sciences, vol. 171, p. xviii+434. Springer, Berlin (2010)
Acknowledgements
Lukas F. Lang and Carola-Bibiane Schönlieb acknowledge support from the Leverhulme Trust Project "Breaking the non-convexity barrier", the EPSRC Grant EP/M00483X/1, the EPSRC Centre Grant EP/N014588/1, the RISE Projects ChiPS and NoMADS, the Cantab Capital Institute for the Mathematics of Information, and the Alan Turing Institute. Sebastian Neumayer is funded by the German Research Foundation (DFG) within the Research Training Group 1932, Project Area P3. Ozan Öktem is supported by the Swedish Foundation of Strategic Research, Grant AM13-0049. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Quadro P6000 GPU used for this research.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Lang, L.F., Neumayer, S., Öktem, O. et al.: Template-Based Image Reconstruction from Sparse Tomographic Data. Appl Math Optim 82, 1081–1109 (2020). https://doi.org/10.1007/s00245-019-09573-2
Keywords
 Inverse problems
 Optimal control
 Tomography
 LDDMM
 Image registration