An Inverse Problem Involving a Viscous Eikonal Equation with Applications in Electrophysiology

In this work we discuss the reconstruction of cardiac activation instants from boundary observations, based on a viscous Eikonal equation. The problem is formulated as a least squares problem and solved by a projected version of the Levenberg–Marquardt method. Moreover, we analyze the well-posedness of the state equation and derive the gradient of the least squares functional with respect to the activation instants. In the numerical examples we also conduct an experiment in which the locations of the activation sites and the activation instants are reconstructed jointly, based on an adapted version of the shape gradient method from (J. Math. Biol. 79, 2033–2068, 2019). We are able to reconstruct the activation instants as well as the locations of the activations with high accuracy relative to the noise level.


Introduction
This work is concerned with an inverse problem in cardiac electrophysiology. In particular, the activation instants of the excitation wave in the myocardium are estimated from the arrival times of the wave at the epicardium. To briefly explain the problem we recall that the electro-physiologic activity of the heart is often modeled using the bidomain equations, whose numerical solution is very expensive. If one is only interested in the activation times T of the tissue, the bidomain model can be reduced to the simpler viscous Eikonal equation given, for instance, in the form

  −ε div(M∇T) + √(M∇T · ∇T) = 1  in Ω,
  T = u_i  on Γ_i, i = 1, …, n,
  ε∇T · n = 0  on Γ_N.    (1)

Here the activation time T(x) is the time instant at which the wave front passes through the point x in the domain Ω, which describes the computational geometry of the heart. The epicardium of the heart is denoted by Γ_N and the boundaries of the activation regions (activation sites) by Γ_i. The matrix M describes the fiber orientation of the heart tissue and the values u_i ∈ R are the activation instants in the activation regions. On the basis of this model we formulate the inverse problem in the form

  min_u ½ ‖T − z‖²_{L²(Γ_N)}  subject to (1),    (2)

where z is the measured data on the epicardium. Problem (2) constitutes an inverse problem for the activation instants u_i. Similar inverse problems involving the Eikonal equation are considered in the context of seismic tomography, see for instance [23]. While in the analysis part we focus on reconstructing the activation instants from measurements of the activation time T on the surface of the computational domain Ω, in the numerical section we demonstrate that the activation instants and the location of the activation sites can be reconstructed simultaneously.
To briefly comment on the physiological background of this research, we point out that computational models of cardiac function are increasingly considered as a clinical research tool. For the understanding of the driving mechanism of cardiac electro-mechano-fluidic function, the sequence of electrical activations is of key importance. Computer models intended for clinical applications must be parameterized in a patient-specific manner to approximate the electrical activation sequence in a given patient's heart, which necessitates solving inverse problems to identify patient-specific parameters. Anatomical [5,18] as well as early experimental mapping studies [6] using ex vivo human hearts provided evidence that electrical activation in the left ventricle (LV), i.e. the main pumping chamber that drives blood into the circulatory system, is initiated by the His-Purkinje system [11] at several specific sites of earliest activation (root points) which are located at the endocardial (inner) surface of the LV. In a first approximation it can be assumed that the healthy human LV is activated at these root points by a tri-fascicular conduction system [21] consisting of three major fascicles referred to as anterior, septal and posterior fascicle. Size and location of these patches as well as the corresponding instants of their activation are key determinants shaping the activation sequence of the left ventricle. Since the His-Purkinje system is highly variable in humans, there is significant interest in inverse methods for identifying these sites and activation instants, ideally non-invasively.
As noted above, it has become a standard procedure to rely on various forms of Eikonal equations for the mathematical description of the excitation process in the myocardium, see for instance [10,17,19]. These are reduced forms of the bidomain equations, a reaction-diffusion system which describes the electrical activity of the heart. In [3, Section 5], see also [4], a singular perturbation technique with respect to the thickness of the myocardial wall and the time taken by the excitation wave front to cross the heart wall is carried out to arrive at various models for the Eikonal equation which differ by the nonlinear term. The two versions which are advocated in [3], and for which numerical comparisons are done there, involve the nonlinearities |∇T|_M and |∇T|²_M. It is stated there that the model involving |∇T|_M is better for wavefront propagation and collision. In the earlier work [15] we have used |∇T|²_M and solved the inverse shape problem of identifying the centers of the activation regions (spherical subdomains ω_i) from epicardial data z. An alternative approach for describing the wavefront propagation is based on the use of a function ϕ whose zero level set {x : ϕ(t, x) = 0} gives the position of the wavefront at time t, see [14]. We refer to [20] for an illuminating summary of wave propagation models involving Eikonal equations.
To briefly outline the paper, first we give a sufficient condition for the well-posedness of the elliptic PDE using the Schauder fixed point theorem and the maximum principle. The activation instants enter the state equation as constant Dirichlet boundary conditions on the surface of the activation regions. Then we calculate the gradient of the least squares cost functional with respect to these activation instants. It can be expressed in terms of the normal derivative of the solution to the adjoint state equation on the surface of activation sites. Therefore we also analyze the well-posedness of the adjoint and linearized state equations. Finally, we solve the least squares problem using the projected Levenberg-Marquardt method, see e.g. [9,13] for a convergence analysis of this method.
In our first numerical experiments we consider only the reconstruction of the activation instants. In the second numerical example we perform the joint reconstruction of the activation sites and the activation instants. The activation sites are reconstructed by means of an adapted version of the shape gradient method introduced in [15] together with a projected gradient method for the reconstruction of the activation instants. The numerical examples illustrate the feasibility of the approach and are carried out on the 2D unit square with artificial data.

Problem Statement
Let U ⊂ R^d, with d = 2 or d = 3, be a bounded domain and Γ_N = ∂U its boundary. In the physiological context it represents the cardiac domain. Within U we consider a family of open subdomains {ω_i}_{i=1}^n and we set Γ_i = ∂ω_i. These boundaries constitute the surfaces from which the activation spreads. Then we define Ω = U \ ∪_{i=1}^n ω̄_i, which is our mathematical and computational cardiac domain, with boundary ∂Ω = Γ_N ∪ ∪_{i=1}^n Γ_i. Note that Ω is connected but not simply connected. Let us choose a parameter ε > 0 and fix z ∈ H^{1/2}(Γ_N), which represents the epicardial input data.
With these specifications we consider the following problem:

  min_{u ∈ U_ad} J(u) = ½ ‖T − z‖²_{L²(Γ_N)}    (3)

subject to the viscous Eikonal equation

  −ε div(M∇T) + √(β + |∇T|²_M) = 1  in Ω,
  T = u_i  on Γ_i, i = 1, …, n,
  ε∇T · n = 0  on Γ_N,    (4)

where β ∈ [0, 1], n is the unit normal on Γ_N, and |∇T|²_M = M∇T · ∇T. Further u = col(u_1, …, u_n) ∈ U_ad, which is a bounded, closed, and convex set in Rⁿ. The function T stands for the activation time, and the matrix M models the cardiac conduction velocity.
Compared to the standard viscous Eikonal equation with the nonlinearity |∇T|_M, we introduce an additive parameter β > 0 in the model considered in (4). It is a mathematical tool which will allow us to guarantee that the control-to-state mapping u ↦ T is differentiable and that the adjoint equation is well-posed. For well-posedness of the optimal control problem in Section 3 the choice β = 0 is admissible. In Theorem 2 below we shall argue that the solution T of (4), as a function of β, tends to the solution of (4) with β = 0.
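The smoothing role of β > 0 can be made explicit. For symmetric positive definite M the regularized nonlinearity is smooth with a gradient bound that is uniform in β, whereas in the limit case β = 0 the map v ↦ |v|_M is Lipschitz but not differentiable at v = 0. A sketch of the computation, with λ_min(M) the smallest eigenvalue of M:

```latex
\nabla_v \sqrt{\beta + |v|_M^2} \;=\; \frac{Mv}{\sqrt{\beta + |v|_M^2}},
\qquad
\Big|\nabla_v \sqrt{\beta + |v|_M^2}\Big|
\;\le\; \frac{\|M\|\,|v|}{\sqrt{\lambda_{\min}(M)}\,|v|}
\;=\; \frac{\|M\|}{\sqrt{\lambda_{\min}(M)}}\, .
```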

Well Posedness of the Viscous Eikonal Equation and Existence of Optimal Controls
We assume that the boundaries of Ω are sufficiently regular for the elliptic regularity theory used below to apply. Further, for any u ∈ Rⁿ we assume the existence of g ∈ W^{2,6}(Ω) with g|_{Γ_i} = u_i for i = 1, …, n, g vanishing in a neighbourhood of Γ_N, and ‖g‖_{W^{2,6}(Ω)} ≤ c|u|_{Rⁿ}, with c independent of u. For example g = Σ_{i=1}^n u_i g_i can be chosen, where the functions g_i are smooth bump functions which are equal to 1 on ω̄_i and vanish near Γ_N. In the subsequent developments the linear auxiliary problem (5) will be used with f̃ replaced by 1 − √(β + |∇(T_H + g)|²_M) + ε div(M∇g).
Theorem 1 For every u ∈ Rⁿ and β ∈ [0, 1], and for ε sufficiently large, equation (4) admits a unique solution T ∈ W^{2,6}(Ω).

Proof 1. Existence: There exists a constant c, independent of u ∈ Rⁿ and β ∈ [0, 1], such that the linear problem (5) admits a unique solution satisfying the a priori estimate (7). Thus we can define the operator G : V → V which assigns to w ∈ V the solution T_H of (6), with c(M, ε) independent of β ∈ [0, 1] and T_H. In the following we shall utilize Schaefer's fixed point theorem in order to prove that G has a fixed point. At first we prove that G : V → V is continuous and compact. Let {w_k} be a bounded sequence in V. The solutions G(w_k) are bounded in H²(Ω) ∩ V according to (7). The compact embedding of H²(Ω) ∩ V in V implies the existence of a subsequence, again denoted by {w_k}, and of a w ∈ V with G(w_k) → w in V. By taking the limit in the weak formulation of (6) we see that w is the image under G of the limit of {w_k}. In order to apply Schaefer's fixed point theorem we have to further show that the set {T_H ∈ V : T_H = λ G(T_H) for some λ ∈ (0, 1]} is bounded. Multiplying this equation with T_H and integrating over Ω, we obtain by Young's inequality and the fact that 0 < λ ≤ 1 a bound on T_H in V which is independent of λ. Then Schaefer's fixed point theorem can be applied to G and yields the existence of an element T_H ∈ V with G(T_H) = T_H, which is a solution of (6). Setting T = T_H + g we have obtained a solution to (4), which satisfies the estimate (7). Moreover, since ∇T ∈ H¹(Ω)^d and thus ∇T ∈ L⁶(Ω)^d, and since also g ∈ W^{2,6}(Ω), we have that

  −√(β + |∇T|²_M) + 1 + ε div(M∇g) ∈ L⁶(Ω),

and thus T ∈ W^{2,6}(Ω) by elliptic regularity.

2. Uniqueness: Let T_i ∈ W^{2,6}(Ω), i = 1, 2, be two solutions of (4) and define δT = T_1 − T_2. Then δT satisfies the equation

  −ε div(M∇δT) + √(β + |∇T_1|²_M) − √(β + |∇T_2|²_M) = 0  in Ω,
  δT = 0  on Γ_i, i = 1, …, n,
  ε∇δT · n = 0  on Γ_N.

It is easy to see that

  |√(β + |∇T_1|²_M) − √(β + |∇T_2|²_M)| ≤ |∇δT|_M.

Here we have used that the block matrix (M, 0; 0, 1) defines a scalar product for the vectors (v, √β). Alternatively we can note that B(x, v) = Mv/√(β + |v|²_M) is an element of the subdifferential of the convex function v ↦ √(β + |v|²_M). Thus we have

  √(β + |∇T_1|²_M) − √(β + |∇T_2|²_M) ≥ B(x, ∇T_2) · ∇δT.

Consequently,

  −ε div(M∇δT) + B(x, ∇T_2) · ∇δT ≤ 0  in Ω.

Then the maximum principle implies that δT ≤ 0 in Ω, see [25, Theorem 3.27]. Exchanging the roles of T_1 and T_2 in the above argument leads to δT ≥ 0 in Ω, and consequently to δT = 0, which implies the desired uniqueness.
This proof is inspired by [8, Section 9.2, Theorem 5]. Henceforth it will be assumed that ε is large enough so that the solution to (4) according to Theorem 1 exists.

Theorem 2 We have lim_{β↘0} T_β = T_0, where T_β denotes the solution to (4) as a function of β.
We close this section by asserting the existence of a solution to (3).

Theorem 3
There exists an optimal solution to problem (3).
Proof Due to the boundedness of U_ad and the boundedness from below of J, there exists a minimizing sequence {u_n} in U_ad satisfying lim_{n→∞} J(u_n) = inf_{u∈U_ad} J(u). Compactness and closedness of U_ad imply the existence of a subsequence {u_{n_k}} with a limit ū ∈ U_ad. Let T(u_{n_k}) denote the solutions to (4) with u = u_{n_k}. By Theorem 1 the sequence T(u_{n_k}) is bounded in W^{2,6}(Ω), and hence there exists a subsequence, denoted in the same manner, which converges weakly in W^{2,6}(Ω) and, by Rellich's compact embedding theorem, strongly in C¹(Ω̄) to some T̄ ∈ W^{2,6}(Ω). We can now pass to the limit in (4) (with T = T(u_{n_k}) and u = u_{n_k}) to obtain that the pair (ū, T̄) satisfies (4). By (9) we have that lim_{k→∞} J(u_{n_k}) = J(ū) = min_{u∈U_ad} J(u). This concludes the proof.

Well Posedness of the Linearized and Adjoint State Equation
For the practical realization of (3) the gradient of the cost is an essential tool for iterative solution schemes. For this purpose we investigate in this section the differentiability of the control to solution mapping, and we analyze the adjoint equation which will allow us to obtain a convenient expression for the gradient of the cost. These goals require us to assume β > 0, so that the nonlinear term in the state equation is smooth. An alternative procedure might involve proceeding as in [2,16], at the expense of working with generalized differentiability concepts.
We analyse the well-posedness of the linearized state equation

  −ε div(M∇δT) + (M∇T · ∇δT)/√(β + |∇T|²_M) = r  in Ω,
  δT = u_i  on Γ_i, i = 1, …, n,
  ε∇δT · n = 0  on Γ_N,    (10)

and of the adjoint state equation

  −ε div(M∇ϕ) − div((M∇T/√(β + |∇T|²_M)) ϕ) = 0  in Ω,
  ϕ = 0  on Γ_i, i = 1, …, n,
  ε∇ϕ · n + (M∇T · n/√(β + |∇T|²_M)) ϕ = T − z  on Γ_N.    (11)

For this purpose we define the bilinear form B : V × V → R,

  B(v, ϕ) = ∫_Ω ε M∇v · ∇ϕ + ((M∇T · ∇v)/√(β + |∇T|²_M)) ϕ dx,

for any ϕ, v ∈ V. We recall the function g ∈ W^{2,6}(Ω) defined in the previous section.

Definition 1
The function δT = v + g ∈ H¹(Ω) is called a weak solution of (10) if v ∈ V solves the variational equation (12). Analogously, ϕ ∈ V is called a weak solution of (11) if it solves the corresponding variational equation. We introduce the operator A : V → V* and its adjoint A* : V → V* by

  ⟨Av, ϕ⟩_{V*,V} = B(v, ϕ) = ⟨A*ϕ, v⟩_{V*,V}

for all v, ϕ ∈ V.

Proposition 1
The operators A : V → V* and A* : V → V* are isomorphisms. In particular there exists a constant C(M, T, ε) such that

  ‖v‖_V ≤ C(M, T, ε) ‖Av‖_{V*}  and  ‖ϕ‖_V ≤ C(M, T, ε) ‖A*ϕ‖_{V*}  for all v, ϕ ∈ V.

Proof The claims follow from a similar argumentation as in the proof of Proposition 2 in [15], using Gårding's inequality and the weak maximum principle.
We introduce the space

  W = {v ∈ H²(Ω) : v|_{Γ_i} is constant for i = 1, …, n, ∇v · n = 0 on Γ_N}.

The space W is a closed subspace of H²(Ω), since the trace as well as the normal trace operator are continuous.

Proposition 2 Equation (10) has a unique weak solution δT ∈ W which satisfies

  ‖δT‖_{H²(Ω)} ≤ C(M, T, ε)(|u|_{Rⁿ} + ‖r‖_{L²(Ω)}).    (13)

Proof We easily see that the right-hand side of the variational equation (12) defines a bounded linear functional on V. Thus Proposition 1 gives us the existence of v ∈ V satisfying (12), together with the corresponding estimate. This implies that δT = v + g is the unique weak solution of (10). Moving the term L(T, v) to the right-hand side of (10), we conclude with standard elliptic regularity that δT ∈ W and that (13) holds.

Proposition 3 Equation (11) has a unique weak solution ϕ, which satisfies ϕ ∈ H²(Ω) ∩ V and

  ‖ϕ‖_{H²(Ω)} ≤ C(M, T, ε) ‖T − z‖_{H^{1/2}(Γ_N)}.
Proof Proposition 1 implies the existence of a unique weak solution, together with the a priori estimate from Proposition 1. Moving the div-term to the right-hand side of (11) and expanding it with the product rule, the claim follows from standard elliptic regularity.

Derivative of J
In this section we characterize the gradient of J using the linearized and adjoint state equations. It is an essential tool for every gradient based method, in particular for the numerical realization of problem (2), and will also be used for our numerical examples.
Lemma 1 There exists a constant C > 0, independent of β > 0, such that

  |√(β + |v|²_M) − √(β + |w|²_M)| ≤ C |v − w|_M  for all v, w ∈ R^d

holds.

Proof There holds

  √(β + |v|²_M) − √(β + |w|²_M) = (|v|²_M − |w|²_M)/(√(β + |v|²_M) + √(β + |w|²_M)).

Using the reverse triangle inequality for |·|_M we get the assertion.

Lemma 2 The mapping T ↦ f(T) = √(β + |∇T|²_M) is Fréchet differentiable from H²(Ω) to L²(Ω) with derivative δT ↦ (M∇T · ∇δT)/f(T).

Proof By multiplication with the conjugate square root we get the remainder estimate, utilizing Lemma 1 and |∇T|_M ≤ √(β + |∇T|²_M) = f(T). Then, using the embedding H²(Ω) → W^{1,4}(Ω) and that f(T)⁻¹ ∈ L^∞(Ω), we obtain the claim.
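Written out for vectors v, w ∈ R^d, the conjugate square-root argument used in the proofs above reads:

```latex
\big|\sqrt{\beta+|v|_M^2}-\sqrt{\beta+|w|_M^2}\big|
=\frac{\big||v|_M^2-|w|_M^2\big|}{\sqrt{\beta+|v|_M^2}+\sqrt{\beta+|w|_M^2}}
\le\frac{\big||v|_M-|w|_M\big|\,\big(|v|_M+|w|_M\big)}{|v|_M+|w|_M}
\le |v-w|_M ,
```

where the last step is the reverse triangle inequality for |·|_M; the resulting constant is independent of β ≥ 0.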

Theorem 4
The operator S : Rⁿ → W, u ↦ T, is Fréchet differentiable and its derivative S′(u)δu in direction δu ∈ Rⁿ is given by the solution δT ∈ W of (10) with u_i = δu_i for i = 1, …, n and r = 0.
Proof We introduce the mapping E : W × Rⁿ → L²(Ω) × Rⁿ,

  E(T, u) = (−ε div(M∇T) + f(T) − 1, (T|_{Γ_i} − u_i)_{i=1}^n).

Using Lemma 2 it can be argued that E is Fréchet differentiable. Moreover, due to Proposition 2 the operator D_T E(T, u) : W → L²(Ω) × Rⁿ is an isomorphism. Then the implicit function theorem yields a neighbourhood V ⊆ W of T_0 and U ⊆ Rⁿ of u_0 and a Fréchet differentiable implicit function S : U → V, u ↦ T, with derivative given by δT = D_T E(T, u)⁻¹(0, δu). Since u_0 was arbitrary, the result follows.

Theorem 5 There holds

  (∇J(u))_i = −ε ∫_{Γ_i} M∇ϕ · n ds,  i = 1, …, n,

where ϕ ∈ V is the weak solution of the adjoint equation (11) and n denotes the unit outward normal.

Proof For each δu ∈ Rⁿ we have

  J′(u)δu = ∫_{Γ_N} (T − z) δT ds,

where δT = S′(u)δu ∈ W solves (10) with r = 0. Testing (11) with δT, testing (10) with ϕ, and integrating by parts, all terms cancel except the boundary terms on Γ_i, which yield the assertion.
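The adjoint-based gradient of Theorem 5 has the generic least-squares structure ∇J(u) = S′(u)*(S(u) − z), which can be checked against finite differences. The sketch below does this for a smooth toy forward map, not the PDE solution operator of the paper; all names and values are illustrative assumptions.

```python
import numpy as np

# Finite-difference verification of the least-squares gradient structure
#   grad J(u) = S'(u)^T (S(u) - z)
# for a toy smooth forward map S: R^3 -> R^5 (illustration only).

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
z = rng.standard_normal(5)

def S(u):
    return np.tanh(A @ u)                       # toy nonlinear forward map

def S_prime(u):
    return (1 - np.tanh(A @ u)**2)[:, None] * A  # Jacobian of S

def J(u):
    r = S(u) - z
    return 0.5 * r @ r

u = np.array([0.0, 0.1, 0.2])
g = S_prime(u).T @ (S(u) - z)                   # adjoint-based gradient

# central finite differences
h = 1e-6
g_fd = np.array([(J(u + h * e) - J(u - h * e)) / (2 * h) for e in np.eye(3)])
print(np.max(np.abs(g - g_fd)))                 # agreement up to FD error
```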

A Projected Levenberg-Marquardt Method
We solve the inverse problem (3) based on a Levenberg-Marquardt strategy. Let P_ad : Rⁿ → U_ad be the orthogonal projection onto U_ad. In particular we iterate

  u_{k+1} = P_ad(u_k + λ d_k),

where 0 < λ ≤ 1 is the stepsize and d_k solves the problem

  min_{d ∈ Rⁿ} j(d) := ½ ‖S′(u_k)d + S(u_k) − z‖²_{L²(Γ_N)} + (α_k/2)|d|²_{Rⁿ}.

The gradient of j is given by

  ∇j(d) = S′(u_k)*(S′(u_k)d + S(u_k) − z) + α_k d.

Thus we have to solve the equation

  (S′(u_k)* S′(u_k) + α_k I) d = −S′(u_k)*(S(u_k) − z).

Let H(u) be the matrix representation of S′(u)* S′(u).

Proposition 4 The matrix H(u) is positive definite and there holds

  (H(u)δu)_i = −ε ∫_{Γ_i} M∇w · n ds,  i = 1, …, n,

where w is the solution of (11) with T − z replaced by h|_{Γ_N}, and h is the solution of (10) with r = 0 for the boundary data δu ∈ Rⁿ.

Proof The formula follows from the exact same calculation as in the proof of Theorem 5, where we replace T − z by S′(u)δu and ϕ by w. Moreover we have

  H(u)δu · δu = ‖S′(u)δu‖²_{L²(Γ_N)} ≥ 0.

If equality holds, then S′(u)δu = 0 on Γ_N. This fact, together with the unique continuation principle [1,22] and uniqueness of solutions for the linearized state equation (10), implies that δu = 0.
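The overall iteration can be sketched on a toy forward map standing in for the PDE solution operator u ↦ T|_{Γ_N}. The map S, the box U_ad and all values below are illustrative assumptions; the stopping rule with τ = 1.1 and the choice α_k = 0.1^k follow the numerical section.

```python
import numpy as np

# Sketch of the projected Levenberg-Marquardt iteration
#   (S'(u_k)^* S'(u_k) + alpha_k I) d_k = -S'(u_k)^* (S(u_k) - z),
#   u_{k+1} = P_ad(u_k + lam * d_k),
# stopped by the discrepancy principle ||S(u_k) - z|| <= tau * delta.
# The mildly nonlinear map S is a toy stand-in for the solution operator.

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))

def S(u):
    v = A @ u
    return v + 0.1 * np.tanh(v)

def S_prime(u):
    v = A @ u
    return (1.0 + 0.1 * (1.0 - np.tanh(v)**2))[:, None] * A

lo, hi = -np.ones(3), np.ones(3)            # U_ad = [-1, 1]^3 (assumed box)
P_ad = lambda u: np.clip(u, lo, hi)         # orthogonal projection onto U_ad

u_true = np.array([0.0, 0.1, 0.2])          # exact activation instants
delta = 1e-6                                # noise level
noise = rng.standard_normal(8)
z = S(u_true) + delta * noise / np.linalg.norm(noise)

u, tau, lam = np.full(3, 0.3), 1.1, 1.0
for k in range(50):
    r = S(u) - z
    if np.linalg.norm(r) <= tau * delta:    # discrepancy principle
        break
    Jac = S_prime(u)
    d = np.linalg.solve(Jac.T @ Jac + 0.1**k * np.eye(3), -Jac.T @ r)
    u = P_ad(u + lam * d)

print(k, u)
```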

Numerical Example
In this section we present two numerical examples. In the first one we reconstruct the activation instants using the Levenberg-Marquardt method. As an alternative we could have used the iteratively regularized Gauß-Newton method. For a version with constraints see e.g. [12,24]. In the second example we jointly reconstruct the positions of the activation regions and the activation instants using a combined shape gradient and projected gradient method.

Finding the Activation Instants
In this example, the computational domain U is given by the unit square (0, 1) × (0, 1). We consider three activation sites ω_i = B_{0.1}(x_i) with prescribed midpoints x_i. The observed data is given on the boundary Γ_N of U. While the domain is unrealistic from an applications point of view, the fact that the number of activation sites is known is physiologically reasonable. Indeed, early experimental work [21] using ex vivo human hearts provides evidence that electrical activation in the left ventricle is initiated at three discrete sites.
The domain U is discretized by 66049 vertices and 131072 triangles, which yields a discretization size of ≈ 4 · 10⁻³. The state, linearized state and adjoint variable are approximated by P1 finite elements on the mentioned grid using the FEniCS toolbox. The nonlinear system of equations for the state variable is solved using Newton's method. The linear equations for the linearized state and the adjoint state are solved using a direct method. Moreover we set ε = 0.1 and

  M = ( sin(πx) + 1.1        0
            0            sin(πy) + 1.1 ).
For the numerical computations we chose β = 0 and did not encounter any numerical instabilities due to the term |∇T|_M⁻¹ in the linearized state and adjoint state equations. In other examples instabilities might arise, which would force us to introduce β > 0. The exact activation instants are given by u† = (0, 0.1, 0.2)ᵀ. The observed data z is generated by solving the state equation numerically for u†, restricting T to Γ_N and adding noise η. The perturbation has the form

  η = δ ‖T|_{Γ_N}‖ η̃|_{Γ_N} / ‖η̃|_{Γ_N}‖,

where δ ≥ 0 and η̃ is a FEM-function with random coefficients on Ω̄. The random coefficients are drawn from a normal distribution with zero mean and variance 1. The function η̃ is restricted to the boundary Γ_N and denoted by η̃|_{Γ_N}. Thus δ is the relative noise level. In this example we choose δ = 0.1 and δ = 10⁻⁹.
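The relative-noise construction can be sketched as follows; the discrete boundary vector and the Euclidean norms are illustrative stand-ins for the FEM boundary function and the L²(Γ_N) norm.

```python
import numpy as np

# Relative-noise sketch: perturb the boundary trace of T so that delta is
# the relative noise level,
#   z = T|_Gamma_N + delta * ||T|_Gamma_N|| * eta~ / ||eta~||.

rng = np.random.default_rng(42)
T_bdry = np.linspace(0.0, 1.0, 100)   # stand-in for T restricted to Gamma_N
delta = 0.1                           # relative noise level

eta_tilde = rng.standard_normal(T_bdry.size)   # coefficients ~ N(0, 1)
eta = delta * np.linalg.norm(T_bdry) * eta_tilde / np.linalg.norm(eta_tilde)
z = T_bdry + eta

rel = np.linalg.norm(z - T_bdry) / np.linalg.norm(T_bdry)
print(rel)   # equals delta by construction
```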
After discretization of the state and adjoint state variable by linear finite elements, the projected Levenberg-Marquardt method can directly be applied. In every step of the iteration the matrix H(u) is calculated by solving the linearized state equation and the adjoint equation for all unit vectors, resulting in a 3 × 3 matrix. Thus 6 linear PDEs must be solved numerically, see Proposition 4. Moreover the gradient of J has to be calculated by solving the nonlinear state equation and the adjoint state equation. So in total 8 PDEs have to be solved per iteration. The nonlinear state equation is solved by Newton's method. Since β = 0 the method can be interpreted as a semismooth Newton method. The iteration is stopped by the discrepancy principle with τ > 1, see for instance [7]. In our experiments we choose τ = 1.1. The parameter α_k is set to 0.1^k.

Finding the Activation Instants and Activation Regions
In this section we consider a similar scenario as before, but in addition to the activation instants we also reconstruct the position of the activation regions ω_i by determining the midpoints of ω_i. For this purpose we use the shape optimization approach introduced in [15] for the squared version of the Eikonal equation; here we only modify the formulas developed in that work to fit our state equation. The shape derivative of J with respect to a smooth perturbation field h with compact support in U = Ω ∪ ∪_{i=1}^n ω̄_i can be written in terms of tensors S₁ and S₀, involving the outer product v ⊗ w = vwᵀ for v, w ∈ R^d, the inner product G : N = trace(GᵀN) for G, N ∈ R^{d×d}, and M_k, the kth column of M. Based on the shape derivative we calculate a perturbation field h by solving a linear elasticity equation of the form

  ∫_U γ Dh : Dv + h · v dx = −∫_Ω S₁ : Dv + S₀ · v dx  for all v ∈ H¹₀(U, R^d),

for γ > 0, and thus h is a descent direction for J. Since we are only interested in the shift of the midpoints x_i of ω_i, we average h over ω_i, i = 1, …, n, in order to get a shift of the midpoints. The proposed method is of gradient type and thus we also update the activation instants based on the gradient calculated in Theorem 5. In particular we use a projected gradient method.
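The averaging step can be sketched as follows; the point cloud standing in for the mesh vertices, the field h and the disc radius are illustrative assumptions.

```python
import numpy as np

# Midpoint update sketch: average the descent field h over each activation
# region omega_i = B_r(x_i) and use the mean as a shift of the midpoint x_i.

rng = np.random.default_rng(3)
nodes = rng.uniform(0.0, 1.0, size=(5000, 2))   # stand-in for mesh vertices in U
# a smooth vector field playing the role of the elasticity solution h
h = np.stack([nodes[:, 1] - 0.5, 0.5 - nodes[:, 0]], axis=1)

r = 0.1
midpoints = np.array([[0.25, 0.25], [0.75, 0.25], [0.5, 0.75]])

shifts = []
for x_i in midpoints:
    inside = np.linalg.norm(nodes - x_i, axis=1) < r   # nodes in omega_i
    shifts.append(h[inside].mean(axis=0))              # average of h over omega_i
new_midpoints = midpoints + np.array(shifts)           # one gradient-type step
print(new_midpoints)
```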
In Table 1 and Fig. 1 we summarize our findings for the three noise levels δ. In particular we document the number of iterations K_δ at which the discrepancy criterion is reached, the state error ‖S(X_{K_δ}, u_{K_δ}) − z‖_{L²(Γ_N)}, the distance between reconstructed and exact positions, denoted by d_{K_δ}, and the reconstruction error |u_{K_δ} − u†|. (Fig. 1, bottom left: distances between X_k and X† during the iteration; bottom right: error of the activation instants during the iteration.) The reconstructed positions of the three midpoints as well as the activation instants u are reported for the respective noise levels. We conclude that the positions can be reconstructed with good quality relative to the noise level. Further tests in the noise-free case showed that there is a limit below which the state error cannot be reduced. This is caused by discretization effects.