Solving an inverse elliptic coefficient problem by convex non-linear semidefinite programming

Several applications in medical imaging and non-destructive material testing lead to inverse elliptic coefficient problems, where an unknown coefficient function in an elliptic PDE is to be determined from partial knowledge of its solutions. This is usually a highly non-linear, ill-posed inverse problem, for which unique reconstructability results, stability estimates and global convergence of numerical methods are very hard to achieve. The aim of this note is to point out a new connection between inverse coefficient problems and semidefinite programming that may help address these challenges. We show that an inverse elliptic Robin transmission problem with finitely many measurements can be equivalently rewritten as a uniquely solvable convex non-linear semidefinite optimization problem. This makes it possible to explicitly estimate the number of measurements required to achieve a desired resolution, to derive an error estimate for noisy data, and to overcome the problem of local minima that usually appears in optimization-based approaches to inverse coefficient problems.


Introduction
Inverse elliptic coefficient problems arise in a number of applications in medical imaging and non-destructive material testing. The arguably most prominent example is the Calderón problem [5,6], which models electrical impedance tomography (EIT), where the electrical conductivity distribution inside a patient is to be determined from current/voltage measurements on the patient's surface; cf. [1] for an overview. Theoretical uniqueness questions for inverse elliptic coefficient problems have mostly been studied in the idealized infinite-dimensional setting where (intuitively speaking) the unknown coefficient function is to be determined with infinite resolution from infinitely many measurements, cf., e.g., [7,12,15]. Lipschitz stability results have been obtained for finitely many unknowns and infinitely many measurements in, e.g., [3,4,11]. Recently there has been progress on the practically very relevant case of finitely many unknowns and measurements, cf., e.g., [2,8,14]. But little is known yet about explicitly characterizing the number of measurements required for a given desired resolution.

(Author address: B. Harrach, Institute for Mathematics, Goethe-University Frankfurt, Frankfurt am Main, Germany. E-mail: harrach@math.uni-frankfurt.de)
Practical reconstruction algorithms for inverse coefficient problems are usually based on regularized data-fitting, which formulates the inverse problem as a minimization problem for a residual functional together with a regularization term. As the residual formulation is typically non-convex, this approach suffers severely from the problem of local minima. Convexification approaches for inverse coefficient problems have been studied in, e.g., [13]. But, to the knowledge of the author, no equivalent convex reformulations of inverse coefficient problems with finitely many measurements have been found yet.
The aim of this work is to show that a uniquely solvable convex reformulation of an inverse coefficient problem is indeed possible if enough measurements are taken, and that the required number of measurements can be explicitly characterized. More precisely, we state a criterion that is sufficient for unique solvability and for the solution being the unique minimizer of a linear cost functional under a convex non-linear semidefinite constraint. For a given desired resolution and a given number of measurements, the criterion can be explicitly checked by calculating finitely many forward solutions. The criterion is fulfilled if sufficiently many measurements are taken. Thus, the required number of measurements can be found by starting with a low number and incrementally increasing it until the criterion is fulfilled. The criterion also yields explicit error estimates for noisy data.
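The incremental procedure described above can be sketched in a few lines. The following is a toy illustration, not from the paper: the matrix function is a simple stand-in model, and the evaluation points and directions are placeholders for the z_{j,k} and d_j of the criterion. The loop increases the number m of used measurement vectors until the criterion value λ becomes positive.

```python
import numpy as np

def make_dF(vectors):
    """Derivative of the toy model F(x) = sum_j (1/x_j) v_j v_j^T:
    F'(z)(d) = -sum_j (d_j / z_j^2) v_j v_j^T."""
    M = [np.outer(v, v) for v in vectors]
    def dF(z, d):
        return -sum(dj * Mj / zj**2 for Mj, zj, dj in zip(M, z, d))
    return dF

def criterion_lambda(dF, points, dirs):
    # lambda := min over evaluation points and directions of lambda_max(F'(z)(d))
    return min(np.linalg.eigvalsh(dF(z, d)).max() for z in points for d in dirs)

# three available "measurement" vectors per unknown; only the first m entries are used
full = [np.array([1.0, 0.2, 0.3]), np.array([0.5, 1.0, 0.1])]
z_pts = [np.array([1.0, 2.0])]                            # placeholder evaluation points
dirs = [np.array([-1.0, 1.0]), np.array([1.0, -1.0])]     # placeholder directions

m = 1
while True:
    lam = criterion_lambda(make_dF([v[:m] for v in full]), z_pts, dirs)
    if lam > 0:
        break                       # criterion fulfilled: m measurements suffice
    m += 1
```

For a single measurement (m = 1) the criterion fails here, since the two opposite directions yield values of opposite sign; with m = 2 all directional derivatives acquire a positive eigenvalue and the loop terminates.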
This work is closely related to [10], which gives an explicit construction of special measurements that uniquely determine the same number of unknowns in an inverse elliptic coefficient problem by a globally convergent Newton root-finding method. We also formulate our result for the same inverse Robin transmission problem as in [10], which is motivated by EIT-based corrosion detection and may be considered a simpler variant of the Calderón problem. Our main advance in this work is the step from Newton root-finding to a convex semidefinite program. This allows utilizing a redundant set of given measurements and eliminates the need for specially constructed measurements. It also simplifies the underlying theory, as it no longer requires simultaneously localized potentials, and it allows the criterion to be written using the Loewner order, which arises very naturally in elliptic inverse coefficient problems with finite resolution and finitely many measurements [9]. Also, to the knowledge of the author, this work establishes the first connection between the research fields of semidefinite optimization and inverse coefficient problems, which might bring new inspiration to both fields.

Inverse problems for convex monotonic functions
Let "≤" denote the entry-wise order on R^n, and let "⪯" denote the Loewner order on the space of symmetric matrices S_m ⊆ R^{m×m}, with n, m ∈ N, i.e., A ⪯ B if and only if B − A is positive semidefinite. For A ∈ S_m, the largest eigenvalue is denoted by λ_max(A).
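For readers who want to experiment numerically, the Loewner order and λ_max can be checked directly from the eigenvalues of the difference matrix. This small sketch is purely illustrative and not part of the paper.

```python
import numpy as np

def loewner_leq(A, B, tol=1e-12):
    """A ⪯ B in the Loewner order iff B - A is positive semidefinite."""
    return bool(np.linalg.eigvalsh(B - A).min() >= -tol)

A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[2.0, 0.5], [0.5, 3.0]])

# eigvalsh returns eigenvalues in ascending order, so [-1] is lambda_max
lam_max = np.linalg.eigvalsh(B - A)[-1]
```

Here B − A = [[1, 0.5], [0.5, 1]] has eigenvalues 0.5 and 1.5, so A ⪯ B holds while B ⪯ A fails.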
Given a-priori bounds b > a > 0, we consider the inverse problem to

  determine x̂ ∈ [a,b]^n from Ŷ := F(x̂) ∈ S_m,   (1)

where F : R^n_+ → S_m is assumed to be a continuously differentiable, convex and monotonically non-increasing matrix-valued function, i.e., for all x, x^(0) ∈ R^n_+ and all 0 ≤ d ∈ R^n,

  F'(x)d ⪯ 0,   (2)
  F(x) − F(x^(0)) ⪰ F'(x^(0))(x − x^(0)).   (3)

Such problems naturally arise in inverse coefficient problems in elliptic PDEs with finite resolution and finitely many measurements [9]. Note that, here and in the following, we write the derivative of F in a point x ∈ R^n_+ as a linear mapping F'(x) : R^n → S_m. F fulfills (2) and (3) if and only if F fulfills

  F'(x)d ⪯ F'(x^(0))d ⪯ 0 for all x ≤ x^(0) in R^n_+ and all 0 ≤ d ∈ R^n,

cf., e.g., [9, Lemma 2] for the only-if part. The if part immediately follows from writing the directional derivative as a differential quotient.
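A minimal toy example of such an F (purely illustrative, not the measurement map of this paper) is F(x) = Σ_j x_j^{-1} v_j v_j^T with fixed vectors v_j: each entry 1/x_j is convex and decreasing, and the positive semidefinite weights v_j v_j^T preserve these properties in the Loewner sense. The sketch below checks (2) and (3) numerically at sample points.

```python
import numpy as np

V = np.array([[1.0, 0.5], [0.2, 1.0]])                 # columns are v_1, v_2
M = [np.outer(V[:, j], V[:, j]) for j in range(2)]     # PSD weight matrices

def F(x):
    """Toy convex, monotonically non-increasing map F(x) = sum_j v_j v_j^T / x_j."""
    return sum(Mj / xj for Mj, xj in zip(M, x))

def dF(x, d):
    """Directional derivative F'(x)(d) = -sum_j (d_j / x_j^2) v_j v_j^T."""
    return -sum(dj * Mj / xj**2 for Mj, xj, dj in zip(M, x, d))

x0 = np.array([1.0, 2.0]); x1 = np.array([1.5, 2.5]); d = np.array([0.3, 0.7])

mono_ok = np.linalg.eigvalsh(dF(x0, d)).max() <= 1e-12    # (2): F'(x)d ⪯ 0 for d ≥ 0
gap = F(x1) - F(x0) - dF(x0, x1 - x0)
conv_ok = np.linalg.eigvalsh(gap).min() >= -1e-12         # (3): convexity inequality
```

Both checks succeed because, componentwise, −d_j/x_j² ≤ 0 and 1/y − 1/x + (y − x)/x² ≥ 0 for x, y > 0, and PSD-weighted sums of such scalars inherit the Loewner-order inequalities.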
In this section we derive a sufficient criterion for the unique solvability of the finite-dimensional inverse problem (1) and for reformulating it as a convex optimization problem. Our criterion may appear technical at first glance, but we stress that it only requires finitely many evaluations of directional derivatives of F, so that it can easily be checked in practice. Moreover, for the inverse Robin transmission problem considered in section 3, we will show that the criterion is always fulfilled if sufficiently many measurements are taken. Hence, the criterion makes it possible to constructively determine the number of measurements required for a certain resolution and for the convex reformulation, by simply increasing the number of measurements until the criterion is fulfilled.
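The claim that only finitely many evaluations are needed can be made concrete: with forward evaluations of F alone, the directional derivatives can be approximated by difference quotients, and the criterion value λ is a finite minimum of largest eigenvalues. A sketch for the toy map F(x) = Σ_j x_j^{-1} v_j v_j^T, with placeholder points and directions standing in for the z_{j,k} and d_j of the criterion:

```python
import numpy as np

V = np.array([[1.0, 0.5], [0.2, 1.0]])
M = [np.outer(V[:, j], V[:, j]) for j in range(2)]

def F(x):
    return sum(Mj / xj for Mj, xj in zip(M, x))

def dF_fd(z, d, h=1e-6):
    """Forward-difference approximation of the directional derivative F'(z)(d)."""
    return (F(z + h * d) - F(z)) / h

z_pts = [np.array([1.0, 2.0]), np.array([1.5, 1.5])]     # placeholder z_{j,k}
dirs = [np.array([-1.0, 1.0]), np.array([1.0, -1.0])]    # placeholder d_j

# criterion value: a finite minimum over all point/direction combinations
lam = min(np.linalg.eigvalsh(dF_fd(z, d)).max() for z in z_pts for d in dirs)
```

For this example the minimum is positive, so the criterion is fulfilled; in practice each forward-difference evaluation costs one additional forward solution.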
To formulate our result, let e_j ∈ R^n denote the j-th unit vector, let 𝟙 ∈ R^n denote the vector of ones, and let e'_j := 𝟙 − e_j denote the vector containing zero in the j-th component and ones in all others. For a matrix A ∈ R^{n×n}, ‖A‖₂ denotes the spectral norm, and for a number λ ∈ R, ⌈λ⌉ denotes its ceiling, i.e., the least integer greater than or equal to λ.
Theorem 1 Let F : R^n_+ → S_m, n, m ≥ 2, be continuously differentiable, convex and monotonically non-increasing, let b > a > 0, and let x̂ ∈ [a,b]^n and Ŷ := F(x̂). Let z_{j,k} ∈ R^n_+ and d_j ∈ R^n (j = 1, …, n, k = 2, …, K, with K ∈ N) be the finitely many evaluation points and directions constructed from the bounds a, b and the dimension n, and assume that the criterion

  λ := min_{j=1,…,n, k=2,…,K} λ_max(F'(z_{j,k})(d_j)) > 0

is fulfilled. Then the following holds:
(a) x̂ is the unique minimizer of the convex optimization problem

  minimize 𝟙^T x subject to F(x) ⪯ Ŷ, x ∈ [a,b]^n.   (4)

(b) For noisy data Y^δ ∈ S_m with ‖Y^δ − Ŷ‖₂ ≤ δ, δ > 0, the convex optimization problem

  minimize 𝟙^T x subject to F(x) ⪯ Y^δ + δI, x ∈ [a,b]^n   (5)

possesses a minimum, and every such minimum x^δ fulfills the error estimate ‖x^δ − x̂‖_∞ ≤ 2(n−1)δ/λ.

To prove Theorem 1 we will show the following lemmas.
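To see the reformulation at work on a small example, the following sketch (illustrative only; a genuine implementation would call a semidefinite programming solver rather than a grid search) recovers x̂ as the unique minimizer of the linear cost under the semidefinite constraint F(x) ⪯ F(x̂), for the toy map F(x) = Σ_j x_j^{-1} v_j v_j^T:

```python
import numpy as np

V = np.array([[1.0, 0.5], [0.2, 1.0]])
M = [np.outer(V[:, j], V[:, j]) for j in range(2)]

def F(x):
    """Toy convex, monotonically non-increasing map F(x) = sum_j v_j v_j^T / x_j."""
    return sum(Mj / xj for Mj, xj in zip(M, x))

a, b = 1.0, 2.0
x_true = np.array([1.3, 1.7])
Y_hat = F(x_true)                       # exact data

def feasible(x, tol=1e-9):
    # semidefinite constraint F(x) ⪯ Y_hat, checked via the largest eigenvalue
    return np.linalg.eigvalsh(F(x) - Y_hat).max() <= tol

# crude grid search over [a,b]^2 minimizing the linear cost 1^T x
grid = np.linspace(a, b, 101)
best, best_cost = None, np.inf
for x1 in grid:
    for x2 in grid:
        if x1 + x2 < best_cost and feasible(np.array([x1, x2])):
            best, best_cost = np.array([x1, x2]), x1 + x2
```

Since the vectors v_j are linearly independent, the constraint F(x) ⪯ F(x̂) forces x ≥ x̂ componentwise in this toy model, so the cost-minimal feasible point is exactly x̂.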
Lemma 1 Let F : R^n_+ → S_m, n, m ≥ 2, be continuously differentiable, convex and monotonically non-increasing, and let b ≥ a > 0. Assume that there exists λ > 0 with

  λ_max(F'(x)((n−1)e'_j − e_j)) ≥ λ for all x ∈ [a,b]^n and j ∈ {1, …, n}.   (6)

Then, for all x ∈ [a,b]^n and all d ∈ R^n with d ≠ 0 and 𝟙^T d ≤ 0,

  λ_max(F'(x)d) > 0,   (7)

and, for all x, y ∈ [a,b]^n with y ≠ x and 𝟙^T y ≤ 𝟙^T x,

  λ_max(F(y) − F(x)) > 0.   (8)

Proof We will show that, for all admissible d ∈ R^n, a quantitative version (9) of the estimate λ_max(F'(x)d) > 0 holds, which clearly implies (7). (8) then follows from (7) by the convexity property (3), since F(y) − F(x) ⪰ F'(x)(y − x). We prove (9) by contraposition and assume that there exists an index k ∈ {1, …, n} for which the claimed estimate fails. Distinguishing the two possible sign configurations of d, in both cases it follows that d ≤ t((n−1)e'_k − e_k) for some t > 0. Hence, by (6) and monotonicity (2),

  λ_max(F'(x)d) ≥ t λ_max(F'(x)((n−1)e'_k − e_k)) ≥ tλ > 0,

which yields a contradiction, so that (9) is proven. ✷

Remark 1 Lemma 1 can be considered a converse monotonicity result, as it yields that, for all x, y ∈ [a,b]^n with y ≠ x, F(y) ⪯ F(x) implies 𝟙^T y > 𝟙^T x.

Lemma 2 Let F : R^n_+ → S_m, n, m ≥ 2, be continuously differentiable, convex and monotonically non-increasing, and let b ≥ a > 0. If

  λ := min_{j=1,…,n, k=2,…,K} λ_max(F'(z_{j,k})(d_j)) > 0,

where z_{j,k} ∈ R^n_+, d_j ∈ R^n, and K ∈ N are defined as in Theorem 1, then

  λ_max(F'(x)((n−1)e'_j − e_j)) ≥ λ for all x ∈ [a,b]^n and j ∈ {1, …, n}.

Proof (of Lemma 2) The assertion reduces to an intermediate comparison estimate (10) between F'(x)((n−1)e'_j − e_j) and the finitely many directional derivatives F'(z_{j,k})(d_j). Hence, if (10) is proven, then the assertion follows from the definition of λ.

To prove (10), let j ∈ {1, …, n} and x ∈ [a,b]^n. We define t := x_j + a/(2(n−1)). Then, for all 0 ≤ δ ≤ a/(4(n−1)), the point x can be compared with one of the evaluation points z_{j,k}, so that we obtain (10) from monotonicity (2) and convexity (3). This proves (10) and thus the assertion. ✷

Proof of Theorem 1 Under the assumptions of Theorem 1, it follows from Lemma 2 that the assumptions of Lemma 1 are fulfilled, so that (8) holds for all x, y ∈ [a,b]^n. In particular, this yields that Ŷ := F(x̂) uniquely determines x̂ ∈ [a,b]^n. Moreover, for every x ∈ [a,b]^n with x ≠ x̂ and F(x) ⪯ Ŷ = F(x̂), we obtain from Remark 1 that 𝟙^T x > 𝟙^T x̂, which shows that x̂ is the unique minimizer of (4). This proves Theorem 1(a).

To prove Theorem 1(b), we note that the set of all x ∈ [a,b]^n with F(x) ⪯ Y^δ + δI is compact and non-empty, since it contains x̂ (note that Ŷ = F(x̂) ⪯ Y^δ + δI by ‖Y^δ − Ŷ‖₂ ≤ δ). Hence, at least one minimizer of (5) exists. Every minimizer x^δ ∈ [a,b]^n fulfills 𝟙^T x^δ ≤ 𝟙^T x̂, since a larger value would contradict the minimality of x^δ, and F(x^δ) − F(x̂) ⪯ Y^δ + δI − Ŷ ⪯ 2δI. Arguing as in the proof of Lemma 1, this bounds the components of x^δ − x̂ in terms of δ and λ, which yields the error estimate. ✷
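The noisy-data variant can be illustrated in the same toy setting (again a sketch, not the paper's setting): with ‖Y^δ − Ŷ‖₂ ≤ δ, the constraint is relaxed to F(x) ⪯ Y^δ + δI, which keeps the true parameter feasible, and the minimizer moves only slightly.

```python
import numpy as np

V = np.array([[1.0, 0.5], [0.2, 1.0]])
M = [np.outer(V[:, j], V[:, j]) for j in range(2)]

def F(x):
    return sum(Mj / xj for Mj, xj in zip(M, x))

a, b = 1.0, 2.0
x_true = np.array([1.3, 1.7])
delta = 0.01

rng = np.random.default_rng(0)
E = rng.standard_normal((2, 2))
E = 0.5 * (E + E.T)
E *= delta / np.linalg.norm(E, 2)       # symmetric noise with spectral norm = delta
Y_delta = F(x_true) + E                 # noisy data, ||Y_delta - F(x_true)||_2 <= delta

def feasible(x, tol=1e-9):
    # relaxed constraint F(x) ⪯ Y_delta + delta * I keeps x_true feasible
    return np.linalg.eigvalsh(F(x) - Y_delta - delta * np.eye(2)).max() <= tol

grid = np.linspace(a, b, 101)
cands = [(x1, x2) for x1 in grid for x2 in grid if feasible(np.array([x1, x2]))]
x_delta = np.array(min(cands, key=lambda p: p[0] + p[1]))
err = np.abs(x_delta - x_true).max()
```

Feasibility of x_true follows because F(x_true) − Y_delta − δI = −E − δI has no positive eigenvalue when ‖E‖₂ ≤ δ; the recovered x_delta then deviates from x_true only on the scale of the noise level.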

Application to an inverse elliptic coefficient problem
We will now study the problem of determining a Robin transmission coefficient in an elliptic PDE from finitely many measurements. Using Theorem 1, we will show that this inverse coefficient problem can be rewritten as a uniquely solvable convex non-linear semidefinite optimization problem if enough measurements are used. This also gives a constructive criterion for whether a certain number of measurements suffices to determine the Robin parameter with a given desired resolution by convex optimization, and yields an error estimate for noisy data.

The infinite-dimensional inverse Robin transmission problem
Let Ω ⊂ R^d (d ≥ 2) be a bounded domain, and let D ⊂ Ω be an open subset with D̄ ⊂ Ω. Ω and D are assumed to have Lipschitz boundaries ∂Ω and Γ := ∂D, and Ω \ D̄ is assumed to be connected. We consider the inverse problem of recovering the coefficient γ ∈ L^∞_+(Γ) in the elliptic Robin transmission problem (11)–(14): the solution u^g_γ ∈ H¹(Ω) is harmonic in Ω \ Γ, fulfills the Neumann boundary condition ∂_ν u^g_γ = g on ∂Ω, and fulfills the Robin transmission conditions on Γ, i.e., its normal derivative is continuous across Γ and agrees there with γ times the jump of u^g_γ. The measurements are given by the Neumann-Dirichlet operator

  Λ(γ) : g ↦ u^g_γ|_∂Ω,

where u^g_γ ∈ H¹(Ω) solves (11)–(14).
We summarize and reformulate some known results on the Neumann-Dirichlet operator that motivate why the corresponding finite-dimensional inverse problem can be treated with the methods from section 2. In the following theorem, "≤" is to be understood pointwise almost everywhere for L^∞-functions, and "⪯" is the Loewner order on the space of self-adjoint operators.
Proof Fréchet differentiability, monotonicity and convexity of Λ are shown in [10, Lemma 5], cf. also [11, Lemma 4.1]. (b) follows from the localized potentials result in [11, Lemma 4.3]. The "only if" part in (c) follows from (a), and the "if" part in (c) easily follows from (b) together with the convexity inequality in (a). ✷

The inverse problem with finitely many measurements
We now consider the inverse Robin transmission problem with finite resolution and finitely many measurements as in [10]. We assume that the unknown coefficient function γ ∈ L ∞ + (Γ ) is piecewise constant on an a-priori known partition of Γ , i.e.
where Γ_1, …, Γ_n, n ≥ 2, are pairwise disjoint measurable subsets of Γ. For ease of notation, in the following we identify a piecewise constant function γ ∈ L^∞(Γ) with the vector γ = (γ_1, …, γ_n)^T ∈ R^n. We also assume that we know a-priori bounds b > a > 0, so that γ ∈ [a,b]^n. We aim to reconstruct γ ∈ [a,b]^n from finitely many measurements of Λ(γ). More precisely, we assume that (g_j)_{j∈N} ⊆ L²(∂Ω) has dense span in L²(∂Ω), and that we can measure

  F(γ) := ( ∫_∂Ω g_j Λ(γ) g_k ds )_{j,k=1,…,m} ∈ R^{m×m}

for some number m ∈ N. Thus, the question whether a certain number of measurements determines the unknown coefficient with a certain resolution can be written as the problem to determine γ ∈ [a,b]^n from F(γ), which is of the form (1). Using our results from section 2, we can now show that this inverse problem is uniquely solvable if sufficiently many measurements are used, and that it can be equivalently reformulated as a convex semidefinite program.
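Given any forward solver for the Neumann-Dirichlet operator, the measurement matrix F(γ) is just a Galerkin-type matrix of boundary integrals. The following sketch shows the assembly step; `apply_ntd` is a hypothetical placeholder (a crude symmetric surrogate that decreases with γ), whereas a real implementation would evaluate Λ(γ)g by solving the PDE, e.g., with finite elements.

```python
import numpy as np

def apply_ntd(gamma, g, nodes):
    # HYPOTHETICAL stand-in for the Neumann-Dirichlet map g -> Λ(γ)g:
    # a symmetric kernel, decreasing in gamma. A real solver would return
    # the boundary trace of the PDE solution instead.
    K = np.exp(-np.abs(nodes[:, None] - nodes[None, :])) / (1.0 + gamma.mean())
    return (K @ g) * (nodes[1] - nodes[0])

def measurement_matrix(gamma, basis, nodes):
    """Assemble F(γ)_jk = ∫_∂Ω g_j Λ(γ) g_k ds by uniform quadrature."""
    w = nodes[1] - nodes[0]                 # uniform quadrature weight
    Lg = [apply_ntd(gamma, g, nodes) for g in basis]
    m = len(basis)
    Fm = np.array([[w * (basis[j] @ Lg[k]) for k in range(m)] for j in range(m)])
    return 0.5 * (Fm + Fm.T)                # enforce exact symmetry

nodes = np.linspace(0.0, 1.0, 64, endpoint=False)              # boundary parametrization
basis = [np.cos(2 * np.pi * k * nodes) for k in range(1, 4)]   # m = 3 input currents
Fmat = measurement_matrix(np.array([1.0, 2.0]), basis, nodes)
```

Even with the surrogate solver, the assembled matrix is symmetric, and increasing γ decreases it in the Loewner order, mirroring the monotonicity used in the theory.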
Proof F(γ) is a symmetric matrix since Λ(γ) is self-adjoint. Fréchet differentiability, monotonicity and convexity of F : R^n_+ → S_m immediately follow from the corresponding properties of Λ in Theorem 2. For all j = 1, …, n and k = 2, …, K, we have that Λ'(z_{j,k})d_j is not negative semidefinite by Theorem 2(b). By density, it follows that for all j = 1, …, n and k = 2, …, K there exists m ∈ N so that λ_max(F'(z_{j,k})d_j) > 0, and since there are only finitely many such combinations of j and k, there exists m ∈ N so that all these matrices possess a positive eigenvalue. Hence, the assertions follow from Theorem 1. ✷

Conflict of interest and data availability statement
The author declares that he has no conflict of interest. Data sharing is not applicable to this article, as no datasets were generated or analysed during the current study.