Monotonicity-Based Regularization for Shape Reconstruction in Linear Elasticity

We deal with the shape reconstruction of inclusions in elastic bodies. In practice, this inverse problem is solved via data-fitting functionals, which perform better than the rigorous monotonicity methods from [5] but lack a rigorously proven convergence theory. We therefore show how the monotonicity methods can be converted into a regularization method for a data-fitting functional without losing their convergence properties, a significant advantage over standard regularization techniques. In more detail, we introduce constraints on the minimization problem of the residual based on the monotonicity methods and prove the existence and uniqueness of a minimizer as well as the convergence of the method for noisy data. In addition, we compare numerical reconstructions of inclusions based on the monotonicity-based regularization with a standard approach (one-step linearization with Tikhonov-like regularization), which also demonstrates the robustness of our method with respect to noise in practice.


Introduction
The main motivation is the non-destructive testing of elastic structures, as required for material examinations, in exploration geophysics, and in medical diagnostics (elastography). From a mathematical point of view, this constitutes an inverse problem, since we only have measurement data on the boundary and not inside the elastic body. This problem is highly ill-posed: even the smallest measurement errors can completely falsify the result.
Several authors have dealt with the theory of the inverse problem of elasticity. For the two-dimensional case, we refer the reader to [14,21,15,17]. In three dimensions, [22,23] and [8] proved uniqueness results for both Lamé coefficients under the assumption that µ is close to a positive constant. [2,3] proved uniqueness for partial data, where the Lamé parameters are piecewise constant, and boundary determination results were shown in [20,22,17].
Solution methods applied so far to the inverse problem solved in this paper were presented in the following works: In [24] and [25], the time-independent inverse problem of linear elasticity is solved by means of the adjoint method and the reconstruction is simulated numerically. In addition, [26] deals with the coupling of the state and adjoint equations and adds two variants of residual-based stabilization to solve the inverse linear elasticity problem for incompressible plane stress. A boundary element-Landweber method for the Cauchy problem in stationary linear elasticity was investigated in [19]. In [13], the stationary inverse problem was likewise solved by means of a Landweber iteration and numerical examples were presented. Reciprocity principles for the detection of cracks in elastic bodies were investigated, for example, in [1] and [27] or, more recently, in [9]. By means of a regularization approach, a stationary elastic inverse problem is solved in [16] and applied in numerical examples. [18] introduces a regularized boundary element method. Finally, we want to mention the monotonicity methods for linear elasticity developed by the authors of this paper in [5], as well as their application to the reconstruction of inclusions based on experimental data in [7].
We want to point out that the topic of this work is the reconstruction of the support of the Lamé parameters, also called the shape in this paper, and not the reconstruction of their values. The key issue of the shape reconstruction of inclusions is the monotonicity property of the corresponding Neumann-to-Dirichlet operator (see [28,29]). These monotonicity properties were also applied for the construction of monotonicity tests for electrical impedance tomography (EIT), e.g., in [12], as well as for the monotonicity-based regularization in [11]. In practice, however, data-fitting functionals provide better results than the monotonicity methods, but they are usually not convex (see, e.g., [10]). Even for exact data, therefore, it cannot generally be guaranteed that the algorithm does not erroneously deliver a local minimum. In addition, one has to deal with noise and ill-posedness. The local convergence theory of Newton-like methods requires non-linearity assumptions such as the tangential cone condition, which are still not proven even for simpler examples such as EIT. The convergence theory of Tikhonov-regularized data-fitting functionals applies to their global minima, which in general cannot be found due to the non-convexity. Our method is based on the minimization of a convex functional and is, to the knowledge of the authors, the first rigorously convergent method for this problem, but it only provides the shape of the inclusions. We combine the monotonicity methods (cf. [6] and [5]) with data-fitting functionals to obtain convergence results and an improvement of both methods regarding stability for noisy data. Here, we want to remark that, compared to other data-fitting methods, we use the following a priori assumptions: the Lamé parameters fulfill monotonicity relations and have a common support, the lower and upper bounds of the contrasts of the anomalies are known, and we deal with a constant and known background material.
Compared with [11], we expand the approach used there from the consideration of only one parameter to two parameters.
The outline of the paper is as follows: We start with the introduction of the problem statement. In order to detect and reconstruct inclusions in elastic bodies, we aim to determine the difference between an unknown Lamé parameter pair (λ, µ) and that of the known background (λ_0, µ_0) and formulate a minimization problem. Similar to the linearized monotonicity tests in [5], we also consider the Fréchet derivative, which approximates the difference between two Neumann-to-Dirichlet operators. For solving the resulting minimization problem, we first take a look at a standard approach (standard one-step linearization method). For this, regularization parameters are introduced, which can only be determined heuristically, for example via a parameter study. We would like to point out that this method is only a heuristic approach, but one that is commonly used in practice. Overall, this heuristic approach leads to reconstructions of the unknown inclusions without a rigorous theory. In Section 4, we focus on the monotonicity-based regularization in order to enhance the data-fitting functionals. The idea of the regularization is to introduce conditions on the parameters/inclusions to be reconstructed in the minimization problem, which are based on the monotonicity properties of the Neumann-to-Dirichlet operator and the monotonicity tests. Further on, we prove that there exists a unique minimizer for this problem and that we obtain convergence even for noisy data. Finally, we compare numerical reconstructions of inclusions based on the monotonicity-based regularization with the one-step linearization, which also shows the robustness of our method with respect to noise in practice.

Problem Statement
We start with the introduction of the problems of interest, i.e., the direct as well as the inverse problem of stationary linear elasticity.
Let Ω ⊂ R^d (d = 2 or 3) be a bounded and connected open set with Lipschitz boundary ∂Ω = Γ = Γ_D ∪ Γ_N, Γ_D ∩ Γ_N = ∅, where Γ_D and Γ_N are the corresponding Dirichlet and Neumann boundaries. We assume that Γ_D and Γ_N are relatively open and connected. For the following, let u : Ω → R^d be the displacement vector, λ, µ ∈ L^∞_+(Ω) the Lamé parameters, ∇̂u = (1/2)(∇u + (∇u)^T) the symmetric gradient, n the normal vector pointing outside of Ω, g ∈ L^2(Γ_N)^d the boundary force, and I the d × d identity matrix. Further, we define the divergence of a matrix A ∈ R^{d×d} row-wise, where e_i is a unit vector and x_j a component of a vector from R^d.
The boundary value problem of linear elasticity (direct problem) is to find u ∈ H^1(Ω)^d that solves (1)-(3). From a physical point of view, this means that we deal with an elastic test body Ω which is fixed (zero displacement) at Γ_D (Dirichlet condition) and to which a force g is applied on Γ_N (Neumann condition). This results in the displacement u, which is measured on the boundary Γ_N.
The equivalent weak formulation of the boundary value problem (1)-(3) is given by the variational formulation (4). We want to remark that for λ, µ ∈ L^∞_+(Ω), the existence and uniqueness of a solution to (4) follows from the Lax-Milgram theorem (see, e.g., [4]).
Measuring the boundary displacements that result from applying forces to Γ_N can be modeled by the Neumann-to-Dirichlet operator Λ(λ, µ). This operator is self-adjoint, compact and linear (see Corollary 1.1 from [5]). Its associated bilinear form involves the solution u^g_{(λ,µ)} of problem (1)-(3) and the solution u^h_{(λ,µ)} of the corresponding problem with boundary force h ∈ L^2(Γ_N)^d.
Next, we take a look at the discrete setting. Let the Neumann boundary Γ_N be the union of the patches Γ^(l)_N, where g_l, l = 1, . . . , M, denote the M given boundary forces applied to the corresponding patches Γ^(l)_N. In order to discretize the Neumann-to-Dirichlet operator, we apply a boundary force g_l on the patch Γ^(l)_N. For the Neumann boundary forces as described here, the g_l form an orthogonal system in L^2(Γ_N)^d. In practice, we additionally normalize the system g_l and use more patches Γ^(l)_N.

Standard One-step Linearization Methods
In this section we take a look at one-step linearization methods. We want to remark that these methods are only a heuristic approach, but they are commonly used in practice.
We compare the matrix of the discretized Neumann-to-Dirichlet operator Λ(λ, µ) with Λ(λ_0, µ_0) for some reference Lamé parameters (λ_0, µ_0) in order to reconstruct the difference (λ, µ) − (λ_0, µ_0). Thus, we apply a single linearization step. For the solution of the problem, we discretize the reference domain Ω into disjoint pixels B_j, j = 1, . . . , p, where χ_{B_j} is the characteristic function w.r.t. the pixel B_j, and set κ = (κ_j)_{j=1}^p ∈ R^p and ν = (ν_j)_{j=1}^p ∈ R^p. This approach leads to the linear equation system (12), where V and the columns of the sensitivity matrices S_λ and S_µ contain the entries of Λ(λ_0, µ_0) − Λ(λ, µ) and of the discretized Fréchet derivative for a given B_j, j = 1, ..., p, respectively. Solving (12) results in a standard minimization problem for the reconstruction of the unknown parameters. In order to determine suitable parameters (κ, ν), we regularize the minimization problem with regularization parameters ω and σ. For solving this minimization problem we consider the normal equation. Obtaining a solution for this system is memory-expensive, and finding two suitable parameters ω and σ can be time-consuming, since we can only choose them heuristically. However, the parameter reconstruction provides good results, as shown in the next part.
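The one-step linearization with Tikhonov-like regularization can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the sensitivity matrices S_lam, S_mu and the data vector v are random stand-ins for the quantities in (12), and the regularization weights are arbitrary.

```python
import numpy as np

# Synthetic stand-ins for the sensitivity matrices and the data vector;
# in the paper these come from the discretized Frechet derivative and
# the measured difference Lambda(lam0, mu0) - Lambda(lam, mu).
rng = np.random.default_rng(0)
m, p = 64, 20                        # M^2 measurement entries, p pixels
S_lam = rng.standard_normal((m, p))
S_mu = rng.standard_normal((m, p))
kappa_true = np.zeros(p); kappa_true[3] = 1.0
nu_true = np.zeros(p); nu_true[3] = 0.5
v = S_lam @ kappa_true + S_mu @ nu_true      # exact synthetic data

# Stacked system and Tikhonov-like penalty with weights omega, sigma
S = np.hstack([S_lam, S_mu])
omega, sigma = 1e-8, 1e-8                    # heuristic choices
R = np.diag(np.concatenate([omega * np.ones(p), sigma * np.ones(p)]))

# Normal equation (S^T S + R) x = S^T v of the regularized problem
x = np.linalg.solve(S.T @ S + R, S.T @ v)
kappa, nu = x[:p], x[p:]
```

For exact data and small regularization weights, the regularized solution essentially coincides with the true pixel values; the heuristic difficulty discussed in the text is choosing ω and σ when the data are noisy.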

Numerical Realization
We present a simple test model, where we consider a cube of a biological tissue with two inclusions (tumors) as depicted in Figure 2. The Lamé parameters of the corresponding materials are given in Table 1.

Exact Data
First of all, we take a look at the example without noise, which means we assume we are given exact data.
In order to obtain a suitable visualization of the 3D reconstruction, we manipulate the transparency function α : R → [0, 1] of Figure 4, as exemplarily depicted for the Lamé parameter µ in Figure 3. It should be noted that a low transparency parameter indicates that the corresponding color (here, the colors around zero) is plotted with high transparency, while a high α indicates that the corresponding color is plotted opaquely. The reason for this choice is that values of the calculated difference κ = µ_1 − µ_0 close to zero are not an indication of an inclusion, while values with a higher absolute value indicate an inclusion. Hence, this choice of transparency is suitable to plot the calculated inclusions without them being covered by white tetrahedrons with values close to zero. Further, the reader should observe that α(κ) > 0 for all values of κ, so that all tetrahedrons are plotted, and that the transparency plot for ν takes the same shape but is adjusted to the range of the calculated values.
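A map of this kind can be sketched as follows; the linear scaling and the floor value 0.05 are our own illustrative choices, not the exact function used for the figures.

```python
import numpy as np

# Illustrative transparency map: values of the reconstructed difference
# near zero become nearly transparent, large magnitudes opaque. The
# positive floor keeps alpha(kappa) > 0, so every tetrahedron is drawn.
def alpha(kappa, kappa_max, floor=0.05):
    a = np.abs(np.asarray(kappa, dtype=float)) / kappa_max
    return np.clip(a, floor, 1.0)
```

For a second parameter (here ν), the same map is reused with `kappa_max` adjusted to the range of the calculated values, matching the remark above.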
The following results (Figures 4 and 5) are based on a parameter search, where the regularization parameters are chosen heuristically. Thus, we only present the results with the best parameter choice (ω = 10^{-17} and σ = 10^{-13}) and reconstruct the difference in the Lamé parameters µ and λ. With these regularization parameters, the two inclusions are detected and reconstructed correctly for µ (see Figure 4, left-hand side) and the value of µ − µ_0 is in the correct amplitude range, as depicted in Figure 5. Figure 4 also shows that the reconstruction does not work for λ − λ_0. The reason is that the ranges of the Lamé parameters differ from each other by a factor of about 10^2 (λ ≈ 100 · µ), i.e., the signatures of µ are represented far more strongly in the calculation of V than those of λ.

Noisy Data
Next, we go over to noisy data. We assume that we are given a noise level η ≥ 0, define the noisy operator Λ^δ(λ, µ) accordingly, and obtain the absolute noise level δ as in (18). In the following examples, we consider relative noise levels of η = 1% (Figure 6) and η = 10% (Figures 8 and 9) with respect to the Frobenius norm as given in (19), where the regularization parameters are chosen heuristically and given in the captions of the figures.
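The noise model can be illustrated numerically. The following sketch (our own, with a random matrix standing in for the discretized operator) scales a random symmetric perturbation so that the relative error in the Frobenius norm equals exactly η:

```python
import numpy as np

# A stands in for the discretized Neumann-to-Dirichlet matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 10))
A = 0.5 * (A + A.T)                   # keep the matrix symmetric

eta = 0.01                            # relative noise level, e.g. 1%
E = rng.standard_normal(A.shape)
E = 0.5 * (E + E.T)
# Scale the perturbation so that ||A_noisy - A||_F = eta * ||A||_F
A_noisy = A + eta * np.linalg.norm(A, 'fro') / np.linalg.norm(E, 'fro') * E
delta = np.linalg.norm(A_noisy - A, 'fro')    # absolute noise level
```

The absolute level `delta` then plays the role of δ in the convergence statements for noisy data.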
In Figure 6, we observe that for a low noise level of η = 1%, we obtain a suitable reconstruction of the inclusions concerning the Lamé parameter µ, while the reconstruction of λ fails again. In contrast to the low noise level (η = 1%), Figures 8 and 9 show that the standard one-step linearization method has problems handling higher noise levels (η = 10%). As such, in the 3D reconstruction (see Figure 8) it is hard to recognize the two inclusions even with respect to the Lamé parameter µ. Further, in the plots of the cuts in Figure 9, the reconstructions of the inclusions are blurred out.
Figure 9: Shape reconstruction of two inclusions of the reconstructed difference in the Lamé parameter µ for the regularization parameters ω = 6 · 10^{-17} and σ = 6 · 10^{-13}, depicted as cuts with relative noise η = 10%.

Remark 1.
All in all, the numerical experiments of this section motivate the consideration of a modified minimization problem in order to obtain a stable method for noisy data as well as a good reconstruction for the Lamé parameter λ. In doing so, we will combine the idea of the standard one-step linearization with the monotonicity method.

Enhancing the Standard Residual-based Minimization Problem
We summarize and present the required results concerning the monotonicity properties of the Neumann-to-Dirichlet operator as well as the monotonicity methods introduced and proven in [6] and [5].
We define the outer support in correspondence to [5] as follows: let φ = (φ_1, φ_2) : Ω → R^2 be a measurable function. The outer support out_∂Ω supp(φ) is the complement (in Ω) of the union of those relatively open sets U ⊆ Ω that are connected to ∂Ω and on which φ|_U = 0.

Corollary 3.
Linearized monotonicity test for noisy data (Corollary 2.9 from [5]). Let Λ^δ be the Neumann-to-Dirichlet operator for noisy difference measurements with noise level δ > 0. Then for every open set B ⊆ Ω there exists a noise level δ_0 > 0 such that for all 0 < δ < δ_0, B is correctly detected as inside or not inside the inclusion D by the monotonicity test. Finally, we present the result (see Figure 10) obtained from noisy data Λ^δ with the linearized monotonicity method as described in Corollary 3, where we use the same pixel partition as for the one-step linearization method. Figure 10: Shape reconstruction of two inclusions (red) for α_λ = 0.28(λ_1 − λ_0) ≈ 4.6 · 10^5 Pa, α_µ = 0.28(µ_1 − µ_0) ≈ 4.7 · 10^3 Pa with relative noise η = 0.1% and δ = 1.88 · 10^{-10}. The monotonicity method yields a coarser reconstruction (see Figure 10, where the two inclusions are not separated) than the theoretically unproven heuristic one-step linearization (see Figure 6, where the two inclusions are separated). Thus, we improve the standard one-step linearization method by combining it with the monotonicity method without losing the convergence results.

Monotonicity-based Regularization
We assume again that the background (λ_0, µ_0) is homogeneous and that the contrasts of the anomalies (γ_λ, γ_µ)^T ∈ L^∞_+(D)^2 are bounded from below and above for all x ∈ D (a.e.). Here, D is an open set denoting the anomalies, and the parameters λ_0, µ_0, c_λ, c_µ, C_λ and C_µ are assumed to be known. In addition, we want to remark that Ω \ D has to be connected.
In doing so, we can also handle more general Lamé parameters and not only piecewise constant parameters as in the previous section.
The main idea of monotonicity-based regularization is to minimize the residual of the linearized problem (25) with constraints on (κ, ν) that are obtained from the monotonicity properties introduced in Lemmas 1 and 2. Our aim is to rewrite the minimization problem (25) for the case µ = µ_0, λ = λ_0 in Ω \ D in order to be able to reconstruct the inclusions also with respect to λ. Our intention is to enforce that both Lamé parameters λ(x) and µ(x) have the same shape but a different scale.
In more detail, we define the quantities a_max and τ such that the corresponding monotonicity estimates hold for all 0 ≤ a ≤ a_max.
In addition, we set the residual r(ν) such that the components of the corresponding matrix R(ν) are given by ⟨g_i, r(ν)g_j⟩. We want to remark that we use the same boundary loads g_i, i = 1, . . . , M, as in Section 2.
Finally, we introduce the admissible set C, where we set χ_k := χ_{B_k}.
Note that the set on the right-hand side of (30) is non-empty, since it contains the value zero by Corollary 1 and our assumptions λ ≥ λ_0, µ ≥ µ_0.
Then, we modify the original minimization problem (25) to min_{ν ∈ C} ‖R(ν)‖_F.
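Structurally, this is a bound-constrained linear least-squares problem: minimize the Frobenius norm of V minus a nonnegative pixel-wise combination of sensitivity matrices, with upper bounds min(a_max, β_k). The following sketch uses synthetic stand-ins for V, the pixel matrices S_k, and the bounds β_k; it is an illustration of the structure of (31), not the paper's solver.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Synthetic problem: V is built from one "active" pixel coefficient.
rng = np.random.default_rng(2)
M, p = 6, 8
S = rng.standard_normal((p, M, M))
S = 0.5 * (S + np.transpose(S, (0, 2, 1)))   # symmetric pixel matrices
a_true = np.zeros(p); a_true[2] = 0.7
V = np.einsum('k,kij->ij', a_true, S)

a_max = 1.0
beta = np.full(p, 2.0)                       # monotonicity bounds (assumed)
upper = np.minimum(a_max, beta)              # admissible set C: 0 <= a_k <= upper_k

# Vectorize: column k of A is the flattened S_k, the target is vec(V).
A = S.reshape(p, -1).T
res = lsq_linear(A, V.ravel(), bounds=(0.0, upper))
a_hat = res.x
```

Because the functional is convex and the admissible set is a box, the constrained minimizer is unique here and is recovered reliably, in contrast to the unconstrained, possibly non-convex data fitting discussed in the introduction.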

Remark 3. We want to remark that β_k is defined via the infinite-dimensional Neumann-to-Dirichlet operator Λ(λ, µ) and does not involve the finite-dimensional matrix R.
For the numerical realization, we will require a discrete version β̂_k of β_k, introduced later on.

Main Results
In the following, we present our main results and show that the choices of the quantities a_max and τ introduced in (26) and (27) lead to the correct reconstruction of the supports of µ(x) and ν(x), based on the lower bounds from the monotonicity tests as stated in (28) and (29).

Theorem 1. Consider the minimization problem
The following statements hold true: (i) Problem (31) admits a unique minimizer ν̂.
(ii) supp(ν̂) and D agree up to the pixel partition, i.e., the corresponding statement holds for any pixel B_k.
Now we deal with noisy data and introduce the corresponding residual r^δ(ν). Based on this, R^δ(ν) represents the matrix (⟨g_i, r^δ(ν)g_j⟩)_{i,j=1,...,M}.
Further on, the admissible set C^δ for noisy data is defined analogously. Based on this, we present the following stability result.

Theorem 2. Consider the minimization problem
The following statements hold true: (i) Problem (34) admits a minimizer.

Remark 4.
In [5], we used monotonicity methods to solve the inverse problem of shape reconstruction. In Theorem 1 and Theorem 2, we applied the same monotonicity methods to construct constraints for the residual-based inversion technique. Both methods have a rigorously proven convergence theory; however, the monotonicity-based regularization approach turns out to be more stable with respect to noise.

Theoretical Background
In order to prove Theorem 1 as well as Theorem 2, we first collect the following auxiliary results.
Proof. We adopt the proof of Lemma 3.4 from [11].
Step 1: First, we verify that B_k ⊆ D implies β_k > 0. In fact, by applying the monotonicity principle (22), multiplied by −1, for λ_1 := λ, µ_1 := µ and λ_2 := λ_0, µ_2 := µ_0, we end up with the corresponding inequalities for all pixels B_k, all a ∈ [0, a_max] and all g ∈ L^2(Γ_N)^d. In these inequalities, we used the shorthand notation u^g_0 for the unique solution u^g_{(λ_0,µ_0)}. The last inequality holds due to the fact that a_max and τ fulfill (28) and (29) in D and that B_k lies inside D.
We want to remark that, compared with the corresponding proof in [11], this shows that we require conditions on a_max as well as on τ (cf. Equations (28) and (29)), due to the fact that we deal with two unknown parameters (λ and µ) instead of one.
Step 2: In order to prove the other direction of the statement, let β_k > 0. We will show that B_k ⊆ D by contradiction. Assume that B_k ⊄ D and β_k > 0. Applying the monotonicity principle from Lemma 1 with the definition of β_k in (30), we are led to an estimate from which we conclude a bound for all g ∈ L^2(Γ_N)^d. On the other hand, using localized potentials in a similar procedure as in the proof of Theorem 2.1 in [5], we can find a sequence (g_m)_{m∈N} ⊂ L^2(Γ_N)^d such that the solutions (u^m_0)_{m∈N} ⊂ H^1(Ω)^d of the forward problem (with Lamé parameters λ_0, µ_0 and boundary forces g_m) violate this bound, which contradicts (35).

Lemma 4.
For all pixels B_k, denote by S^τ_k the corresponding matrix. Then S^τ_k is a positive definite matrix.
Proof. We adopt the proof of Lemma 3.5 from [11] for the matrix S τ k , which directly yields the desired result.
Proof. (Theorem 1) This proof is based on the proof of Theorem 3.2 from [11].
To (i): Since the functional is continuous, it admits a minimizer in the compact set C.
The uniqueness of ν̂ will follow from the proof of (ii), Step 3.

To (ii):
Step 1: We shall check that r(ν) ≤ 0 in the quadratic sense for all ν ∈ C. We want to remark that a_max and τ fulfill (28) and (29) in D.
We proceed similarly to the proof of Lemma 3 and use Lemma 2 for λ_1 = λ, µ_1 = µ, λ_2 = λ_0 and µ_2 = µ_0. In addition, we multiply the whole expression by −1, which yields the corresponding estimate. If a_k > 0, it follows that β_k ≥ a_k > 0, so that Lemma 3 implies that B_k ⊆ D. Since a_k ≤ a_max, we end up with ⟨g, r(ν)g⟩ ≤ 0 for all g ∈ L^2(Γ_N)^d.
Step 2: By definition of β_k, it holds that β_k ≥ â_k. This implies β_k > 0, and with Lemma 3 we have B_k ⊆ D.
Step 3: We will prove that if ν̂ = Σ_{k=1}^p â_k χ_k is a minimizer of problem (31), then â_k = min(a_max, β_k). In fact, assume that there exists a pixel B_k such that ν̂(x) < min(a_max, β_k) in B_k; then we can choose h_ν > 0 such that ν̂ + h_ν χ_k is still admissible. We will show that this contradicts the minimality of ν̂. Thus, it follows that â_k = min(a_max, β_k).

Due to Step 1, r(ν̂) ≤ 0 and r(ν̂ + h_ν χ_k) ≤ 0 in the quadratic sense. Thus, for all x = (x_1, ..., x_M)^T ∈ R^M, we have ⟨g, r(ν̂)g⟩ ≤ 0 for g = Σ_{i=1}^M x_i g_i. This means that −R(ν̂) is a positive semi-definite symmetric matrix in R^{M×M}.
Due to the fact that all eigenvalues of a positive semi-definite symmetric matrix are non-negative, it follows that the eigenvalues of R(ν̂) fulfill θ_i(ν̂) ≤ 0 for all i ∈ {1, ..., M}. By the same considerations, −R(ν̂ + h_ν χ_k) is also a positive semi-definite matrix. We want to remark that S^τ_k is positive definite, as proven in Lemma 4, and hence all M eigenvalues of S^τ_k are positive. Since the matrices R(ν̂ + h_ν χ_k), R(ν̂) and S^τ_k are symmetric, we can apply Weyl's inequalities to obtain the corresponding eigenvalue estimates for all i ∈ {1, ..., M}.
In summary, we end up with ‖R(ν̂ + h_ν χ_k)‖_F < ‖R(ν̂)‖_F, which contradicts the minimality of ν̂ and thus ends the proof of Step 3.
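The Weyl-inequality step used above can be checked numerically. The following self-contained illustration (synthetic symmetric matrices, our own construction) verifies that for symmetric A, B with eigenvalues θ_1 ≥ ... ≥ θ_M, every eigenvalue of A + B lies between θ_i(A) + θ_M(B) and θ_i(A) + θ_1(B):

```python
import numpy as np

rng = np.random.default_rng(3)
M = 7
A = rng.standard_normal((M, M)); A = 0.5 * (A + A.T)   # symmetric
B = rng.standard_normal((M, M)); B = 0.5 * (B + B.T)   # symmetric

def eig_desc(X):
    # Eigenvalues of a symmetric matrix, sorted in decreasing order
    return np.sort(np.linalg.eigvalsh(X))[::-1]

tA, tB, tAB = eig_desc(A), eig_desc(B), eig_desc(A + B)
# Weyl: tA[i] + tB[-1] <= tAB[i] <= tA[i] + tB[0] for every i
```

In the proof, B is the positive definite perturbation coming from S^τ_k, so the lower Weyl bound forces a strict shift of the eigenvalues, which drives the decrease of the Frobenius norm.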
In conclusion, problem (31) admits a unique minimizer ν̂ with â_k = min(a_max, β_k), i.e., ν̂ = Σ_{k=1}^p min(a_max, β_k) χ_k.
Next, we go over to noisy data and take a look at the following lemma, where we set V^δ := (1/2)(V^δ + (V^δ)^*), since we can always redefine the data V^δ in this way without loss of generality. Thus, we can assume that V^δ is self-adjoint.
Proof. The proof follows the lines of Lemma 3.7 in [11] with the following modifications. We have to check that β_k as given in (30) fulfills the corresponding relation for the noisy quantities. As proven in [11], V − V^δ ≥ −δI in the quadratic sense. Further on, Lemma 3.6 from [11] implies |V^δ| ≥ V^δ, since V^δ is self-adjoint. Hence, the assertion follows.
Remark 5. As a consequence, it holds that: 1. If B_k lies inside D, then β_{k,δ} ≥ a_max.
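The operator absolute value and the estimate |V^δ| ≥ V^δ can be illustrated concretely for symmetric matrices: with the eigendecomposition A = Q diag(t) Q^T one sets |A| = Q diag(|t|) Q^T, and |A| − A is then positive semi-definite. The matrix below is a random stand-in, our own sketch:

```python
import numpy as np

def matrix_abs(A):
    # Absolute value of a symmetric matrix via its eigendecomposition:
    # |A| = Q diag(|t|) Q^T
    t, Q = np.linalg.eigh(A)
    return (Q * np.abs(t)) @ Q.T

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 8))
A = 0.5 * (A + A.T)                       # symmetric stand-in for V_delta
diff_eigs = np.linalg.eigvalsh(matrix_abs(A) - A)
```

All eigenvalues of |A| − A are non-negative (each eigenvalue t of A contributes |t| − t ≥ 0), which is exactly the quadratic-sense inequality |V^δ| ≥ V^δ used in the lemma.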

Proof. (Theorem 2)
This proof is based on the proof of Theorem 3.8 in [11].
To (i): For the proof of the existence of a minimizer of (34), we argue as in the proof of Theorem 1 (i). First, we take a look at the functional defined via (R^δ(ν))_{i,j=1,...,M} := (⟨g_i, r^δ(ν)g_j⟩)_{i,j=1,...,M} with the residual (32). Since the functional (36) is continuous, there exists at least one minimizer in the compact set C^δ.
Step 2 (upper bound and limit): We shall check that a_k ≤ β_k for all k = 1, ..., p. As shown in the proof of Theorem 3.8 in [11], |V^δ| converges to |V| in the operator norm as δ goes to 0 and hence, for any fixed k, the corresponding quantities converge in the operator norm as well. As in [11], we obtain the required estimate for all g ∈ L^2(Γ_N)^3.
Step 3 (minimality of the limit): Due to Lemma 5, we know that min(a_max, β_k) ≤ min(a_max, β_{k,δ}) for all k = 1, ..., p. Thus, ν̂ belongs to the admissible set C^δ of the minimization problem (34) for all δ > 0. By minimality of ν̂_δ, we obtain the corresponding estimate. Denote by ν = Σ_{k=1}^p a_k χ_k, where the a_k are the limits derived in Step 1. With the same arguments as in the proof of Theorem 3.8 in [11], i.e., that V^δ converges to V as well as â_{k,δ} to a_k, we are led to the minimality of ν. Further on, by the uniqueness of the minimizer, we obtain ν = ν̂, that is, a_k = a_max for B_k ⊆ D.

Remark 6.
All in all, we are led to the discrete formulation of the minimization problem for noisy data under the constraints stated above. We want to mention that V is positive definite; however, V^δ in general is not, which leads to problems in the proofs. Hence, we use |V^δ| instead.
Next, we take a closer look at the determination of β̂_{k,δ} (see [11]), where β̂_{k,0} = β̂_k: First, we replace the infinite-dimensional operators |V^δ| and Λ'(λ_0, µ_0) in (33) by the M × M matrices V^δ and S^τ_k, so that we need to find β̂_{k,δ} with −aS^τ_k ≥ −δI − |V^δ| for all a ∈ [0, β̂_{k,δ}]. Due to the fact that δI + |V^δ| is a Hermitian positive-definite matrix, the Cholesky decomposition allows us to decompose it into the product of a lower triangular matrix and its conjugate transpose, i.e., δI + |V^δ| = LL^*.
For each a > 0, the positive semi-definiteness of −aS^τ_k + δI + |V^δ| is equivalent to the positive semi-definiteness of I − aL^{-1}S^τ_k(L^*)^{-1}. It should be noted that in this notation β̂_{k,M,δ} = β̂_{k−M,δ} for k = M + 1, ..., 2M. Based on this, we go over to the consideration of the eigenvalues and apply Weyl's inequality. Further, let θ_1(L^{-1}S^τ_k(L^*)^{-1}) be the largest eigenvalue of the matrix L^{-1}S^τ_k(L^*)^{-1}. Since S^τ_k is positive definite, so is L^{-1}S^τ_k(L^*)^{-1}, and thus θ_1(L^{-1}S^τ_k(L^*)^{-1}) > 0. Following the lines of [11], we obtain β̂_{k,δ} = 1/θ_1(L^{-1}S^τ_k(L^*)^{-1}).
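One way to compute such a bound numerically is sketched below; the matrices are synthetic stand-ins for δI + |V^δ| and S^τ_k, and the sign and eigenvalue conventions are ours, so they may differ from the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(5)
M = 6
B = rng.standard_normal((M, M))
W = B @ B.T + 1e-3 * np.eye(M)   # stands in for delta*I + |V_delta| (SPD)
C = rng.standard_normal((M, M))
S = C @ C.T                      # stands in for S_k^tau (positive definite)

# Cholesky factor of the SPD matrix: W = L L^T
L = np.linalg.cholesky(W)
Linv = np.linalg.inv(L)
# Largest eigenvalue of L^{-1} S L^{-T} gives the critical scaling:
# W - a*S is positive semi-definite exactly for a <= 1/theta_max.
theta_max = np.linalg.eigvalsh(Linv @ S @ Linv.T).max()
beta_hat = 1.0 / theta_max

# Sanity value: smallest eigenvalue of W - beta_hat*S (about zero)
min_eig = np.linalg.eigvalsh(W - beta_hat * S).min()
```

At a = beta_hat the matrix W − aS becomes singular but stays positive semi-definite, while any larger a produces a negative eigenvalue, so beta_hat is exactly the largest admissible value.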

Numerical Realization
We close this section with a numerical example, where we again consider two inclusions (tumors) in a biological tissue, as shown in Figure 2 (for the values of the Lamé parameters see Table 1). In addition to the Lamé parameters, we use the estimated lower and upper bounds c_λ, c_µ, C_λ, C_µ given in Table 2.
Table 2 (excerpt): upper bounds C_λ = 1.7 · 10^6, C_µ = 1.7 · 10^4.
For the implementation, we again consider difference measurements and apply quadprog from Matlab in order to solve the minimization problem. In more detail, we perform the following steps:
Note that S λ , S µ can also be calculated from the stiffness matrix of the FEM implementation without additional quadrature errors by the approach given in [10].
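For a quadprog-style solver, the Frobenius-norm residual must be cast into the quadratic form 0.5 xᵀHx + fᵀx. Expanding ‖V − Σ_k x_k S_k‖²_F gives H_{kl} = 2 tr(S_kᵀ S_l) and f_k = −2 tr(S_kᵀ V), up to a constant. The sketch below (synthetic matrices, our own illustration) assembles H and f and checks that the unconstrained stationary point matches the least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(6)
M, p = 5, 4
S = rng.standard_normal((p, M, M))   # stand-ins for the pixel matrices
V = rng.standard_normal((M, M))      # stand-in for the data matrix

A = S.reshape(p, -1)                 # row k = S_k flattened
H = 2.0 * (A @ A.T)                  # H_kl = 2 * tr(S_k^T S_l)
f = -2.0 * (A @ V.ravel())           # f_k = -2 * tr(S_k^T V)

# Unconstrained check: gradient H x + f = 0 reproduces the
# least-squares solution of A^T x ~ vec(V).
x_qp = np.linalg.solve(H, -f)
x_ls = np.linalg.lstsq(A.T, V.ravel(), rcond=None)[0]
```

Passing H, f together with the box constraints 0 ≤ x_k ≤ min(a_max, β̂_{k,δ}) to a quadratic-programming routine such as Matlab's quadprog then solves the constrained problem.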

Exact Data
We start with exact data, i.e., data without noise; due to the definition of δ given in (18), this corresponds to δ = 0.

Remark 7.
Performing the single implementation steps on a laptop with 2 × 2.5 GHz and 8 GB RAM, we obtained the following computation times: Step 1.), i.e., the determination of the matrix V, took 9 min 1 s. The Fréchet derivative is computed in 53 s in steps 2.)-4.). The solution of the minimization problem (steps 5.)-7.)) is calculated in 6 min 27 s. Figure 12 presents the results as 3D plots, while Figure 13 shows the corresponding cuts for µ. For the same reasons as discussed in Section 3, we change the transparency of the plots of the 3D reconstruction of Figure 12 as indicated in Figure 11. Thus, tetrahedrons with low values have a higher transparency, whereas tetrahedrons with large values are plotted opaquely. Compared with Figure 4 (right-hand side), Figure 12 shows an improvement, because we are now able to also obtain information concerning λ, which is not possible with the heuristic approach considered in (16).

Noisy Data
Finally, we take a look at noisy data with a relative noise level of η = 10%, where δ is determined as given in (18). Figures 14 and 15 document that we can reconstruct the inclusions even for noisy data, which is a major advantage compared with the results of the one-step linearization (see Figures 8 and 9). This shows that the numerical simulations based on the monotonicity-based regularization are only marginally affected by noise, as we have proven in theory, e.g., in Theorem 2.

Summary
In this paper, we introduced a standard one-step linearization method applied to the Neumann-to-Dirichlet operator as a heuristic approach, as well as a monotonicity-based regularization for solving the resulting minimization problem. In addition, we proved the existence and uniqueness of a minimizer and the convergence of the method for noisy data. Finally, we presented numerical examples.