1 Introduction

The focus of this paper is the reconstruction of cavities and inclusions embedded in an elastic isotropic medium by means of boundary tractions and displacements. The identification of defects from boundary measurements plays an important role in non-destructive testing for damage assessment of mechanical specimens, which may be defective due to interior voids or cavities arising during the manufacturing process; see, for instance, [33, 47, 55, 63] for possible applications to 3D-printing and additive manufacturing. Inverse problems of this kind also have applications in medical imaging, in particular in elastography, a modality mapping the elastic properties and stiffness of soft tissue, [6,7,8, 31, 59, 60, 64] (to cite a few), and in reflection seismology [20, 62], a non-invasive technique used by the oil and gas industry to map petroleum deposits in the Earth’s upper crust from seismic data acquired on land, see for example [61]. We also mention some applications in volcanology, see for example [9, 10, 58] and references therein.

The underlying mathematical model is the following: consider a bounded domain \(\Omega \subset \mathbb {R}^d\), with \(d=2,3\), representing the region occupied by an elastic isotropic medium, and let \(\partial \Omega =\Sigma _D \cup \Sigma _N\), with \(\Sigma _D\) closed. Let the displacement field u be the solution of the following mixed boundary value problem for the Lamé system of linearized elasticity:

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{div}(\mathbb {C}_0\widehat{\nabla }u) = 0 &{} \text {in } \Omega \setminus \overline{C}, \\ (\mathbb {C}_0 \widehat{\nabla }u) n = 0 &{} \text {on}\ \partial C,\\ (\mathbb {C}_0 \widehat{\nabla }u) \nu = g &{} \text {on } \Sigma _N,\\ u=0 &{} \text {on } \Sigma _D, \end{array}\right. } \end{aligned}$$
(1.1)

where \(C\Subset \Omega \) is a cavity with Lipschitz boundary, and \(\widehat{\nabla }u\) is the strain tensor. \(\mathbb {C}_0\) is a fourth-order isotropic elastic tensor, uniformly bounded and strongly convex, and n and \(\nu \) are the outer unit normal vector to \(\partial C\) and \(\Sigma _N\), respectively. The Neumann boundary datum g is assumed to be in \(L^2(\Sigma _N)\).

The forward problem consists in finding the elastic displacement u in the elastic body occupying the region \(\Omega \) induced by the tractions on \(\Sigma _N\), given the cavity C. The inverse problem concerns the determination of the cavity C from partial observations of u on the boundary. More precisely, given measurements of the displacement, i.e. \(u_{meas}\in L^2(\Sigma _N)\), find C contained in \(\Omega \), such that \(u\lfloor _{\Sigma _N}=u_{meas}\), where \(u\in H^1_{\Sigma _D}(\Omega \setminus C)\) is the solution to the forward problem.

It is well known that this problem is severely ill-posed: only a very weak logarithmic conditional stability holds, assuming a priori \(C^{1,\alpha }\) regularity of the unknown cavities [53]. A similar weak stability result holds in the case of the determination of elastic inclusions, see for example [54]. Hence, in general, the reconstruction of cavities and inclusions is a challenging task.

To solve the problem we follow a strategy similar to that of [14, 30] for the reconstruction of conductivity inclusions and of [13] for cavities. Specifically, we consider the problem of minimizing the functional

$$\begin{aligned} J(C)=\frac{1}{2} \int _{\Sigma _N} |u(C)-u_{meas}|^2\, d\sigma (x)+\alpha \text {Per}(C), \end{aligned}$$
(1.2)

over a suitable set of cavities of finite perimeter, where u(C) is the solution of (1.1) for a given cavity C, \(\text {Per}(C)\) denotes the perimeter of C, and \(\alpha \) is a positive regularization parameter.

We first investigate the continuity of solutions to (1.1) with respect to perturbations of the cavity C in the Hausdorff distance topology, and prove it using Mosco convergence, see [21, 22, 37]. As in [13], continuity then allows us to prove existence of minima of the functional J(C), stability with respect to noisy data, and convergence of the minimizers, as \(\alpha \rightarrow 0\), to the solution of the inverse problem.

In the second part of the paper, we use a suitable phase-field relaxation of the functional J in order to overcome issues arising from non-convexity and non-differentiability. More precisely, we employ an idea of Bourdin and Chambolle [18], developed in the context of topology optimization, which consists in filling the cavity with a fictitious elastic material described by an elastic tensor \(\mathbb {C}_1:=\delta \mathbb {C}_0\), where \(\delta \) is a small positive parameter and \(\mathbb {C}_0\) has been extended to the whole domain \(\Omega \). In this way, we transform the original inverse problem into that of reconstructing an elastic inclusion. Then, since the identification of sharp interfaces is in general difficult to treat numerically, we use a phase-field approach: instead of a binary (i.e., either 0 or 1) phase parameter describing sharp interfaces between the two materials, we use a phase parameter v given by an \(H^1\) scalar field taking values in the interval [0, 1]. We then approximate the functional J in (1.2) by means of a Ginzburg–Landau type functional (cf. [52])

$$\begin{aligned} J_{\delta ,\varepsilon }(v):= & {} \frac{1}{2} \int _{\Sigma _N} |u_{\delta }(v)-u_{meas}|^2\, d\sigma (x) \nonumber \\&+ \frac{4\alpha }{\pi } \int _{\Omega }\Big ( \varepsilon |\nabla v|^2 + \frac{1}{\varepsilon }v(1-v)\Big )\, dx, \end{aligned}$$
(1.3)

where \(\varepsilon \) is a small positive parameter, \(\frac{4}{\pi }\) is the normalization constant of the Modica–Mortola approximation of the perimeter, and \(u_\delta (v)\) denotes the solution of the modified boundary value problem:

$$\begin{aligned} \left\{ \begin{aligned} \text {div}(\mathbb {C}_{\delta }(v) \widehat{\nabla } u_{\delta }(v))&= 0 \qquad \text {in}\ \Omega , \\ (\mathbb {C}_{\delta }(v) \widehat{\nabla }u_{\delta }(v)) \nu&= g \qquad \text {on } \Sigma _N,\\ u_{\delta }(v)&=0 \qquad \text {on } \Sigma _D, \end{aligned} \right. \end{aligned}$$
(1.4)

where

$$\begin{aligned} \mathbb {C}_{\delta }(v)= \mathbb {C}_0 + (\mathbb {C}_1 - \mathbb {C}_0)v,\quad \text {with}\quad \mathbb {C}_1=\delta \mathbb {C}_0. \end{aligned}$$
(1.5)

Here \(\mathbb {C}_0\) and \(\mathbb {C}_1\) are the elasticity tensors in \(\Omega \setminus C\) and C, respectively. Ideally, the optimal phase variable v should be close to a binary field. In fact, when \(\varepsilon \) is small the potential term \(\int _{\Omega }\frac{1}{\varepsilon }v(1-v)\, dx\) prevails, so the minimum is attained by a phase-field variable which takes values mainly close to 0 and 1, with the transition occurring in a thin layer of thickness of order \(\varepsilon \).
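The prefactor \(4/\pi \) can be checked numerically: for the potential \(W(s)=s(1-s)\) appearing in (1.3), the Modica–Mortola constant is \(c_W=2\int _0^1\sqrt{W(s)}\, ds=\pi /4\), so multiplying the Ginzburg–Landau term by \(1/c_W=4/\pi \) rescales the limit to the perimeter itself. A minimal sketch in plain Python (illustrative only, not part of the original analysis):

```python
import math

# Sketch: check the Modica-Mortola normalization constant
#   c_W = 2 * int_0^1 sqrt(W(s)) ds   for   W(s) = s(1 - s),
# which should equal pi/4, explaining the prefactor 4/pi = 1/c_W in (1.3).

def modica_mortola_constant(n: int = 200_000) -> float:
    """Midpoint-rule approximation of 2 * int_0^1 sqrt(s(1-s)) ds."""
    h = 1.0 / n
    midpoints = ((i + 0.5) * h for i in range(n))
    return 2.0 * h * sum(math.sqrt(s * (1.0 - s)) for s in midpoints)

print(modica_mortola_constant(), math.pi / 4)  # both ~ 0.785398...
```

The same computation with any other double-well potential W would give the corresponding normalization constant for its perimeter approximation.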

The phase-field approach to structural optimization problems has been successfully used by several authors (cf., e.g., [12, 15, 25, 36]), the main advantage being that it allows one to handle topology changes as well as the nucleation of new holes.

To implement our algorithm, in Sect. 3.2 we provide first-order necessary optimality conditions for the minimization problem associated to \(J_{\delta ,\varepsilon }\), whose discretized version is then employed in Sect. 4 to develop the reconstruction algorithm. Minima of the functional \(J_{\delta ,\varepsilon }\) exist, and the numerical experiments of Sect. 5 indicate that they are accurate approximations of minima of J for \(\varepsilon \) and \(\delta \) sufficiently small. This fact could be rigorously justified by proving \(\Gamma \)-convergence of \(J_{\delta ,\varepsilon }\) to the functional J as \(\varepsilon \) and \(\delta \) tend to 0, but this is still an open issue and will be the subject of future research. Some attempts in this direction have been made in the scalar case, for example in [13, 56, 57].

The literature on reconstruction algorithms for the identification of inclusions and cavities in elastostatic, viscoelastic and elastic wave systems is very rich. In the case of small elastic inclusions or cavities, asymptotic expansions of the perturbed displacement have been used to detect position, size and shape from boundary measurements, see for example [45] and [8]. The method followed in [5] is based on a shape derivative approach, both for elastic and thermoelastic problems. A topological gradient method has been applied in [24], for the detection of an elastic scatterer, and in [50], for the identification of a cavity in time-harmonic elastic wave systems. Ikehata and Itou use the so-called enclosure method for the reconstruction of polygonal cavities in an elastostatic setting [42] and of a general cavity in a homogeneous isotropic viscoelastic body [43]. More recently, Doubova and Fernández–Cara proposed an augmented Lagrangian method to identify rigid inclusions in an elastic wave system [31]. Eberle and Harrach applied the monotonicity method for the reconstruction of elastic inclusions, using the monotonicity property of the Neumann-to-Dirichlet map [32], and in [46] the authors used the method of fundamental solutions for the reconstruction of elastic cavities. For other reconstruction approaches we refer to the review paper [17] and references therein. The identification of cavities and elastic inclusions can also be interpreted as a special case of the determination of Lamé parameters from boundary measurements, see for example [7, 41] and [61].

The plan of the paper is the following. In Sect. 2 we investigate the continuity of the solution to the direct problem with respect to perturbations of the cavity in the Hausdorff topology and then derive the main properties of the misfit functional J(C). In Sect. 3 we consider the approximation of the cavity by an inclusion with a small elasticity tensor, the corresponding misfit functional and its properties. We then introduce its phase-field relaxation, analyze its differentiability, and derive necessary optimality conditions for the phase-field minimization problem. In Sect. 4 we propose an iterative reconstruction algorithm for the numerical approximation of the solution and prove its convergence properties. Finally, in Sect. 5 we present some numerical results showing the efficiency and robustness of the proposed reconstruction algorithm.

1.1 Notation and Geometrical Setting

We introduce the principal notation used in the paper.

Notation We denote scalar quantities, points, and vectors in italics, e.g. \(x, y\) and \(u, v\), and fourth-order tensors in blackboard bold, e.g. \(\mathbb {A}, \mathbb {B}\).

The symmetric part of a second-order tensor A is denoted by \(\widehat{{A}}:=\tfrac{1}{2}\left( {A}+{A}^T\right) \), where \({A}^T\) is the transpose matrix. In particular, \(\widehat{\nabla } u\) represents the strain tensor. We use standard notation for inner products, that is, \({u}\cdot {v}=\sum _{i} u_{i} v_{i}\) for vectors and \({A}: {B}=\sum _{i,j}a_{ij} b_{ij}\) for second-order tensors. |A| denotes the norm induced by this inner product on matrices:

$$\begin{aligned} |{A}|=\sqrt{{A}:{A}}. \end{aligned}$$

Domains. To represent a boundary locally as the graph of a function, we adopt the following notation: for every \(x\in \mathbb {R}^d\), we write \(x=(x',x_d)\), where \(x'\in \mathbb {R}^{d-1}\), \(x_d\in \mathbb {R}\), with \(d=2,3\). Given \(r>0\), we denote by \(B_{r}({x})\subset \mathbb {R}^d\) the ball of radius r centered at x, so that \(B_{r}({0})=\{(x',x_d)\,:\, |x'|^2+x_d^2<r^2\}\), and by \(B'_{r}({x'})\subset \mathbb {R}^{d-1}\) the ball \(\{y'\in \mathbb {R}^{d-1}\,:\, |y'-x'|^2<r^2\}\).

Definition 1.1

(\(C^{0,1}\) regularity) Let \(\Omega \) be a bounded domain in \(\mathbb {R}^d\). We say that a portion \(\Sigma \) of \(\partial \Omega \) is of Lipschitz class with constants \(r_0\), \(L_0\), if for any \({p}\in \Sigma \) there exists a rigid transformation of coordinates under which we have that p is mapped to the origin and

$$\begin{aligned} \Omega \cap B_{r_0}({0})=\{{x}\in B_{r_0}({0})\, :\, x_d>\psi ({x}')\}, \end{aligned}$$

where \({\psi }\) is a \(C^{0,1}\) function on \(B'_{r_0}({0})\subset \mathbb {R}^{d-1}\), such that

$$\begin{aligned} \begin{aligned} {\psi }({0})&=0,\\ \Vert {\psi }\Vert _{C^{0,1}(B'_{r_0}({0}))}&\le L_0. \end{aligned} \end{aligned}$$

The Hausdorff distance between two sets \(\Omega _1\) and \(\Omega _2\) is defined by

$$\begin{aligned} d_H(\Omega _1,\Omega _2)=\max \Big \{\sup \limits _{x\in \Omega _1}\inf \limits _{y\in \Omega _2}\ \mathrm{dist}(x,y),\ \sup \limits _{x\in \Omega _2}\inf \limits _{y\in \Omega _1}\ \mathrm{dist}(x,y)\Big \}. \end{aligned}$$
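For intuition, the definition above can be evaluated directly on finite point clouds, a common discrete surrogate when comparing two cavities numerically. A brief sketch in plain Python (illustrative only):

```python
import math

# Sketch: Hausdorff distance between finite point sets, following
#   d_H(A, B) = max( sup_{x in A} inf_{y in B} |x - y|,
#                    sup_{y in B} inf_{x in A} |x - y| ).

def hausdorff(A, B):
    """Hausdorff distance between finite point sets A, B (lists of tuples)."""
    def one_sided(P, Q):
        # sup over p in P of the distance from p to the set Q
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(one_sided(A, B), one_sided(B, A))

# Corners of the unit square vs. the same square shifted by (0.5, 0):
A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
B = [(0.5, 0.0), (1.5, 0.0), (0.5, 1.0), (1.5, 1.0)]
print(hausdorff(A, B))  # 0.5
```

Note that both one-sided terms are needed: each one alone is not symmetric and can vanish even when the sets differ.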

Functional setting: Let \(\Omega \) be a bounded domain. We set

$$\begin{aligned} BV(\Omega )= \{ v \in L^1(\Omega ) \, : \, TV(v) < \infty \}, \end{aligned}$$
(1.6)

where

$$\begin{aligned} TV(v) = \sup \left\{ \int _\Omega v\, \text {div}(\varphi )\, dx \, : \quad \varphi \in C^1_0(\Omega ;\mathbb {R}^d), \, \Vert {\varphi }\Vert _{{L^\infty }(\Omega )}\le 1 \right\} \end{aligned}$$
(1.7)

is the total variation of v. The BV space is endowed with the natural norm \(\Vert {v}\Vert _{BV(\Omega )} = \Vert {v}\Vert _{L^1(\Omega )} + TV(v)\). We recall that the perimeter of \(\Omega \) is defined as

$$\begin{aligned} \text {Per}(\Omega )= TV(\chi _{\Omega }), \end{aligned}$$
(1.8)

where \(\chi _{\Omega }\) is the characteristic function of the set \(\Omega \).
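On a pixel grid, the total variation of a characteristic function reduces to counting sign changes across cell faces, which yields the (anisotropic) discrete perimeter commonly used in computations. A minimal sketch under this finite-difference discretization (illustrative only, not the scheme of the paper):

```python
# Sketch: anisotropic discrete TV of a binary grid function. For the
# characteristic function of a set of pixels, each jump across a cell face
# contributes a face of length h, approximating Per(Omega). Faces on the
# outer grid boundary are not counted, so the set should lie in the interior.

def discrete_perimeter(mask, h=1.0):
    rows, cols = len(mask), len(mask[0])
    jumps = 0
    for i in range(rows):
        for j in range(cols):
            if j + 1 < cols and mask[i][j] != mask[i][j + 1]:
                jumps += 1  # jump between horizontal neighbours
            if i + 1 < rows and mask[i][j] != mask[i + 1][j]:
                jumps += 1  # jump between vertical neighbours
    return jumps * h

# A 2x2 block of ones inside a 4x4 grid: a square of side 2h, perimeter 8h.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(discrete_perimeter(mask))  # 8.0
```

This anisotropic count is the standard first discretization of TV; isotropic variants replace the face count by the Euclidean norm of the discrete gradient.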

Setting \(H^1_{\partial \Omega }(\Omega ):=\{\upsilon \in H^1(\Omega ): \upsilon \lfloor _{\partial \Omega }=0\}\), we recall the following inequalities.

Proposition 1.1

Let \(\Omega \) be a bounded Lipschitz domain. There exists a positive constant \(\overline{c}=\overline{c}(\Omega )\) such that, for every \(\upsilon \in H^1_{\partial \Omega }(\Omega )\),

$$\begin{aligned}&(\text {Korn inequality})\qquad \Vert \nabla \upsilon \Vert _{L^2(\Omega )} \le \overline{c}\ \Vert \widehat{\nabla }\upsilon \Vert _{L^2(\Omega )}. \end{aligned}$$
(1.9)
$$\begin{aligned}&(\text {Poincaré inequality})\qquad \Vert \upsilon \Vert _{H^1(\Omega )} \le \overline{c}\ \Vert \nabla \upsilon \Vert _{L^2(\Omega )}. \end{aligned}$$
(1.10)

Estimates (1.9) and (1.10) hold also in the case where \(\upsilon \) is zero, in the trace sense, only on a portion of \(\partial \Omega \).

2 Elastic Problem—Detection of a Cavity

The focus of this work is the reconstruction of a cavity in an elastic body from boundary measurements using a phase-field approach. We assume that \(\Omega \) is a bounded domain and that \(\partial \Omega :=\Sigma _N\cup \Sigma _D\), with \(|\Sigma _N| >0\), \(|\Sigma _D|>0\), \(\Sigma _D\) closed, where \(\partial \Omega \) is of Lipschitz class with constants \(r_0\) and \(L_0\). Denoting by C the cavity, we consider the mixed boundary value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{div}(\mathbb {C}_0\widehat{\nabla }u) = 0 &{} \text {in } \Omega \setminus \overline{C}, \\ (\mathbb {C}_0 \widehat{\nabla }u) n = 0 &{} \text {on}\ \partial C,\\ (\mathbb {C}_0 \widehat{\nabla }u) \nu = g &{} \text {on } \Sigma _N,\\ u=0 &{} \text {on } \Sigma _D, \end{array}\right. } \end{aligned}$$
(2.1)

where \(n,\nu \) are the outer unit normal vector to \(\partial C\) and \(\partial \Omega \), respectively.

We make the following assumptions.

Assumption 2.1

\(\mathbb {C}_0=\mathbb {C}_0(x)\) is a fourth-order tensor such that

$$\begin{aligned} (\mathbb {C}_0)_{ijkh}(x)=(\mathbb {C}_0)_{jikh}(x)=(\mathbb {C}_0)_{khij}(x),\qquad \forall 1\le i,j,k,h\le d,\ \text {and}\ \ {x}\in \Omega . \end{aligned}$$

Moreover, \(\mathbb {C}_0\) is assumed to be uniformly bounded and uniformly strongly convex, that is, \(\mathbb {C}_0\) defines a positive-definite quadratic form on symmetric matrices:

$$\begin{aligned} \mathbb {C}_0(x)\widehat{A}:\widehat{A}\ge \xi _0 |\widehat{A}|^{2}, \qquad \text {a.e. in}\,\, \Omega , \end{aligned}$$

for some \(\xi _0>0\).

Remark 2.1

We require that \(\mathbb {C}_0\) is defined in \(\Omega \), and not only in \(\Omega \setminus C\), because we employ, in the second part of the paper, a reconstruction algorithm based on the strategy of filling the cavity with a fictitious elastic material.

Assumption 2.2

$$\begin{aligned} g\in L^2(\Sigma _N). \end{aligned}$$
(2.2)

We assume Lipschitz regularity of the cavity (see Definition 1.1), which is a typical requirement to prove uniqueness of the solution to the inverse problem, see [53]. More precisely, we make the following assumption.

Assumption 2.3

Let

$$\begin{aligned} C\in \mathcal {C}:= & {} \{D \subset \overline{\Omega }\,:\, D\ \text {compact and simply connected},\ \partial D\in C^{0,1}\ \text {with constants}\ r_0, L_0,\\& \text {and}\ \mathrm{dist}(D,\partial \Omega )\ge d_0>0\}. \end{aligned}$$

We define

$$\begin{aligned} \Omega ^{d_0/2}=\left\{ x\in \Omega \,:\, \mathrm{dist}(x,\partial \Omega )\le \frac{d_0}{2}\right\} . \end{aligned}$$
(2.3)

For the class of admissible sets \(\mathcal {C}\), the following result holds.

Remark 2.2

\(\mathcal {C}\) is compact with respect to the Hausdorff topology [29, 51].

Remark 2.3

From now on, we will denote by c any constant possibly depending on \(\Omega \), \(r_0\), \(L_0\), d, \(\xi _0\), \(d_0\), \(\overline{c}\), and on the uniform bounds of the elasticity tensor.

Well-posedness of (2.1) in \(H^1_{\Sigma _D}(\Omega \setminus C)\) follows from an application of the Lax-Milgram theorem to the weak formulation of Problem (2.1):

$$\begin{aligned}&\text {Find} \,u\in H^1_{\Sigma _D}(\Omega \setminus C)\, \text {solution to} \nonumber \\&\int _{\Omega \setminus C} \mathbb {C}_0\widehat{\nabla }{u}:\widehat{\nabla }{\varphi }\, dx=\int _{\Sigma _N}g\cdot \varphi \, d\sigma (x), \qquad \forall \varphi \in H^1_{\Sigma _D}(\Omega \setminus C), \end{aligned}$$
(2.4)

(see for example [28]). Moreover, it holds

$$\begin{aligned} \Vert u\Vert _{H^1(\Omega \setminus C)}\le c \Vert g\Vert _{L^2(\Sigma _N)}. \end{aligned}$$
(2.5)

Choosing \(\varphi =u\) in (2.4), estimate (2.5) follows from the strong convexity of the elasticity tensor \(\mathbb {C}_0\) (see Assumption 2.1), from an application of the Korn and Poincaré inequalities to the left-hand side of (2.4) (see Proposition 1.1), and from the Cauchy–Schwarz inequality applied to the right-hand side. In fact,

$$\begin{aligned} \int _{\Omega \setminus C} \mathbb {C}_0\widehat{\nabla }{u}:\widehat{\nabla }{u}\, dx\ge c \Vert \widehat{\nabla }u\Vert ^2_{L^2(\Omega \setminus C)}\ge c \Vert \nabla u\Vert ^2_{L^2(\Omega \setminus C)}\ge c \Vert u\Vert ^2_{H^1(\Omega \setminus C)}, \end{aligned}$$
(2.6)

and

$$\begin{aligned} \Bigg |\int _{\Sigma _N}g\cdot u\, d\sigma (x)\Bigg | \le \Vert g\Vert _{L^2(\Sigma _N)}\Vert u\Vert _{L^2(\Sigma _N)}\le c \Vert g\Vert _{L^2(\Sigma _N)}\Vert u\Vert _{H^1(\Omega \setminus C)}, \end{aligned}$$
(2.7)

and so estimate (2.5) follows by (2.6) and (2.7).

Our aim is to tackle the following inverse problem:

Problem 2.1

Under Assumptions 2.1, 2.2, and 2.3, given \(\displaystyle u_{meas} \in L^2(\Sigma _N)\), find \(C \in \mathcal {C}\) such that \(u\lfloor _{\Sigma _N} = u_{meas}\), where \(u\in H^1_{\Sigma _D}(\Omega \setminus C)\) solves (2.1).

It has been proved in [53] (see also [11]) that Problem 2.1 has a unique solution when \(\partial C\) is of Lipschitz class. Logarithmic stability estimates have been proved under the assumption of \(C^{1,\sigma }\) regularity, \(0<\sigma \le 1\), on the cavity C, cf. [53].

For the reconstruction of the solution to the inverse problem we consider a standard approach based on the minimization of a quadratic misfit functional, with a Tikhonov regularization penalizing the perimeter of C. More precisely, let

$$\begin{aligned} \min _{C\in \mathcal {C}}J(C), \hbox { where } J(C)=\frac{1}{2} \int _{\Sigma _N} |u(C)-u_{meas}|^2\, d\sigma (x)+\alpha \text {Per}(C), \end{aligned}$$
(2.8)

where \(\alpha >0\) represents a regularization parameter, \(\text {Per}(C)\) the perimeter of the set C, see (1.8), and \(u(C)\in H^1_{\Sigma _D}(\Omega \setminus C)\) the solution to (2.4).

2.1 Continuity Property of Solutions with Respect to C

Adapting to our case some known results in the literature (see for example [21, 23, 26, 37, 49] and references therein), in this section we show the continuity of the boundary term in (2.8) with respect to perturbations of the cavity C in the Hausdorff distance.

To this purpose, we recall the definition of Mosco convergence and some of its properties (see [21, 22, 37, 51]). Let X be a reflexive Banach space, and \(G_n\) a sequence of closed subspaces of X. We define

$$\begin{aligned} G':=\{x\in X\ :\ x=w\text {-}\limsup y_{n_k}\ ,\ y_{n_k}\in G_{n_k}, \ n_k\rightarrow +\infty \} \end{aligned}$$
(2.9)

and

$$\begin{aligned} G'':=\{x\in X\ :\ x=s\text {-}\liminf y_{n}\ ,\ y_{n}\in G_{n}\ \text {for}\ n\ \text {large}\}. \end{aligned}$$
(2.10)

\(G',G''\) are called the weak-limsup and the strong-liminf of the sequence \(G_n\) in the sense of Mosco.

Definition 2.1

The sequence \(G_n\) converges in the sense of Mosco if \(G'=G''=G\). G is called the Mosco limit of \(G_n\).

In other words, \(G_n\) converges in the sense of Mosco to G when the following two conditions hold:

$$\begin{aligned}&\text {If}\ u_{n_k}\in G_{n_k}\ \text {is such that}\ u_{n_k}\rightharpoonup u\ \text {in}\ X, \text {then}\ u\in G; \end{aligned}$$
(2.11)
$$\begin{aligned}&\forall u\in G, \exists u_n\in G_n\ \text {such that}\ u_n\rightarrow u\ \text {in}\ X. \end{aligned}$$
(2.12)

Given \(\Omega \) and \(\Omega \setminus C\), we can identify the Sobolev space \(H^1_{\Sigma _D}(\Omega \setminus C)\) with a closed subspace of \(L^2(\Omega ,\mathbb {R}^{d+d^2})\) through the map

$$\begin{aligned} \begin{aligned} H^{1}_{\Sigma _D}(\Omega \setminus C)&\hookrightarrow L^2(\Omega ,\mathbb {R}^{d+d^2})\\ u&\rightarrow (u,\partial _{j}u_i),\qquad \forall i,j=1,\cdots ,d \end{aligned} \end{aligned}$$
(2.13)

with the convention of extending u and \(\nabla u\) to zero in C. The same identification holds for \(\Omega \setminus C_n\), extending \(u_n\) and \(\nabla u_n\) to zero in \(C_n\).

Since we are considering the case of uniform Lipschitz domains, we have the following result, which is an adaptation of Theorem 7.2.7 in [21].

Theorem 2.4

Let us assume that \(C_n, C\subset \Omega \) belong to the class \(\mathcal {C}\). If \(C_n\rightarrow C\) in the Hausdorff metric, then \(H^1_{\Sigma _D}(\Omega \setminus C_n)\) converges to \(H^1_{\Sigma _D}(\Omega \setminus C)\) in the sense of Mosco.

We can now prove the following continuity result.

Theorem 2.5

Let \(C_n\in \mathcal {C}\) be a sequence of sets converging to C in the Hausdorff metric (cf. Remark 2.2), and let \(u(C_n)=:u_n\in H^1_{\Sigma _D}(\Omega \setminus C_n)\), \(u(C)=:u\in H^1_{\Sigma _D}(\Omega \setminus C)\) be solutions of (2.4) in \(\Omega \setminus C_n\), \(\Omega \setminus C\), respectively. Then

$$\begin{aligned} \lim \limits _{n\rightarrow +\infty } \int _{\Sigma _N}|u_n-u|^2\,d\sigma (x)=0. \end{aligned}$$
(2.14)

Proof

Thanks to the uniform Lipschitz regularity of \(\partial (\Omega \setminus C_n)\) (and of \(\partial (\Omega \setminus C)\)), the Korn and Poincaré inequalities in \(H^1_{\Sigma _D}(\Omega \setminus C_n)\) hold with constants uniform with respect to n, since these constants depend only on the Lipschitz character of \(\partial (\Omega \setminus C_n)\), see [2, 27]. Therefore, from (2.4) and (2.5), we have that

$$\begin{aligned} \Vert u_n\Vert _{H^1(\Omega \setminus C_n)}\le c, \end{aligned}$$
(2.15)

where c is independent of n.

Hence, from the identification (2.13), we get that \(\Vert u_n\Vert _{L^2(\Omega ,\mathbb {R}^{d+d^2})}\) is uniformly bounded. Up to subsequences, there exists \(u^*\in L^2(\Omega ,\mathbb {R}^{d+d^2})\) such that

$$\begin{aligned} u_n \rightharpoonup u^*\ \text {in}\ L^2(\Omega ,\mathbb {R}^{d+d^2}). \end{aligned}$$

Thanks to Theorem 2.4 and from the first condition of the Mosco convergence applied to \(G_n=H^1_{\Sigma _D}(\Omega \setminus C_n)\), \(G=H^1_{\Sigma _D}(\Omega \setminus C)\), and \(X=L^2(\Omega ,\mathbb {R}^{d+d^2})\), see (2.11), we have that \(u^*\in H^1_{\Sigma _D}(\Omega \setminus C)\).

Moreover, given \(\varphi \in H^1_{\Sigma _D}(\Omega \setminus C)\), by (2.12) there exists \(\varphi _n\in H^1_{\Sigma _D}(\Omega \setminus C_n)\) such that

$$\begin{aligned} \varphi _n\rightarrow \varphi \ \text {in}\ L^2(\Omega ,\mathbb {R}^{d+d^2}). \end{aligned}$$
(2.16)

Considering the weak formulation for \(u_n\) (see (2.4) specialized to the case with \(C=C_n\) and \(\varphi =\varphi _n\))

$$\begin{aligned} \int _{\Omega \setminus C_n} \mathbb {C}_0\widehat{\nabla }u_n:\widehat{\nabla }\varphi _n\, dx=\int _{\Sigma _N}g\cdot \varphi _n\, d\sigma (x), \end{aligned}$$
(2.17)

and since \(\varphi _n\in H^1_{\Sigma _D}(\Omega \setminus C_n)\) and \(\varphi \in H^1_{\Sigma _D}(\Omega \setminus C)\), it holds

$$\begin{aligned} \int _{\Sigma _N}g\cdot \varphi _n\, d\sigma (x)=\int _{\Sigma _N}g\cdot (\varphi _n-\varphi )\, d\sigma (x)+\int _{\Sigma _N}g\cdot \varphi \, d\sigma (x). \end{aligned}$$

Hence, thanks to Assumption 2.3 and (2.16), we have

$$\begin{aligned} \begin{aligned} \Bigg |\int _{\Sigma _N}g\cdot (\varphi _n-\varphi )\, d\sigma (x)\Bigg |&\le c \Vert g\Vert _{L^2(\Sigma _N)} \Vert \varphi _n-\varphi \Vert _{L^2(\Sigma _N)}\\&\le c \Vert \varphi _n-\varphi \Vert _{H^1_{\Sigma _D}(\Omega ^{d_0/2})} \rightarrow 0, \end{aligned} \end{aligned}$$

as \(n\rightarrow +\infty \), where \(\Omega ^{d_0/2}\) is defined as in (2.3). Therefore,

$$\begin{aligned} \int _{\Sigma _N}g\cdot \varphi _n\, d\sigma (x) \rightarrow \int _{\Sigma _N}g\cdot \varphi \, d\sigma (x),\qquad \text {as}\ n\rightarrow +\infty . \end{aligned}$$
(2.18)

The term on the left-hand side of (2.17) is equal to

$$\begin{aligned} \int _{\Omega \setminus C_n} \mathbb {C}_0\widehat{\nabla }u_n:\widehat{\nabla }\varphi _n\, dx=\int _{\Omega \setminus C_n} \mathbb {C}_0\widehat{\nabla }u_n:\widehat{\nabla }(\varphi _n-\varphi )\, dx+\int _{\Omega \setminus C_n} \mathbb {C}_0\widehat{\nabla }u_n:\widehat{\nabla }\varphi \, dx.\nonumber \\ \end{aligned}$$
(2.19)

Then, by (2.15) and (2.16), it follows

$$\begin{aligned} \Bigg |\int _{\Omega \setminus C_n} \mathbb {C}_0\widehat{\nabla }u_n:\widehat{\nabla }(\varphi _n-\varphi )\, dx\Bigg |\le c \Vert \widehat{\nabla }u_n\Vert _{L^2(\Omega \setminus C_n)} \Vert \widehat{\nabla }(\varphi _n-\varphi )\Vert _{L^2(\Omega \setminus C_n)} \rightarrow 0,\nonumber \\ \end{aligned}$$
(2.20)

as \(n\rightarrow +\infty \). Analogously, for the second integral on the right-hand side of (2.19), using the symmetries of the elasticity tensor, we get

$$\begin{aligned} \begin{aligned} \int _{\Omega \setminus C_n} \mathbb {C}_0\widehat{\nabla }u_n:\widehat{\nabla }\varphi \, dx&= \int _{\Omega \setminus C_n} \widehat{\nabla }u_n:\mathbb {C}_0\widehat{\nabla }\varphi \, dx\\&\rightarrow \int _{\Omega \setminus C} \widehat{\nabla }u^*:\mathbb {C}_0\widehat{\nabla }\varphi \, dx= \int _{\Omega \setminus C} \mathbb {C}_0\widehat{\nabla }u^*:\widehat{\nabla }\varphi \, dx, \end{aligned} \end{aligned}$$
(2.21)

as \(n\rightarrow +\infty \). Consequently, using (2.20) and (2.21) in (2.19), we get

$$\begin{aligned} \int _{\Omega \setminus C_n} \mathbb {C}_0\widehat{\nabla }u_n:\widehat{\nabla }\varphi _n\, dx \rightarrow \int _{\Omega \setminus C} \mathbb {C}_0\widehat{\nabla }u^*:\widehat{\nabla }\varphi \, dx, \qquad \text {as}\ n\rightarrow +\infty . \end{aligned}$$
(2.22)

Therefore, we find that

$$\begin{aligned} \begin{aligned} \int _{\Omega \setminus C}\mathbb {C}_0\widehat{\nabla }u^*:\widehat{\nabla }\varphi \, dx&=\int _{\Sigma _N}g\cdot \varphi \, d\sigma (x)\\&=\int _{\Omega \setminus C}\mathbb {C}_0\widehat{\nabla }u:\widehat{\nabla }\varphi \, dx, \qquad \forall \varphi \in H^1_{\Sigma _D}(\Omega \setminus C), \end{aligned} \end{aligned}$$

where the last equality comes from the weak formulation (2.4). Therefore,

$$\begin{aligned} \int _{\Omega \setminus C}\mathbb {C}_0\widehat{\nabla }(u^*-u):\widehat{\nabla }\varphi \, dx=0, \qquad \forall \varphi \in H^1_{\Sigma _D}(\Omega \setminus C), \end{aligned}$$

so that \(u^*=u\). This conclusion follows from the choice \(\varphi =u^*-u\), together with Assumption 2.1 and the Korn and Poincaré inequalities (see Proposition 1.1).

Next, we prove that \(u_n\rightarrow u\) in \(L^2(\Sigma _N)\) by showing strong convergence of \(u_n\) to u in \(H^1\)-norm in a neighborhood of the boundary of \(\Omega \). Consider the weak formulations

$$\begin{aligned}&\int _{\Omega \setminus C_n}\mathbb {C}_0 \widehat{\nabla }u_n : \widehat{\nabla }\varphi _1\, dx= \int _{\Sigma _N}g\cdot \varphi _1\, d\sigma (x),\qquad \forall \varphi _1\in H^1_{\Sigma _D}(\Omega \setminus C_n), \end{aligned}$$
(2.23)
$$\begin{aligned}&\int _{\Omega \setminus C}\mathbb {C}_0 \widehat{\nabla }u : \widehat{\nabla }\varphi _2\, dx= \int _{\Sigma _N}g\cdot \varphi _2\, d\sigma (x),\qquad \forall \varphi _2\in H^1_{\Sigma _D}(\Omega \setminus C). \end{aligned}$$
(2.24)

Now, we define \(\Phi =(u_n-u)\chi ^2\), where \(\chi \) is a smooth cut-off function, \(\chi \in [0,1]\) in \(\Omega \), such that

$$\begin{aligned} \chi = {\left\{ \begin{array}{ll} 1 &{} \text {in}\ \overline{\Omega }^{d_0/4}\\ 0 &{} \text {in}\ \Omega \setminus \Omega ^{d_0/2}. \end{array}\right. } \end{aligned}$$

Then, we choose \(\varphi _1=\varphi _2=\Phi \) in (2.23) and (2.24), that is

$$\begin{aligned}&\int _{\Omega ^{d_0/2}}\mathbb {C}_0 \widehat{\nabla }u_n : \widehat{\nabla }\left( (u_n-u)\chi ^2\right) \, dx= \int _{\Sigma _N}g\cdot (u_n-u)\, d\sigma (x), \\&\int _{\Omega ^{d_0/2}}\mathbb {C}_0 \widehat{\nabla }u : \widehat{\nabla }\left( (u_n-u)\chi ^2\right) \, dx= \int _{\Sigma _N}g\cdot (u_n-u)\, d\sigma (x). \end{aligned}$$

Subtracting the last two equations, we find

$$\begin{aligned} \int _{\Omega ^{d_0/2}}\mathbb {C}_0 \widehat{\nabla }(u_n-u) : \widehat{\nabla }\left( (u_n-u)\chi ^2\right) \, dx=0, \end{aligned}$$

that is,

$$\begin{aligned} \int _{\Omega ^{d_0/2}}\chi ^2\, \mathbb {C}_0 \widehat{\nabla }(u_n-u) : \widehat{\nabla }(u_n-u)\, dx = -2\int _{\Omega ^{d_0/2}}\chi \, \mathbb {C}_0 \widehat{\nabla }(u_n-u) : \widehat{\left( (u_n-u)\otimes \nabla \chi \right) }\, dx, \end{aligned}$$
(2.25)

where \((a\otimes b)_{ij}=a_i b_j\) and we used the identity \(\widehat{\nabla }\left( (u_n-u)\chi ^2\right) =\chi ^2\,\widehat{\nabla }(u_n-u)+2\chi \,\widehat{\left( (u_n-u)\otimes \nabla \chi \right) }\).

On the second integral, we apply Young's inequality with a suitable parameter \(\kappa >0\), that is,

$$\begin{aligned} 2\Bigg |\int _{\Omega ^{d_0/2}}\chi \, \mathbb {C}_0 \widehat{\nabla }(u_n-u) : \widehat{\left( (u_n-u)\otimes \nabla \chi \right) }\, dx\Bigg | \le \kappa \int _{\Omega ^{d_0/2}}\chi ^2\, \mathbb {C}_0 \widehat{\nabla }(u_n-u) : \widehat{\nabla }(u_n-u)\, dx + \frac{c}{\kappa }\int _{\Omega ^{d_0/2}}|u_n-u|^2\, |\nabla \chi |^2\, dx. \end{aligned}$$

Hence, using this last inequality in (2.25), we get, for any \(\kappa \in (0,1)\),

$$\begin{aligned} (1-\kappa )\int _{\Omega ^{d_0/2}}\chi ^2\, \mathbb {C}_0 \widehat{\nabla }(u_n-u) : \widehat{\nabla }(u_n-u)\, dx \le \frac{c}{\kappa }\int _{\Omega ^{d_0/2}}|u_n-u|^2\, |\nabla \chi |^2\, dx. \end{aligned}$$

The right-hand side integral goes to zero, noticing that

$$\begin{aligned} \int _{\Omega ^{d_0/2}}|u_n-u|^2\, |\nabla \chi |^2\, dx \le c\, \Vert u_n-u\Vert ^2_{L^2(\Omega ^{d_0/2})} \rightarrow 0, \qquad \text {as}\ n\rightarrow +\infty , \end{aligned}$$
(2.26)

by the uniform bound (2.15) and the compactness of the embedding \(H^1(\Omega ^{d_0/2})\hookrightarrow L^2(\Omega ^{d_0/2})\).

The left-hand side can be estimated using the fact that

$$\begin{aligned} \int _{\Omega ^{d_0/2}}\chi ^2 \mathbb {C}_0 \widehat{\nabla }(u_n-u) : \widehat{\nabla }(u_n-u)\, dx\ge \int _{\Omega ^{d_0/4}}\mathbb {C}_0 \widehat{\nabla }(u_n-u) : \widehat{\nabla }(u_n-u)\, dx \end{aligned}$$

and, then, by means of the Korn inequality

$$\begin{aligned} \int _{\Omega ^{d_0/4}}\mathbb {C}_0 \widehat{\nabla }(u_n-u) : \widehat{\nabla }(u_n-u)\, dx\ge c \Vert \nabla (u_n-u)\Vert ^2_{L^2(\Omega ^{d_0/4})}. \end{aligned}$$
(2.27)

From (2.27) and (2.26), and recalling that \(u_n\) is converging strongly in \(L^2\)-norm to u from the previous results, we find that

$$\begin{aligned} \Vert u_n-u\Vert _{H^1(\Omega ^{d_0/4})}\rightarrow 0,\quad \text {as}\ n\rightarrow +\infty . \end{aligned}$$
(2.28)

Finally, the proof is concluded by the continuity of the trace operator. \(\square \)

Remark 2.6

In the previous result, the convergence \(u_n\rightarrow u\) in \(L^2(\Sigma _N)\) can also be proved by the following argument: the trace operator is a linear continuous operator from \(H^1_{\Sigma _D}(\Omega \setminus C_n)\) to \(H^{\frac{1}{2}}(\Sigma _N)\) (and, analogously, from \(H^1_{\Sigma _D}(\Omega \setminus C)\) to \(H^{\frac{1}{2}}(\Sigma _N)\)), hence it is also continuous with respect to the weak topologies, see [19]. Moreover, since the embedding \(H^{\frac{1}{2}}(\Sigma _N) \hookrightarrow L^2(\Sigma _N)\) is compact, we find that \(u_n\rightarrow u\) in \(L^2(\Sigma _N)\).

As a consequence of the continuity of the boundary functional, some properties of the functional J(C) defined in (2.8) follow.

Proposition 2.7

For every \(\alpha >0\) there exists at least one solution of the minimization problem (2.8).

Proof

Let \(\{C_n\}_{n\ge 0}\subset \mathcal {C}\) be a minimizing sequence. Then there exists a positive constant M such that

$$\begin{aligned} J(C_n)\le M,\qquad \forall n, \end{aligned}$$
(2.29)

hence

$$\begin{aligned} \text {Per}(C_n) \le M,\qquad \forall n. \end{aligned}$$

By compactness (see Theorem 3.39 in [4]), there exists a set of finite perimeter \(C_0\) such that, possibly up to a subsequence,

$$\begin{aligned} |C_n\triangle C_0|\rightarrow 0,\qquad n\rightarrow \infty , \end{aligned}$$

where \(C_n\triangle C_0\) denotes the symmetric difference of the two sets. Moreover, since the sets \(C_n\) are compact, equibounded, and belong to \(\mathcal {C}\), there exists a further subsequence which converges in the Hausdorff metric to \(C_0\in \mathcal {C}\), see [39, Theorem 2.4.10]. Finally, by the lower semicontinuity of the perimeter functional (see Section 5.2.1, Theorem 1, in [34]) it follows that

$$\begin{aligned} \text {Per}(C_0)\le \liminf _{n\rightarrow \infty }\text {Per}(C_n). \end{aligned}$$

Using the continuity of the boundary functional, see (2.14), we also have

$$\begin{aligned} \int _{\Sigma _N}(u(C_n) - u_{meas})^2\,d\sigma (x)\rightarrow \int _{\Sigma _N}(u(C_0) - u_{meas})^2\,d\sigma (x),\qquad \text {as }\ n\rightarrow \infty . \end{aligned}$$

In conclusion, we find that

$$\begin{aligned} J(C_0)\le \liminf _{n\rightarrow \infty }J(C_n)=\lim _{n\rightarrow \infty }J(C_n)=\inf _{C\in \mathcal {C}}J(C), \end{aligned}$$

and the claim follows. \(\square \)
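The quantities driving the compactness argument above can be illustrated numerically. The following sketch is our own illustration, not part of the paper: it discretizes hypothetical disk-shaped cavities as pixel masks on a uniform grid of \([0,1]^2\) and approximates both the symmetric-difference measure \(|C_n\triangle C_0|\) and the Hausdorff distance \(d_H(C_n,C_0)\) for a sequence of disks whose centers drift toward the center of a limit disk.

```python
import numpy as np

def disk(n, cx, cy, r):
    """Pixel mask of a disk on a uniform n-by-n grid of [0,1]^2."""
    y, x = (np.mgrid[0:n, 0:n] + 0.5) / n
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

def sym_diff(a, b):
    """Approximation of |A triangle B| by counting pixel areas."""
    n = a.shape[0]
    return np.logical_xor(a, b).sum() / n ** 2

def hausdorff(a, b):
    """Brute-force Hausdorff distance between the two pixel sets."""
    pa = np.argwhere(a) / a.shape[0]
    pb = np.argwhere(b) / b.shape[0]
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

n = 60
C0 = disk(n, 0.5, 0.5, 0.25)
# a sequence C_k of equal disks whose centers converge to that of C_0
offsets = [0.1, 0.05, 0.025]
areas = [sym_diff(disk(n, 0.5 + t, 0.5, 0.25), C0) for t in offsets]
hds = [hausdorff(disk(n, 0.5 + t, 0.5, 0.25), C0) for t in offsets]
print(areas)  # decreasing toward 0
print(hds)    # close to the center offsets, decreasing
```

Both sequences shrink together here, consistently with the fact that, for the equibounded Lipschitz cavities in \(\mathcal {C}\), \(L^1\)-convergence of the indicators and Hausdorff convergence single out the same limit.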

We also prove stability with respect to the measured data.

Proposition 2.8

Solutions of (2.8) are stable with respect to perturbations of the data \(u_{meas}\), i.e., if \(u_n\rightarrow u_{meas}\) in \(L^2(\Sigma _N)\) as \(n\rightarrow \infty \) then the solutions \(C_n\) of (2.8) with datum \(u_n\) are such that, up to subsequences,

$$\begin{aligned} d_H(C_n,\widetilde{C})\rightarrow 0,\,\, \text { as }\ n\rightarrow \infty , \end{aligned}$$

where \(\widetilde{C}\in \mathcal {C}\) is a solution of (2.8), with datum \(u_{meas}\).

Proof

Using (2.8), we have that, for any n, \(C_n\) satisfies

$$\begin{aligned} \frac{1}{2} \int _{\Sigma _N}(u(C_n) - u_n)^2\,d\sigma (x)+\alpha \text {Per}(C_n)\le \frac{1}{2} \int _{\Sigma _N}(u(C) - u_n)^2d\sigma (x)+\alpha \text {Per}(C), \end{aligned}$$

for all \(C\in \mathcal {C}\). Therefore \(\text {Per}(C_n)\le M\) for some constant \(M>0\), and hence, possibly up to subsequences,

$$\begin{aligned} d_H(C_n,\widetilde{C})\rightarrow 0,\qquad n\rightarrow \infty , \end{aligned}$$

for some \(\widetilde{C}\in \mathcal {C}\), and

$$\begin{aligned} \text {Per}(\widetilde{C})\le \liminf _{n\rightarrow \infty }\text {Per}(C_n). \end{aligned}$$

Moreover, by the continuity of the solution of (2.4) with respect to C, see Theorem 2.5, we get

$$\begin{aligned} \begin{aligned} J(\widetilde{C})&\le \liminf _{n\rightarrow \infty }\left( \frac{1}{2} \int _{\Sigma _N}(u(C_n) - u_n)^2d\sigma (x)+\alpha \text {Per}(C_n)\right) \\&\le \lim _{n\rightarrow \infty }\left( \frac{1}{2} \int _{\Sigma _N}(u(C) - u_n)^2d\sigma (x)+\alpha \text {Per}(C)\right) \\&=\frac{1}{2} \int _{\Sigma _N}(u(C) - u_{meas})^2d\sigma (x)+\alpha \text {Per}(C), \end{aligned} \end{aligned}$$

for all \(C\in \mathcal {C}\). Summarizing, \(\widetilde{C}\in \mathcal {C}\) is a minimizer of the functional, hence the assertion follows. \(\square \)

Finally, we can prove that the solution of the minimization problem (2.8) converges to the unique solution of the inverse problem when the regularization parameter tends to zero.

Proposition 2.9

Let us assume that there exists a solution \(C^{\sharp }\in \mathcal {C}\) of the inverse problem corresponding to the datum \(u_{meas}\). Moreover, let \((\alpha (\eta ))_{\eta >0}\) be such that \(\alpha (\eta )=o(1)\) and \(\frac{\eta ^2}{\alpha (\eta )}\) is bounded as \(\eta \rightarrow 0\).

Furthermore, let \(C_{\eta }\) be a solution to the minimization problem (2.8) with \(\alpha =\alpha (\eta )\) and datum \(u_{\eta }\in L^2(\Sigma _N)\) satisfying \(\Vert u_{meas}-u_{\eta }\Vert _{L^2(\Sigma _N)}\le \eta \). Then

$$\begin{aligned} C_{\eta }\rightarrow C^{\sharp } \end{aligned}$$

in the Hausdorff metric, as \(\eta \rightarrow 0\).

Proof

From the definition of \(C_{\eta }\), it immediately follows that

$$\begin{aligned} \frac{1}{2} \int _{\Sigma _N}(u(C_{\eta }) - u_{\eta })^2d\sigma (x)+\alpha \text {Per}(C_{\eta })\le & {} \frac{1}{2} \int _{\Sigma _N}(u(C^{\sharp }) - u_{\eta })^2d\sigma +\alpha \text {Per}(C^{\sharp })\nonumber \\= & {} \frac{1}{2} \int _{\Sigma _N}(u_{meas} - u_{\eta })^2d\sigma +\alpha \text {Per}(C^{\sharp })\nonumber \\\le & {} \eta ^2+\alpha \text {Per}(C^{\sharp }). \end{aligned}$$
(2.30)

Straightforwardly, we find that

$$\begin{aligned} \text {Per}(C_{\eta })\le \frac{\eta ^2}{\alpha }+\text {Per}(C^{\sharp })\le M. \end{aligned}$$
(2.31)

Hence, up to subsequences, arguing as in Proposition 2.8, we get

$$\begin{aligned} d_H(C_{\eta }, C_0)\rightarrow 0,\qquad \text {as}\ \eta \rightarrow 0, \end{aligned}$$

for some \(C_0\in \mathcal {C}\). From (2.30) and (2.31), as \(\eta \rightarrow 0\), we find

$$\begin{aligned} \int _{\Sigma _N}(u(C_{\eta }) - u_{\eta })^2d\sigma \rightarrow 0, \end{aligned}$$

hence, also

$$\begin{aligned}&\int _{\Sigma _N}(u(C_{\eta }) - u_{meas})^2d\sigma (x)\\&\quad \le \int _{\Sigma _N}(u(C_{\eta }) - u_{\eta })^2d\sigma (x)+\int _{\Sigma _N}(u_{meas}- u_{\eta })^2d\sigma (x)\rightarrow 0. \end{aligned}$$

By the continuity result in Theorem 2.5 and using the last relation, we find that

$$\begin{aligned} u(C_0)=u_{meas},\qquad \text {on}\ \Sigma _N. \end{aligned}$$

Therefore, thanks to the uniqueness result of the inverse problem in Lipschitz domains (cf. [53]) we get \(C_0=C^{\sharp }\). \(\square \)

3 Reconstruction of Cavities—Filling the Void

From the numerical point of view, the minimization of the functional (2.8) is difficult because of its non-differentiability. A typical approach to overcome this issue is a further regularization of the functional, in which the perimeter is approximated by a Ginzburg-Landau type functional, see for example [18]. This approach is well known in the literature and has been applied in several contexts, see for example [3, 12, 14,15,16, 18, 25, 30, 35, 44, 48].

First, we note that Problem (2.8) is equivalent to the following formulation

$$\begin{aligned} \min _{v\in X_{0,1}}J(v), \hbox { where } J(v)=\frac{1}{2}\int _{\Sigma _N} |u(v)-u_{meas}|^2\, d\sigma (x)+\alpha TV(v), \end{aligned}$$
(3.1)

where \(X_{0,1}:=\{v\in BV(\Omega )\,:\, v=\chi _{C} \, \hbox { a.e. in }\Omega , \,C\in {{\mathcal {C}}}\}\), TV(v) is defined in (1.7), and \(\chi _C\) is the indicator function of C. Note that the space \(X_{0,1}\) is endowed with the norm \(\Vert {v}\Vert _{BV(\Omega )} = \Vert {v}\Vert _{L^1(\Omega )} + TV(v)\).
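The equivalence of (2.8) and (3.1) rests on the fact that, for an indicator function \(v=\chi _C\), the total variation TV(v) coincides with \(\text {Per}(C)\). A small numerical sketch of ours (a hypothetical pixel discretization using the anisotropic, grid-aligned TV, which matches the Euclidean perimeter exactly for axis-aligned sets): for the indicator of the square \([0.25,0.75]^2\), the discrete TV reproduces \(\text {Per}(C)=2\) and \(\Vert v\Vert _{BV(\Omega )}\) adds the \(L^1\)-norm \(|C|=0.25\).

```python
import numpy as np

def tv(v, h):
    """Anisotropic (grid-aligned) total variation of a pixel image."""
    return (np.abs(np.diff(v, axis=0)).sum()
            + np.abs(np.diff(v, axis=1)).sum()) * h

n = 200
h = 1.0 / n
y, x = (np.mgrid[0:n, 0:n] + 0.5) * h            # pixel centers in [0,1]^2
v = ((np.abs(x - 0.5) <= 0.25) & (np.abs(y - 0.5) <= 0.25)).astype(float)

per = tv(v, h)                    # perimeter of the square: 4 * 0.5 = 2
bv = v.sum() * h * h + tv(v, h)   # ||v||_BV = ||v||_L1 + TV(v)
print(per, bv)
```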

Remark 3.1

By compactness properties of \(BV(\Omega )\) (see, e.g., [4], Theorem 3.23), any uniformly bounded sequence in \(X_{0,1}\) admits a subsequence converging in \(L^1(\Omega )\) to an element of \(X_{0,1}\). In fact, let \(v_n\) be a sequence uniformly bounded in \(X_{0,1}\); then there exists, possibly up to a subsequence, \(v\in BV(\Omega )\) such that

$$\begin{aligned} v_n\rightarrow v\ \ \text {in}\ \ L^1(\Omega ) \Rightarrow v_n\rightarrow v \ \ \text {a.e. in}\ \ \Omega . \end{aligned}$$

Since \(v_n\) attains only the values 0 and 1, it follows that \(v\in X_{0,1}\).

Following the approach proposed in [18], we fill the cavity with a fictitious material with elastic properties that are different from the background. Specifically, we take an elasticity tensor \(\mathbb {C}_1:=\delta \mathbb {C}_0\), where \(\delta >0\) is sufficiently small. Therefore, the boundary value problem (2.1) is modified into

$$\begin{aligned} \left\{ \begin{aligned} \text {div}(\mathbb {C}_{\delta }(v) \widehat{\nabla } u_{\delta }(v))&= 0 \qquad \text {in}\ \Omega , \\ (\mathbb {C}_{\delta }(v) \widehat{\nabla }u_{\delta }(v)) \nu&= g \qquad \text {on } \Sigma _N,\\ u_{\delta }(v)&=0 \qquad \text {on } \Sigma _D, \end{aligned} \right. \end{aligned}$$
(3.2)

where

$$\begin{aligned} \mathbb {C}_{\delta }(v)= \mathbb {C}_0 + (\mathbb {C}_1 - \mathbb {C}_0)v,\quad \text {with}\quad \mathbb {C}_1=\delta \mathbb {C}_0. \end{aligned}$$
(3.3)

Here \(\mathbb {C}_0\) and \(\mathbb {C}_1\) are the elasticity tensors in \(\Omega \setminus C\) and C, respectively.

Remark 3.2

Thanks to Assumption 2.1, the fact that \(\delta >0\), and (3.3), the elasticity tensor \(\mathbb {C}_{\delta }(v)\) is strongly convex.
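Since \(\mathbb {C}_1=\delta \mathbb {C}_0\), (3.3) gives \(\mathbb {C}_{\delta }(v)=\big ((1-v)+\delta v\big )\mathbb {C}_0\), so the ellipticity constant degrades at worst by the factor \(\delta \). The following check of the strong-convexity bound is our own sketch, with hypothetical Lamé parameters \((\lambda _0,\mu _0)\) chosen only for illustration:

```python
import numpy as np

lam0, mu0 = 1.0, 0.5   # hypothetical background Lamé parameters
delta = 1e-3           # stiffness ratio of the fictitious material

def C0_apply(E):
    """Isotropic tensor in 2D: C0 E = 2*mu0*E + lam0*tr(E)*Id."""
    return 2 * mu0 * E + lam0 * np.trace(E) * np.eye(2)

def factor(v):
    """C_delta(v) = ((1 - v) + delta*v) * C0, since C1 = delta*C0."""
    return (1 - v) + delta * v

rng = np.random.default_rng(0)
ratios = []
for _ in range(500):
    A = rng.standard_normal((2, 2))
    E = 0.5 * (A + A.T)                 # random symmetric strain
    for v in (0.0, 0.5, 1.0):
        q = factor(v) * np.tensordot(C0_apply(E), E)   # C_delta E : E
        ratios.append(q / np.tensordot(E, E))
print(min(ratios))  # bounded below by delta * 2 * mu0
```

The smallest sampled ratio stays above \(2\delta \mu _0\), reflecting that the quadratic form \(\mathbb {C}_{\delta }(v)\widehat{E}:\widehat{E}\) never degenerates for \(\delta >0\).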

Remark 3.3

The following analysis can be generalized to a generic fourth-order elasticity tensor \(\mathbb {C}_1\) which is strongly convex and uniformly bounded, under the further hypothesis that, for every symmetric matrix \(\widehat{A}\),

$$\begin{aligned} \mathbb {C}_1\widehat{A}:\widehat{A} \le \mathbb {C}_0\widehat{A}:\widehat{A} \ \ \text { or }\ \, \mathbb {C}_0\widehat{A}:\widehat{A} \le \mathbb {C}_1\widehat{A}:\widehat{A}. \end{aligned}$$

Remark 3.4

When dealing with sequences, we will often use the simplified notation \(u_n:= u_{\delta }(v_n), \, u := u_{\delta }(v), \, \mathbb {C}_n := \mathbb {C}_{\delta }(v_n), \,\mathbb {C} := \mathbb {C}_{\delta }(v)\).

The elastic problem (3.2) has the following weak formulation:

$$\begin{aligned}&\text {Find } u_{\delta }(v)\in H^1_{\Sigma _D}(\Omega ) \text { solution to}\nonumber \\&\int _{\Omega } \mathbb {C}_{\delta }(v)\widehat{\nabla }{u_{\delta }(v)}:\widehat{\nabla }{\varphi }\, dx=\int _{\Sigma _N}g\cdot \varphi \, d\sigma (x), \qquad \forall \varphi \in H^1_{\Sigma _D}(\Omega ). \end{aligned}$$
(3.4)

Well-posedness of Problem (3.2) in \(H^1_{\Sigma _D}(\Omega )\) follows in the same way as for Problem (2.1); in addition,

$$\begin{aligned} \Vert u_{\delta }(v)\Vert _{H^1(\Omega )}\le c\Vert g\Vert _{L^2(\Sigma _N)}. \end{aligned}$$

We now approximate Problem (3.1) with the following one

$$\begin{aligned} \min _{v\in X_{0,1}}J_{\delta }(v), \hbox { where } J_{\delta }(v)=\frac{1}{2}\int _{\Sigma _N} |u_{\delta }(v)-u_{meas}|^2\, d\sigma (x)+\alpha TV(v), \end{aligned}$$
(3.5)

where \(u_{\delta }(v)\in H^1_{\Sigma _D}(\Omega )\) is the solution of Problem (3.2).

We prove the existence of minima of \(J_{\delta }(v)\) in \(X_{0,1}\), following the ideas of [14]. The proof is a consequence of the following property.

Proposition 3.5

Let \(\{v_n\} \subset X_{0,1}\) be strongly convergent in \(L^1(\Omega )\) to \(v \in X_{0,1}\). Then \(\{u_{\delta }(v_n)\lfloor _{\Sigma _N}\}\) strongly converges in \(L^2(\Sigma _N)\) to \(u_{\delta }(v)\lfloor _{\Sigma _N}\), i.e., the map \(F : v \rightarrow u_{\delta }(v)\lfloor _{\Sigma _N}\) is continuous from \(X_{0,1}\) to \(L^2(\Sigma _N)\) in the \(L^1\) topology.

Proof

Consider the weak formulation (3.4) associated to v and \(v_n\), respectively,

$$\begin{aligned}&\int _\Omega \mathbb {C}_{\delta }(v) \widehat{\nabla }u_{\delta }(v): \widehat{\nabla }\varphi = \int _{\Sigma _N} g\cdot \varphi , \quad \forall \varphi \in H_{\Sigma _D}^1(\Omega ), \\&\int _\Omega \mathbb {C}_{\delta }(v_n) \widehat{\nabla }u_{\delta }(v_n): \widehat{\nabla }\varphi = \int _{\Sigma _N} g\cdot \varphi , \quad \forall \varphi \in H_{\Sigma _D}^1(\Omega ). \end{aligned}$$

Subtracting the two equations and setting \(u_n:= u_{\delta }(v_n), \, u := u_{\delta }(v), \, \mathbb {C}_n := \mathbb {C}_{\delta }(v_n), \,\mathbb {C} := \mathbb {C}_{\delta }(v)\), we get

$$\begin{aligned}&\int _\Omega \mathbb {C}_n \widehat{\nabla }(u_n - u) : \widehat{\nabla }\varphi + \int _{\Omega }( \mathbb {C}_n - \mathbb {C}) \widehat{\nabla }u :\widehat{\nabla }\varphi =0, \quad \forall \varphi \in H_{\Sigma _D}^1(\Omega ). \end{aligned}$$

Thus, choosing \(\varphi = u_n - u\) and proceeding as in (2.5) to get \(H^1\)-estimates, we find

$$\begin{aligned} \Vert u_n-u\Vert ^2_{H^1(\Omega )}\le c \Vert (\mathbb {C}_n - \mathbb {C})\widehat{\nabla } u\Vert _{L^2(\Omega )} \Vert \widehat{\nabla } (u_n - u)\Vert _{L^2(\Omega )}, \end{aligned}$$

and then, by \(\mathbb {C}_n - \mathbb {C} = (\mathbb {C}_1 - \mathbb {C}_0)(v_n - v)\) and the uniform bound on the elasticity tensor, see Assumption 2.1, we derive

$$\begin{aligned} \Vert u_n-u\Vert _{H^1(\Omega )} \le c \Vert (\widehat{\nabla } u) (v_n - v)\Vert _{L^2(\Omega )}. \end{aligned}$$

Observe now that \(v_n - v \rightarrow 0\) in \(L^1(\Omega )\) as \(n\rightarrow +\infty \) so that, possibly up to a subsequence, \(v_n -v \rightarrow 0\) a.e. in \(\Omega \). Moreover, recalling that \(v_n\) and v are bounded and \(u \in H^1(\Omega )\), we deduce, by the dominated convergence theorem, that

$$\begin{aligned} \Vert u_n - u\Vert _{H^1(\Omega )} \rightarrow 0,\quad \text {as } n\rightarrow +\infty . \end{aligned}$$

Finally, the trace theorem implies

$$\begin{aligned} \Vert u_n - u\Vert _{L^2(\Sigma _N)} \rightarrow 0,\quad \text {as } n\rightarrow +\infty . \end{aligned}$$

\(\square \)

Proposition 3.6

\(J_{\delta }(v)\) admits a minimum \(v \in X_{0,1}\).

Proof

Observe that \(J_{\delta }(v)\) is bounded from below by definition and that \(J_{\delta }(v) < + \infty \) for every \(v\in X_{0,1}\). So, let \(\{v_n\} \subset X_{0,1}\) be a minimizing sequence of \(J_{\delta }(v)\), that is

$$\begin{aligned} J_{\delta }(v_n) \rightarrow \inf _{v \in X_{0,1}}J_{\delta }(v) = M, \quad \text {as } n\rightarrow +\infty . \end{aligned}$$

Then

$$\begin{aligned} 0\le J_{\delta }(v_n) \le 2M \text { and } 0\le \alpha TV(v_n) \le 2M. \end{aligned}$$

Hence, there exists a positive constant c, independent of n, such that

$$\begin{aligned} \Vert v_n\Vert _{BV(\Omega )} = \Vert v_n\Vert _{L^1(\Omega )} + TV(v_n) \le c. \end{aligned}$$
(3.6)

This implies that \(v_n\) is uniformly bounded in \(X_{0,1}\). Therefore, thanks to Remark 3.1, there exists \(v \in X_{0,1}\) such that \(v_n\rightarrow v\) in \(L^1(\Omega )\). Due to the lower semicontinuity of TV(v) with respect to the \(L^1\)-convergence, we have

$$\begin{aligned} TV(v) \le \liminf \limits _{n\rightarrow +\infty } TV(v_n) \end{aligned}$$

and, using Proposition 3.5, we get

$$\begin{aligned} \begin{aligned} J_{\delta }(v)&= \frac{1}{2}\int _{\Sigma _N}|u_{\delta }(v) - u_{meas}|^2 + \alpha TV(v) \\&\le \liminf \limits _{n\rightarrow +\infty } \left( \frac{1}{2} \int _{\Sigma _N}|u_{\delta }(v_n) - u_{meas}|^2 + \alpha TV(v_n) \right) \\&= \lim \limits _{n\rightarrow +\infty } J_{\delta }(v_n) = M. \end{aligned} \end{aligned}$$
(3.7)

\(\square \)

3.1 Phase-Field Relaxation

Proceeding as in [14, 30], we now consider a phase-field relaxation of the optimization problem (3.5). More precisely, we define a minimization problem for a differentiable cost functional on a convex subset of \(H^1(\Omega )\), namely on the set

$$\begin{aligned} \mathcal {K} = \{ v \in H^1(\Omega ): \ 0 \le v(x) \le 1 \ a.e. \ in \ \Omega , \ v(x) = 0 \ a.e. \ in \ \Omega ^{d_0/2}\}, \end{aligned}$$

where \(\Omega ^{d_0/2}\) has been defined in (2.3), and, for every \(\varepsilon >0\), we replace the total variation term with the following Modica–Mortola functional.

Problem 3.1

Given \(u_{meas} \in L^2(\Sigma _N)\) and \(\varepsilon ,\delta >0\), find

$$\begin{aligned} \min _{v \in \mathcal {K}} J_{\delta ,\varepsilon }(v), \,\,\, J_{\delta ,\varepsilon }(v) := \frac{1}{2} \Vert u_{\delta }(v)-u_{meas}\Vert _{L^2(\Sigma _N)}^2 + \widetilde{\alpha } \!\int _{\Omega }\Big ( \varepsilon |\nabla v|^2 + \frac{1}{\varepsilon }v(1-v)\Big ),\nonumber \\ \end{aligned}$$
(3.8)

\(u_{\delta }(v)\in H^1_{\Sigma _D}(\Omega )\) being the solution to (3.2), for \(v\in \mathcal {K}\), and \(\widetilde{\alpha }=\frac{4}{\pi }\alpha \), where \(4/\pi =(2\int _0^1 \sqrt{v(1-v)}\, dv)^{-1}\) is a rescaling parameter, see [1].
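The rescaling constant can be checked numerically: for the double-well \(W(v)=v(1-v)\) one has \(2\int _0^1\sqrt{W(v)}\,dv=\pi /4\), and the Modica–Mortola energy of the optimal one-dimensional transition profile equals this same constant independently of \(\varepsilon \). A small verification of ours, using the explicit profile \(v(x)=\sin ^2(x/2\varepsilon )\), which solves \(\varepsilon v'=\sqrt{W(v)}\):

```python
import math

def W(v):
    return v * (1 - v)

# 2 * integral_0^1 sqrt(W) dv by the midpoint rule: equals pi/4
N = 200000
s = 2 * sum(math.sqrt(W((k + 0.5) / N)) for k in range(N)) / N
print(s, math.pi / 4)          # rescaling constant: 1/s = 4/pi

# Modica-Mortola energy of the optimal profile v(x) = sin^2(x/(2*eps))
# over the transition layer x in [0, pi*eps]: also pi/4, for any eps
eps = 0.01
M = 50000
e = 0.0
for k in range(M):
    x = (k + 0.5) * math.pi * eps / M
    t = x / (2 * eps)
    v, dv = math.sin(t) ** 2, math.sin(t) * math.cos(t) / eps
    e += eps * dv ** 2 + W(v) / eps
e *= math.pi * eps / M
print(e)
```

Multiplying the Modica–Mortola term by \(\widetilde{\alpha }=\frac{4}{\pi }\alpha \) thus normalizes the per-unit-length interface energy to \(\alpha \), matching the perimeter penalization of (3.5).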

Remark 3.7

We expect \(\Gamma \)-convergence of the functional \(J_{\delta ,\varepsilon }\) to J, given in (3.1). However, this analysis is technically involved in the elastic context and remains an open problem that requires a dedicated study.

The following result holds.

Proposition 3.8

For any \(\delta ,\varepsilon >0\), Problem (3.8) admits a solution \(v=v_{\delta ,\varepsilon } \in \mathcal{K}\).

Proof

Let us fix \(\delta ,\varepsilon > 0\) and consider a minimizing sequence \(\{ v_n \} \subset \mathcal {K}\) for \(J_{\delta ,\varepsilon }(v)\) (we omit the dependence of \(v_n\) on \(\delta \) and \(\varepsilon \)). We have

$$\begin{aligned} J_{\delta ,\varepsilon }(v_n) \rightarrow \inf _{v \in \mathcal {K}}J_{\delta ,\varepsilon }(v) = M. \end{aligned}$$

Hence, by definition of minimizing sequence, \(0\le J_{\delta ,\varepsilon }(v_n) \le 2M\) for n sufficiently large, which implies that \(\Vert {\nabla v_n}\Vert _{L^2(\Omega )}^2\) is bounded. Moreover, recalling that \(v_n \in \mathcal {K}\) and \(0\le v_n (x)\le 1\) a.e. in \(\Omega \), we deduce that \(\Vert {v_n}\Vert _{L^2(\Omega )}\le M_1\), with \(M_1\) independent of n, and hence \(\Vert {v_n}\Vert _{H^1(\Omega )}\le M_2\), with \(M_2\) independent of n. By the weak compactness of bounded sets in \(H^1(\Omega )\), there exists \(v \in H^1(\Omega )\) such that, possibly up to a subsequence, \( v_n \rightharpoonup v\) in \(H^1(\Omega )\). Hence, \(v_n \rightarrow v\) strongly in \(L^2(\Omega )\) and \(v_n \rightarrow v\) a.e. in \(\Omega \). Since \(v_n(1-v_n)\le 1/4\), by the Lebesgue dominated convergence theorem we get

$$\begin{aligned} \int _{\Omega } v_{n}(1-v_{n}) \rightarrow \int _{\Omega } v(1-v). \end{aligned}$$

Moreover, by the lower semicontinuity of the \(H^1(\Omega )\) norm with respect to the weak convergence, we obtain

$$\begin{aligned} \begin{aligned} \Vert v\Vert _{H^1(\Omega )}^2&\le \liminf \limits _{n\rightarrow +\infty } \Vert {v_{n}}\Vert _{H^1(\Omega )}^2, \\ \Vert {v}\Vert _{L^2(\Omega )}^2 + \Vert {\nabla v}\Vert _{L^2(\Omega )}^2&\le \lim \limits _{n\rightarrow +\infty } \Vert {v_{n}}\Vert _{L^2(\Omega )}^2 + \liminf \limits _{n\rightarrow +\infty } \Vert {\nabla v_{n}}\Vert _{L^2(\Omega )}^2\\&= \Vert {v}\Vert _{L^2(\Omega )}^2 + \liminf \limits _{n\rightarrow +\infty } \Vert {\nabla v_{n}}\Vert _{L^2(\Omega )}^2,\\ \Vert {\nabla v}\Vert _{L^2(\Omega )}^2&\le \liminf \limits _{n\rightarrow +\infty } \Vert {\nabla v_{n}}\Vert _{L^2(\Omega )}^2. \end{aligned} \end{aligned}$$

By the last inequality and the convergence of \(v_n\) to v a.e., by the use of Proposition 3.5 and the fact that \(v_n\) is a minimizing sequence, we have

$$\begin{aligned} \begin{aligned} J_{\delta ,\varepsilon }(v)&= \frac{1}{2} \Vert u_{\delta }(v)-u_{meas}\Vert _{L^2(\Sigma _N)}^2 + \widetilde{\alpha } \int _{\Omega }\left( \varepsilon |\nabla v|^2 + \frac{1}{\varepsilon }v(1-v)\right) \\&\le \liminf \limits _{n\rightarrow +\infty }\left( \frac{1}{2} \Vert u_{\delta }(v_n)-u_{meas}\Vert _{L^2(\Sigma _N)}^2 + \widetilde{\alpha } \int _{\Omega }\left( \varepsilon |\nabla v_n|^2 + \frac{1}{\varepsilon }v_n(1-v_n)\right) \right) \\&=\lim \limits _{n\rightarrow +\infty } J_{\delta ,\varepsilon }(v_n) = M. \end{aligned} \end{aligned}$$

Finally, by pointwise convergence, we know that \(0 \le v \le 1\) a.e. in \(\Omega \) and \(v = 0\) a.e. in \(\Omega ^{d_0/2}\), so that \(v\in \mathcal {K}\). Hence, v is a minimizer of \(J_{\delta ,\varepsilon }\) in \(\mathcal {K}\). \(\square \)

3.2 Necessary Optimality Conditions

In this section we provide an expression for the first order necessary optimality condition associated with the minimization problem (3.8), formulated as a variational inequality involving the Fréchet derivative of \(J_{\delta ,\varepsilon }\).

Proposition 3.9

Define the map \(F:\mathcal {K} \rightarrow H^1(\Omega ), \, F(v) = u_{\delta }(v)\), where \(u_{\delta }(v)\) is the solution to (3.2). Then the map F and the functional \(J_{\delta ,\varepsilon }\) (for every \(\delta , \varepsilon >0\)) are Fréchet differentiable on \(\mathcal {K} \subset L^{\infty }(\Omega ) \cap H^1(\Omega )\).

Moreover, any minimizer \(v_{\delta ,\varepsilon }\) of \(J_{\delta ,\varepsilon }\) satisfies the variational inequality

$$\begin{aligned} J'_{\delta ,\varepsilon }(v_{\delta ,\varepsilon })[\omega - v_{\delta ,\varepsilon }] \ge 0, \qquad \forall \omega \in \mathcal {K}, \end{aligned}$$
(3.9)

where

$$\begin{aligned} J'_{\delta ,\varepsilon }(v)[\vartheta ]= & {} \int _\Omega \vartheta (\mathbb {C}_0 - \mathbb {C}_1) \widehat{\nabla }u_{\delta }(v) : \widehat{\nabla }p_{\delta }(v) + 2 \widetilde{\alpha } \varepsilon \int _{\Omega }\nabla v \cdot \nabla \vartheta \nonumber \\&+ \frac{\widetilde{\alpha }}{\varepsilon }\int _{\Omega }(1-2v)\vartheta . \end{aligned}$$
(3.10)

Here \(\vartheta \in \mathcal {K}-v=\{z \ s.t. \ z+v \in \mathcal {K}\}\) and \(p_{\delta }\in H_{\Sigma _D}^1(\Omega )\) is the solution to the adjoint problem

$$\begin{aligned}&\int _{\Omega } \mathbb {C}_{\delta }(v) \widehat{\nabla }p_{\delta } : \widehat{\nabla }\psi \nonumber \\&\quad = \int _{\Sigma _N}{(u_{\delta }(v)-u_{meas})\cdot \psi }, \qquad \forall \psi \in H_{\Sigma _D}^1(\Omega ). \end{aligned}$$
(3.11)

Proof

First we prove that F is Fréchet differentiable in \(L^{\infty }(\Omega )\). More precisely,

$$\begin{aligned} F'(v)[\vartheta ] = u^{\sharp }(v), \text { for }\vartheta \in L^{\infty }(\Omega )\cap (\mathcal {K}-v), \end{aligned}$$

where \(u^{\sharp }(v)\) is the solution in \(H_{\Sigma _D}^1(\Omega )\) of

$$\begin{aligned}&\int _{\Omega } \mathbb {C}_{\delta }(v)\widehat{\nabla }u^{\sharp }(v):\widehat{\nabla }\varphi \nonumber \\&\quad = \int _{\Omega } \vartheta (\mathbb {C}_0 - \mathbb {C}_1)\widehat{\nabla }u_{\delta }(v) : \widehat{\nabla }\varphi , \quad \forall \varphi \in H_{\Sigma _D}^1(\Omega ), \end{aligned}$$
(3.12)

namely,

$$\begin{aligned} \Vert {F(v+\vartheta ) - F(v) - u^{\sharp }(v)}\Vert _{H^1(\Omega )} = o(\Vert {\vartheta }\Vert _{L^{\infty }(\Omega )}). \end{aligned}$$
(3.13)

To this aim, we first show that

$$\begin{aligned}&\Vert {u_{\delta }(v+\vartheta ) - u_{\delta }(v)}\Vert _{H^1(\Omega )} \\&\quad \le c \Vert {\vartheta }\Vert _{L^{\infty }(\Omega )}, \, \text { for }\vartheta \in L^{\infty }(\Omega )\cap (\mathcal {K}-v). \end{aligned}$$

Indeed, the difference \(u_{\delta }(v+\vartheta )-u_{\delta }(v)\) satisfies

$$\begin{aligned} \begin{aligned}&\int _{\Omega }\mathbb {C}_{\delta }(v + \vartheta ) \widehat{\nabla }(u_{\delta }(v+\vartheta )-u_{\delta }(v)) : \widehat{\nabla }\varphi \\&\quad + \int _{\Omega }(\mathbb {C}_{\delta }(v + \vartheta ) - \mathbb {C}_{\delta }(v)) \widehat{\nabla }u_{\delta }(v) : \widehat{\nabla }\varphi = 0, \quad \forall \varphi \in H_{\Sigma _D}^1(\Omega ). \end{aligned} \end{aligned}$$
(3.14)

Taking \(\varphi = u_{\delta }(v+\vartheta )-u_{\delta }(v)\) and recalling that \(\mathbb {C}_{\delta }(v + \vartheta ) - \mathbb {C}_{\delta }(v) = (\mathbb {C}_1 - \mathbb {C}_0)\vartheta \), we obtain

$$\begin{aligned} \begin{aligned}&\int _{\Omega }\mathbb {C}_{\delta }(v + \vartheta ) \widehat{\nabla }(u_{\delta }(v+\vartheta )-u_{\delta }(v)):\widehat{\nabla }(u_{\delta }(v+\vartheta )-u_{\delta }(v)) \\&\quad = - \int _{\Omega } \vartheta (\mathbb {C}_1 - \mathbb {C}_0)\widehat{\nabla }u_{\delta }(v) : \widehat{\nabla }(u_{\delta }(v+\vartheta )-u_{\delta }(v)). \end{aligned} \end{aligned}$$
(3.15)

Hence, by using the assumptions on the elasticity tensors, Korn and Poincaré inequalities, and the fact that \(v + \vartheta \in \mathcal {K}\), we obtain

$$\begin{aligned} \begin{aligned}&\Vert {u_{\delta }(v+\vartheta ) - u_{\delta }(v)}\Vert _{H^1(\Omega )} \le c\Vert {\vartheta }\Vert _{L^\infty (\Omega )}\Vert {\widehat{\nabla } u_{\delta }(v)}\Vert _{L^2(\Omega )} \\&\quad \le c\Vert {\vartheta }\Vert _{L^\infty (\Omega )}\Vert {u_{\delta }(v)}\Vert _{H^1(\Omega )} \le c\Vert {\vartheta }\Vert _{L^\infty (\Omega )}\Vert g\Vert _{L^2(\Sigma _N)}\le c\Vert {\vartheta }\Vert _{L^\infty (\Omega )}. \end{aligned} \end{aligned}$$
(3.16)

We now estimate \(u_{\delta }(v+\vartheta )-u_{\delta }(v) - u^{\sharp }(v)\). Subtracting (3.12) from (3.14) and setting \(\omega = u_{\delta }(v+\vartheta )-u_{\delta }(v)\), it holds that

$$\begin{aligned} \int _{\Omega }\mathbb {C}_{\delta }(v + \vartheta ) \widehat{\nabla }\omega : \widehat{\nabla }\varphi - \int _{\Omega }\mathbb {C}_{\delta }(v) \widehat{\nabla }u^{\sharp }(v) : \widehat{\nabla }\varphi = 0, \qquad \forall \varphi \in H_{\Sigma _D}^1(\Omega ), \end{aligned}$$
(3.17)

from which

$$\begin{aligned} \begin{aligned} \int _{\Omega }\mathbb {C}_{\delta }(v) \widehat{\nabla }(\omega - u^{\sharp }(v)) : \widehat{\nabla }\varphi&= - \int _{\Omega }(\mathbb {C}_{\delta }(v + \vartheta ) - \mathbb {C}_{\delta }(v))\widehat{\nabla }\omega : \widehat{\nabla }\varphi \\&= \int _{\Omega }\vartheta (\mathbb {C}_0 - \mathbb {C}_1)\widehat{\nabla }\omega : \widehat{\nabla }\varphi . \end{aligned} \end{aligned}$$
(3.18)

Choosing now \(\varphi = \omega - u^{\sharp }(v)\), we get

$$\begin{aligned}&\int _{\Omega }\mathbb {C}_{\delta }(v)\widehat{\nabla }(\omega - u^{\sharp }(v)):\widehat{\nabla }(\omega - u^{\sharp }(v)) \nonumber \\&\quad = \int _{\Omega }(\mathbb {C}_0 - \mathbb {C}_1)\vartheta \widehat{\nabla }\omega : \widehat{\nabla }(\omega - u^{\sharp }(v)), \end{aligned}$$
(3.19)

and again by the boundedness of the elasticity tensors and the use of Korn and Poincaré inequalities it follows

$$\begin{aligned} \Vert \omega - u^{\sharp }(v)\Vert _{H^1(\Omega )}= & {} \Vert u_{\delta }(v+ \vartheta ) - u_{\delta }(v) - u^{\sharp }(v)\Vert _{H^1(\Omega )} \nonumber \\\le & {} c \Vert \vartheta \Vert ^2_{L^\infty (\Omega )}, \end{aligned}$$
(3.20)

so that (3.13) holds and \(F^\prime (v)[\vartheta ] = u^{\sharp }(v)\).

We now prove that \(J_{\delta ,\varepsilon }\) is Fréchet differentiable. By the chain rule and the Fréchet differentiability of F, we compute the expression of \(J'_{\delta ,\varepsilon }(v)\), i.e.,

$$\begin{aligned} J'_{\delta ,\varepsilon }(v)[\vartheta ]= & {} \int _{\Sigma _N} (F(v)-u_{meas})\cdot F'(v)[\vartheta ] \nonumber \\&+ \widetilde{\alpha } \int _{\Omega }\Big ( 2\varepsilon \nabla v \cdot \nabla \vartheta + \frac{1}{\varepsilon }(1-2v)\vartheta \Big ), \end{aligned}$$
(3.21)

where, with abuse of notation, F(v) and \(F'(v)[\vartheta ]\) denote the trace of F(v) and \(F'(v)[\vartheta ]\) on \(\Sigma _N\), respectively. By the definition of the adjoint problem and of \(u^{\sharp }(v)\), we get

$$\begin{aligned} \begin{aligned} \int _{\Sigma _N} (F(v)-u_{meas})\cdot F'(v)[\vartheta ] =&\int _{\Sigma _N} (F(v)-u_{meas})\cdot u^{\sharp }(v) \\ =&\int _{\Omega }(\mathbb {C}_0 - \mathbb {C}_1)\vartheta \widehat{\nabla }F(v) : \widehat{\nabla }p_{\delta } \end{aligned} \end{aligned}$$
(3.22)

and hence

$$\begin{aligned} J_{\delta ,\varepsilon }'(v)[\vartheta ]= & {} \int _{\Omega }(\mathbb {C}_0 - \mathbb {C}_1)\vartheta \widehat{\nabla }F(v): \widehat{\nabla }p_{\delta } \\&+ \widetilde{\alpha } \int _{\Omega }\left( 2\varepsilon \nabla v \cdot \nabla \vartheta + \frac{1}{\varepsilon }(1-2v)\vartheta \right) . \end{aligned}$$

Finally, by standard arguments, since \(J_{\delta ,\varepsilon }\) is a continuous and Fréchet differentiable functional on a convex subset \(\mathcal {K}\) of the Banach space \(H^1(\Omega )\), the optimality conditions for the optimization problem (3.8) are expressed by the variational inequality (3.9). \(\square \)
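In practice, the variational inequality (3.9) is the condition enforced by projected-gradient-type schemes, where the projection onto the box constraint amounts to clipping v to \([0,1]\). The following toy sketch is our own: a smooth convex quadratic stands in for \(J_{\delta ,\varepsilon }\) (evaluating the true functional would require the elastic forward and adjoint solves), projected gradient descent is run over the box, and the discrete analogue of (3.9) is verified at the computed minimizer.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
B = rng.standard_normal((N, N))
A = B @ B.T + N * np.eye(N)       # SPD Hessian of the toy functional
b = rng.standard_normal(N)

def grad(v):                      # gradient of J(v) = 0.5 v.A v - b.v
    return A @ v - b

v = np.full(N, 0.5)
tau = 1.0 / np.linalg.norm(A, 2)  # step size below 1/Lipschitz
for _ in range(5000):
    v = np.clip(v - tau * grad(v), 0.0, 1.0)   # projection onto the box

# Check J'(v)[w - v] >= 0 for many admissible w in K = [0,1]^N
g = grad(v)
worst = min(g @ (rng.uniform(0.0, 1.0, N) - v) for _ in range(1000))
print(worst)  # nonnegative up to round-off
```

At the constrained minimizer, each component satisfies the complementarity pattern behind (3.9): the gradient vanishes on inactive coordinates and points outward on active ones, so every admissible direction is an ascent direction.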

4 Discretization and Reconstruction Algorithm

4.1 Convergence Analysis

Here, we assume that \(\Omega \) is a polygonal (\(d=2\)) or polyhedral (\(d=3\)) domain. Again, to simplify the notation, we write \(u:=u_{\delta }\) and \(p:=p_{\delta }\).

Let \((\mathcal {T}_h)_{0<h\le h_0}\) be a family of regular triangulations of \(\Omega \) and define

$$\begin{aligned}&\mathcal {V}_h := \{ v_h \in C({\overline{\Omega }}) \, : \, v_h|_\mathcal {T} \in \mathcal {P}_1(\mathcal {T}), \, \forall \, \mathcal {T} \in \mathcal {T}_h\}, \end{aligned}$$
(4.1)

where \(\mathcal {P}_1(\mathcal {T})\) is the set of polynomials of first degree on \(\mathcal {T}\), and

$$\begin{aligned}&\mathcal {K}_h := \mathcal {V}_h \cap \mathcal {K} , \nonumber \\&\mathcal {V}_{h,\Sigma _D}:=\mathcal {V}_{h}\cap H^1_{\Sigma _D}(\Omega ). \end{aligned}$$
(4.2)

For every \(h>0\), we set \(u_h:=(u_{\delta })_h:\mathcal {K}\rightarrow \mathcal {V}_{h,\Sigma _D}\), where \(u_h(v)\) is the solution to

$$\begin{aligned}&\int _\Omega \mathbb {C}_{\delta }(v) \widehat{\nabla }u_h(v): \widehat{\nabla }\varphi _h = \int _{\Sigma _N} g_h\cdot \varphi _h, \forall \varphi _h \in \mathcal {V}_{h,\Sigma _D}. \end{aligned}$$
(4.3)

Here \(g_h\) is a piecewise linear, continuous approximation of g such that \(g_h\rightarrow g\) in \(L^2(\Sigma _N)\) as \(h\rightarrow 0\).

As in [30], one can show that for every \(v\in \mathcal {K}\) there exists a sequence \(v_h\in \mathcal {K}_h\) such that \(v_h \rightarrow v\) in \(H^1(\Omega )\). Most of the following results are adaptations to the elasticity system of those presented in [30] for a scalar equation; hence we omit some of the proofs.

The following lemma is a consequence of the continuity and coercivity of the bilinear form on the left-hand side of (4.3) and Céa’s Lemma (see, e.g., [14]).

Lemma 4.1

Let \(g\in L^2(\Sigma _N)\). Then, \(\forall \, v\in \mathcal {K}\), \(u_h(v) \rightarrow u(v)\) strongly in \(H^1(\Omega )\) as \(h\rightarrow 0\).
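The convergence stated in Lemma 4.1 is the standard Céa-type behavior of conforming P1 approximations. As a one-dimensional scalar analogue (a sketch of ours: the Poisson problem \(-u''=1\) on \((0,1)\) with \(u(0)=0\), \(u'(1)=0\) plays the role of the mixed Dirichlet/Neumann problem; it is not the elasticity system), the \(H^1\)-seminorm error of the P1 solution decays like O(h):

```python
import numpy as np

def solve_p1(n):
    """P1 FEM for -u'' = 1 on (0,1), u(0) = 0, u'(1) = 0, n elements."""
    h = 1.0 / n
    K = np.zeros((n, n))   # stiffness on free nodes 1..n (node 0 fixed)
    F = np.zeros(n)
    for e in range(n):     # element e has global nodes e and e+1
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        fe = np.array([0.5, 0.5]) * h            # load f = 1
        idx = (e - 1, e)                         # free-node numbering
        for a, i in enumerate(idx):
            if i < 0:
                continue
            F[i] += fe[a]
            for c, j in enumerate(idx):
                if j >= 0:
                    K[i, j] += ke[a, c]
    return np.concatenate([[0.0], np.linalg.solve(K, F)])

def h1_error(uh):
    """||(u - u_h)'||_{L2} against the exact u(x) = x - x^2/2."""
    n = len(uh) - 1
    h = 1.0 / n
    err2 = 0.0
    for e in range(n):
        s = (uh[e + 1] - uh[e]) / h
        g = lambda x: (1.0 - x - s) ** 2         # (u' - u_h')^2
        x0, x1 = e * h, (e + 1) * h
        err2 += h / 6 * (g(x0) + 4 * g(0.5 * (x0 + x1)) + g(x1))
    return err2 ** 0.5

errs = [h1_error(solve_p1(n)) for n in (8, 16, 32)]
print(errs)  # each refinement halves the error: O(h) convergence
```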

Next we state a result concerning the continuity of \(u_h\) in the space \(\mathcal {V}_{h,\Sigma _D}\).

Proposition 4.2

Let \(h_k\) and \(v_{h_k}\) be two sequences such that \(\lim \limits _{k\rightarrow +\infty } h_k=0\) and \(v_{h_k}\in \mathcal {K}_{h_k}\) with \(v_{h_k}\rightarrow v\) in \(L^1(\Omega )\). Then \(u_{h_k}(v_{h_k})\rightarrow u(v)\) in \(H^1_{\Sigma _D}(\Omega )\) as \(k\rightarrow +\infty \).

Proof

The proof follows by arguing as in Lemma 3.1 of [30]. \(\square \)

Let \(J_{\delta ,\varepsilon ,h}:\mathcal {K}_h\rightarrow \mathbb {R}\) be the approximation to \(J_{\delta ,\varepsilon }\) defined as follows

$$\begin{aligned} J_{\delta ,\varepsilon ,h}(v_h):= & {} \frac{1}{2}\Vert u_h(v_h)-u_{meas,h}\Vert ^2_{L^2(\Sigma _N)}\nonumber \\&+\widetilde{\alpha }\int _{\Omega }\Big (\varepsilon |\nabla v_h|^2+\frac{1}{\varepsilon }v_h(1-v_h)\Big ), \end{aligned}$$
(4.4)

where we assume that \(u_{meas,h}\rightarrow u_{meas}\) in \(L^2(\Sigma _N)\) as \(h\rightarrow 0\). As in Theorem 3.2 of [30], we can show the following result.

Theorem 4.3

There exists \(v_h\in \mathcal {K}_h\) such that \(J_{\delta ,\varepsilon ,h}(v_h)=\min _{\eta _h\in \mathcal {K}_h} J_{\delta ,\varepsilon ,h}(\eta _h)\). Moreover, let \(h_k\) be such that \(\lim _{k\rightarrow +\infty } h_k=0\). Then every sequence \(v_{h_k}\) of such minimizers has a subsequence converging strongly in \(H^1(\Omega )\) and a.e. in \(\Omega \) to a minimum of \(J_{\delta ,\varepsilon }\).

In our numerical algorithm we approximately solve (3.8) and so we look for an admissible point \(v_h\in \mathcal {K}_h\) that satisfies the first order necessary condition

$$\begin{aligned} J'_{\delta ,\varepsilon ,h}(v_h)[\omega _h - v_h] \ge 0, \qquad \forall \omega _h \in \mathcal {K}_h, \end{aligned}$$
(4.5)

rather than trying to locate a global minimum of \(J_{\delta ,\varepsilon ,h}\). To this aim, we consider the discrete adjoint problem: find \(p_h:=(p_{\delta })_h\in \mathcal {V}_{h,\Sigma _D}\) such that

$$\begin{aligned}&\int _{\Omega } \mathbb {C}_{\delta }(v_h) \widehat{\nabla }p_h : \widehat{\nabla }\psi _h \nonumber \\&\quad = \int _{\Sigma _N}{(u_h(v_h)-u_{meas,h})\psi _h}, \qquad \forall \psi _h \in \mathcal {V}_{h,\Sigma _D}. \end{aligned}$$
(4.6)

Then, for \(v_h\in \mathcal {K}_h\), we can prove the discrete version of Proposition 3.9, where the discrete variational inequality reads:

$$\begin{aligned}&\int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\omega _h-v_h) \widehat{\nabla }u_h(v_h) : \widehat{\nabla }p_h + 2 \widetilde{\alpha } \varepsilon \int _{\Omega }\nabla v_h \cdot \nabla (\omega _h-v_h)\nonumber \\&\quad + \frac{\widetilde{\alpha }}{\varepsilon }\int _{\Omega }(1-2v_h)(\omega _h-v_h) \ge 0, \quad \forall \omega _h\in \mathcal {K}_h. \end{aligned}$$
(4.7)

Then, we can prove the following theorem:

Theorem 4.4

Let \(h_k\) be such that \(\lim _{k\rightarrow +\infty } h_k=0\) and \(v_{h_k}\) be a sequence satisfying (4.5). Then there exists a subsequence of \(v_{h_k}\) that converges strongly in \(H^1(\Omega )\) and a.e. in \(\Omega \) to a solution v of (3.9).

Proof

We set \(v_k:=v_{h_k}\), \(u_k:=u_{h_k}(v_{h_k})\) and \(p_k:=p_{h_k}(v_{h_k})\). Testing (4.6) with \(\psi _h=p_k\) we get

$$\begin{aligned} \int _{\Omega } \mathbb {C}_{\delta }(v_k) \widehat{\nabla }p_k :\widehat{\nabla }p_k = \int _{\Sigma _N}{(u_k-u_{meas,k})\cdot p_k}, \end{aligned}$$

which yields, arguing as in (2.5) to get \(H^1\)-estimates,

$$\begin{aligned} c \Vert p_k\Vert ^2_{H^1(\Omega )} \le \Vert u_k-u_{meas,k}\Vert _{L^2(\Sigma _N)}\Vert p_k\Vert _{L^2(\Sigma _N)}. \end{aligned}$$

As the problem for \(u_k\) is well-posed with \(u_k\in H^1_{\Sigma _D}(\Omega )\) and \(u_{meas,k}\rightarrow u_{meas}\) (implying that \(\Vert u_{meas,k}\Vert _{L^2(\Sigma _N)}\) is uniformly bounded with respect to k), we get

$$\begin{aligned} \Vert p_k\Vert _{H^1(\Omega )}\le c. \end{aligned}$$

A similar result holds for \(\Vert u_k\Vert _{H^1(\Omega )}\). Therefore

$$\begin{aligned} \Vert p_k\Vert _{H^1(\Omega )} + \Vert u_k\Vert _{H^1(\Omega )} \le c,\quad \text {uniformly~ in~} k. \end{aligned}$$
(4.8)

From (4.7), testing with \(\omega _k=0\in \mathcal {K}_{h_k}\) and employing the bound \((1-2v_k)(\omega _k-v_k)\le \omega _k + 2v_k^2\), we get

$$\begin{aligned}&2\widetilde{\alpha } \varepsilon \int _\Omega \vert \nabla v_k\vert ^2 \le c \Vert \widehat{\nabla }u_k \Vert _{L^2(\Omega )} \Vert \widehat{\nabla }p_k \Vert _{L^2(\Omega )} \nonumber \\&\quad + \frac{2\widetilde{\alpha }}{\varepsilon }\vert \Omega \vert \le c_{\varepsilon }, \end{aligned}$$
(4.9)

where we used (4.8). Therefore, \(v_k\) is bounded in \(H^1(\Omega )\), hence there exists a subsequence (still denoted by \(v_k\)) and \(v\in \mathcal {K}\) such that

$$\begin{aligned}&v_k\rightharpoonup v\quad \text {in~}H^1(\Omega ),\qquad v_k\rightarrow v\quad \text {in~}L^2(\Omega ) ~~(\text {and~in~}L^1(\Omega )),\\&v_k\rightarrow v \quad \text {a.e.~in~}\Omega . \end{aligned}$$

Thanks to Proposition 4.2 we have

$$\begin{aligned} u_k\rightarrow u \quad \text {in~}H^1_{\Sigma _D}(\Omega ). \end{aligned}$$
(4.10)

Now, let \(p\in H^1_{\Sigma _D}(\Omega )\) be the solution of the continuous adjoint problem and let \(\hat{p}_k\in \mathcal {V}_{h_k,\Sigma _D}\) be such that \(\hat{p}_k\rightarrow p\) in \(H^{1}_{\Sigma _D}(\Omega )\). Taking the difference of the problems solved by p and \(p_k\), after some standard manipulation we get

$$\begin{aligned}&\int _{\Omega }\mathbb {C}_{\delta }(v_k)\widehat{\nabla }(p_k- \hat{p}_k):\widehat{\nabla }\psi \\&\quad = \int _{\Omega }\mathbb {C}_{\delta }(v_k)\widehat{\nabla }(p- \hat{p}_k):\widehat{\nabla }\psi + \int _{\Omega }(\mathbb {C}_{\delta }(v)-\mathbb {C}_{\delta }(v_k))\widehat{\nabla }p:\widehat{\nabla }\psi \\&\qquad \ + \int _{\Sigma _N}(u_k-u)\cdot \psi + \int _{{\Sigma }_N}(u_{meas}-u_{meas,k})\cdot \psi , \end{aligned}$$

for all \(\psi \in \mathcal {V}_{h_k,\Sigma _D}\). Taking \(\psi =p_k-\hat{p}_k\), we get

$$\begin{aligned}&\Vert p_k-\hat{p}_k \Vert _{H^1(\Omega )}\le c \Big (\Vert \widehat{\nabla }(p-\hat{p}_k)\Vert _{L^2(\Omega )} + \int _{\Omega } \vert v-v_k\vert ^2 \vert \widehat{\nabla }p\vert ^2\nonumber \\&\quad +\Vert u_k-u\Vert _{L^2(\Sigma _N)} + \Vert u_{meas}-u_{meas,k}\Vert _{L^2(\Sigma _N)}\Big ). \end{aligned}$$
(4.11)

By hypothesis, we have \( \Vert u_{meas}-u_{meas,k}\Vert _{L^2(\Sigma _N)}\rightarrow 0\) and \(\Vert p-\hat{p}_k\Vert _{H^1(\Omega )}\rightarrow 0\) as \(k\rightarrow +\infty \). Hence, invoking Proposition 4.2 and observing that \( \int _{\Omega } \vert v-v_k\vert ^2 \vert \widehat{\nabla }p\vert ^2 \rightarrow 0\) as \(k\rightarrow +\infty \) (by the dominated convergence theorem, since \(v_k\rightarrow v\) a.e. in \(\Omega \) and \(0\le v,v_k\le 1\)), we deduce \(p_k\rightarrow p\) in \(H^1(\Omega )\).

Next, we have to show that v satisfies the variational inequality (3.9). Given \(\omega \in \mathcal {K}\), there exists a sequence \(\hat{\omega }_k\in \mathcal {K}_{h_k}\) such that \(\hat{\omega }_k\rightarrow \omega \) in \(H^1(\Omega )\) and a.e. in \(\Omega \). Then, from the discrete variational inequality (4.7) we have for \(v_k\) that

$$\begin{aligned}&\int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\hat{\omega }_k-v_k) \widehat{\nabla }u_k : \widehat{\nabla }p_k + 2 \widetilde{\alpha } \varepsilon \int _{\Omega }\nabla v_k \cdot \nabla (\hat{\omega }_k-v_k)\nonumber \\&\quad + \frac{\widetilde{\alpha }}{\varepsilon }\int _{\Omega }(1-2v_k)(\hat{\omega }_k-v_k)\ge 0. \end{aligned}$$
(4.12)

Now, observe that

$$\begin{aligned}&\int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\hat{\omega }_k-v_k) \widehat{\nabla }u_k : \widehat{\nabla }p_k- \int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\omega -v) \widehat{\nabla }u : \widehat{\nabla }p\nonumber \\&\quad = \int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\hat{\omega }_k-v_k) [\widehat{\nabla }(u_k-u): \widehat{\nabla }p_k + \widehat{\nabla }u: \widehat{\nabla }(p_k-p)] \nonumber \\&\qquad + \int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)[(\hat{\omega }_k-\omega )-(v_k-v)] \widehat{\nabla }u:\widehat{\nabla }p. \end{aligned}$$
(4.13)

The first integral on the right-hand side converges to zero by (4.10) and the convergence \(p_k\rightarrow p\) in \(H^1(\Omega )\). The second integral also converges to zero: since \(\hat{\omega }_k\rightarrow \omega \) and \(v_k\rightarrow v\) a.e. in \(\Omega \) with values in \([0,1]\), the dominated convergence theorem applies. Hence, from (4.13), we obtain

$$\begin{aligned} \int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\hat{\omega }_k-v_k) \widehat{\nabla }u_k : \widehat{\nabla }p_k- \int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\omega -v) \widehat{\nabla }u : \widehat{\nabla }p \rightarrow 0, \nonumber \\ \end{aligned}$$
(4.14)

as \(k\rightarrow +\infty \). Then, inserting (4.14) into (4.12) and using the weak convergence \(v_k \rightharpoonup v\) in \(H^1(\Omega )\) together with the weak lower semicontinuity of the norm, we find

$$\begin{aligned} \Vert \nabla v\Vert ^2_{L^2(\Omega )}\le \liminf _{k\rightarrow +\infty } \Vert \nabla v_k\Vert ^2_{L^2(\Omega )}. \end{aligned}$$

Moreover, noticing that \(\int _\Omega v_k \hat{\omega }_k\rightarrow \int _\Omega v \omega \) as \(k\rightarrow +\infty \), we get

$$\begin{aligned}&\int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\omega -v) \widehat{\nabla }u : \widehat{\nabla }p + 2 \widetilde{\alpha } \varepsilon \int _{\Omega }\nabla v \cdot \nabla (\omega -v) \nonumber \\&\qquad + \frac{\widetilde{\alpha }}{\varepsilon }\int _{\Omega }(1-2v)(\omega -v)\nonumber \\&\quad \ge \liminf _{k\rightarrow +\infty }\Big \{ \int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\hat{\omega }_k-v_k) \widehat{\nabla }u_k : \widehat{\nabla }p_k \nonumber \\&\qquad + 2 \widetilde{\alpha } \varepsilon \int _{\Omega }\nabla v_k \cdot \nabla (\hat{\omega }_k-v_k) + \frac{\widetilde{\alpha }}{\varepsilon }\int _{\Omega }(1-2v_k)(\hat{\omega }_k-v_k) \Big \}\ge 0. \end{aligned}$$
(4.15)

Finally, it remains to show that \(v_k\rightarrow v\) strongly in \(H^1(\Omega )\). We choose a sequence \(\hat{v}_k\in \mathcal {K}_{h_k}\) such that \(\hat{v}_k\rightarrow v\) in \(H^1(\Omega )\); using the discrete variational inequality (4.7) with \(\omega _{h_k}=\hat{v}_k\), we readily obtain \(\nabla v_k\rightarrow \nabla v\) in \(L^2(\Omega )\), which implies the result. \(\square \)

4.2 Reconstruction Algorithm

In order to solve the discrete optimization problem we follow the method used in [14] and [30]. The method is based on solving the following parabolic obstacle problem. For \(\delta ,\varepsilon >0\) fixed, let v be the solution to

$$\begin{aligned}&\int _\Omega \partial _t v(\omega -v) + J'_{\delta ,\varepsilon }(v)[\omega -v]\ge 0, \quad \forall \omega \in \mathcal {K},\ t\in (0,+\infty ),\\&v(\cdot ,0)=v_0\in \mathcal {K}. \end{aligned}$$

A straightforward computation shows that the value of the objective functional decreases in time along solutions. Hence, if the solution \(v(\cdot ,t)\) converges to an asymptotic state \(v_\infty \) as \(t\rightarrow +\infty \), we expect \(v_\infty \) to satisfy the continuous optimality condition (3.9).

We now discretize the above problem by using a semi-implicit time discretization scheme. We denote by \(\{ v_h^n\}_{n\in \mathbb {N}}\subset \mathcal {K}_h\) the sequence of approximations \(v_h^n\simeq v(\cdot ,t^n)\) obtained as follows:

$$\begin{aligned}&v_h^0=v_0\in \mathcal {K}_h\nonumber \\&v_h^{n+1}\in \mathcal {K}_h:~ \frac{1}{\tau _n}\int _{\Omega } (v_h^{n+1}-v_h^n)(\omega _h-v_h^{n+1})\nonumber \\&\quad + \int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\omega _h-v^{n+1}_h) \widehat{\nabla }u^n_h: \widehat{\nabla }p^n_h + 2 \widetilde{\alpha } \varepsilon \int _{\Omega }\nabla v^{n+1}_h \cdot \nabla (\omega _h-v^{n+1}_h)\nonumber \\&\quad + \frac{\widetilde{\alpha }}{\varepsilon }\int _{\Omega }(1-2v^n_h)(\omega _h-v^{n+1}_h)\ge 0, \quad \forall \omega _h\in \mathcal {K}_h,\ n\ge 0, \end{aligned}$$
(4.16)

where \(\tau _n\) is the time step, and \(u_h^n\), \(p_h^n\in \mathcal {V}_{h,\Sigma _D}\) are the discrete solutions of the forward problem (4.3) and adjoint problem (4.6), respectively, for \(v_h=v_h^n\). We now prove a monotonicity property of the method.
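As a hedged, self-contained illustration of the update (4.16), the following one-dimensional sketch treats the convex gradient term implicitly and the concave double-well derivative \((\widetilde{\alpha }/\varepsilon )(1-2v^n)\) and fidelity gradient explicitly, and imposes the constraint \(0\le v\le 1\) by pointwise projection (a cheap surrogate for the discrete variational inequality; the toy grid, function names, and parameter values are ours, not the paper's).

```python
import numpy as np

def laplacian_1d(n, h):
    """Finite-difference Laplacian (homogeneous Neumann boundary) on n points."""
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    return L / h**2

def semi_implicit_step(v, grad_fid, alpha, eps, tau, L):
    """One toy step of scheme (4.16): solve
        (I/tau + 2*alpha*eps*L) v_new = v/tau - grad_fid - (alpha/eps)*(1 - 2 v),
    i.e. gradient part implicit, double-well and fidelity parts explicit,
    then project onto the box [0, 1]."""
    n = v.size
    A = np.eye(n) / tau + 2.0 * alpha * eps * L
    rhs = v / tau - grad_fid - (alpha / eps) * (1.0 - 2.0 * v)
    return np.clip(np.linalg.solve(A, rhs), 0.0, 1.0)
```

With `grad_fid = 0` this is a projected gradient flow of the Ginzburg-Landau term alone; the convex-concave splitting (convex part implicit, concave part explicit) is what makes an energy decrease of the kind proved in Lemma 4.5 plausible for moderate \(\tau _n\).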

Lemma 4.5

For each \(n \in \mathbb {N}\), there exists a constant \(c^n>0\) such that, if \(\tau _n\le (1+c^n)^{-1}\), then

$$\begin{aligned} \Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )}+J_{\delta ,\varepsilon ,h}(v_h^{n+1})\le J_{\delta ,\varepsilon ,h}(v_h^n), \end{aligned}$$
(4.17)

where \(c^n=c^n(\Omega ,\delta ,\xi _0,h,\Vert \mathbb {C}_0 - \mathbb {C}_1\Vert _{L^\infty (\Omega )}, \Vert p_h^n\Vert _{W^{1,\infty }(\Omega )}, \Vert u_h^n\Vert _{W^{1,\infty }(\Omega )}).\)

Proof

Choosing \(\omega _h=v_h^n\) in (4.16), after some simple manipulations we obtain

$$\begin{aligned}&\frac{1}{\tau _n} \Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )} +\widetilde{\alpha }\varepsilon \Vert \nabla (v_h^{n+1}-v_h^n)\Vert ^2_{L^2(\Omega )}\\&\qquad + \frac{\widetilde{\alpha }}{\varepsilon }\Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )} +\widetilde{\alpha } \int _{\Omega }\left( \varepsilon \vert \nabla v_h^{n+1}\vert ^2 + \frac{1}{\varepsilon }v_h^{n+1}(1-v_h^{n+1})\right) \\&\qquad - \widetilde{\alpha } \int _{\Omega }\left( \varepsilon \vert \nabla v_h^{n}\vert ^2 + \frac{1}{\varepsilon }v_h^{n}(1-v_h^{n})\right) \\&\quad \le \int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(v_h^n-v^{n+1}_h) \widehat{\nabla }u_h^n : \widehat{\nabla }p^n_h. \end{aligned}$$

Adding and subtracting \(\frac{1}{2}\Vert {u_h^{n+1} - u_{meas,h}}\Vert _{{L^2(\Sigma _N)}}^2 \) and \(\frac{1}{2}\Vert {u_h^n- u_{meas,h}}\Vert _{{L^2(\Sigma _N)}}^2 \), we get

$$\begin{aligned}&\frac{1}{\tau _n} \Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )} +\widetilde{\alpha }\varepsilon \Vert \nabla (v_h^{n+1}-v_h^n)\Vert ^2_{L^2(\Omega )}+ \frac{\widetilde{\alpha }}{\varepsilon }\Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )}\\&\quad +J_{\delta ,\varepsilon ,h}(v_h^{n+1})- \frac{1}{2}\Vert {u_h^{n+1} - u_{meas,h}}\Vert _{{L^2(\Sigma _N)}}^2 -J_{\delta ,\varepsilon ,h}(v_h^{n})\\&\quad + \frac{1}{2}\Vert {u_h^n - u_{meas,h}}\Vert _{{L^2(\Sigma _N)}}^2 \le \int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(v_h^n-v^{n+1}_h) \widehat{\nabla }u_h^n : \widehat{\nabla }p^n_h, \end{aligned}$$

which implies

$$\begin{aligned}&\frac{1}{\tau _n} \Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )} +\widetilde{\alpha }\varepsilon \Vert \nabla (v_h^{n+1}-v_h^n)\Vert ^2_{L^2(\Omega )}+ \frac{\widetilde{\alpha }}{\varepsilon }\Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )}\nonumber \\&\qquad +J_{\delta ,\varepsilon ,h}(v_h^{n+1}) -J_{\delta ,\varepsilon ,h}(v_h^{n})\nonumber \\&\quad \le \int _\Omega (v_h^n-v^{n+1}_h)(\mathbb {C}_0 - \mathbb {C}_1) \widehat{\nabla }u_h^n : \widehat{\nabla }p^n_h + \frac{1}{2}\Vert u_h^{n+1} - u_h^n\Vert _{L^2(\Sigma _N)}^2\nonumber \\&\qquad +\int _{\Sigma _N} (u_h^{n+1} - u_h^n)\cdot (u_h^n-u_{meas,h})\nonumber \\&\quad = \int _\Omega (v_h^n-v^{n+1}_h)(\mathbb {C}_0 - \mathbb {C}_1) \widehat{\nabla }u_h^n : \widehat{\nabla }p^n_h + \frac{1}{2}\Vert u_h^{n+1} - u_h^n\Vert _{L^2(\Sigma _N)}^2\nonumber \\&\qquad +\int _\Omega \mathbb {C}_{\delta }(v_h^n)\widehat{\nabla }p^n_h: \widehat{\nabla }(u_h^{n+1}-u_h^n)\nonumber \\&\quad =\int _\Omega (\mathbb {C}_{\delta }(v_h^{n+1})-\mathbb {C}_{\delta }(v_h^{n}))\widehat{\nabla }u^n_h : \widehat{\nabla }p^n_h + \frac{1}{2}\Vert u_h^{n+1} - u_h^n\Vert _{L^2(\Sigma _N)}^2\nonumber \\&\qquad +\int _\Omega \mathbb {C}_{\delta }(v_h^n)\widehat{\nabla }p^n_h: \widehat{\nabla }(u_h^{n+1}-u_h^n), \end{aligned}$$
(4.18)

where in the last step we employed

$$\begin{aligned} \mathbb {C}_{\delta }(v_h^{n+1})-\mathbb {C}_{\delta }(v_h^{n})=(\mathbb {C}_0 - \mathbb {C}_1)(v_h^n-v^{n+1}_h). \end{aligned}$$

It is easy to verify that

$$\begin{aligned}&\int _\Omega (\mathbb {C}_{\delta }(v_h^{n+1})-\mathbb {C}_{\delta }(v_h^{n})) \widehat{\nabla }u_h^n : \widehat{\nabla }p^n_h + \frac{1}{2}\Vert u_h^{n+1} - u_h^n\Vert _{L^2(\Sigma _N)}^2\\&\qquad +\int _\Omega \mathbb {C}_{\delta }(v_h^n)\widehat{\nabla }p^n_h: \widehat{\nabla }(u_h^{n+1}-u_h^n)\\&\quad =\int _\Omega (\mathbb {C}_{\delta }(v_h^{n})-\mathbb {C}_{\delta }(v_h^{n+1})) \widehat{\nabla }( u_h^{n+1}- u_h^n) : \widehat{\nabla }p^n_h + \frac{1}{2}\Vert u_h^{n+1} - u_h^n\Vert _{L^2(\Sigma _N)}^2\nonumber \\&\qquad + \int _\Omega \mathbb {C}_{\delta }(v_h^{n+1}) \widehat{\nabla }u_h^{n+1}: \widehat{\nabla }p^n_h - \int _\Omega \mathbb {C}_{\delta }(v_h^{n}) \widehat{\nabla }u_h^n: \widehat{\nabla }p^n_h\\&\quad =\int _\Omega (\mathbb {C}_{\delta }(v_h^{n})-\mathbb {C}_{\delta }(v_h^{n+1})) \widehat{\nabla }( u_h^{n+1}- u_h^n) : \widehat{\nabla }p^n_h + \frac{1}{2}\Vert u_h^{n+1} - u_h^n\Vert _{L^2(\Sigma _N)}^2\\&\quad =: I_1, \end{aligned}$$

where the last step follows from the definition of the discrete adjoint problem.

Then, using the Cauchy–Schwarz inequality, the trace theorem and the fact that in finite dimensional spaces all norms are equivalent, we have

$$\begin{aligned} \vert I_1\vert\le & {} c_0^n \Vert \mathbb {C}_1 - \mathbb {C}_0\Vert _{L^\infty (\Omega )} \Vert \widehat{\nabla }p_h^n\Vert _{L^\infty (\Omega )} \Vert v_h^n - v_h^{n+1}\Vert _{L^2(\Omega )} \Vert \widehat{\nabla }(u_h^{n+1}-u_h^n) \Vert _{L^2(\Omega )} \nonumber \\&\quad + \frac{1}{2}\Vert u_h^{n+1} - u_h^n\Vert _{L^2(\Sigma _N)}^2\nonumber \\\le & {} c_1^{n} \Vert v_h^n - v_h^{n+1}\Vert _{L^2(\Omega )} \Vert u_h^{n+1}-u_h^n\Vert _{H^1(\Omega )}+ \frac{c_2^n}{2}\Vert u_h^{n+1} - u_h^n\Vert _{H^1(\Omega )}^2 \end{aligned}$$
(4.19)

where \(c_0^n=c_0^n(\Omega , h)\), \(c_1^n=c^n_1(\Vert \mathbb {C}_1 - \mathbb {C}_0\Vert _{L^\infty (\Omega )}, \Vert \widehat{\nabla }p_h^n\Vert _{L^\infty (\Omega )}, \Omega ,h)\) and \(c_2^n\) is the constant in the trace theorem.

Next, we bound \(\Vert u_h^{n+1} - u_h^n\Vert _{H^1(\Omega )}\) in terms of \(\Vert v_h^n - v_h^{n+1}\Vert _{L^2(\Omega )}\). To this aim, we subtract the equations for \(u_h^{n+1}\) and \(u_h^n\) (cf. (4.3)) and employ \(\varphi =u_h^{n+1}- u_h^n\) as a test function. A standard manipulation yields

$$\begin{aligned} \Vert u_h^{n+1} - u_h^n\Vert _{H^1(\Omega )} \le c_3^n \Vert v_h^n - v_h^{n+1}\Vert _{L^2(\Omega )}, \end{aligned}$$
(4.20)

with \(c_3^n=c^n_3(\Omega ,\delta ,\xi _0,h,\Vert \mathbb {C}_1 - \mathbb {C}_0\Vert _{L^\infty (\Omega )}, \Vert \widehat{\nabla }p_h^n\Vert _{L^\infty (\Omega )})\). Employing (4.20) into (4.19), we obtain

$$\begin{aligned} \vert I_1 \vert \le c_4^n \Vert v_h^{n+1} - v_h^n\Vert ^2_{L^2(\Omega )}, \end{aligned}$$
(4.21)

where \(c_4^n=c^n_4(\Omega ,\delta ,\xi _0,h,\Vert \mathbb {C}_1 - \mathbb {C}_0\Vert _{L^\infty (\Omega )}, \Vert \widehat{\nabla }p_h^n\Vert _{L^\infty (\Omega )},c_2^n)\).

Inserting (4.21) into (4.18), we deduce

$$\begin{aligned}&\frac{1}{\tau _n} \Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )} +\widetilde{\alpha }\varepsilon \Vert \nabla (v_h^{n+1}-v_h^n)\Vert ^2_{L^2(\Omega )}+ \frac{\widetilde{\alpha }}{\varepsilon }\Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )}\nonumber \\&\quad + J_{\delta ,\varepsilon ,h}(v_h^{n+1}) \le J_{\delta ,\varepsilon ,h}(v_h^{n}) + c_4^n \Vert v_h^{n+1} - v_h^n\Vert ^2_{L^2(\Omega )}. \end{aligned}$$
(4.22)

Now, since

$$\begin{aligned}&\frac{1}{\tau _n} \Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )} +\widetilde{\alpha }\varepsilon \Vert \nabla (v_h^{n+1}-v_h^n)\Vert ^2_{L^2(\Omega )}+ \frac{\widetilde{\alpha }}{\varepsilon }\Vert v_h^{n+1}-v_h^n\Vert ^2_{L^2(\Omega )}\nonumber \\&\quad \ge \frac{1}{\tau _n} \Vert v_h^{n+1} - v_h^n\Vert ^2_{L^2(\Omega )}, \end{aligned}$$
(4.23)

we get

$$\begin{aligned} \left( \frac{1}{\tau _n}-c_4^n\right) \Vert v_h^{n+1} -v_h^n\Vert ^2_{L^2(\Omega )} + J_{\delta ,\varepsilon ,h}(v_h^{n+1})\le J_{\delta ,\varepsilon ,h}(v_h^{n}). \end{aligned}$$
(4.24)

Finally, choosing \(\tau _n \le \frac{1}{1+c_4^n}\), the assertion of the lemma follows, just setting \(c^n:=c^n_4\). \(\square \)

We are now ready to state a convergence result for our numerical scheme.

Theorem 4.6

Let \(v_h^0\in \mathcal {K}_h\) be an initial guess. Then there exists a collection of time steps \(\tau _n\) such that \(0< \gamma \le \tau _n \le (1+c^n)^{-1} \), \(\forall n>0\), where \(c^n\) is the constant appearing in Lemma 4.5 and \(\gamma \) depends on the data and possibly on h. The corresponding sequence \(v_h^n\) generated by (4.16) has a convergent subsequence (still denoted by \(v_h^n\)) in \(W^{1,\infty }(\Omega )\) such that

$$\begin{aligned} v_h^n\rightarrow v_h, \qquad n\rightarrow +\infty , \end{aligned}$$

where \(v_h\in \mathcal {K}_h\) satisfies the discrete optimality condition

$$\begin{aligned} J'_{\delta ,\varepsilon ,h}(v_h)[\omega _h-v_h]\ge 0,\quad \forall \omega _h\in \mathcal {K}_h . \end{aligned}$$

Proof

Consider a collection of timesteps bounded by \((1+c^n)^{-1}\), for all \(n>0\). Employing Lemma 4.5, we have

$$\begin{aligned}&\sum _{n=0}^{+\infty } \Vert v_h^n-v_h^{n+1}\Vert ^2_{L^2(\Omega )} \le J_{\delta ,\varepsilon ,h}(v_h^0), \end{aligned}$$
(4.25)
$$\begin{aligned}&\sup _{n\in \mathbb {N}} J_{\delta ,\varepsilon ,h}(v_h^n)\le J_{\delta ,\varepsilon ,h}(v_h^0). \end{aligned}$$
(4.26)

Hence, the sequence \(v_h^n\) is bounded in \(H^1(\Omega )\) and it holds

$$\begin{aligned} \lim _{n\rightarrow +\infty } \Vert v_h^n-v_h^{n+1}\Vert ^2_{L^2(\Omega )} =0. \end{aligned}$$
(4.27)

From the weak formulations of the forward and adjoint problems, the previous relations imply that \(u_h^n\) and \(p_h^n\) are bounded in \(H^1(\Omega )\), hence in \(W^{1,\infty }(\Omega )\), since all norms are equivalent on the finite dimensional spaces. Therefore, by the definition of the constant \(c^n\) in the proof of Lemma 4.5, there exists a constant \(M>0\) such that \(c^n<M\) for all n, and hence a positive constant \(\gamma >0\), independent of n, such that \(\gamma \le (1+c^n)^{-1}\). Consequently, there exists a subsequence of \((v_h^n,u_h^n,p_h^n)\) (still denoted by the same symbol) such that

$$\begin{aligned} (v_h^n,u_h^n,p_h^n) \rightarrow (v_h,u_h,p_h)\quad \text {in~}W^{1,\infty }(\Omega ), \end{aligned}$$

and in particular

$$\begin{aligned} u_h^n\rightarrow u_h~\text {a.e.~in~}\Omega , \qquad p_h^n\rightarrow p_h~\text {a.e.~in~}\Omega . \end{aligned}$$

Hence, \(u_h\) is the solution of the discrete forward problem and \(p_h\) is the solution of the discrete adjoint problem. Finally, from (4.16) and \(\tau _n \ge \gamma \) we get

$$\begin{aligned}&\int _\Omega (\mathbb {C}_0 - \mathbb {C}_1)(\omega _h-v^{n+1}_h) \widehat{\nabla }u^n_h : \widehat{\nabla }p^n_h + 2 \widetilde{\alpha } \varepsilon \int _{\Omega }\nabla v^{n+1}_h \cdot \nabla (\omega _h-v^{n+1}_h)\\&\quad + \frac{\widetilde{\alpha }}{\varepsilon }\int _{\Omega }(1-2v^n_h)(\omega _h-v^{n+1}_h) \ge -\frac{C}{\gamma } \Vert v_h^{n+1}-v_h^n\Vert _{L^2(\Omega )} \Vert \omega _h -v_h^{n+1}\Vert _{L^2(\Omega )}. \end{aligned}$$

From (4.27) and recalling that \(v_h^n\rightarrow v_h\), we deduce that \(v_h\) satisfies the discrete optimality condition (4.5). \(\square \)

5 Numerical Examples

In this section we show the numerical results obtained by applying the Primal Dual Active Set Method (PDASM) to the variational inequality (4.16). This method was presented in [40] and later applied to the detection of conductivity inclusions in [30] and [14], for a linear and a semilinear elliptic equation, respectively. Primal dual active set methods are a very good choice in engineering applications due to their effectiveness and robustness (cf., e.g., [38]). Here, we show that, choosing the parameter \(\delta \) sufficiently small, we are able to reconstruct elastic cavities of different shapes. Given a tolerance \(\text {tol}>0\), the reconstruction algorithm is based on the following steps.

Algorithm 1 (pseudocode figure)
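The PDASM itself operates on the full discrete optimality system; as a hedged, self-contained illustration of the active-set mechanism only, the following sketch applies a primal-dual active-set iteration to a toy bound-constrained quadratic problem (the toy problem, the function name, and the parameter `c` are ours, not the paper's).

```python
import numpy as np

def pdas_obstacle(A, b, c=1.0, max_iter=50):
    """Primal-dual active-set iteration for the toy problem
        min 1/2 v^T A v - b^T v   subject to   v >= 0,
    whose KKT system is  A v - b = mu,  mu >= 0,  v >= 0,  mu^T v = 0.
    A is assumed symmetric positive definite."""
    n = b.size
    v = np.zeros(n)
    mu = A @ v - b                          # multiplier estimate
    for _ in range(max_iter):
        active = mu - c * v > 0             # predicted contact set (v = 0)
        inactive = ~active
        v_new = np.zeros(n)
        if inactive.any():
            idx = np.ix_(inactive, inactive)
            v_new[inactive] = np.linalg.solve(A[idx], b[inactive])
        mu_new = A @ v_new - b
        mu_new[inactive] = 0.0
        if np.array_equal(active, mu_new - c * v_new > 0):
            return v_new, mu_new            # active set has stabilized
        v, mu = v_new, mu_new
    return v, mu
```

At a fixed point of the set update, the pair \((v,\mu )\) satisfies the complementarity conditions exactly, which is the feature that makes these methods attractive for the constrained phase-field update.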

In the implementation of Algorithm 1, the numerical experiments are performed for \(d=2\) in the domain \(\Omega =(-1,1)^2\), using a triangular tessellation \(\mathcal {T}_h\) of \(\Omega \). As boundary measurements, we use synthetic data. They are generated by solving, via the Finite Element Method, the forward problem (2.1), with boundary conditions prescribed as in Fig. 2a on the square, with one or more cavities of given geometries. To avoid committing an inverse crime, we use a tessellation \(\mathcal {T}^{ref}_h\) which is more refined than \(\mathcal {T}_h\) on the common part outside the cavities (see Fig. 1 for an example of the two tessellations). After extracting the values of the solution of the forward problem on the boundary of the domain \(\Omega \) obtained with the mesh \(\mathcal {T}^{ref}_h\), we interpolate these values on the mesh \(\mathcal {T}_h\).
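The fine-to-coarse transfer just described can be sketched in one dimension; the finite-element forward solve is replaced by a placeholder boundary trace, and all names and grid sizes here are ours, not the paper's.

```python
import numpy as np

def boundary_trace_fine(n_fine):
    """Placeholder for the trace of the forward FE solution computed on
    the finer mesh T_h^ref (here simply a smooth profile)."""
    x = np.linspace(0.0, 1.0, n_fine)
    return x, np.sin(np.pi * x)

# synthetic data on the fine mesh ...
x_fine, u_fine = boundary_trace_fine(401)
# ... interpolated onto the coarser inversion mesh T_h, so that the forward
# and inverse solvers never share the same discretization (no inverse crime)
x_coarse = np.linspace(0.0, 1.0, 51)
u_meas = np.interp(x_coarse, x_fine, u_fine)
```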

Fig. 1 Example of the meshes adopted

We denote by \(u_{meas}\) the resulting boundary datum on the mesh \(\mathcal {T}_h\). We also mention that the triangular mesh is adaptively refined during the reconstruction procedure, using the values of \(\nabla v_h\), after an a-priori fixed number of iterations which depends on the specific numerical example. See, as an example, Fig. 2b, related to the reconstruction of a circular cavity.

Fig. 2 Geometrical setting and refinement of the mesh

In the reconstruction procedure, i.e. for the implementation of Algorithm 1, we assume that two different boundary measurements are available. In fact, in the context of inverse boundary value problems of this kind, it is natural to use \(N_g>1\) different boundary measurements \(u^{i}_{meas}\), \(i=1,\ldots ,N_g\), which clearly improves the numerical reconstruction. Thus, we consider a slight modification of the original optimization problem (3.8): assuming the knowledge of \(N_g\) different Neumann boundary data \(g^i\), \(i=1,\ldots ,N_g\), we consider

$$\begin{aligned}&\min _{v \in \mathcal {K}} J^{sum}_{\delta ,\varepsilon }(v), \,\,\, \nonumber \\&\quad J^{sum}_{\delta ,\varepsilon }(v) := \frac{1}{N_g}\sum _{i=1}^{N_g}\left( \frac{1}{2} \Vert u^{i}_{\delta }(v)-u^{i}_{meas}\Vert _{L^2(\Sigma _N)}^2\right) \nonumber \\&\qquad \qquad \qquad \quad + \widetilde{\alpha } \!\int _{\Omega }\Big ( \varepsilon |\nabla v|^2 + \frac{1}{\varepsilon }v(1-v)\Big ), \end{aligned}$$
(5.1)

where \(u^{i}_{\delta }(v)\in H^1_{\Sigma _D}(\Omega )\) is the solution to (3.2) with \(g=g^{i}\), for \(v\in \mathcal {K}\). The necessary optimality condition related to (5.1) can be obtained by reasoning as in the derivation of (3.10).

In Table 1, we collect some of the parameters utilized in most numerical tests. Possible changes in these values are highlighted in the text related to each specific experiment.

Finally, all the numerical experiments are performed choosing, as initial guess, the phase-field variable \(v_0=0\).

Table 1 Values of some parameters utilized in Algorithm 1

5.1 Numerical Experiments with \(N_g=2\) and Without Noise in the Measurements

Test 1: reconstruction of a circular cavity The elastic medium is described by the Lamé parameters \(\mu =0.2\) and \(\lambda =1\). The Neumann boundary conditions are \(g^1(x,y)=(0, \frac{1}{10}-\frac{3}{10}y)\) and \(g^2(x,y)=(-\frac{1}{2}x^2, y^2)\). We set the parameter \(\varepsilon =\frac{1}{16\pi }\). The mesh is refined with respect to the gradient of the phase-field variable every 1000 iterations. The algorithm stops after \(n=3544\) iterations. In Fig. 3 we show the numerical results at three different time steps.

Fig. 3 Test 1. Reconstruction of a circular cavity. Dotted line represents the target cavity

Test 2: reconstruction of a circular cavity—changing boundary conditions and Lamé parameters We repeat the numerical experiment of Test 1, showing how the results change when using different Neumann boundary conditions and Lamé parameters. We report in the captions of Fig. 4 the selected parameters and data, as well as the number of time steps needed to reach the tolerance. Note that the three experiments consider different values of the Poisson coefficient \(\nu :=\frac{\lambda }{2(\lambda +\mu )}\), namely \(\nu =\frac{1}{4}\), \(\nu =\frac{1}{3}\), and \(\nu =-\frac{1}{18}\), respectively. In the three numerical examples of Fig. 4, the mesh is refined every 1500, 1000, and 2000 iterations, respectively.
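The Poisson coefficient is obtained from the Lamé parameters by a one-line formula; the following check verifies the Test 1 pair and two generic identities (the Test 2 parameter pairs themselves are given only in the captions of Fig. 4, so we do not reproduce them here).

```python
def poisson_ratio(lam, mu):
    """Poisson coefficient nu = lambda / (2 (lambda + mu))."""
    return lam / (2.0 * (lam + mu))

# Test 1 pair: lambda = 1, mu = 0.2 gives nu = 5/12 (close to incompressible);
# generically, lambda = mu gives nu = 1/4 and lambda = 2 mu gives nu = 1/3
nu_test1 = poisson_ratio(1.0, 0.2)
```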

Fig. 4 Test 2. Reconstruction of a circular cavity using several parameters and data. For each experiment, we report the configuration at the final step n. Dotted line represents the target cavity

Fig. 5 Test 3. Reconstruction of a square-shaped cavity. Dotted line represents the target cavity

Test 3: reconstruction of a Lipschitz domain This experiment aims at reconstructing a square-shaped cavity. We show several numerical tests, choosing different values of \(\varepsilon \), different boundary conditions, and different numbers of iterations between mesh refinements. Having already shown results for different values of the Lamé parameters in the previous tests, we fix the Lamé coefficients to \(\mu =0.5\) and \(\lambda =1\). In fact, recalling that the range of the Poisson coefficient is \(-1<\nu <\frac{1}{2}\) (\(\nu =\frac{1}{2}\) represents the incompressible case), we have considered four relevant cases for the Poisson coefficient: one test on an elastic material close to the incompressible case (\(\nu =\frac{5}{12}\) in Fig. 3e), two tests with elastic coefficients of common materials (\(\nu =\frac{1}{4}\) and \(\nu =\frac{1}{3}\) in Fig. 4a, b, respectively), and one test on auxetic materials, that is, materials with negative Poisson ratio (\(\nu =-\frac{1}{18}\) in Fig. 4c). In the results of Fig. 5, the mesh is refined every 6000 iterations for the first two experiments and every 3000 iterations for the last one. The second numerical result, see Fig. 5b, uses the same parameters as the example of Fig. 5a, except for \(\widetilde{\alpha }\), which is set to \(5\times 10^{-2}\).

Fig. 6 Test 4. Reconstruction of two cavities. Dotted line represents the target cavity

Fig. 7 Test 5. Reconstruction of a non-convex domain. It seems that the algorithm tends to reconstruct a convex domain. Dotted line represents the target cavity

Fig. 8 Test 6. Reconstruction of cavities by means of noisy measurements. Dotted line represents the target cavity

Test 4: reconstruction of two cavities This test provides results when the two cavities to be reconstructed are a square and a circle. Neumann boundary conditions are given by \(g^1(x,y)=(x,y)\) and \(g^2(x,y)=(-y,-x)\). We propose two numerical reconstruction procedures, see Fig. 6. In Fig. 6a, we report the results obtained by the standard algorithm, while in Fig. 6b we use a variant of Algorithm 1 where the parameter \(\varepsilon \) is initially set to \(\varepsilon =\frac{1}{4\pi }\) and, after a fixed, a-priori chosen number of iterations (8000), is updated to \(\varepsilon /4\). In both cases the mesh is refined every 5000 iterations. It is worth noting that the variant of Algorithm 1 does not produce the visible oscillations of the test in Fig. 6a.

Note that we also slightly change the value of \(\delta \). We have observed that \(\delta \) cannot be chosen too small, otherwise numerical instabilities can appear. Numerically, we have seen that, in order to overcome this issue, \(\tau _n\) must always be chosen smaller than \(\delta \). However, choosing \(\tau _n\) too small increases the number of iterations needed to satisfy the stopping criterion.

Test 5: reconstruction of a non-convex domain We finally propose the reconstruction of a cavity which is not convex, see Fig. 7. We use \(g^1(x,y)=(x,y)\) and \(g^2(x,y)=(-y,-x)\) as Neumann boundary conditions, with \(\mu =0.5\) and \(\lambda =1\). The parameters are \(\varepsilon =\frac{1}{16\pi }\) and \(\tau _n=5\times 10^{-4}\). The mesh is refined every 5000 iterations. The stopping criterion is satisfied after \(n=6825\) iterations.

5.2 Numerical Experiments with \(N_g=2\) and Noise in the Measurements

Test 6: reconstruction of cavities of different shapes using noisy measurements Here we rerun some of the numerical tests shown in the previous section, adding to the boundary measurements normally distributed noise with zero mean and unit variance. We choose two different noise levels: \(2\%\) and \(5\%\). The results are reported in Fig. 8.
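One plausible reading of this noise model, sketched below, rescales the standard normal perturbation so that its size relative to the data equals the stated noise level; the scaling convention and all names are our assumption, not the paper's specification.

```python
import numpy as np

def add_noise(u_meas, level, rng=None):
    """Perturb a boundary measurement with zero-mean, unit-variance normal
    noise, rescaled so that ||noise|| = level * ||u_meas|| (one plausible
    reading of the 2% and 5% noise levels)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(u_meas.shape)
    return u_meas + level * np.linalg.norm(u_meas) * noise / np.linalg.norm(noise)
```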

For the tests in Fig. 8a, b, we use the parameter values of Test 1 and refine the mesh every 2000 and 2500 iterations, respectively. The reconstructions of a square-shaped cavity in Fig. 8c, d are obtained with the parameters of Test 3—Fig. 5c, refining the mesh every 3000 and 10000 iterations, respectively. Lastly, to get the results in Fig. 8e, f we use the same parameters as Test 4—Fig. 6b. The mesh is refined every 5000 and 8000 iterations, while the value of the parameter \(\varepsilon \) is adapted after 8000 and 10000 iterations, respectively. In the captions of the individual figures, we specify the values that differ from the ones used in Tests 1, 3, and 4.