1 Introduction

Inverse elliptic coefficient problems arise in a number of applications in medical imaging and non-destructive material testing. Arguably the most prominent example is the Calderón problem [5, 6], which models electrical impedance tomography (EIT), where the electrical conductivity distribution inside a patient is to be determined from current/voltage measurements on the patient's surface; cf. [1] for an overview. Theoretical uniqueness questions for inverse elliptic coefficient problems have mostly been studied in the idealized infinite-dimensional setting where (intuitively speaking) the unknown coefficient function is to be determined with infinite resolution from infinitely many measurements, cf., e.g., [7, 12, 15]. Lipschitz stability results have been obtained for finitely many unknowns and infinitely many measurements in, e.g., [3, 4, 11]. Recently, there has been progress on the practically very relevant case of finitely many unknowns and measurements, cf., e.g., [2, 8, 14]. But little is known yet about explicitly characterizing the number of measurements required for a given desired resolution.

Practical reconstruction algorithms for inverse coefficient problems are usually based on regularized data fitting, which formulates the inverse problem as a minimization problem for a residual functional together with a regularization term. As the residual formulation is typically non-convex, this approach suffers severely from the problem of local minima. Convexification approaches for inverse coefficient problems have been studied in, e.g., [13]. But, to the knowledge of the author, no equivalent convex reformulations of inverse coefficient problems with finitely many measurements have been found yet.

The aim of this work is to show that a uniquely solvable convex reformulation of an inverse coefficient problem is indeed possible if enough measurements are taken, and that the required number of measurements can be explicitly characterized. More precisely, we state a criterion that is sufficient for unique solvability and for characterizing the solution as the unique minimizer of a linear cost functional under a convex non-linear semidefinite constraint. For a given desired resolution and a given number of measurements, the criterion can be checked explicitly by calculating finitely many forward solutions. The criterion is fulfilled if sufficiently many measurements are taken. Thus, the required number of measurements can be found by starting with a low number and incrementally increasing it until the criterion is fulfilled (see the sketch below). The criterion also yields explicit error estimates for noisy data.
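This incremental strategy is directly implementable. The following minimal Python sketch is our own illustration (not taken from the cited works); it assumes a hypothetical user-supplied routine criterion_holds(m) that evaluates the criterion for m measurements, a concrete version of which is sketched after Theorem 1 below.

```python
def smallest_sufficient_m(criterion_holds, m_start=1, m_max=1000):
    """Increase the number of measurements until the criterion is fulfilled.

    `criterion_holds(m)` is a hypothetical user-supplied routine that checks
    the sufficient criterion (Theorem 1 below) for m measurements.
    """
    m = m_start
    while m <= m_max:
        if criterion_holds(m):
            return m  # m measurements suffice for the desired resolution
        m += 1
    raise RuntimeError("criterion not fulfilled for any m <= m_max")
```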

This work is closely related to [10], which gives an explicit construction of special measurements that uniquely determine the same number of unknowns in an inverse elliptic coefficient problem by a globally convergent Newton root-finding method. We also formulate our result for the same inverse Robin transmission problem as in [10], which is motivated by EIT-based corrosion detection and may be considered a simpler variant of the Calderón problem. Our main advance in this work is the step from Newton root-finding to a convex semidefinite program. This allows utilizing a redundant set of given measurements and eliminates the need for specially constructed measurements. It also simplifies the underlying theory, as it no longer requires simultaneously localized potentials, and it allows the criterion to be written using the Loewner order, which arises very naturally in elliptic inverse coefficient problems with finite resolution and finitely many measurements [9]. Also, to the knowledge of the author, this work establishes the first connection between the emerging research field of semidefinite optimization and inverse coefficient problems, which might bring new inspiration to both fields.

2 Inverse problems for convex monotone functions

Let “\(\le \)” denote the entry-wise order on \(\mathbb {R}^n\), and “\(\preceq \)” denote the Loewner order on the space of symmetric matrices \(\mathbb {S}_m\subseteq \mathbb {R}^{m\times m}\), with \(n,m\in \mathbb {N}\). For \(A\in \mathbb {S}_m\) the largest eigenvalue is denoted by \(\lambda _{\max }(A)\).

Given a-priori bounds \(b>a>0\), we consider the inverse problem to

$$\begin{aligned} \text {determine } \quad x\in [a,b]^n\subset \mathbb {R}^n_+\quad \text { from } \quad F(x)\in \mathbb {S}_m\subseteq \mathbb {R}^{m\times m}, \end{aligned}$$
(1)

where \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\) is assumed to be a continuously differentiable, convex and monotonically non-increasing matrix-valued function, i.e., for all \(x,x^{(0)}\in \mathbb {R}^n_+\), and all \(0\le d\in \mathbb {R}^n\),

$$\begin{aligned} F'(x)d&\preceq 0, \end{aligned}$$
(2)
$$\begin{aligned} F(x)-F(x^{(0)})&\succeq F'(x^{(0)})(x-x^{(0)}). \end{aligned}$$
(3)

Such problems naturally arise in inverse coefficient problems in elliptic PDEs with finite resolution and finitely many measurements [9].
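To fix ideas, the following numpy sketch (our own toy example, not yet the elliptic application) implements \(F(x)=(A_0+\sum _{j} x_j A_j)^{-1}\) with \(A_0\succ 0\) and \(A_j\succeq 0\). This F fulfills (2) and (3), since matrix inversion is order-reversing and matrix-convex on positive definite matrices; the code spot-checks both properties numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
A0 = np.eye(m)                                  # A0 positive definite
A = [B @ B.T for B in (rng.standard_normal((m, m)) for _ in range(n))]

def F(x):
    """Toy model F(x) = (A0 + sum_j x_j A_j)^{-1}: non-increasing and convex."""
    S = A0 + sum(xj * Aj for xj, Aj in zip(x, A))
    return np.linalg.inv(S)

x = rng.uniform(1, 2, n)
y = x + rng.uniform(0, 1, n)                    # x <= y entrywise

# monotonicity (2): x <= y implies F(x) >= F(y) in the Loewner order
assert np.linalg.eigvalsh(F(x) - F(y)).min() >= -1e-10

# convexity (3): F((x+y)/2) <= (F(x) + F(y))/2 in the Loewner order
gap = (F(x) + F(y)) / 2 - F((x + y) / 2)
assert np.linalg.eigvalsh(gap).min() >= -1e-10
```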

Note that, here and in the following, we write the derivative of F at a point \(x\in \mathbb {R}^n_+\) as \(F'(x)\in \mathcal {L}(\mathbb {R}^n,\mathbb {S}_m)\), so that \(F'(x)d\) is a symmetric \(m\times m\)-matrix for all \(d\in \mathbb {R}^n\). Also note that a continuously differentiable function \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\) fulfills (2) and (3) if and only if F fulfills

$$\begin{aligned} \begin{aligned} F(x^{(0)})&\succeq F(x)&\quad \text { for all } x,x^{(0)}\in \mathbb {R}^n_+,\ x^{(0)}\le x,\\ F((1-t) x^{(0)}+t x)&\preceq (1-t) F(x^{(0)})+t F(x)&\quad \text { for all } x,x^{(0)}\in \mathbb {R}^n_+,\ t\in [0,1], \end{aligned} \end{aligned}$$

cf., e.g., [9, Lemma 2] for the only-if part. The if part follows immediately from writing the directional derivative as a difference quotient.

In this section we will derive a sufficient criterion for unique solvability of the finite-dimensional inverse problem (1) and for reformulating it as a convex optimization problem. The criterion may appear technical at first glance, but we stress that it only requires finitely many evaluations of directional derivatives of F, so that it can easily be checked in practice. Moreover, for the inverse Robin transmission problem considered in Sect. 3, we will show that the criterion is always fulfilled if sufficiently many measurements are taken. Hence, the criterion allows one to constructively determine the number of measurements required for a certain resolution and for convex reformulation, by simply increasing the number of measurements until the criterion is fulfilled.

To formulate our result, let \(e_j\in \mathbb {R}^n\) denote the j-th unit vector, \(\mathbb {1}\in \mathbb {R}^n\) the vector of ones, and \(e_j':=\mathbb {1}-e_j\) the vector containing zero in the j-th component and ones in all other components. For a matrix \(A\in \mathbb {R}^{n\times n}\), \(\Vert A\Vert _2\) denotes the spectral norm, and for a number \(\lambda \in \mathbb {R}\), \(\lceil \lambda \rceil \) denotes the ceiling function, i.e., the least integer greater than or equal to \(\lambda \).

Theorem 1

Let \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\), \(n,m\ge 2\), be continuously differentiable, convex and monotonically non-increasing, and \(b\ge a>0\). If

$$\begin{aligned} F'(z_{j,k})d_j\not \preceq 0\quad \text { for all } k\in \{2,\ldots ,K\},\ j\in \{1,\ldots ,n\}, \end{aligned}$$

where

$$\begin{aligned} z_{j,k}:=\frac{a}{2}e_j' + \left( a+k\frac{a}{4(n-1)}\right) e_j\in \mathbb {R}^n_+, \quad d_j:=\frac{2b-a}{a}(n-1)e_j' - \frac{1}{2} e_j\in \mathbb {R}^n, \end{aligned}$$

and \(K:=\lceil \frac{4(n-1)b}{a}\rceil -4n+5\in \mathbb {N}\), then the following holds:

  (a)

    \(\hat{x}\in [a,b]^n\) is uniquely determined by knowledge of \(\hat{Y}:=F(\hat{x})\). Moreover, \(\hat{x}\) is the unique minimizer of the convex optimization problem

    $$\begin{aligned} \texttt {minimize} \quad \ \Vert x\Vert _1=\sum _{j=1}^n x_j \ \texttt { subject to } \ x\in [a,b]^n,\ F(x)\preceq \hat{Y}. \end{aligned}$$
    (4)
  (b)

    For \(\hat{x}\in [a,b]^n\), \(\hat{Y}:=F(\hat{x})\), \(\delta >0\), and \(Y^\delta \in \mathbb {S}_m\) with \(\Vert \hat{Y}-Y^\delta \Vert _2\le \delta \), the convex optimization problem

    $$\begin{aligned} \texttt {minimize } \quad \ \Vert x\Vert _1=\sum _{j=1}^n x_j \ \texttt { subject to } \ x\in [a,b]^n, \ F(x)\preceq Y^\delta +\delta I \end{aligned}$$
    (5)

    possesses a minimum, and every such minimum \(x^\delta \) fulfills

    $$\begin{aligned} \Vert x^\delta -\hat{x}\Vert _\infty \le \frac{2\delta (n-1)}{\lambda }, \quad \text { with } \lambda :=\min _{j=1,\ldots ,n,\ k=2,\ldots ,K} \lambda _{\max }(F'(z_{j,k})d_j)>0. \end{aligned}$$
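The criterion can be evaluated with finitely many directional derivatives of F. The following Python sketch (our illustration) assumes a routine F_prime(z, d) returning the symmetric matrix \(F'(z)d\), e.g., assembled by finite differences or from forward solutions; the returned value lam is the constant \(\lambda \) in the error estimate of part (b).

```python
import math
import numpy as np

def check_criterion(F_prime, n, a, b):
    """Check the sufficient criterion of Theorem 1 (a sketch).

    The criterion holds iff every matrix F'(z_{j,k}) d_j possesses a
    positive eigenvalue; `F_prime(z, d)` is assumed to return F'(z)d.
    """
    K = math.ceil(4 * (n - 1) * b / a) - 4 * n + 5
    # e.g. n = 2, a = 1, b = 2 gives K = 8 - 8 + 5 = 5, i.e. 8 matrices
    lam = np.inf
    for j in range(n):
        e_j = np.eye(n)[j]
        e_jp = np.ones(n) - e_j                 # e_j' = 1 - e_j
        d_j = (2 * b - a) / a * (n - 1) * e_jp - 0.5 * e_j
        for k in range(2, K + 1):
            z_jk = a / 2 * e_jp + (a + k * a / (4 * (n - 1))) * e_j
            lam = min(lam, np.linalg.eigvalsh(F_prime(z_jk, d_j)).max())
    return lam > 0, lam
```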

To prove Theorem 1, we will first show the following two lemmas.

Lemma 1

Let \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\), \(n,m\ge 2\), be continuously differentiable, convex and monotonically non-increasing. If, for some \(x\in \mathbb {R}^n_+\),

$$\begin{aligned} F'(x)((n-1)e_j'-e_j)\not \preceq 0 \quad \text { for all } \quad j\in \{1,\ldots ,n\}, \end{aligned}$$
(6)

then, for all \(d\in \mathbb {R}^n\), and \(y\in \mathbb {R}^n_+\),

$$\begin{aligned} \lambda _{\max }(F'(x)d)&< \frac{\lambda \Vert d\Vert _\infty }{n-1} \quad&\text { implies } \quad \sum _{j=1}^n d_j> 0, \quad \text { and } \end{aligned}$$
(7)
$$\begin{aligned} \lambda _{\max }(F(y)-F(x))&< \frac{\lambda \Vert y-x\Vert _\infty }{n-1} \quad&\text { implies } \quad \sum _{j=1}^n (y_j-x_j)> 0, \end{aligned}$$
(8)

with \(\lambda :=\min _{j=1,\ldots ,n} \lambda _{\max }(F'(x)((n-1)e_j'-e_j))\). Note that (6) ensures \(\lambda >0\).

Proof

We will show that, for all \(d\in \mathbb {R}^n\),

$$\begin{aligned} \lambda _{\max }(F'(x)d)< \frac{\lambda \Vert d\Vert _\infty }{n-1} \quad \text { implies } \quad \min _{j=1,\ldots ,n} d_j > -\frac{1}{n-1} \max _{j=1,\ldots ,n} d_j, \end{aligned}$$
(9)

which clearly implies (7). Then (8) follows from (7) applied to \(d=y-x\): the convexity property (3) yields \(F'(x)(y-x)\preceq F(y)-F(x)\), and hence \(\lambda _{\max }(F'(x)(y-x))\le \lambda _{\max }(F(y)-F(x))\).

We prove (9) by contraposition and assume that there exists an index \(k\in \{1,\ldots ,n\}\) with

$$\begin{aligned} d_k=\min _{j=1,\ldots ,n} d_j\le -\frac{1}{n-1} \max _{j=1,\ldots ,n} d_j. \end{aligned}$$

We have that either

$$\begin{aligned} \Vert d\Vert _\infty =\max _{j=1,\ldots ,n} d_j,\quad \text { or } \quad \Vert d\Vert _\infty =-\min _{j=1,\ldots ,n} d_j=-d_k, \end{aligned}$$

and in both cases it follows that

$$\begin{aligned} d_k\le -\frac{1}{n-1}\Vert d\Vert _\infty , \quad \text { and thus } \quad d\le -\frac{1}{n-1}\Vert d\Vert _\infty e_k + \Vert d\Vert _\infty e_k'. \end{aligned}$$

Hence, by linearity of \(F'(x)\) and the monotonicity property (2),

$$\begin{aligned} F'(x) d \succeq \frac{\Vert d\Vert _\infty }{n-1} F'(x) \left( (n-1) e_k' - e_k \right) , \end{aligned}$$

which yields that

$$\begin{aligned} \lambda _{\max }(F'(x) d)\ge \frac{\Vert d\Vert _\infty }{n-1} \lambda , \end{aligned}$$

so that (9) is proven. \(\square \)

Remark 1

Lemma 1 can be considered a converse monotonicity result, as it yields that, for all \(y\in \mathbb {R}^n_+\) with \(y\ne x\),

$$\begin{aligned} F(y)&\preceq F(x) \quad&\text { implies } \quad \sum _{j=1}^n (y_j-x_j)> 0. \end{aligned}$$

Indeed, \(F(y)\preceq F(x)\) yields \(\lambda _{\max }(F(y)-F(x))\le 0<\frac{\lambda \Vert y-x\Vert _\infty }{n-1}\), since \(\lambda >0\) by (6), so that (8) applies.

Lemma 2

Let \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\), \(n,m\ge 2\), be continuously differentiable, convex and monotonically non-increasing, and \(b\ge a>0\). If

$$\begin{aligned} F'(z_{j,k})d_j\not \preceq 0\quad \text { for all } k\in \{2,\ldots ,K\},\ j\in \{1,\ldots ,n\}, \end{aligned}$$

where \(z_{j,k}\in \mathbb {R}_+^n\), \(d_j\in \mathbb {R}^n\), and \(K\in \mathbb {N}\) are defined as in Theorem 1, then

$$\begin{aligned} \lambda _{\max }(F'(x)((n-1) e_j'-e_j))\ge \lambda \quad \text { for all } x\in [a,b]^n,\ j\in \{1,\ldots ,n\}, \end{aligned}$$

with \(\lambda :=\min _{j=1,\ldots ,n,\ k=2,\ldots ,K} \lambda _{\max }(F'(z_{j,k})d_j)>0\).

Proof

We will show that for all \(j\in \{1,\ldots ,n\}\) and \(x\in [a,b]^n\), there exists \(t\in [a+\frac{a}{2(n-1)},\ b+\frac{a}{2(n-1)}]\subset \mathbb {R}\), so that, for all \(0\le \delta \le \frac{a}{4(n-1)}\),

$$\begin{aligned} F'(x)((n-1) e_j'-e_j)\succeq F'\left( \frac{a}{2}e_j' + (t-\delta ) e_j \right) d_j. \end{aligned}$$
(10)

Since \(a+K\frac{a}{4(n-1)}\ge b+\frac{a}{4(n-1)}\), we have that for every \(t\in [a+\frac{a}{2(n-1)},\ b+\frac{a}{2(n-1)}]\), there exists \(k\in \{2,\ldots ,K\}\), so that

$$\begin{aligned} \delta :=t-\left( a+k\frac{a}{4(n-1)}\right) \quad \text { fulfills } \quad 0\le \delta \le \frac{a}{4(n-1)}. \end{aligned}$$

Hence, if (10) is proven, then

$$\begin{aligned} F'(x)((n-1) e_j'-e_j)\succeq F'(z_{j,k}) d_j, \end{aligned}$$

so that the assertion follows.

To prove (10), let \(j\in \{1,\ldots ,n\}\) and \(x\in [a,b]^n\). We define \( t:=x_j+\frac{a}{2(n-1)}\in [a+\frac{a}{2(n-1)},\ b+\frac{a}{2(n-1)}] \). Then, for all \(0\le \delta \le \frac{a}{4(n-1)}\),

$$\begin{aligned} (n-1)e_j'-e_j&=\frac{2(n-1)}{a}\left( \frac{a}{2} e_j' + (x_j-t) e_j\right) \le \frac{2(n-1)}{a}\left( x- \frac{a}{2}e_j' - te_j \right) \\&\le \frac{2(n-1)}{a}\left( x- \left( \frac{a}{2}e_j' + (t-\delta ) e_j \right) \right) , \end{aligned}$$

and

$$\begin{aligned}&\frac{2(n-1)}{a}\left( x- \left( \frac{a}{2}e_j' + (t-\delta ) e_j \right) \right) \\&\quad \le \frac{2(n-1)}{a}\left( \left( b- \frac{a}{2}\right) e_j' + (x_j-t+\delta ) e_j \right) \\&\quad = \frac{2b-a}{a}(n-1)e_j' + \frac{2(n-1)}{a} \left( - \frac{a}{2(n-1)} +\delta \right) e_j\\&\quad \le \frac{2b-a}{a}(n-1)e_j' - \frac{1}{2} e_j=d_j, \end{aligned}$$

so that we obtain from monotonicity (2) and convexity (3)

$$\begin{aligned}&F'(x)((n-1)e_j'-e_j)\\&\quad \succeq \frac{2(n-1)}{a} F'(x)\left( x- \left( \frac{a}{2}e_j' + (t-\delta ) e_j \right) \right) \\&\quad \succeq \frac{2(n-1)}{a} \left( F(x)-F\left( \frac{a}{2}e_j' + (t-\delta ) e_j \right) \right) \\&\quad \succeq \frac{2(n-1)}{a} F'\left( \frac{a}{2}e_j' + (t-\delta ) e_j \right) \left( x- \left( \frac{a}{2}e_j' + (t-\delta ) e_j \right) \right) \\&\quad \succeq F'\left( \frac{a}{2}e_j' + (t-\delta ) e_j \right) d_j, \end{aligned}$$

which proves (10) and thus the assertion. \(\square \)

Proof of Theorem 1

Under the assumptions of Theorem 1, it follows from Lemma 2 that the assumptions of Lemma 1 are fulfilled for every \(x\in [a,b]^n\), so that (8) holds for all \(x,y\in [a,b]^n\). In particular, this yields that \(\hat{Y}:=F(\hat{x})\) uniquely determines \(\hat{x}\in [a,b]^n\). Moreover, for every \(x\in [a,b]^n\) with \(x\ne \hat{x}\) and \(F(x)\preceq \hat{Y}=F(\hat{x})\), we obtain from Remark 1 that

$$\begin{aligned} \sum _{j=1}^n (x_j-\hat{x}_j)> 0, \end{aligned}$$

which shows that \(\hat{x}\) is the unique minimizer of (4). This proves Theorem 1(a). To prove Theorem 1(b), we note that the set of all \(x\in [a,b]^n\) with \(F(x)\preceq Y^\delta +\delta I\) is compact and non-empty, since it contains \(\hat{x}\). Hence, at least one minimizer of (5) exists. Every minimizer \(x^\delta \in [a,b]^n\) fulfills

$$\begin{aligned} F(x^\delta )\preceq Y^\delta +\delta I\preceq F(\hat{x})+2\delta I. \end{aligned}$$

If \(2\delta < \frac{\lambda \Vert x^\delta -\hat{x}\Vert _\infty }{n-1}\), then (8) would imply that

$$\begin{aligned} \sum _{j=1}^n (x_j^\delta -\hat{x}_j)> 0, \end{aligned}$$

which contradicts the minimality of \(x^\delta \), since \(\hat{x}\) is also feasible for (5). Hence, \(\Vert x^\delta -\hat{x}\Vert _\infty \le \frac{2\delta (n-1)}{\lambda }\). \(\square \)
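To illustrate how (4) and (5) can be solved in practice, consider again the toy model \(F(x)=(A_0+\sum _j x_j A_j)^{-1}\) from the sketch in Sect. 2. Since \(A_0+\sum _j x_j A_j\succ 0\) for \(x\ge a>0\), a Schur complement turns the non-linear constraint \(F(x)\preceq Y^\delta +\delta I\) into a linear matrix inequality, so that (5) becomes a standard semidefinite program. A minimal sketch using the cvxpy modeling package (an assumption; any SDP modeling tool would do):

```python
import numpy as np
import cvxpy as cp

def solve_program_5(Y_delta, delta, a, b, A0, A):
    """Solve (5) for the toy model F(x) = (A0 + sum_j x_j A_j)^{-1}.

    For x >= a > 0, S(x) = A0 + sum_j x_j A_j is positive definite, and
    F(x) <= Y_delta + delta*I is equivalent to the linear matrix inequality
    [[Y_delta + delta*I, I], [I, S(x)]] >= 0 by a Schur complement.
    """
    n, m = len(A), A0.shape[0]
    x = cp.Variable(n)
    S = A0 + sum(x[j] * A[j] for j in range(n))   # affine in x
    I = np.eye(m)
    lmi = cp.bmat([[Y_delta + delta * I, I], [I, S]]) >> 0
    prob = cp.Problem(cp.Minimize(cp.sum(x)), [x >= a, x <= b, lmi])
    prob.solve()
    return x.value
```

For \(\delta =0\) and Y_delta \(=F(\hat{x})\), the minimizer recovers \(\hat{x}\) up to solver accuracy; for \(\delta >0\), Theorem 1(b) bounds the reconstruction error by \(\frac{2\delta (n-1)}{\lambda }\).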

3 Application to an inverse elliptic coefficient problem

We will now study the problem of determining a Robin transmission coefficient in an elliptic PDE from finitely many measurements. Using Theorem 1, we will show that this inverse coefficient problem can be rewritten as a uniquely solvable convex non-linear semidefinite optimization problem if enough measurements are being used. This also gives a constructive criterion for deciding whether a certain number of measurements suffices to determine the Robin parameter with a given desired resolution by convex optimization, and it yields an error estimate for noisy data.

3.1 The infinite-dimensional inverse Robin transmission problem

Let \(\varOmega \subset \mathbb {R}^d\) (\(d\ge 2\)) be a bounded domain and \(D\subset \varOmega \) be an open subset with \(\overline{D}\subset \varOmega \). \(\varOmega \) and D are assumed to have Lipschitz boundaries, \(\partial \varOmega \) and \(\varGamma :=\partial D\), and \(\varOmega {\setminus }D\) is assumed to be connected. \(L^\infty _+(\varGamma )\) denotes the subset of \(L^\infty (\varGamma )\)-functions with positive essential infima.

We consider the inverse problem of recovering the coefficient \(\gamma \in L^\infty _+(\varGamma )\) in the elliptic Robin transmission problem

$$\begin{aligned} \varDelta u_\gamma ^g&=0 \quad \text { in } \varOmega {\setminus } \varGamma , \end{aligned}$$
(11)
$$\begin{aligned} \partial _{\nu } u_\gamma ^g|_{\partial \varOmega }&= g\quad \text { on }\partial \varOmega , \end{aligned}$$
(12)
$$\begin{aligned} \llbracket u_\gamma ^g \rrbracket _\varGamma&= 0 \quad \text { on } \varGamma , \end{aligned}$$
(13)
$$\begin{aligned} \llbracket \partial _{\nu } u_\gamma ^g\rrbracket _\varGamma&= \gamma u_\gamma ^g \quad \text { on } \varGamma , \end{aligned}$$
(14)

from the Neumann-Dirichlet operator

$$\begin{aligned} \varLambda (\gamma )g:=u_\gamma ^g|_{\partial \varOmega }, \quad \text { where } \quad u_\gamma ^g\in H^1(\varOmega ) \text { solves } (11){-}(14). \end{aligned}$$

Using the Lax-Milgram theorem and the compactness of the trace operator from \(H^{1}(\varOmega )\) to \(L^2(\partial \varOmega )\), it easily follows that (11)–(14) is uniquely solvable, and that \(\varLambda (\gamma )\in \mathcal {L}(L^2(\partial \varOmega ))\) is self-adjoint and compact.
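For the reader's convenience, we record the variational formulation behind this argument (a standard derivation, with the jump conventions in (13)–(14) understood accordingly): \(u_\gamma ^g\in H^1(\varOmega )\) solves (11)–(14) if and only if

$$\begin{aligned} \int _\varOmega \nabla u_\gamma ^g\cdot \nabla v \,\mathrm{{d}}x + \int _\varGamma \gamma \, u_\gamma ^g v \,\mathrm{{d}}s = \int _{\partial \varOmega } g v \,\mathrm{{d}}s \quad \text { for all } v\in H^1(\varOmega ). \end{aligned}$$

The left-hand side is a bounded bilinear form that is coercive for \(\gamma \in L^\infty _+(\varGamma )\) (by a standard compactness argument using that \(\varGamma \) has positive surface measure), so that the Lax-Milgram theorem applies.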

We summarize and reformulate some known results on the Neumann-Dirichlet operator that motivate why the corresponding finite-dimensional inverse problem can be treated with the methods from Sect. 2. In the following theorem “\(\le \)” is to be understood pointwise almost everywhere for \(L^\infty \)-functions, and “\(\succeq \)” is the Loewner order on the space of self-adjoint operators.

Theorem 2

\(\varLambda :\ L^\infty _+(\varGamma )\rightarrow \mathcal {L}(L^2(\partial \varOmega ))\) is Fréchet differentiable. Moreover,

  (a)

    \(\varLambda \) is monotonically non-increasing and convex, i.e.,

    $$\begin{aligned} \varLambda '(\gamma )\delta&\preceq 0 \quad&\text {for all }\gamma \in L^\infty _+(\varGamma ),\ 0\le \delta \in L^\infty (\varGamma ),\\ \varLambda (\gamma )-\varLambda (\gamma ^{(0)})&\succeq \varLambda '(\gamma ^{(0)}) (\gamma -\gamma ^{(0)}) \quad&\text {for all }\gamma ,\gamma ^{(0)}\in L^\infty _+(\varGamma ). \end{aligned}$$
  (b)

    For all \(\gamma \in L^\infty _+(\varGamma )\), \(C>0\), and \(M\subseteq \varGamma \) measurable with positive measure,

    $$\begin{aligned} \varLambda '(\gamma )(C\chi _{\varGamma {\setminus } M}-\chi _M)\not \preceq 0. \end{aligned}$$
  (c)

    For all \(\gamma _1,\gamma _2\in L^\infty _+(\varGamma )\),

    $$\begin{aligned} \gamma _1\le \gamma _2 \quad \text { if and only if } \quad \varLambda (\gamma _1)\succeq \varLambda (\gamma _2). \end{aligned}$$

    In particular, \(\varLambda (\gamma )\) uniquely determines \(\gamma \).

Proof

Fréchet differentiability, monotonicity and convexity of \(\varLambda \) are shown in [10, Lemma 5], cf. also [11, Lemma 4.1]. (b) follows from the localized potentials result in [11, Lemma 4.3]. The “only if” part in (c) follows from (a), and the “if” part in (c) follows easily from combining (b) with (a). \(\square \)

3.2 The inverse problem with finitely many measurements

We now consider the inverse Robin transmission problem with finite resolution and finitely many measurements as in [10]. We assume that the unknown coefficient function \(\gamma \in L^\infty _+(\varGamma )\) is piecewise constant on an a-priori known partition of \(\varGamma \), i.e.,

$$\begin{aligned} \gamma (x)=\sum _{j=1}^n \gamma _j \chi _{\varGamma _j}(x), \quad \text { with } \varGamma =\bigcup _{j=1}^n \varGamma _j, \end{aligned}$$

where \(\varGamma _1,\ldots ,\varGamma _n\), \(n\ge 2\), are pairwise disjoint measurable subsets of \(\varGamma \). For the ease of notation, we identify a piecewise constant function \(\gamma \in L^\infty (\varGamma )\) with the vector \(\gamma =(\gamma _1,\ldots ,\gamma _n)^T\in \mathbb {R}^n\) in the following. We also assume that we know a-priori bounds \(b>a>0\), so that \(\gamma \in [a,b]^n\).

We aim to reconstruct \(\gamma \in [a,b]^n\) from finitely many measurements of \(\varLambda (\gamma )\). More precisely, we assume that \((g_j)_{j\in \mathbb {N}}\subseteq L^2(\partial \varOmega )\) has dense span in \(L^2(\partial \varOmega )\), and that we can measure

$$\begin{aligned} F(\gamma ):=\left( \int _{\partial \varOmega } g_j \varLambda (\gamma ) g_k \,\mathrm{{d}} s\right) _{j,k=1,\ldots ,m}\in \mathbb {R}^{m\times m} \end{aligned}$$

for some number \(m\in \mathbb {N}\). Thus, the question of whether a certain number of measurements determines the unknown coefficient with a certain resolution can be written as the problem to

$$\begin{aligned} \text { determine } \quad \gamma \in [a,b]^n \quad \text { from } \quad F(\gamma )\in \mathbb {R}^{m\times m}. \end{aligned}$$
(15)
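Numerically, \(F(\gamma )\) can be assembled from m forward solutions. A minimal Python sketch, assuming two hypothetical helper routines: apply_ntd(gamma, g) applies a discretization of \(\varLambda (\gamma )\) to a boundary current g, and inner_product(f, h) approximates the \(L^2(\partial \varOmega )\) pairing by quadrature.

```python
import numpy as np

def assemble_F(gamma, boundary_funcs, apply_ntd, inner_product):
    """Assemble the measurement matrix F(gamma) from (15) (a sketch).

    `apply_ntd(gamma, g)` is a hypothetical forward solver returning the
    boundary trace Lambda(gamma) g, and `inner_product(f, h)` approximates
    the L^2(boundary) pairing, e.g. by quadrature on a boundary mesh.
    """
    m = len(boundary_funcs)
    traces = [apply_ntd(gamma, g) for g in boundary_funcs]
    F = np.array([[inner_product(boundary_funcs[j], traces[k])
                   for k in range(m)] for j in range(m)])
    return 0.5 * (F + F.T)  # symmetrize against quadrature error
```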

Using our results from Sect. 2, we can now show that this inverse problem is uniquely solvable if sufficiently many measurements are being used, and that it can be equivalently reformulated as a convex semidefinite program.

Theorem 3

  (a)

    If \(m\in \mathbb {N}\) is sufficiently large, then \(\hat{Y}:=F(\hat{\gamma })\in \mathbb {S}_m\) uniquely determines \(\hat{\gamma }\in [a,b]^n\). Moreover, \(\hat{\gamma }\) is the unique minimizer of the convex semidefinite optimization problem

    $$\begin{aligned} \texttt {minimize } \quad \ \Vert \gamma \Vert _1=\sum _{j=1}^n \gamma _j \ \texttt { subject to } \ \gamma \in [a,b]^n,\ F(\gamma )\preceq \hat{Y}. \end{aligned}$$
  (b)

    The assertion in (a) holds if all matrices \(F'(z_{j,k})d_j\in \mathbb {S}_m\) (with \(z_{j,k}\in \mathbb {R}_+^n\), \(d_j\in \mathbb {R}^n\), and \(K\in \mathbb {N}\) given in Theorem 1) possess at least one positive eigenvalue. This criterion is fulfilled for sufficiently large \(m\in \mathbb {N}\). Moreover, for \(\delta >0\) and \(Y^\delta \in \mathbb {S}_m\) with \(\Vert \hat{Y}-Y^\delta \Vert _2\le \delta \), the convex optimization problem

    $$\begin{aligned} \texttt {minimize } \quad \ \Vert \gamma \Vert _1=\sum _{j=1}^n \gamma _j \ \texttt { subject to } \ \gamma \in [a,b]^n,\ F(\gamma )\preceq Y^\delta +\delta I \end{aligned}$$

    possesses a minimum, and every such minimum \(\gamma ^\delta \) fulfills

    $$\begin{aligned} \Vert \gamma ^\delta -\hat{\gamma }\Vert _\infty \le \frac{2\delta (n-1)}{\lambda }, \quad \text { with } \lambda :=\min _{j=1,\ldots ,n,\ k=2,\ldots ,K} \lambda _{\max }(F'(z_{j,k})d_j)>0. \end{aligned}$$

Proof

\(F(\gamma )\) is a symmetric matrix since \(\varLambda (\gamma )\) is self-adjoint. Fréchet differentiability, monotonicity and convexity of \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\) immediately follow from the corresponding properties of \(\varLambda \) in Theorem 2. For all \(j=1,\ldots ,n\) and \(k=2,\ldots ,K\), we have that \(\varLambda '(z_{j,k})d_j\not \preceq 0\) by Theorem 2(b) (applied with \(M=\varGamma _j\), after rescaling \(d_j\) by the factor 2). By the density of the span of \((g_j)_{j\in \mathbb {N}}\), it follows that for each such j and k there exists \(m\in \mathbb {N}\) with \(F'(z_{j,k})d_j\not \preceq 0\). Since enlarging m only appends rows and columns, which preserves this property, and since there are only finitely many combinations of j and k, there exists \(m\in \mathbb {N}\) so that all of these matrices possess a positive eigenvalue. Hence, the assertions follow from Theorem 1. \(\square \)
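In practice, Theorem 3(b) suggests the following procedure, combining the incremental search from the introduction with the criterion check sketched after Theorem 1. Here directional_derivative(z, d, m) is a hypothetical routine returning \(F'(z)d\in \mathbb {S}_m\), e.g., assembled from forward solutions of (11)–(14), and check_criterion is the sketch from Sect. 2.

```python
def smallest_sufficient_m_robin(directional_derivative, n, a, b, m_max=200):
    """Increase m until all F'(z_{j,k}) d_j have a positive eigenvalue."""
    for m in range(2, m_max + 1):
        ok, lam = check_criterion(
            lambda z, d: directional_derivative(z, d, m), n, a, b)
        if ok:
            return m, lam  # lam yields the error estimate 2*delta*(n-1)/lam
    raise RuntimeError("criterion not fulfilled for any m <= m_max")
```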