
On stability of generalized phase retrieval and generalized affine phase retrieval

Zhitao Zhuang
Open Access
Research

Abstract

In this paper, we consider the stability of the intensity measurement maps arising in generalized phase retrieval and generalized affine phase retrieval in the real case. First, we establish the bi-Lipschitz property of the measurements of noiseless signals. We then quantify the stability for noisy signals via the Cramer–Rao lower bound.

Keywords

Bi-Lipschitz; Generalized affine phase retrieval; Cramer–Rao lower bound

MSC

42C15 

1 Introduction

Given a signal \(x\in F^{d}\) (\(F = \mathbb {C}\text{ or } \mathbb {R}\)), phase retrieval aims to recover x from its intensity measurements \(| \langle x, \varphi_{i} \rangle| \), \(i=1,\ldots,N \), where \(\{\varphi_{i}\}_{i=1}^{N} \) is a frame of \(F ^{d} \). The phase retrieval problem has a long history, which can be traced back as far as 1952 [13]. It has important applications in optics, communication, X-ray crystallography, quantum tomography, signal processing and more (see e.g. [11, 14, 16] and the references therein). The task of classical phase retrieval is to recover a signal from its Fourier transform magnitude [5, 8]. Using frame theory, Balan et al. constructed a new class of Parseval frames for a Hilbert space in 2006 [2], which allows signal reconstruction from the absolute values of the frame coefficients. Since then, many theoretical results and practical algorithms have emerged in different fields. Generalized phase retrieval was introduced by Yang Wang and Zhiqiang Xu [17]; it includes as special cases both standard phase retrieval and phase retrieval by orthogonal projections. Explicitly, let \(H_{d}(F) \) denote the set of \(d \times d \) Hermitian matrices over F (if \(F=\mathbb {R}\), Hermitian matrices are just symmetric matrices). As in the standard phase retrieval problem, we consider the equivalence relation ∼ on \(F^{d}\): \(x_{1} \sim x_{2} \) if there is a constant \(b \in F \) with \(|b| = 1 \) such that \(x_{1} = bx_{2} \). Let \(\underline{F}^{d}:=F^{d}/\sim\). For any given \(A = \{A_{j}\}^{N}_{j =1}\subset H_{d}(F) \), define the map \(M_{A} : F^{d}\rightarrow\mathbb{R}^{N} \) by
$$M_{A}(x) = \bigl(x^{*}A_{1}x, \ldots, x^{*}A_{N}x \bigr), $$
where \(x^{*} \) denotes the conjugate transpose of x. We say that A is generalized phase retrievable if \(M_{A} \) is injective on \(\underline{F}^{d} \). Similarly, if \(A_{j} \) is positive semidefinite for \(j=1,\ldots,N \), we can define the map \(\sqrt{M_{A}} : F^{d}\rightarrow\mathbb{R}^{N} \) by
$$\sqrt{M_{A}}(x) = \bigl(\sqrt{x^{*}A_{1}x}, \ldots, \sqrt{x^{*}A_{N}x} \bigr). $$
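To fix ideas, here is a minimal NumPy sketch of the two maps in the real case (the toy rank-one family and all names are our illustrative choices, not from the paper); note that \(M_{A} \) cannot separate x from −x, matching the equivalence relation ∼:

```python
import numpy as np

def M_A(x, As):
    """Intensity measurements x^T A_j x for a list of symmetric matrices A_j."""
    return np.array([x @ A @ x for A in As])

def sqrt_M_A(x, As):
    """Square-root measurements; assumes every A_j is positive semidefinite."""
    return np.sqrt(M_A(x, As))

# A toy family in R^2: rank-one matrices built from 3 generic vectors.
rng = np.random.default_rng(0)
As = [np.outer(v, v) for v in (rng.standard_normal(2) for _ in range(3))]

x = np.array([1.0, -2.0])
# M_A cannot distinguish x from -x, matching the equivalence x ~ -x.
print(M_A(x, As) - M_A(-x, As))
```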
Affine phase retrieval, introduced by Bing Gao et al., aims to recover a signal from the magnitudes of affine measurements. More precisely, instead of recovering x from \(\{| \langle x, \varphi_{j} \rangle|\}_{j=1}^{N} \), one recovers x from the absolute values of the affine intensity measurements
$$\bigl\vert \langle x, \varphi_{j} \rangle +b_{j} \bigr\vert ,\quad j=1,\ldots,N, $$
where \(\varphi_{j} \in F^{d}\) and \(b_{j}\in F\). In Sect. 3, we consider generalized affine phase retrieval and discuss its basic properties.

Given two vectors \(x,y \in F^{d}\), we define the metrics \(d(x,y)= \Vert {x-y} \Vert \), \(d_{1}(x,y)=\min\{ \Vert {x-y} \Vert , \Vert {x+y} \Vert \} \) and the matrix metric \(d_{2}(x,y)= \Vert {x+y} \Vert \Vert {x-y} \Vert \) corresponding to the nuclear norm. Several robustness bounds for the probabilistic phase retrieval problem in the real case are given in [7]. Stability bounds of reconstruction for deterministic frames are studied in [3, 4] with appropriate metrics.

Our study mainly focuses on the stability of generalized phase retrieval and generalized affine phase retrieval in the real case, in two respects. The first is the bi-Lipschitz property of generalized phase retrieval. Section 2 shows that the mappings \(M_{A} \) and \(\sqrt{M_{A}} \) both have the bi-Lipschitz property with respect to an appropriate metric. By contrast, the generalized affine phase retrieval mappings \(M_{B,b} \) and \(\sqrt{M_{B,b}} \) can only be controlled using two different metrics. The second aspect deals with the Cramer–Rao lower bound for generalized phase retrieval and generalized affine phase retrieval in an additive white Gaussian noise model. The Cramer–Rao lower bound for any unbiased estimator is obtained by computing the Fisher information matrix.

2 Stability of generalized phase retrieval

In this section, we discuss the bi-Lipschitz property and Cramer–Rao lower bound of generalized phase retrieval. Given a collection of matrices \(\{A_{j}\}_{j=1}^{N}\subset H_{d}(F) \), define
$$ a_{0}:=\inf_{ \Vert {x} \Vert = \Vert {y} \Vert =1}\sum_{j=1}^{N} \bigl\vert x^{*}A_{j} y \bigr\vert ^{2}\quad \text{and} \quad b_{0}:=\sup_{ \Vert {x} \Vert = \Vert {y} \Vert =1}\sum _{j=1}^{N} \bigl\vert x^{*}A_{j} y \bigr\vert ^{2}. $$
Assume that the collection of vectors \(\{A_{j}x\}_{j=1}^{N} \) forms a frame of \(F^{d} \) for every \(x\neq0 \). Then there exist constants \(0<\alpha_{x}\leq\beta_{x}<+\infty\) such that
$$ \alpha_{x} \Vert {y} \Vert ^{2}\leq\sum _{j=1}^{N} \bigl\vert x^{*}A_{j} y \bigr\vert ^{2} \leq\beta_{x} \Vert {y} \Vert ^{2},\quad y\in F^{d}. $$
We choose \(\alpha_{x} \) and \(\beta_{x} \) to be the optimal frame bounds corresponding to \(\{A_{j}x\}_{j=1}^{N} \). Obviously, we have
$$\sum_{j=1}^{N} \bigl\vert x^{*}A_{j}y \bigr\vert ^{2}\geq\alpha_{x} \Vert {y} \Vert ^{2}>0, $$
for any \(y\neq0 \) and \(x\neq0 \). Furthermore, the unit sphere \(S_{1}(F^{d})=\{x: \Vert {x} \Vert =1, x\in F^{d} \}\) is compact in \(F^{d} \). So is \(S_{1}(F^{d})\times S_{1}(F^{d}) \) in \(F^{d}\times F^{d} \). Since the mapping
$$ (x,y) \longmapsto\sum_{j=1}^{N} \bigl\vert x^{*}A_{j}y \bigr\vert ^{2} $$
is continuous, it follows that
$$ a_{0}=\inf_{ \Vert {x} \Vert =1} \alpha_{x}=\inf _{ \Vert {x} \Vert = \Vert {y} \Vert =1} \sum_{j=1}^{N} \bigl\vert x^{*}A_{j} y \bigr\vert ^{2} > 0 $$
and
$$ b_{0}=\sup_{ \Vert {x} \Vert =1} \beta_{x} = \sup _{ \Vert {x} \Vert = \Vert {y} \Vert =1} \sum_{j=1}^{N} \bigl\vert x^{*}A_{j} y \bigr\vert ^{2}< +\infty. $$
Conversely, suppose \(a_{0}>0 \) and \(b_{0}<+\infty\). Then, for any \(x\neq0 \) and \(y\neq0 \), we have
$$ a_{0} \leq\frac{\sum_{j=1}^{N}|x^{*}A_{j}y|^{2}}{ \Vert {x} \Vert ^{2} \Vert {y} \Vert ^{2}} =\sum_{j=1}^{N} \biggl\vert \biggl\langle A_{j} \frac{x}{ \Vert {x} \Vert }, \frac {y}{ \Vert {y} \Vert } \biggr\rangle \biggr\vert ^{2} \leq b_{0}. $$
This is equivalent to
$$ a_{0} \Vert {x} \Vert ^{2} \Vert {y} \Vert ^{2}\leq \sum_{j=1}^{N} \bigl\vert \langle A_{j}x, y \rangle \bigr\vert ^{2} \leq b_{0} \Vert {x} \Vert ^{2} \Vert {y} \Vert ^{2}, $$
(2.1)
which means that, for any vector \(x\neq0 \), \(\{A_{j}x\}_{j=1}^{N} \) is a frame for \(F^{d} \) with frame bounds \(a_{0} \Vert {x} \Vert ^{2} \) and \(b_{0} \Vert {x} \Vert ^{2} \). Hence, we have proved the following lemma.

Lemma 2.1

Suppose \(A= \{A_{j}\}_{j=1}^{N} \) is a collection of Hermitian matrices in \(H_{d}(F) \). Then for any \(x \neq0 \), the collection \(\{A_{j}x\}_{j=1}^{N} \) forms a frame for \(F^{d} \) if and only if \(a_{0}>0 \) and \(b_{0}<+\infty\). In this case, (2.1) holds for every \(x,y\in F^{d}\).
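Lemma 2.1 suggests a simple numerical probe: since \(\sum_{j}|x^{*}A_{j}y|^{2}=y^{T}(\sum_{j}A_{j}xx^{T}A_{j})y \) in the real case, the constants \(a_{0} \) and \(b_{0} \) are the extreme eigenvalues of that matrix over unit vectors x. A hedged NumPy sketch (the random family, function names and sample count are our illustrative choices):

```python
import numpy as np

def rand_sym(rng, d):
    """A random symmetric d x d matrix."""
    M = rng.standard_normal((d, d))
    return (M + M.T) / 2

def R_x(x, As):
    # y^T R_x y = sum_j |x^T A_j y|^2, so the optimal frame bounds of
    # {A_j x} are the extreme eigenvalues of R_x = sum_j A_j x x^T A_j.
    return sum(np.outer(A @ x, A @ x) for A in As)

def estimate_a0_b0(As, d, n_samples=2000, seed=1):
    """Monte Carlo estimate of a_0 = inf λ_min(R_x), b_0 = sup λ_max(R_x) over unit x."""
    rng = np.random.default_rng(seed)
    a0, b0 = np.inf, 0.0
    for _ in range(n_samples):
        x = rng.standard_normal(d)
        x /= np.linalg.norm(x)
        eigs = np.linalg.eigvalsh(R_x(x, As))
        a0, b0 = min(a0, eigs[0]), max(b0, eigs[-1])
    return a0, b0

rng = np.random.default_rng(0)
d, N = 3, 7  # N generic symmetric matrices, enough for the real case
As = [rand_sym(rng, d) for _ in range(N)]
a0, b0 = estimate_a0_b0(As, d)
print(a0, b0)  # a0 > 0 indicates {A_j x} is a frame for every sampled direction
```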

For \(A=\{A_{j}\}_{j=1}^{N} \subset H_{d}(\mathbb {R}) \), Yang Wang and Zhiqiang Xu [17] proved that A is phase retrievable if and only if \(\{A_{j}x\}_{j=1}^{N} \) is a frame of \(\mathbb {R}^{d} \) for every nonzero \(x\in \mathbb {R}^{d} \). Combining this with Lemma 2.1 leads to the following theorem.

Theorem 2.1

Let \(A=\{A_{j}\}_{j=1}^{N} \subset H_{d}(\mathbb {R}) \). Then A is phase retrievable if and only if \(a_{0}>0 \) and \(b_{0}<+\infty\).

Since \(| \langle A_{j}x, y \rangle|^{2}=y^{T}A_{j}xx^{T}A_{j}y \) in the real case, the above theorem can be restated in terms of quadratic forms as follows.

Corollary 2.1

Let \(A=\{A_{j}\}_{j=1}^{N} \subset H_{d}(\mathbb {R})\). Then A is phase retrievable if and only if there are two positive real constants \(a_{0}\), \(b_{0} \) such that
$$ a_{0} \Vert {x} \Vert ^{2} I \leq R_{x} \leq b_{0} \Vert {x} \Vert ^{2} I,\quad x\in \mathbb {R}^{d} , $$
(2.2)
where the inequality is in the sense of quadratic forms and \(R_{x}:=\sum_{j=1}^{N}A_{j}xx^{T}A_{j} \).

For any positive semidefinite matrix \(A_{j}\in H_{d}(\mathbb {R}) \), there is a matrix \(B_{j}\in \mathbb {R}^{r_{j}\times d} \) with \(r_{j}\geq1 \) such that \(A_{j}=B_{j}^{T}B_{j} \). The matrix \(B_{j} \) can be taken to be a square root of \(A_{j} \), but in general it is not unique. Let \(B_{j}^{T}=(b_{j,1},\ldots,b_{j,r_{j}}) \), where \(b_{j,i} \) is the ith column of the matrix \(B_{j}^{T} \). Then \(A_{j}x \) can be expanded as
$$A_{j}x=B_{j}^{T}B_{j}x=\sum _{i=1}^{r_{j}}b_{j,i}b_{j,i}^{T}x =\sum_{i=1}^{r_{j}} \langle x, b_{j,i} \rangle b_{j,i}. $$
Hence, we have
$$x^{T}A_{j}x= \langle A_{j}x, x \rangle= \sum _{i=1}^{r_{j}}\bigl| \langle x, b_{j,i} \rangle\bigr|^{2}. $$
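A small NumPy sketch of this decomposition (toy sizes of our own choosing): the quadratic form \(x^{T}A_{j}x \) equals the classical phase-retrieval energy \(\sum_{i}| \langle x, b_{j,i} \rangle|^{2} \), and the square root \(B_{j} \) is not unique:

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 4, 2

# Build a positive semidefinite A = B^T B with B of shape (r, d).
B = rng.standard_normal((r, d))
A = B.T @ B

x = rng.standard_normal(d)

# x^T A x equals the classical phase-retrieval energy sum_i |<x, b_i>|^2,
# where the b_i are the columns of B^T, i.e. the rows of B.
quad = x @ A @ x
classical = sum(np.dot(x, b) ** 2 for b in B)
print(quad, classical)

# B is not unique: any orthogonal Q gives another square root Q @ B of A.
Q = np.array([[0.0, -1.0], [1.0, 0.0]])  # a rotation of R^r
assert np.allclose((Q @ B).T @ (Q @ B), A)
```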
Using the above formula, we obtain a relation between generalized phase retrieval and the classical phase retrieval.

Theorem 2.2

Suppose \(A_{j}=B_{j}^{T}B_{j}\in H_{d}(\mathbb {R}) \) is a positive semidefinite matrix and \(B_{j}^{T}=(b_{j,1},\ldots,b_{j,r_{j}}) \) for \(j=1,\ldots, N \). If \(\{A_{j}\}_{j=1}^{N} \) is generalized phase retrievable, then the column vectors \(\{b_{j,i}\}_{i=1,j=1}^{r_{j},N} \) satisfy the complementary property and therefore form a phase retrievable frame.

Proof

We prove it by contradiction. Let \(\varLambda:=\{(j,i) : 1\leq i\leq r_{j}, 1\leq j \leq N \} \). Any subset S of Λ induces, for each j, the two index sets \(S_{j}=\{i \mid (j,i)\in S\} \) and \(S^{C}_{j}=\{1,2,\ldots,r_{j}\}\setminus S_{j} \). If neither \(\{b_{j,i}\}_{(j,i)\in S} \) nor \(\{b_{j,i}\}_{(j,i)\in S^{C}} \) is a spanning set of \(\mathbb{R}^{d} \), then there exist two nonzero elements \(x,y\in\mathbb{R}^{d} \) such that \(\langle x, b_{j,i} \rangle=0 \) for \((j,i)\in S \) and \(\langle y, b_{j,i} \rangle=0 \) for \((j,i)\in S^{C} \). Consequently, for \(j=1,\ldots, N \), we have
$$\begin{aligned} \bigl\Vert {B_{j}(x+y)} \bigr\Vert ^{2} &= \sum _{i=1}^{r_{j}} \bigl\vert \langle x+y, b_{j,i} \rangle \bigr\vert ^{2} \\ &= \sum_{i\in S_{j}} \bigl\vert \langle x+y, b_{j,i} \rangle \bigr\vert ^{2}+ \sum _{i\in S^{C}_{j}} \bigl\vert \langle x+y, b_{j,i} \rangle \bigr\vert ^{2} \\ &= \sum_{i\in S_{j}} \bigl\vert \langle x-y, b_{j,i} \rangle \bigr\vert ^{2}+ \sum _{i\in S^{C}_{j}} \bigl\vert \langle x-y, b_{j,i} \rangle \bigr\vert ^{2} \\ &= \bigl\Vert {B_{j}(x-y)} \bigr\Vert ^{2}. \end{aligned}$$
Incorporating \(x^{T}A_{j}x= \Vert {B_{j}x} \Vert ^{2} \) for any \(x\in \mathbb {R}^{d} \), the above equation shows that \((x+y)^{T}A_{j}(x+y)=(x-y)^{T}A_{j}(x-y) \) for all j. Since \(\{A_{j}\}_{j=1}^{N} \) is phase retrievable, we have \(x+y=\pm(x-y) \), that is, \(x=0 \) or \(y=0 \). This contradicts the fact that both x and y are nonzero. □
If \(\{A_{j}\}_{j=1}^{N} \) is generalized phase retrievable, then \(\{ A_{j,i}\}_{j=1,i=1}^{N,r_{j}}= \{b_{j,i}b_{j,i}^{T}\}_{j=1,i=1}^{N,r_{j}} \) is generalized phase retrievable by Theorem 2.2. Again, by Theorem 2.1, there exist positive constants \(a_{1}\), \(b_{1} \) such that
$$ a_{1} \Vert {x} \Vert ^{2} \Vert {y} \Vert ^{2}\leq \sum_{j=1}^{N}\sum _{i=1}^{r_{j}} \bigl\vert \langle A_{j,i}x, y \rangle \bigr\vert ^{2} \leq b_{1} \Vert {x} \Vert ^{2} \Vert {y} \Vert ^{2}. $$
On the other hand,
$$\sum_{j=1}^{N}\bigl|x^{T}A_{j}y\bigr|^{2}= \sum_{j=1}^{N} \Biggl\vert \sum _{i=1}^{r_{j}} \langle A_{j,i}x, y \rangle \Biggr\vert ^{2} \leq r\sum_{j=1}^{N} \sum_{i=1}^{r_{j}} \bigl\vert \langle A_{j,i}x, y \rangle \bigr\vert ^{2} \leq rb_{1} \Vert {x} \Vert ^{2} \Vert {y} \Vert ^{2}, $$
with \(r=\max_{j}\{r_{j}\} \), where the first inequality follows from the Cauchy–Schwarz inequality. Therefore, we have the upper bound relation \(b_{0}\leq rb_{1} \).

2.1 Bi-Lipschitz property

In this subsection, we consider the bi-Lipschitz property of the mappings \(M_{A} \) and \(\sqrt{M_{A}} \). First, we show the stability of the mapping \(M_{A} \) with respect to the metric \(d_{2} \).

Theorem 2.3

Let \(\{A_{j}\}_{j=1}^{N} \subset H_{d} (\mathbb {R}) \) be generalized phase retrievable. Then \(M_{A} \) is bi-Lipschitz with respect to the matrix metric \(d_{2}(x,y)= \Vert {x+y} \Vert \Vert {x-y} \Vert \).

Proof

For any \(x,y \in \mathbb {R}^{d}\), we have
$$\bigl\Vert {M_{A}(x)-M_{A}(y)} \bigr\Vert ^{2}= \sum_{j=1}^{N} \bigl\vert \bigl\langle A_{j}(x+y), x-y \bigr\rangle \bigr\vert ^{2}. $$
By Lemma 2.1, we have
$$ a_{0} \Vert {x+y} \Vert ^{2} \Vert {x-y} \Vert ^{2} \leq \sum_{j=1}^{N} \bigl\vert \bigl\langle A_{j}(x+y), x-y \bigr\rangle \bigr\vert ^{2} \leq b_{0} \Vert {x+y} \Vert ^{2} \Vert {x-y} \Vert ^{2}. $$
This is equivalent to
$$ a_{0}d_{2}^{2}(x,y)\leq \bigl\Vert {M_{A}(x)-M_{A}(y)} \bigr\Vert ^{2} \leq b_{0}d^{2}_{2}(x,y). $$
(2.3)
 □
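The two-sided bound (2.3) rests on the identity \(x^{T}Ax-y^{T}Ay=\langle A(x+y), x-y \rangle \) for symmetric A. A hedged NumPy sketch (the random family is our own toy example) checks this identity and the boundedness of the resulting ratio:

```python
import numpy as np

def rand_sym(rng, d):
    M = rng.standard_normal((d, d))
    return (M + M.T) / 2

def M_A(x, As):
    return np.array([x @ A @ x for A in As])

def d2(x, y):
    return np.linalg.norm(x + y) * np.linalg.norm(x - y)

rng = np.random.default_rng(4)
d, N = 3, 7
As = [rand_sym(rng, d) for _ in range(N)]

# Key identity behind Theorem 2.3 (A symmetric):
#   x^T A x - y^T A y = < A(x+y), x-y >.
x, y = rng.standard_normal(d), rng.standard_normal(d)
lhs = M_A(x, As) - M_A(y, As)
rhs = np.array([(A @ (x + y)) @ (x - y) for A in As])
assert np.allclose(lhs, rhs)

# Hence ||M_A(x) - M_A(y)||^2 / d2(x, y)^2 stays within fixed positive bounds.
ratios = []
for _ in range(500):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    ratios.append(np.sum((M_A(x, As) - M_A(y, As)) ** 2) / d2(x, y) ** 2)
print(min(ratios), max(ratios))
```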

Now, we consider the stability of the mapping \(\sqrt{M_{A}} \) with respect to the metric \(d_{1} \).

Lemma 2.2

Let \(\{A_{j}\}_{j=1}^{N} \subset H_{d}(\mathbb {R}) \) be a collection of positive semidefinite matrices that is generalized phase retrievable. Then \(\sqrt{ M_{A}} \) is upper Lipschitz bounded with respect to the metric \(d_{1}(x,y)=\min\{ \Vert {x-y} \Vert , \Vert {x+y} \Vert \} \).

Proof

First, by the definition of \(\sqrt{M_{A}} \), we have
$$ \bigl\Vert {\sqrt{M_{A}}(x)-\sqrt{M_{A}}(y)} \bigr\Vert ^{2} = \sum_{j=1}^{N} \bigl(\sqrt{x^{T}A_{j}x}-\sqrt{y^{T}A_{j}y} \bigr)^{2} =\sum_{j=1}^{N} \bigl( \Vert {B_{j}x} \Vert - \Vert {B_{j}y} \Vert \bigr) ^{2}, $$
where \(B_{j} \) is the square root of \(A_{j} \). Then, by the reverse triangle inequality,
$$\begin{aligned} \sum_{j=1}^{N} \bigl( \Vert {B_{j}x} \Vert - \Vert {B_{j}y} \Vert \bigr) ^{2} &\leq \sum_{j=1}^{N} \bigl( \min \bigl\{ \bigl\Vert {B_{j}(x-y)} \bigr\Vert , \bigl\Vert {B_{j}(x+y)} \bigr\Vert \bigr\} \bigr)^{2} \\ &\leq \min \Biggl\{ \sum_{j=1}^{N} \bigl\Vert {B_{j}(x-y)} \bigr\Vert ^{2}, \sum _{j=1}^{N} \bigl\Vert {B_{j}(x+y)} \bigr\Vert ^{2} \Biggr\} \\ &= \min \Biggl\{ \sum_{j=1}^{N}(x-y)^{T}A_{j}(x-y), \sum_{j=1}^{N}(x+y)^{T}A_{j}(x+y) \Biggr\} \\ &= \min \Biggl\{ (x-y)^{T} \Biggl(\sum_{j=1}^{N}A_{j} \Biggr) (x-y), (x+y)^{T} \Biggl(\sum_{j=1}^{N}A_{j} \Biggr) (x+y) \Biggr\} . \end{aligned}$$
Since each \(A_{j} \) is positive semidefinite, so is \(\sum_{j=1}^{N}A_{j} \). Let \(\lambda_{1} \) be the maximum eigenvalue of \(\sum_{j=1}^{N}A_{j} \). Then we have
$$\min \Biggl\{ (x-y)^{T} \Biggl(\sum_{j=1}^{N}A_{j} \Biggr) (x-y), (x+y)^{T} \Biggl(\sum_{j=1}^{N}A_{j} \Biggr) (x+y) \Biggr\} \leq \lambda_{1} d_{1}^{2} (x,y). $$
Combining all the above inequalities, we have
$$\bigl\Vert {\sqrt{M_{A}}(x)-\sqrt{M_{A}}(y)} \bigr\Vert ^{2} \leq \lambda_{1} d_{1}^{2} (x,y). $$
This demonstrates that the mapping \(\sqrt{M_{A}} \) is upper Lipschitz bounded with constant \(\lambda_{1} \). Furthermore, picking an eigenvector x of \(\sum_{j=1}^{N}A_{j} \) corresponding to \(\lambda_{1}\), we have
$$\bigl\Vert {\sqrt{M_{A}}(x)-\sqrt{M_{A}}(0)} \bigr\Vert ^{2} = \sum_{j=1}^{N} \Vert {B_{j}x} \Vert ^{2} =x^{T} \Biggl(\sum _{j=1}^{N}A_{j} \Biggr)x= \lambda_{1} \Vert {x} \Vert ^{2}, $$
which means \(\lambda_{1} \) is the optimal upper bound. □
For the lower bound, we consider the parallelogram law
$$\Vert {x+y} \Vert ^{2}+ \Vert {x-y} \Vert ^{2}= 2 \bigl( \Vert {x} \Vert ^{2}+ \Vert {y} \Vert ^{2} \bigr) $$
in two cases. At first, if \(\Vert {x+y} \Vert \leq \Vert {x-y} \Vert \), we have
$$\frac{ \Vert {x+y} \Vert ^{2} \Vert {x-y} \Vert ^{2}}{ \Vert {x} \Vert ^{2} + \Vert {y} \Vert ^{2}} \geq \Vert {x+y} \Vert ^{2}. $$
Secondly, if \(\Vert {x+y} \Vert \geq \Vert {x-y} \Vert \), we have
$$\frac{ \Vert {x+y} \Vert ^{2} \Vert {x-y} \Vert ^{2}}{ \Vert {x} \Vert ^{2} + \Vert {y} \Vert ^{2}} \geq \Vert {x-y} \Vert ^{2}. $$
Combining the above two cases,
$$ d_{1}^{2}(x,y)=\min\bigl\{ \Vert {x+y} \Vert ^{2}, \Vert {x-y} \Vert ^{2}\bigr\} \leq \frac{ \Vert {x+y} \Vert ^{2} \Vert {x-y} \Vert ^{2}}{ \Vert {x} \Vert ^{2}+ \Vert {y} \Vert ^{2}} =\frac{d^{2}_{2}(x,y)}{ \Vert {x} \Vert ^{2}+ \Vert {y} \Vert ^{2}}, $$
(2.4)
which indicates the relation between the two metrics. This allows us to estimate the lower Lipschitz bound of \(\sqrt{M_{A}} \).

Lemma 2.3

Let \(\{A_{j}\}_{j=1}^{N} \subset H_{d}(\mathbb {R}) \) be generalized phase retrievable and positive semidefinite. Then \(\sqrt{ M_{A}} \) is lower Lipschitz bounded with respect to the metric \(d_{1}(x,y)=\min\{ \Vert {x-y} \Vert , \Vert {x+y} \Vert \} \).

Proof

By the difference-of-squares formula, we have
$$\bigl\Vert {\sqrt{M_{A}}(x)-\sqrt{M_{A}}(y)} \bigr\Vert ^{2} = \sum_{j=1}^{N} \bigl( \Vert {B_{j}x} \Vert - \Vert {B_{j}y} \Vert \bigr)^{2} = \sum_{j=1}^{N} \biggl( \frac{ \Vert {B_{j}x} \Vert ^{2}- \Vert {B_{j}y} \Vert ^{2}}{ \Vert {B_{j}x} \Vert + \Vert {B_{j}y} \Vert } \biggr)^{2}, $$
where \(B_{j} \) is the square root of \(A_{j} \). Let C be a uniform upper operator bound for \(\{ A_{j}\}_{j=1}^{N} \), that is, \(\Vert {A_{j} x} \Vert \leq C \Vert {x} \Vert \) for \(x\in \mathbb {R}^{d} \) and \(j=1,\ldots,N \). Then \(\Vert {B_{j}x} \Vert \leq\sqrt{C} \Vert {x} \Vert \) and we have
$$\begin{aligned} \sum_{j=1}^{N} \biggl(\frac{ \Vert {B_{j}x} \Vert ^{2}- \Vert {B_{j}y} \Vert ^{2}}{ \Vert {B_{j}x} \Vert + \Vert {B_{j}y} \Vert } \biggr)^{2} &\geq\frac{\sum_{j=1}^{N} ( \Vert {B_{j}x} \Vert ^{2}- \Vert {B_{j}y} \Vert ^{2} )^{2}}{C( \Vert {x} \Vert + \Vert {y} \Vert )^{2}} \\ &\geq\frac{a_{0} d_{2}^{2}(x,y)}{ 2C( \Vert {x} \Vert ^{2}+ \Vert {y} \Vert ^{2})} \\ &\geq\frac{a_{0}}{2C} d_{1}^{2}(x,y), \end{aligned}$$
where the last inequality is due to (2.4). Thus we have proved that \(\frac{a_{0}}{2C} \) is a lower bound of the mapping \(\sqrt{M_{A}} \). □

Our discussion of the bi-Lipschitz property of \(\sqrt{M_{A}} \) is summarized in the following theorem by combining Lemma 2.2 and Lemma 2.3.

Theorem 2.4

Let \(\{A_{j}\}_{j=1}^{N} \subset H_{d}(\mathbb {R}) \) be generalized phase retrievable and positive semidefinite. Then \(\sqrt{ M_{A}} \) is bi-Lipschitz with respect to the metric \(d_{1}(x,y)=\min\{ \Vert {x-y} \Vert , \Vert {x+y} \Vert \} \) as follows:
$$\frac{a_{0}}{2C}d_{1}^{2}(x,y)\leq \bigl\Vert { \sqrt{M_{A}}(x)-\sqrt {M_{A}}(y)} \bigr\Vert ^{2} \leq\lambda_{1} d_{1}^{2}(x,y). $$

Phase retrieval by projections, introduced by Cahill et al. in [6], aims at recovering a signal from measurements consisting of the norms of its orthogonal projections onto a family of subspaces. Since \(x^{T}P_{j}x= \Vert {P_{j}x} \Vert ^{2} \) when \(P_{j} \) is an orthogonal projection onto a subspace of \(\mathbb {R}^{d} \), phase retrieval by projections is the special case of generalized phase retrieval with \(A_{j}=P_{j} \). Therefore, Theorem 2.3 and Theorem 2.4 also hold for phase retrieval by projections. In this special case, \(\lambda_{1} \) can be upper bounded by N and C equals one.
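For illustration, a NumPy sketch with random orthogonal projections (a toy setup of our own choosing) confirms the two constants just mentioned: each \(P_{j} \) has operator norm one, and \(\lambda_{1}=\lambda_{\max}(\sum_{j}P_{j})\leq N \):

```python
import numpy as np

rng = np.random.default_rng(5)
d, N = 4, 9

# Random orthogonal projections P_j onto random 2-dimensional subspaces of R^d.
Ps = []
for _ in range(N):
    Q, _ = np.linalg.qr(rng.standard_normal((d, 2)))  # orthonormal basis of the subspace
    Ps.append(Q @ Q.T)

# Operator norm of each projection is 1 (the constant C of Lemma 2.3)...
op_norms = [np.linalg.norm(P, 2) for P in Ps]
# ...and the top eigenvalue of sum_j P_j is at most N (the constant lambda_1).
lam1 = np.linalg.eigvalsh(sum(Ps))[-1]
print(op_norms, lam1)
```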

2.2 Cramer–Rao lower bound

Given a signal \(x\in \mathbb {R}^{d} \), we take measurements of the form \(Y=\varphi(x)+Z \), where the entries of Z are independent Gaussian random variables with mean 0 and variance \(\sigma^{2} \). The noisy generalized phase retrieval problem is to estimate x from the measurements Y. In this setting, we apply the theory of Fisher information to evaluate the stability of \(\varphi(x) \). The Fisher information matrix is defined entry-wise by
$$\bigl(\mathbb {I}(x)\bigr)_{m,\ell} =-\mathbb {E}\biggl[ \frac{\partial^{2}\log L(x)}{ \partial x_{m} \, \partial x_{\ell}} \biggr], $$
where \(L(x) \) is the likelihood function. By assumption, Y is a random vector with
$$L(x)=\frac{1}{(2\pi\sigma^{2})^{N/2}} e^{-\frac{1}{2\sigma^{2}} \Vert {y-\varphi(x)} \Vert ^{2}}. $$
By some simple computations, the Fisher information matrix entry \((\mathbb {I}(x))_{m,\ell} \) equals
$$\frac{1}{\sigma^{2}} \sum_{j=1}^{N}\mathbb {E}\biggl[ \frac{\partial(\varphi(x))_{j}}{\partial x_{m}} \frac{\partial(\varphi(x))_{j}}{\partial x_{\ell}} \biggr]- \frac{1}{\sigma^{2}} \sum _{j=1}^{N}\mathbb {E}\biggl[\bigl(y_{j}-\bigl( \varphi(x)\bigr)_{j}\bigr) \frac{\partial^{2}(\varphi(x))_{j}}{\partial x_{m}\, \partial x_{\ell}} \biggr] , $$
where \((\varphi(x) )_{j} \) is the jth component of \(\varphi(x) \). Since \(\mathbb {E}[y_{j}]=(\varphi(x))_{j} \), the second expectation equals zero and
$$ \bigl(\mathbb {I}(x)\bigr)_{m,\ell}= \frac{1}{\sigma^{2}} \sum _{j=1}^{N}\frac{\partial}{\partial x_{m}} \bigl(\varphi(x) \bigr)_{j} \frac{\partial}{\partial x_{\ell}} \bigl(\varphi(x) \bigr)_{j}. $$
(2.5)
For the generalized phase retrieval problem in the real case, we have \(\varphi(x)=(x^{T}A_{j}x)_{j=1}^{N} \); to obtain a unique solution, we adopt an assumption on the signal x introduced in [1]: the signal x lies in an open half-space determined by a vector \(e\in \mathbb {R}^{d} \), i.e. \(\langle x, e \rangle>0 \). See Ref. [4] for another assumption that guarantees uniqueness. Substituting \(\varphi(x)=(x^{T}A_{j}x)_{j=1}^{N} \) into (2.5), we have
$$\bigl(\mathbb {I}(x)\bigr)_{m,\ell}= \frac{4}{\sigma^{2}}\sum _{j=1}^{N}(A_{j}x)_{m}(A_{j}x)_{\ell}, $$
where \((A_{j}x)_{m} \) is the mth element of vector \(A_{j}x \). This indicates the Fisher information matrix can be expressed as
$$ \mathbb {I}(x)=\frac{4}{\sigma^{2}}\sum_{j=1}^{N}(A_{j}x) (A_{j}x)^{T} =\frac{4}{\sigma^{2}}\sum _{j=1}^{N}A_{j}xx^{T}A_{j}= \frac{4}{\sigma^{2}}R_{x}. $$
(2.6)
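The closed form (2.6) can be sanity-checked numerically: for Gaussian noise, (2.5) says \(\mathbb {I}(x)=J^{T}J/\sigma^{2} \), where J is the Jacobian of φ and row j of J is \(2(A_{j}x)^{T} \). A short NumPy sketch (random data of our own choosing):

```python
import numpy as np

def rand_sym(rng, d):
    M = rng.standard_normal((d, d))
    return (M + M.T) / 2

rng = np.random.default_rng(6)
d, N, sigma = 3, 6, 0.5
As = [rand_sym(rng, d) for _ in range(N)]
x = rng.standard_normal(d)

# Fisher information from the Jacobian of phi(x) = (x^T A_j x)_j:
# row j of J equals 2 (A_j x)^T, and I(x) = J^T J / sigma^2 by (2.5).
J = np.array([2 * A @ x for A in As])
fisher_from_jacobian = J.T @ J / sigma**2

# Closed form (2.6): I(x) = (4 / sigma^2) * R_x with R_x = sum_j A_j x x^T A_j.
Rx = sum(np.outer(A @ x, A @ x) for A in As)
fisher_closed_form = 4 / sigma**2 * Rx
print(np.max(np.abs(fisher_from_jacobian - fisher_closed_form)))
```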
Since Corollary 2.1 implies that the matrix \(R_{x} \) is positive definite for \(x\neq0 \), we obtain the Cramer–Rao lower bound by Theorem 3.2 in [12], as stated in the following theorem.

Theorem 2.5

The Fisher information matrix for the noisy generalized phase retrieval model in the real case is given by (2.6). Consequently, for any unbiased estimator \(\varPhi(y) \) for x, the covariance matrix is bounded below by the Cramer–Rao lower bound as follows:
$$\operatorname{Cov}\bigl[\varPhi(y)\bigr]\geq \bigl(\mathbb {I}(x)\bigr)^{-1} = \frac{\sigma^{2}}{4} (R_{x})^{-1}. $$
Therefore, the mean square error of any unbiased estimator \(\varPhi(y)\) satisfies
$$\mathbb {E}\bigl[ \bigl\Vert {\varPhi(y)-x} \bigr\Vert ^{2}|x\bigr] \geq \frac{\sigma^{2}}{4} \operatorname{Tr}\bigl(R_{x}^{-1}\bigr). $$
Taking inverses of the matrices in (2.2) leads to
$$\frac{I}{b_{0} \Vert {x} \Vert ^{2}} \leq R_{x}^{-1} \leq \frac{I}{a_{0} \Vert {x} \Vert ^{2}}. $$
Then taking the trace of each matrix yields
$$\frac{d}{b_{0} \Vert {x} \Vert ^{2}} \leq \operatorname{Tr}\bigl(R_{x}^{-1}\bigr) \leq \frac{d}{a_{0} \Vert {x} \Vert ^{2}}. $$
Therefore, using Theorem 2.5, we obtain the following mean square error bounds for unbiased estimators.

Corollary 2.2

If \(A=\{A_{j}\}_{j=1}^{N} \) is generalized phase retrievable, then, for any unbiased estimator \(\varPhi(y) \) for x, we have
$$\mathbb {E}\bigl[ \bigl\Vert {\varPhi(y)-x} \bigr\Vert ^{2}|x\bigr] \geq \frac{\sigma^{2} d }{4b_{0} \Vert {x} \Vert ^{2}}. $$
Furthermore, any unbiased estimator that achieves the Cramer–Rao lower bound has a mean square error that is bounded above by
$$\mathbb {E}\bigl[ \bigl\Vert {\varPhi(y)-x} \bigr\Vert ^{2}|x\bigr] \leq \frac{\sigma^{2} d }{4a_{0} \Vert {x} \Vert ^{2}}. $$

3 Stability of generalized affine phase retrieval

Standard affine phase retrieval, introduced by Bing Gao et al. in [9], can be used to recover signals with prior knowledge. In this section, we first study generalized affine phase retrieval theoretically and give some basic mathematical properties; we then focus on its stability.

Let \(B_{j} \in F^{r_{j}\times d} \), where \(r_{j} \) is a positive integer. We consider recovering a signal x from the norms of the affine linear measurements \(\Vert {B_{j}x+b_{j}} \Vert \), \(j=1,\ldots,N \), where \(b_{j} \in F^{r_{j}} \) and \(x\in F^{d} \). Let \(B=\{B_{j}\}_{j=1}^{N} \) and \(b=\{b_{j}\}_{j=1}^{N} \); define the mapping \(M_{B,b}:F^{d} \rightarrow \mathbb {R}^{N}_{+} \) by
$$M_{B,b}(x)= \bigl( \Vert {B_{1}x+b_{1}} \Vert ^{2}, \Vert {B_{2}x+b_{2}} \Vert ^{2} ,\ldots, \Vert {B_{N}x+b_{N}} \Vert ^{2} \bigr). $$
The pair \((B,b) \) is said to be generalized affine phase retrievable for \(F^{d} \) if \(M_{B,b} \) is injective on \(F^{d} \). This definition differs slightly from the one in [10], where all \(r_{j} \) equal the same integer \(r\geq1 \). In analogy with generalized phase retrieval, we define the mapping \(\sqrt {M_{B,b}} \) by
$$\sqrt{M_{B,b}}(x)= \bigl( \Vert {B_{1}x+b_{1}} \Vert , \Vert {B_{2}x+b_{2}} \Vert ,\ldots, \Vert {B_{N}x+b_{N}} \Vert \bigr). $$
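As a quick NumPy sketch (reusing the measurement family that appears later in Example 3.1; all function names are ours), the affine map \(M_{B,b} \) can separate x from −x, unlike \(M_{A} \):

```python
import numpy as np

def M_Bb(x, Bs, bs):
    """Affine intensity measurements ||B_j x + b_j||^2 (the r_j may differ)."""
    return np.array([np.sum((B @ x + b) ** 2) for B, b in zip(Bs, bs)])

def sqrt_M_Bb(x, Bs, bs):
    return np.sqrt(M_Bb(x, Bs, bs))

# The family of Example 3.1: two shifted identities and one row vector.
Bs = [np.eye(2), np.eye(2), np.array([[1.0, 0.0]])]
bs = [np.zeros(2), np.array([0.0, 1.0]), np.array([1.0])]

x = np.array([0.5, -1.5])
# The shifts b_j break the sign symmetry: M_Bb(x) != M_Bb(-x).
print(M_Bb(x, Bs, bs), M_Bb(-x, Bs, bs))
```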

Theorem 3.1

Let \(B_{j} \in \mathbb {R}^{r_{j}\times d} \) and \(b_{j}\in \mathbb {R}^{r_{j}} \). Then the following are equivalent:
  1. (A)

    The pair \((B,b) \) is generalized affine phase retrievable for \(\mathbb {R}^{d} \).

     
  2. (B)

    There exist no nonzero \(u \in \mathbb {R}^{d} \) and \(v\in \mathbb {R}^{d} \) such that \(\langle B_{j}u, B_{j}v+b_{j} \rangle=0 \) for all \(1\leq j \leq N \).

     
  3. (C)

    If v is a solution of the equations \(B_{j}v+b_{j}=0 \) for \(j\in S\subset\{1,2,\ldots,N\} \), then \(\{B_{j}^{T}B_{j}v+B_{j}^{T}b_{j}\}_{j\in S^{C}} \) is a spanning set of \(\mathbb {R}^{d} \).

     
  4. (D)

    The Jacobian of \(M_{B,b} \) has rank d everywhere on \(\mathbb {R}^{d}\).

     

Proof

(A) ⇔ (B). Assume that \(M_{B,b}(x)=M_{B,b}(y) \) for some \(x \neq y \) in \(\mathbb {R}^{d} \). For any j, we have
$$ \Vert {B_{j}x+b_{j}} \Vert ^{2}- \Vert {B_{j}y+b_{j}} \Vert ^{2} = \bigl\langle B_{j}(x-y), B_{j}(x+y)+2b_{j} \bigr\rangle . $$
Set \(2u=x-y \) and \(2v=x+y \). Then \(u\neq0 \) and for all j,
$$ \langle B_{j}u, B_{j}v+b_{j} \rangle=0. $$
(3.1)
Conversely, assume that (3.1) holds for all j. Let \(x,y \in \mathbb {R}^{d} \) be given by \(x-y=2u \) and \(x+y=2v \). Then \(x\neq y \), but \(M_{B,b}(x)=M_{B,b}(y) \). Hence \((B,b) \) is not generalized affine phase retrievable.

(B) ⇔ (C). Assume \(\{B_{j}^{T}B_{j}v+B_{j}^{T}b_{j}\}_{j\in S^{C}} \) is not a spanning set of \(\mathbb {R}^{d} \), then there is a nonzero vector \(u\in \mathbb {R}^{d} \) such that \(\langle B_{j}u, B_{j}v+b_{j} \rangle= \langle u, B_{j}^{T}B_{j}v+B_{j}^{T}b_{j} \rangle=0 \) for \(j\in S^{C} \). For \(j\in S \), since v is the solution of equations \(B_{j}v+b_{j}=0 \), the inner product \(\langle B_{j}u, B_{j}v+b_{j} \rangle \) also equals zero, which contradicts (B). The converse can be proven similarly.

(C) ⇔ (D). The Jacobian of \(M_{B,b} \) at x is exactly
$$J_{B,b}(x)= 2\bigl(B_{1}^{T}B_{1}x+B_{1}^{T}b_{1},B_{2}^{T}B_{2}x+B_{2}^{T}b_{2}, \ldots, B_{N}^{T}B_{N}x+B_{N}^{T}b_{N} \bigr), $$
which means the jth column of \(J_{B,b}(x) \) is precisely \(2(B_{j}^{T}B_{j}x+B_{j}^{T}b_{j}) \). Hence the Jacobian has rank d at x if and only if the vectors \(\{B_{j}^{T}B_{j}x+B_{j}^{T}b_{j}\}_{j=1}^{N} \) span \(\mathbb {R}^{d} \); since the terms with \(B_{j}x+b_{j}=0 \) vanish, this is exactly condition (C). □
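The Jacobian formula in the proof can be checked against central finite differences; at a generic point it has full rank d, which is condition (D). A NumPy sketch (random \(B_{j} \), \(b_{j} \) of our own choosing):

```python
import numpy as np

def M_Bb(x, Bs, bs):
    return np.array([np.sum((B @ x + b) ** 2) for B, b in zip(Bs, bs)])

def jacobian_analytic(x, Bs, bs):
    # Column j of the d x N Jacobian is 2 (B_j^T B_j x + B_j^T b_j).
    return np.column_stack([2 * (B.T @ (B @ x + b)) for B, b in zip(Bs, bs)])

def jacobian_numeric(x, Bs, bs, h=1e-6):
    # Central differences; exact up to rounding since each phi_j is quadratic.
    d, N = len(x), len(Bs)
    J = np.zeros((d, N))
    for m in range(d):
        e = np.zeros(d)
        e[m] = h
        J[m, :] = (M_Bb(x + e, Bs, bs) - M_Bb(x - e, Bs, bs)) / (2 * h)
    return J

rng = np.random.default_rng(8)
d, N = 3, 6
Bs = [rng.standard_normal((2, d)) for _ in range(N)]
bs = [rng.standard_normal(2) for _ in range(N)]
x = rng.standard_normal(d)

Ja = jacobian_analytic(x, Bs, bs)
Jn = jacobian_numeric(x, Bs, bs)
print(np.max(np.abs(Ja - Jn)), np.linalg.matrix_rank(Ja))
```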

The problem of the minimal number of measurements has attracted much attention recently. For generalized affine phase retrieval, the answer depends on the constraints on \(B_{j} \), \(b_{j} \) and on prior knowledge of the signal x. The following theorem is given in [10].

Theorem 3.2

([10])

Let \(N\geq2d \) and \(r>1 \). Then a generic \(\{(B_{j},b_{j})\}_{j=1}^{N} \subset \mathbb {R}^{r\times(d+1)} \) has the generalized affine phase retrieval property in \(\mathbb {R}^{d} \).

Let \(r=\max_{j} r_{j} \). Each \(r_{j}\times(d+1) \) matrix \((B_{j},b_{j}) \) in Theorem 3.1 can be extended to an \(r\times(d+1) \) matrix by appending zero rows. The extended matrices can be viewed as an affine phase retrieval family in which all \(r_{j}=r \), which leads to the following corollary of Theorem 3.2.

Corollary 3.1

Let \(\tilde{A}_{j}=(B_{j}^{T},b_{j}^{T})^{T}(B_{j},b_{j})\), where \(b_{j}\in \mathbb {R}^{r_{j}} \) and \(B_{j}\in \mathbb {R}^{r_{j}\times d} \) is a nonzero matrix. If \(N\geq2d \) and \(\tilde{A}=(\tilde{A}_{j})_{j=1}^{N} \) is a generic set in \(H_{d+1}^{N}(\mathbb {R}) \), then the pair \((B,b) \) is generalized affine phase retrievable.

Example 3.1

Let \(B_{1}=B_{2} \) be the \(2\times2 \) identity matrix, \(B_{3}=(1,0) \), \(b_{1}=(0,0)^{T} \), \(b_{2}=(0,1)^{T} \), \(b_{3}=1 \). Then the pair \((B,b) \) is generalized affine phase retrievable in \(\mathbb {R}^{2} \). In fact, writing \(u=(x,y)^{T}\in \mathbb {R}^{2} \), we have
$$\begin{aligned}& \Vert {B_{1}u+b_{1}} \Vert ^{2}= x^{2}+y^{2}, \\& \Vert {B_{2}u+b_{2}} \Vert ^{2}= x^{2}+(y+1)^{2}, \\& \Vert {B_{3}u+b_{3}} \Vert ^{2}= (x+1)^{2}. \end{aligned}$$
These equations are easily solved for x and y: subtracting the first measurement from the second determines y, and then the first and third determine x. The number of measurements is 3, with \(r_{1}=r_{2}=2\), \(r_{3}=1 \).
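The recovery in Example 3.1 can be made explicit in a few lines of Python (function names are ours): \(m_{2}-m_{1}=2y+1 \) gives y, and then \(m_{3}=x^{2}+2x+1 \) with \(x^{2}=m_{1}-y^{2} \) gives x:

```python
import numpy as np

def measure(u):
    """The three measurements of Example 3.1 for u = (x, y)."""
    x, y = u
    return np.array([x**2 + y**2, x**2 + (y + 1)**2, (x + 1)**2])

def recover(m):
    # m2 - m1 = 2y + 1, and m3 = x^2 + 2x + 1 with x^2 = m1 - y^2.
    y = (m[1] - m[0] - 1) / 2
    x = (m[2] - 1 - (m[0] - y**2)) / 2
    return np.array([x, y])

u = np.array([0.7, -1.3])
print(recover(measure(u)))  # recovers u exactly, including signs
```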

Now, we consider the stability of generalized affine phase retrieval. Let \(\tilde{B}_{j}=(B_{j},b_{j}) \), \(\tilde{A}_{j}=\tilde{B}_{j}^{T}\tilde{B}_{j} \), and \(\tilde{x}=(x^{T},1)^{T} \). We have the following theorem.

Theorem 3.3

Suppose \(\tilde{A}=\{\tilde{A}_{j}\}_{j=1}^{N} \) is a generic set with \(N\geq2d \). Then \((B, b) \) is generalized affine phase retrievable. Furthermore, there exist positive constants \(c_{0}\), \(c_{1}\), \(C_{0}\), \(C_{1} \) depending on \((B, b)\) such that, for any \(x,y \in\mathbb{R}^{d} \),
$$\begin{aligned}& c_{0}\bigl(d^{2}_{2}(x,y)+d^{2}(x,y) \bigr) \leq \bigl\Vert {M_{B,b}(x)-M_{B,b}(y)} \bigr\Vert ^{2} \leq c_{1}\bigl(d^{2}_{2}(x,y)+d^{2}(x,y) \bigr), \end{aligned}$$
(3.2)
$$\begin{aligned}& C_{0} d_{1}^{2}(x,y)\leq \bigl\Vert {\sqrt{M_{B,b}}(x)- \sqrt{M_{B,b}}(y)} \bigr\Vert ^{2}\leq C_{1} d^{2}(x,y). \end{aligned}$$
(3.3)

Proof

The generalized affine phase retrievability of the pair \((B,b) \) follows from Corollary 3.1. Noticing that \(\Vert {B_{j}x+b_{j}} \Vert ^{2}= \Vert {\tilde{B}_{j}\tilde {x}} \Vert ^{2} =\tilde{x}^{T}\tilde{A}_{j}\tilde{x}\) implies \(M_{B,b}(x)=M_{\tilde{A}}(\tilde{x}) \), we have \(\Vert {M_{B,b}(x)-M_{B,b}(y)} \Vert ^{2} = \Vert {M_{\tilde{A}}(\tilde{x})- M_{\tilde{A}}(\tilde{y})} \Vert ^{2}\). By Theorem 2.3, we have
$$ \bigl\Vert {M_{B,b}(x)-M_{B,b}(y)} \bigr\Vert ^{2} \simeq d_{2}^{2}(\tilde{x},\tilde{y}), $$
where the symbol “≃” denotes the bi-Lipschitz relation. Since \(d_{2}^{2}(\tilde{x},\tilde{y})= \Vert {\tilde{x}+\tilde{y}} \Vert ^{2} \Vert {\tilde {x}-\tilde{y}} \Vert ^{2} =( \Vert {x+y} \Vert ^{2}+4) \Vert {x-y} \Vert ^{2} \), there exist constants \(c_{0}\), \(c_{1} \) such that (3.2) holds. Similarly, by Theorem 2.4, we have
$$ \bigl\Vert {\sqrt{M_{B,b}}(x)-\sqrt{M_{B,b}}(y)} \bigr\Vert ^{2} \simeq d_{1}^{2}(\tilde{x},\tilde{y}). $$
Since \(d_{1}^{2}(\tilde{x},\tilde{y})=\min \{ \Vert {\tilde{x}+\tilde{y}} \Vert ^{2}, \Vert {\tilde{x}-\tilde{y}} \Vert ^{2}\} =\min\{ \Vert {x+y} \Vert ^{2}+4, \Vert {x-y} \Vert ^{2} \} \), there exist constants \(C_{0}\), \(C_{1} \) such that (3.3) holds. □

In contrast to Theorem 4.1 in [9], Theorem 3.3 relaxes the constraint on the signal from a compact set to \(\mathbb{R}^{d} \). Although generalized affine phase retrieval is not bi-Lipschitz with respect to a single metric, the mappings \(M_{B,b} \) and \(\sqrt{M_{B,b}} \) are controlled by two metrics.

We now consider the additive white Gaussian noise model
$$Y=\varphi(x)+Z, $$
with \(\varphi(x)=( \Vert {B_{j}x+b_{j}} \Vert ^{2})_{j=1}^{N} \). Then, by Eq. (2.5), the Fisher information of this model is \(\frac{4}{\sigma^{2}}R_{x}^{a} \) where \(R_{x}^{a}=\sum_{j=1}^{N}B_{j}^{T}(B_{j}x+b_{j})(B_{j}x+b_{j})^{T}B_{j} \).

Lemma 3.1

If the pair \((B,b) \) is generalized affine phase retrievable, then the Fisher information matrix \(R_{x}^{a} \) is positive definite for any \(x\in \mathbb {R}^{d} \).

Proof

It is easy to see that \(R_{x}^{a} \) is positive semidefinite. Assume \(y^{T}R_{x}^{a}y=0 \) for some \(y\in \mathbb {R}^{d} \), that is,
$$y^{T}R_{x}^{a}y=\sum _{j=1}^{N} \bigl\vert (B_{j}x+b_{j})^{T}B_{j}y \bigr\vert ^{2}=0. $$
Therefore, for all \(j=1,\ldots,N \),
$$(B_{j}x+b_{j})^{T}B_{j}y= \bigl\langle B_{j}^{T}(B_{j}x+b_{j}), y \bigr\rangle =0. $$
Since \((B,b) \) is generalized affine phase retrievable, condition (C) of Theorem 3.1 shows that the collection \(\{B_{j}^{T}(B_{j}x+b_{j})\}_{j=1}^{N} \) is a spanning set of \(\mathbb {R}^{d} \), and hence \(y=0 \). □

Similar to generalized phase retrieval, we have the following theorem.

Theorem 3.4

The Fisher information matrix for the noisy generalized affine phase retrieval model in the real case is \(\frac{4}{\sigma^{2}}R_{x}^{a} \). Consequently, for any unbiased estimator \(\varPhi(y) \) for x, the covariance matrix is bounded below by the Cramer–Rao lower bound as follows:
$$\operatorname{Cov}\bigl[\varPhi(y)\bigr]\geq \bigl(\mathbb {I}(x)\bigr)^{-1} = \frac{\sigma^{2}}{4} \bigl(R_{x}^{a}\bigr)^{-1}. $$
Therefore, the mean square error of any unbiased estimator \(\varPhi(y) \) satisfies
$$\mathbb {E}\bigl[ \bigl\Vert {\varPhi(y)-x} \bigr\Vert ^{2}|x\bigr] \geq \frac{\sigma^{2}}{4} \operatorname{Tr}\bigl(\bigl(R_{x}^{a} \bigr)^{-1}\bigr). $$
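The resulting lower bound on the mean square error is straightforward to evaluate (a sketch assuming \(R_{x}^{a} \) is invertible, i.e. that \((B,b) \) is generalized affine phase retrievable; the function name is ours):

```python
import numpy as np

def crb_mse_bound(B, b, x, sigma=1.0):
    """Cramer-Rao lower bound (sigma^2 / 4) * Tr((R_x^a)^{-1})
    on E[||Phi(y) - x||^2] for any unbiased estimator Phi."""
    d = x.shape[0]
    R = np.zeros((d, d))
    for Bj, bj in zip(B, b):
        v = Bj.T @ (Bj @ x + bj)
        R += np.outer(v, v)
    return (sigma ** 2 / 4.0) * np.trace(np.linalg.inv(R))
```

Note that \(R_{x}^{a} \) does not depend on σ, so the bound scales as \(\sigma^{2} \), as expected for additive Gaussian noise.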
As a generalization of frames, g-frames were introduced by Wenchang Sun in [15]. A sequence of operators \(\{\varLambda_{j}\}_{j=1}^{N}\) is a g-frame if there exist positive constants c and C such that
$$c \Vert {x} \Vert ^{2}\leq\sum_{j=1}^{N} \Vert {\varLambda _{j} x} \Vert ^{2} \leq C \Vert {x} \Vert ^{2}, \quad \forall x\in \mathbb {R}^{d}. $$
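Since \(\sum_{j} \Vert {\varLambda_{j} x} \Vert ^{2}=x^{T}(\sum_{j}\varLambda_{j}^{T}\varLambda_{j})x \), the optimal g-frame bounds are the extreme eigenvalues of the frame operator \(\sum_{j}\varLambda_{j}^{T}\varLambda_{j} \). A small sketch (the function name is ours):

```python
import numpy as np

def g_frame_bounds(Lambdas):
    """Optimal constants (c, C) with
    c ||x||^2 <= sum_j ||Lambda_j x||^2 <= C ||x||^2."""
    S = sum(L.T @ L for L in Lambdas)  # frame operator
    eig = np.linalg.eigvalsh(S)        # eigenvalues in ascending order
    return eig[0], eig[-1]
```

The sequence is a g-frame precisely when the returned lower bound c is strictly positive.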

Lemma 3.2

If the pair \((B,b) \) is generalized affine phase retrievable, then the collection \(\{B_{j}^{T}B_{j}\}_{j=1}^{N} \) is a g-frame for \(\mathbb {R}^{d} \).

Proof

Since \(\Vert {B_{j}^{T}B_{j}x} \Vert ^{2}=x^{T}(B_{j}^{T}B_{j})^{2}x \), the summation \(\sum_{j=1}^{N} \Vert {B_{j}^{T}B_{j}x} \Vert ^{2} \) is upper bounded by \(\Delta \Vert {x} \Vert ^{2} \), where \(\Delta:=\lambda_{\mathrm{max}} ( \sum_{j=1}^{N} (B_{j}^{T}B_{j} )^{2} ) \). For the lower bound, we argue by contradiction. If no positive lower bound exists, then by compactness of the unit sphere we can find a vector \(y\in \mathbb {R}^{d} \) such that \(\Vert {y} \Vert =1 \) and \(\sum_{j=1}^{N} \Vert {B_{j}^{T}B_{j}y} \Vert ^{2}=0 \), which means \(B_{j}^{T}B_{j}y=0 \) for all \(j=1,\ldots,N \). Therefore, we have
$$y^{T}B_{j}^{T}B_{j}y= \Vert {B_{j}y} \Vert ^{2} =0, $$
which implies \(B_{j}y=0 \) for all \(j=1,\ldots,N\). Consequently, we have \(y\ne0 \) and
$$\Vert {B_{j}y+b_{j}} \Vert ^{2}= \Vert {B_{j}0+b_{j}} \Vert ^{2}, $$
which contradicts the assumption that \((B,b) \) is generalized affine phase retrievable. □
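The upper bound in this proof can be checked numerically: \(\Delta=\lambda_{\max}(\sum_{j}(B_{j}^{T}B_{j})^{2}) \) is the upper g-frame bound of \(\{B_{j}^{T}B_{j}\} \) (a sketch with arbitrary random data of our choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 3, 6
B = [rng.standard_normal((d, d)) for _ in range(N)]

# Delta = lambda_max(sum_j (B_j^T B_j)^2), the upper g-frame bound
S = sum((Bj.T @ Bj) @ (Bj.T @ Bj) for Bj in B)
delta = np.linalg.eigvalsh(S)[-1]

# sum_j ||B_j^T B_j x||^2 = x^T S x <= Delta ||x||^2
x = rng.standard_normal(d)
lhs = sum(np.sum((Bj.T @ (Bj @ x)) ** 2) for Bj in B)
```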

Corollary 3.2

If the pair \((B,b) \) is generalized affine phase retrievable, then, for any unbiased estimator \(\varPhi(y) \) for nonzero x, we have
$$\mathbb {E}\bigl[ \bigl\Vert {\varPhi(y)-x} \bigr\Vert ^{2}|x\bigr] \geq \frac{\sigma^{2} d^{2} }{ 8(\Delta \Vert {x} \Vert ^{2}+C)}. $$

Proof

Since the matrix \(R_{x}^{a} \) is positive definite by Lemma 3.1, the inequality \(\operatorname{Tr}(R_{x}^{a}) \cdot \operatorname{Tr}((R_{x}^{a})^{-1})\geq d^{2} \) holds by the Cauchy–Schwarz inequality applied to the eigenvalues of \(R_{x}^{a} \), and we can estimate the lower bound of the mean square error by Theorem 3.4 as follows:
$$ \mathbb {E}\bigl[ \bigl\Vert {\varPhi(Y)-x} \bigr\Vert ^{2}|x \bigr] \geq \frac{\sigma^{2}d^{2}}{4\operatorname{Tr}(R_{x}^{a})} . $$
(3.4)
By Lemma 3.2, the trace can be estimated as
$$\operatorname{Tr}\bigl(R_{x}^{a}\bigr)=\sum _{j=1}^{N} \bigl\Vert {B_{j}^{T}B_{j}x+B_{j}^{T}b_{j}} \bigr\Vert ^{2} \leq 2\sum_{j=1}^{N} \bigl\Vert {B_{j}^{T}B_{j}x} \bigr\Vert ^{2} +2\sum_{j=1}^{N} \bigl\Vert {B_{j}^{T}b_{j}} \bigr\Vert ^{2} \leq2\Delta \Vert {x} \Vert ^{2}+2C, $$
where \(C=\sum_{j=1}^{N} \Vert {B_{j}^{T}b_{j}} \Vert ^{2} \). Substituting it into (3.4), we have
$$\mathbb {E}\bigl[ \bigl\Vert {\varPhi(Y)-x} \bigr\Vert ^{2}|x \bigr] \geq \frac{\sigma^{2} d^{2}}{8(\Delta \Vert {x} \Vert ^{2}+C)}. $$
 □
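The chain of estimates in this proof (the elementary inequality \(\Vert {u+v} \Vert ^{2}\leq2 \Vert {u} \Vert ^{2}+2 \Vert {v} \Vert ^{2} \) followed by the g-frame bound) can be checked on random data (a sketch; the dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 3, 6
B = [rng.standard_normal((d, d)) for _ in range(N)]
b = [rng.standard_normal(d) for _ in range(N)]
x = rng.standard_normal(d)

# Tr(R_x^a) = sum_j ||B_j^T B_j x + B_j^T b_j||^2
trace_R = sum(np.sum((Bj.T @ (Bj @ x + bj)) ** 2) for Bj, bj in zip(B, b))

# Delta = lambda_max(sum_j (B_j^T B_j)^2),  C = sum_j ||B_j^T b_j||^2
Delta = np.linalg.eigvalsh(sum((Bj.T @ Bj) @ (Bj.T @ Bj) for Bj in B))[-1]
C = sum(np.sum((Bj.T @ bj) ** 2) for Bj, bj in zip(B, b))
```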

We have discussed the stability of generalized phase retrieval and generalized affine phase retrieval in this paper. The first can be viewed as a generalization of the stability of phase retrieval in [3, 4], or as a continuation of the work in [17]. The second is an extension of the work in [9, 10]. As all the results in this paper are obtained in real Hilbert spaces, the stability property in complex Hilbert spaces remains to be addressed.

Acknowledgements

The author would like to thank the referees for their useful comments and remarks.

Availability of data and materials

Not applicable.

Authors’ contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Funding

This study was partially supported by National Natural Science Foundation of China (Grant No. 11601152).

Competing interests

The authors declare that they have no competing interests.

References

  1. Balan, R.: Reconstruction of signals from magnitudes of redundant representations: the complex case. Found. Comput. Math. 16(3), 677–721 (2016)
  2. Balan, R., Casazza, P., Edidin, D.: On signal reconstruction without phase. Appl. Comput. Harmon. Anal. 20(3), 345–356 (2006)
  3. Balan, R., Wang, Y.: Invertibility and robustness of phaseless reconstruction. Appl. Comput. Harmon. Anal. 38(3), 469–488 (2015)
  4. Bandeira, A.S., Cahill, J., Mixon, D.G., Nelson, A.A.: Saving phase: injectivity and stability for phase retrieval. Appl. Comput. Harmon. Anal. 37(1), 106–125 (2014)
  5. Bendory, T., Beinert, R., Eldar, Y.C.: Fourier phase retrieval: uniqueness and algorithms. In: Compressed Sensing and Its Applications. Appl. Numer. Harmon. Anal., pp. 55–91. Springer, Cham (2017)
  6. Cahill, J., Casazza, P.G., Peterson, J., Woodland, L.: Phase retrieval by projections. Houst. J. Math. 42(2), 537–558 (2016)
  7. Eldar, Y.C., Mendelson, S.: Phase retrieval: stability and recovery guarantees. Appl. Comput. Harmon. Anal. 36(3), 473–494 (2014)
  8. Fienup, J.R.: Phase retrieval algorithms: a comparison. Appl. Opt. 21(15), 2758–2769 (1982)
  9. Gao, B., Sun, Q., Wang, Y., Xu, Z.: Phase retrieval from the magnitudes of affine linear measurements. Adv. Appl. Math. 93, 121–141 (2018)
  10. Huang, M., Xu, Z.: Phase retrieval from the norms of affine transformations. ArXiv e-prints (2018)
  11. Hurt, N.E.: Phase Retrieval and Zero Crossings: Mathematical Methods in Image Reconstruction. Mathematics and Its Applications, vol. 52. Kluwer Academic, Dordrecht (1989)
  12. Kay, S.M.: Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall, New York (1993)
  13. Sayre, D.: Some implications of a theorem due to Shannon. Acta Crystallogr. 5(6), 843 (1952)
  14. Shechtman, Y., Eldar, Y.C., Cohen, O., Chapman, H.N., Miao, J., Segev, M.: Phase retrieval with application to optical imaging: a contemporary overview. IEEE Signal Process. Mag. 32(3), 87–109 (2015)
  15. Sun, W.: G-frames and g-Riesz bases. J. Math. Anal. Appl. 322(1), 437–452 (2006)
  16. Waldspurger, I.: Wavelet transform modulus: phase retrieval and scattering. Thesis, École normale supérieure, Paris (2015)
  17. Wang, Y., Xu, Z.: Generalized phase retrieval: measurement number, matrix recovery and beyond. Appl. Comput. Harmon. Anal. (2017). https://doi.org/10.1016/j.acha.2017.09.003

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Mathematics and Statistics, North China University of Water Resources and Electric Power, Zhengzhou, P.R. China
