On recovery guarantees for angular synchronization

The angular synchronization problem of estimating a set of unknown angles from their known noisy pairwise differences arises in various applications. It can be reformulated as an optimization problem on graphs involving the graph Laplacian matrix. We consider a general, weighted version of this problem, where the impact of the noise differs between different pairs of entries and some of the differences are erased completely; this version arises for example in ptychography. We study two common approaches for solving this problem, namely eigenvector relaxation and semidefinite convex relaxation. Although some recovery guarantees are available for both methods, their performance is either unsatisfactory or restricted to unweighted graphs. We close this gap, deriving recovery guarantees for the weighted problem that are completely analogous to those known for the unweighted version.


Introduction
In this paper we consider the problem of recovering a d-dimensional vector of angles ϕ ∈ [0, 2π)^d from noisy pairwise differences of its entries ϕ_ℓ − ϕ_j + η_{ℓ,j} mod 2π, ℓ, j ∈ {1, . . . , d}, where η_{ℓ,j} denotes noise. This problem is commonly referred to as angular synchronization or phase synchronization. It frequently arises in various applications such as recovery from phaseless measurements [1,2,3,4,5,6,7,8], ordering of data from relative ranks [9], digital communications [10], and distributed systems [11]. The problem of angular synchronization is also closely related to the broader problem of pose graph optimization [12], which appears in robotics and computer vision.
Rather than working with the angles ϕ_ℓ directly, one typically considers the associated phase factors x_ℓ := e^{iϕ_ℓ}, ℓ ∈ {1, . . . , d}. Hence the vector x = (x_j)_{j=1}^d to be recovered belongs to the d-dimensional torus T^d := {z ∈ C^d : |z_j| = 1 for all j}. After this transformation, the pairwise differences ϕ_ℓ − ϕ_j mod 2π, ℓ, j ∈ {1, . . . , d}, take the form of a product e^{i(ϕ_ℓ − ϕ_j)} = e^{iϕ_ℓ} · e^{−iϕ_j} = x_ℓ x_j^*, where z^* stands for the complex conjugate of the number z and the complex conjugate transpose in the case of vectors. The angular synchronization problem clearly has no unique solution, as multiplying the vector x by a factor e^{iθ} leads to the same products x_ℓ x_j^*. Hence we can at best recover x up to a global phase factor, that is, two solutions x, x′ ∈ C^d are to be considered equivalent if x = e^{iθ} x′ for some θ ∈ [0, 2π). A natural distance measure between two equivalence classes is given by

min_{θ ∈ [0,2π)} ‖x − e^{iθ} x′‖_2.    (1)

A solution to the angular synchronization problem is thus any vector for which this expression vanishes. In many applications such as certain algorithms for ptychography [1,2,6,7,8], noisy observations of only a strict subset of the set of differences are available. To mathematically describe this restriction we will work with the quantity E := {(ℓ, j) ∈ {1, . . . , d} × {1, . . . , d} : a noisy version of ϕ_ℓ − ϕ_j is known and j ≠ ℓ}.
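The distance between equivalence classes admits a closed form: expanding ‖x − e^{iθ}x′‖_2^2 = ‖x‖_2^2 + ‖x′‖_2^2 − 2 Re(e^{iθ} x^* x′) shows that the optimal θ aligns the inner product with the real axis. The following is a minimal numerical sketch (Python with NumPy; the function name `phase_dist` is ours):

```python
import numpy as np

def phase_dist(x, xp):
    """Distance between the equivalence classes of x and xp under a
    global phase: min over theta of ||x - e^{i theta} xp||_2."""
    inner = np.vdot(x, xp)  # x^* xp (vdot conjugates its first argument)
    # e^{i theta} = conj(sgn(x^* xp)) maximizes Re(e^{i theta} x^* xp)
    align = np.conj(inner) / abs(inner) if abs(inner) > 0 else 1.0
    return np.linalg.norm(x - align * xp)

x = np.exp(1j * np.array([0.3, 1.2, 2.5]))
print(phase_dist(x, np.exp(1j * 0.7) * x))  # same equivalence class: ~ 0
```

In particular, two vectors differing only by a global phase factor have distance zero, as required.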
In these ptychography applications, one also encounters a version of the problem that is generalized in yet another way. Namely, the entries of the vector y to be recovered are not all of modulus 1 (but the moduli are still assumed to be known). The measurements are still of the form y_j y_k^* affected by noise. Clearly this generalized problem can be directly reduced to the angular synchronization problem in its original form by dividing each measurement by the product of the known magnitudes of the associated entries, but one should note that the noise is also affected by this transformation.
We will now present a short overview of the major developments in angular synchronization. The approaches to the problem mainly split into two dominant branches, which essentially differ by the underlying noise model.
In the first branch, it is assumed that the observed pairwise products of the unknown phase factors are affected by independent Gaussian noise. Typically these results work with E = {(ℓ, j) : ℓ, j ∈ {1, . . . , d}, j ≠ ℓ}, i.e., assuming access to the full set of pairwise differences. That is, the matrix of measurements X̂ is given by

X̂ = xx^* + σ∆,

where ∆ is a d × d Hermitian matrix with ∆_{ℓ,ℓ} = 0 and ∆_{ℓ,j}, ℓ > j, being independent centered complex Gaussian random variables with unit variance, and σ > 0. This noise model allows one to perform maximum likelihood estimation, which leads to the least squares problem (LSP)

min_{z ∈ T^d} Σ_{ℓ,j} w_{ℓ,j} |z_ℓ − X̂_{ℓ,j} z_j|^2    (3)

with weights w_{ℓ,j} = 1/σ^2 for ℓ ≠ j and w_{ℓ,ℓ} = 0. Due to the condition z ∈ T^d, the LSP (3) is NP-hard [13]. Therefore, Singer [14] proposed two possible relaxations, the eigenvector relaxation (ER) and the semidefinite convex relaxation (SDP). Both will be discussed in Section 2. By a closer inspection of the maximum likelihood approach, Bandeira, Boumal and Singer [15] were able to establish an error bound for the solution of the LSP (3) which holds with high probability. In addition, the authors gave sufficient conditions on the standard deviation σ under which the SDP recovers the solution of the LSP (3). As an alternative to the relaxation approaches, Boumal [16] proposed an iterative approach called the generalized power method (GPM) to solve the LSP directly. He showed that the method converges to the minimizer of (3). Later Liu et al. [17] provided additional details about the convergence rate of the GPM. In subsequent work [18], Zhong and Boumal extended the admissible range of σ, providing near-optimal error bounds for solutions of the LSP, ER and the GPM, and improved the sufficient conditions for the tightness of the SDP relaxation.
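The generalized power method mentioned above can be sketched in a few lines (Python with NumPy; this is only a schematic version, not the exact algorithm of [16], whose spectral shift and stopping criterion we omit): each iteration multiplies the current estimate by the measurement matrix and projects every entry back onto the unit circle.

```python
import numpy as np

def gpm(X_hat, z0, iters=50):
    """Generalized power method sketch: z <- sgn(X_hat z), entrywise."""
    z = z0.astype(complex).copy()
    for _ in range(iters):
        w = X_hat @ z
        # project entries back onto the torus; keep the old entry where w vanishes
        z = np.where(np.abs(w) > 0, w / np.abs(w), z)
    return z

# noiseless sanity check: X_hat = x x^* recovers x up to a global phase
x = np.exp(1j * np.array([0.1, 1.0, 2.0, 3.0]))
z = gpm(np.outer(x, x.conj()), np.ones(4))
print(abs(np.vdot(x, z)))  # ~ 4 = d, i.e. z = e^{i theta} x
```

In the noiseless rank-one case a single iteration already lands on the correct equivalence class, which makes the method a natural refinement step after a spectral initialization.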
For the variant of the angular synchronization problem where the vector y to be recovered does not only have entries of modulus one, this theory does not directly apply, as the added Gaussian noise undergoes entrywise rescaling and hence no longer has the same variance for all entries. The least squares approach, however, has a natural generalization. In analogy to (3), where all differences are multiplied by the inverse of the variance of the i.i.d. noise variables, one weights each difference with the inverse of the variance of the corresponding rescaled noise term, which yields a linear scaling in |y_j y_ℓ|. While this method is not covered by the theory just discussed, it serves as an important motivation for the approach of this paper.
The second branch of development for the angular synchronization problem works with the model that the angular differences rather than the associated phase factors are affected by noise. This version of the problem has also been studied for more general sets E. Consequently, the matrix of measurements X̂ in this model is given by the entries

X̂_{ℓ,j} = e^{i(ϕ_ℓ − ϕ_j + η_{ℓ,j})} for (ℓ, j) ∈ E and X̂_{ℓ,j} = 0 otherwise,    (4)

where η_{ℓ,j} corresponds to the angular noise, or, when the entries to be recovered are not of unit modulus,

Ŷ_{ℓ,j} = |y_ℓ y_j| e^{i(ϕ_ℓ − ϕ_j + η_{ℓ,j})} for (ℓ, j) ∈ E and Ŷ_{ℓ,j} = 0 otherwise.    (5)

Under this model, random noise is somewhat harder to study due to the multiplicative structure. Consequently, most works employ an adversarial noise model making no assumptions on the distribution of the noise. That is, maximum likelihood estimation is no longer applicable. Nevertheless, weighted least squares minimization (3) can still be applied without the statistical justification, and a natural choice for the weights remains w_{j,k} = |y_j y_k|. This is in line with the observation that if for two vectors y and ỹ the modulus of each entry agrees, then smaller entries play less of a role in determining distance in the sense of (1). Indeed, the corresponding expansion motivates weights of |y_ℓ|^2 for the diagonal entries, which naturally generalizes to the weights motivated by the maximum likelihood formula. For ptychography applications, these weights have also been shown to numerically outperform the unweighted scenario (see Section 4.4 in [2]).
For the multiplicative noise model, several error bounds have been presented in the literature. Iwen et al. [1] worked with the unweighted LSP (3) and established recovery guarantees for the ER based on Cheeger's inequality [19]. Later in [2], Preskitt developed error bounds for the unweighted case of the LSP. He additionally developed alternative bounds for any selection of weights in the problem (3) and provided sufficient conditions for tightness of the SDP relaxation.
In the literature, the SDP relaxation is studied more often, as under certain conditions it recovers a true solution of the optimization problem (3). On the other hand, it is computationally heavy, and above a certain noise level the relaxation is no longer tight, so SDP fails to return the exact solution of the LSP. Thus beyond this threshold, no recovery guarantees for SDP are available. ER, in contrast, is much faster, especially for large dimension d, and its recovery guarantees, where available, are not restricted by tightness assumptions. Before this paper, however, such guarantees were only available for unweighted scenarios, even though SDP and ER exhibit similar reconstruction accuracy in numerical experiments.
In this paper, we close this gap, providing recovery guarantees for weighted angular synchronization via eigenvector relaxation from measurements of the form (4), following the setup of [1,2]. We numerically demonstrate that our guarantees even outperform the best known guarantees for the unrelaxed problem LSP. Along the way, we also establish improved bounds for LSP.

Problem setup and previous results
We study the problem of recovering a vector x = (x_j)_{j=1}^d with unimodular entries x_j = e^{iϕ_j} from partial and possibly noisy information on the pairwise differences ϕ_ℓ − ϕ_j mod 2π, (ℓ, j) ∈ E ⊆ [d] × [d]. Here we used the notation [n] = {1, . . . , n}. As we consider angular noise, the noisy observations will take the form e^{i(ϕ_ℓ − ϕ_j + η_{ℓ,j})}, where η_{ℓ,j} ∈ (−π, π] is the angular noise.
The phase factors corresponding to the true pairwise differences will be arranged as a matrix X ∈ C^{d×d}, the noisy observations as a matrix X̂ ∈ C^{d×d}, that is, the entries of these matrices are given by

X_{ℓ,j} = e^{i(ϕ_ℓ − ϕ_j)} and X̂_{ℓ,j} = e^{i(ϕ_ℓ − ϕ_j + η_{ℓ,j})} for (ℓ, j) ∈ E, and X_{ℓ,j} = X̂_{ℓ,j} = 0 otherwise.

By H^d we denote the space of all d × d Hermitian matrices. With N ∈ H^d with entries N_{ℓ,j} = e^{iη_{ℓ,j}} denoting the matrix rearrangement of the multiplicative noise, one observes that these two matrix representations are related via X̂ = X • N, where for two matrices A, B ∈ C^{d×d}, A • B denotes their Hadamard product as defined by (A • B)_{n,m} = A_{n,m} B_{n,m}. As a measure for the noise level, we will use the size of X − X̂ in a weighted Frobenius norm or a weighted spectral norm; recall that for A ∈ C^{d×d} the Frobenius norm and the spectral norm are given by ‖A‖_F = (Σ_{ℓ,j} |A_{ℓ,j}|^2)^{1/2} and ‖A‖ = sup_{‖v‖_2 = 1} ‖Av‖_2, respectively.
The quality of reconstruction will be measured in the Euclidean norm on C^d, given by ‖v‖_2 = (Σ_{j=1}^d |v_j|^2)^{1/2}. For the proofs we will also need the supremum norm ‖v‖_∞ := max_{1≤j≤d} |v_j|.
For z ∈ C \ {0} we write sgn(z) := z/|z|, and we set sgn(0) := 0. This operator is extended to any matrix space C^{d×d′} by entrywise application, i.e., sgn(A)_{ℓ,j} = sgn(A_{ℓ,j}) for any A ∈ C^{d×d′}. Similarly to previous works, our analysis is based on a graph theoretic interpretation. Namely, the matrices X and X̂ can be seen as edge weight matrices of a weighted undirected graph G = (V, E, W). Consequently, one has |V| = d, and we can identify V with [d] = {1, . . . , d}. The set of edges E is naturally identified with the index set of the observed noisy angular differences introduced above. It directly follows from the problem setup that the weight function W is symmetric and supported on E. To analyze this graph, we need some basic concepts from graph theory. The adjacency matrix A_G of G is given by (A_G)_{ℓ,j} = 1 if (ℓ, j) ∈ E and (A_G)_{ℓ,j} = 0 otherwise. With this notation, one obtains the compact expression X = A_G • (xx^*). In case W ≡ 1 on its support, i.e., W = A_G, we speak of G as an unweighted graph. The degree of the vertex ℓ is defined as deg(ℓ) := Σ_j w_{ℓ,j}, and the corresponding degree matrix is the diagonal matrix D := diag(deg(1), . . . , deg(d)). The Laplacian of the graph G is given by L_G := D − W. Observe that, as the graph is undirected, the Laplacian is symmetric. Moreover, since w_{ℓ,j} ≥ 0, we have

v^* L_G v = (1/2) Σ_{ℓ,j} w_{ℓ,j} |v_ℓ − v_j|^2 ≥ 0 for all v ∈ C^d.

Hence the Laplacian is positive semidefinite and therefore has a spectrum consisting of non-negative real numbers, which we denote by λ_j with indices j arranged in ascending order, i.e., 0 = λ_1 ≤ λ_2 ≤ · · · ≤ λ_d.
Here the first equality follows from the observation that the vector 1 = (1, . . . , 1) T satisfies L G 1 = 0. The spectral gap of G is defined as τ G = λ 2 . A graph G is connected if and only if τ G > 0, see [20]. In that case, the null space of L G is spanned by 1.
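The Laplacian and spectral gap just introduced can be computed directly (Python with NumPy; a minimal sketch assuming a symmetric weight matrix with zero diagonal):

```python
import numpy as np

def graph_laplacian(W):
    """Laplacian L_G = D - W of a weighted undirected graph, where
    D = diag(deg(1), ..., deg(d)) and deg(l) = sum_j w_{l,j}."""
    return np.diag(W.sum(axis=1)) - W

# path graph on 3 vertices: connected, so lambda_1 = 0 < lambda_2 = tau_G
W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
L = graph_laplacian(W)
print(np.linalg.eigvalsh(L))  # eigenvalues 0, 1, 3 (up to rounding)
```

As expected, the constant vector lies in the null space, and the spectral gap τ_G = λ_2 is strictly positive because the path graph is connected.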
Besides the Laplacian L_G, the normalized Laplacian L_N of G is often used. It is defined as L_N := D^{−1/2} L_G D^{−1/2}. Its spectrum consists of non-negative real numbers as well, and we write τ_N for its second smallest eigenvalue λ_2(L_N). The data dependent Laplacians associated to X and X̂ are defined as

L := D − W • X and L̂ := D − W • X̂.

Note that under the multiplicative noise model used in this paper, both these Laplacians are positive semidefinite matrices by Gershgorin's circle theorem.
The data dependent Laplacian L̂ corresponding to the noisy observations allows for a compact representation of the least squares problem (3) at the core of our recovery method. Indeed, observe that

z^* L̂ z = (1/2) Σ_{ℓ,j} w_{ℓ,j} |z_ℓ − X̂_{ℓ,j} z_j|^2 for z ∈ T^d,

so that (3) is equivalent to

min_{z ∈ T^d} z^* L̂ z.    (6)

Due to the quadratic constraint z ∈ T^d, the quadratic minimization problem (6) is nonconvex and thus NP-hard in general. One way to obtain a feasible problem is to relax the constraint in (6) to ‖z‖_2^2 = d and obtain

min_{‖z‖_2^2 = d} z^* L̂ z.    (7)

This is nothing else but the determination of the smallest eigenvalue of the matrix L̂ and can be solved efficiently via minimization of the Rayleigh quotient. We will refer to (7) as the eigenvector relaxation (ER). An error bound for the ER based reconstruction was given by Iwen et al. in [1] for the case of unweighted graphs. Their proof is based on the Cheeger inequality, which is only available for the normalized Laplacian, which is why the minimization problem in their theorem has a different normalization than (7). In the special case that deg(ℓ) is constant for all ℓ (as in [1]), the two normalizations agree up to a constant. Using the terminology introduced above, their result reads as follows.
An alternative approach is based on the idea of lifting the problem to the matrix space. It makes use of the relation z^* L̂ z = Tr(L̂ zz^*). With this, the minimization problem (6) transforms into

min_Z Tr(L̂ Z) subject to Z_{ℓ,ℓ} = 1 for all ℓ ∈ [d], Z ⪰ 0, rank(Z) = 1.    (9)
The class of minimization problems with explicit rank constraints is known to include many NP-hard instances [21, Chapter 2], so a common strategy is to perform a semidefinite relaxation. For (9), the following relaxation, obtained by dropping the rank constraint, has been proposed in [22]:

min_Z Tr(L̂ Z) subject to Z_{ℓ,ℓ} = 1 for all ℓ ∈ [d], Z ⪰ 0.    (10)
We will refer to this minimization problem as SDP. Note that if Z meets the rank condition in (9), one obtains that Z = zz^*, where z is a solution of (6). Without the rank condition, however, the solution to (10) may have higher rank. In this case the method outputs the phase factors corresponding to the entries of the eigenvector associated to the largest eigenvalue as an approximation for the solution of (6) [23].
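The rounding step just described can be sketched as follows (Python with NumPy; `round_sdp_solution` is our name for this hypothetical helper). For a rank-one solution Z = zz^* it returns z exactly, up to a global phase.

```python
import numpy as np

def round_sdp_solution(Z):
    """Extract phase factors from an SDP solution Z (Hermitian PSD, unit
    diagonal): take the eigenvector of the largest eigenvalue and map its
    entries onto the torus via the entrywise sgn operator."""
    vals, vecs = np.linalg.eigh(Z)   # eigenvalues in ascending order
    v = vecs[:, -1]                  # eigenvector of the largest eigenvalue
    return np.where(np.abs(v) > 0, v / np.abs(v), 1.0)

# rank-one sanity check: Z = x x^* rounds back to x up to a global phase
x = np.exp(1j * np.array([0.1, 1.0, 2.0]))
z = round_sdp_solution(np.outer(x, x.conj()))
print(abs(np.vdot(x, z)))  # ~ 3 = d
```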
As mentioned before, the error bound is commonly derived for the solution of the LSP and then applied to the solution of the SDP when it has rank 1. For unweighted graphs, a first result on recovery guarantees for the SDP was established by Preskitt [2]. Adjusted to our terminology, his result reads as follows.
For a detailed comparison of Theorems 2.1 and 2.2, we refer the reader to Section 4.3.2 of [2].
The first results addressing a generalization to the important case of weighted graphs have been derived by Preskitt [2]. The following formulations have again been adjusted to our notation.
As the square root in (2.3.A) produces slow convergence as the noise diminishes, i.e., as X̂ approaches X, in many cases bound (2.3.B) outperforms (2.3.A). For unweighted graphs, as we numerically explore in Section 5, the bounds of Theorem 2.3.B are similar to those of Theorem 2.2 and superior to the bounds of Theorem 2.1 for ER in many cases. They are, however, only valid for SDP when the relaxation is tight, which is only guaranteed when the weighted noise on the phase factor matrix is bounded by τ_G/(1 + √d). As the spectral gap τ_G is typically rather small compared to the dimension d, tightness is guaranteed only for very small noise levels. In fact, our numerical simulations in Section 5 show that the SDP relaxation is indeed not tight in many cases. In contrast, the recovery guarantees for ER provided by Theorem 2.1 for the unweighted case are applicable independently of the tightness of the relaxation.

Improved error bounds
The main contribution of this paper concerns recovery guarantees for weighted angular synchronization via eigenvector relaxation, which are often stronger than even the best known bounds for the unrelaxed problem and do not require any a priori bound for the error to ensure tightness of the relaxation. Along the way, we derive similar error bounds for the solution of the least squares problem, which are exactly analogous to those provided by Theorem 2.2 in the unweighted case. The superior scaling of our error bounds as compared to Theorem 2.3 is also confirmed by numerical simulations in Section 5. We first state our result in general form, before discussing three special cases of interest.
We note that ‖R • (X̂ − X)‖_F can be estimated from above as in inequality (13). The first consequence of interest concerns the unweighted case, where the noise norm in Theorem 3.1 simplifies to ‖X̂ − X‖_F as in Theorem 2.2. For LSP, Theorems 3.1 and 2.2 yield the exact same bound; the bounds in Theorem 3.1 for ER only differ by a multiplicative factor c_z, which is 2 when the relaxation is tight and √(2 + 2d) in the worst case.
is an unweighted graph with τ_G > 0. Let z ∈ C^d be the minimizer of the ER (7). Then

Our next examples are related to the ptychography problem. We remind the reader that the goal of ptychography is to estimate a signal y ∈ C^d from phaseless measurements through localized masks. A recent method for recovering the signal y from such observations is the BlockPR algorithm by Iwen et al. [1]; see [2,6,7,8] for follow-up works developing this algorithm further that also rely on weighted angular synchronization. The BlockPR algorithm proceeds by combining neighboring masks to obtain estimates for the products of entries located close to each other. In mathematical terms, this procedure yields an approximation of the squared absolute values of the entries (so these can be assumed to be approximately known) and a noisy version of T_δ(yy^*), where δ is the size of the mask and T_δ is the restriction operator mapping a matrix to its entries indexed by the set

E_δ := {(ℓ, j) ∈ [d] × [d] : ℓ ≠ j, and |ℓ − j| < δ or |ℓ − j| > d − δ}    (14)

corresponding to the 2δ − 1 central sub- and superdiagonals (with cyclic wrap-around), excluding the entries on the main diagonal. Thus the resulting measurements exactly correspond to (5) for E = E_δ, which is why weighted angular synchronization is the natural method of choice. The weights in this problem are given by the matrix yy^* restricted to the index set E, which yields the setup of the following corollary. Let x̂ be a minimizer of (6) and let z be the minimizer of (7). Then we have

In the next statement the set-up is analogous to the previous corollary, but instead of weights defined by |Ŷ_{ℓ,j}| we work with |Ŷ_{ℓ,j}|^2.
Corollary 3.4. Consider a weighted graph G = (V, E, W) whose weight matrix W is defined as follows. Let y ∈ C^d with sgn(y) = x. Define matrices Y = (I + A_G) • yy^* and X = sgn(Y). Let M and X̂ be the matrices containing the perturbed magnitudes and phases of Y, respectively, so that M ≈ |Y| and X̂ = X • N, and set Ŷ = M • X̂. Consider the weight matrix W with entries given by w_{ℓ,j} = |Ŷ_{ℓ,j}|^2 (1 − δ_{ℓ,j}) and assume that τ_G > 0. Let x̂ be a minimizer of (6) and let z be the minimizer of (7). Then we have

Proofs
Proof of Theorem 3.1. We will proceed by establishing four inequalities, (15)–(18). Note that Inequality (15) has been derived in [2]; we will nevertheless include a proof for completeness. Equation (11) then follows by combining (15) and (18), and Equation (12) is obtained as a combination of (16) and (17). The condition for xx^* to be a minimizer of (10) directly follows from the following Lemma.
It remains to prove the four inequalities. To that end, we recall that for α, β ∈ C with |β| = 1 we have inequality (19). With the help of this inequality we obtain the estimate (20). Moreover, since ‖x‖_2^2 = d and ‖z‖_2^2 = d, we have for any ϑ that ‖x − e^{iϑ} z‖_2^2 = 2d − 2 Re(e^{iϑ} x^* z). The right hand side is minimal if Re(e^{iϑ} x^* z) is maximal and equal to |x^* z|. Hence with e^{iϑ} := sgn(x^* z)^* we arrive at

Re(e^{iϑ} x^* z) = Re(sgn(x^* z)^* · sgn(x^* z) · |x^* z|) = |x^* z|,    (21)

which motivates employing inequality (22). Define the unitary matrix U := diag(x); it is indeed unitary by the unimodularity of the entries x_ℓ. Using the commutativity of diagonal matrices, one computes L = U L_G U^*, which shows in particular that the eigenvalues of L and L_G coincide. By assumption τ_G > 0, and hence the null space of L_G is spanned by 1. Thus the null space of L is spanned by x = U1.
The projection of e^{iϑ} z onto the orthogonal complement of x is given by q := e^{iϑ} z − (|x^* z|/d) x, where we used that by (21) the inner product is real. Consequently, as q ⊥ x, one may apply Pythagoras' theorem, and as x is in the null space of L, only q contributes to the quadratic form of L. In view of (20) and (21), we can bound ‖x − e^{iϑ} z‖_2. With the Cauchy-Schwarz inequality and (23), this yields a bound in terms of ‖q‖_2. By definition, q is orthogonal to the null space of L, which implies q^* L q ≥ λ_2(L) ‖q‖_2^2. Combining this with (22) and the fact that λ_2(L) = τ_G, as well as the definition of L, we obtain both (15) and (16). Indeed, (15) follows by comparing the second and the last item in this chain of inequalities, and (16) by comparing the first and the last item. Now we will prove inequalities (17) and (18), again with largely identical proofs. For simplicity of notation, we introduce the following auxiliary variables: g_ℓ := x_ℓ^* x̂_ℓ, h_ℓ := x_ℓ^* z_ℓ, and Λ_{ℓ,j} := X_{ℓ,j}^* X̂_{ℓ,j}.
We start by using (α + β)^2 ≤ 2α^2 + 2β^2 and further estimate the two resulting sums separately. For the first sum we obtain, using (6) and the fact that z minimizes (7), the bound (24). For the second sum we use that |h_j| = |x_j^* z_j| = |z_j|. Putting everything together, we arrive at the claimed estimate. This concludes the proof of Inequality (17). For Inequality (18) we proceed analogously, with x̂ taking the role of z; the only difference is that in (24) we use the fact that x̂ minimizes (6) rather than the fact that z minimizes (7). The bound for the second sum simplifies as compared to (17), as ‖z‖_∞ is replaced by ‖x̂‖_∞ = 1.
The combined bound reads as

Proof of Corollary 3.2.
For an unweighted graph G we immediately get

Proof of Corollary 3.3.
Define an auxiliary weight matrix W 0 by (W 0 ) ℓ,j = |Y ℓ,j | (1−δ ℓ,j ). Using inequality (13) we obtain where in the third line we only increased the number of non-negative summands by adding the diagonal elements and in the last line we used the inequality ||α| − |β|| ≤ |α − β|.
Proof of Corollary 3.4. We rewrite the right side of the bound in Theorem 3.1 as and apply the inequality (19) to get

Numerical evaluation
In this section we present a numerical comparison of the error bounds discussed above.
Our goal is to illustrate that Theorem 3.1 indeed provides superior recovery guarantees for an important class of weighted angular synchronization problems, namely those appearing in the context of ptychography, as covered by Corollaries 3.3 and 3.4. In particular, we work with the edge set E δ as in (14), for some parameter δ ∈ [⌊(d + 1)/2⌋], which determines the neighborhood of indices for which the pairwise phase differences are known. In our numerical experiments, we consider measurements affected by angular noise, that is, the measurements are of the form (4), i.e.
with the noise model that η_{ℓ,j}, (ℓ, j) ∈ E_δ, are independent random variables uniformly distributed on [−α, α] for some parameter α > 0 representing the noise level. We consider signals y drawn at random with coordinates y_ℓ = a_ℓ + ib_ℓ, where a_ℓ and b_ℓ are independent identically distributed standard Gaussian random variables. We assume that the |y_ℓ| are known, so the phases of the y_ℓ are our unknown ground truth entries x_ℓ = e^{iϕ_ℓ}. In most of the following examples, we fix the dimension to be d = 64 and the parameter δ = 16, so that approximately half of the pairwise phase differences are known. For each point in the figures we generated 30 test signals and plot the average norm of the error. All experiments were performed on a laptop computer running Windows 10 Pro with an Intel(R) Core(TM) i7-8550U processor, 16 GB RAM, and Matlab R2018b.
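The measurement setup just described can be reproduced in a few lines (Python with NumPy as a stand-in for the Matlab code; parameter names are ours). We draw a complex Gaussian signal, keep its phases as ground truth, and fill the band index set E_δ of (14) with Hermitian angular-noise observations:

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta = 64, 16                    # dimension and mask parameter
alpha = np.deg2rad(5.0)              # angular noise level (5 degrees)

# test signal with i.i.d. standard Gaussian real and imaginary parts
y = rng.standard_normal(d) + 1j * rng.standard_normal(d)
phi = np.angle(y)                    # unknown ground-truth angles

# noisy observations X_hat on E_delta (cyclic band, Hermitian by construction)
X_hat = np.zeros((d, d), dtype=complex)
for l in range(d):
    for j in range(l + 1, d):
        if abs(l - j) < delta or abs(l - j) > d - delta:
            eta = rng.uniform(-alpha, alpha)
            X_hat[l, j] = np.exp(1j * (phi[l] - phi[j] + eta))
            X_hat[j, l] = np.conj(X_hat[l, j])
```

Drawing each η_{ℓ,j} once for ℓ < j and mirroring keeps X̂ Hermitian, matching the noise matrix N ∈ H^d of Section 2.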
We begin with the comparison of the recovery guarantees for the different weight matrices covered by Corollaries 3.2, 3.3, and 3.4 in terms of the angular noise level α measured in degrees. To put the bounds into perspective, we include the empirical error of both SDP and ER.
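The ER reconstructions in these comparisons can be computed in a few lines (Python with NumPy as a stand-in for the Matlab implementation; a sketch in our notation, with the final entrywise sgn rounding onto T^d that is used in practice):

```python
import numpy as np

def angular_sync_er(W, X_hat):
    """Eigenvector relaxation (7): minimize z^* L_hat z over ||z||_2^2 = d,
    where L_hat = D - W o X_hat is the data-dependent Laplacian."""
    d = W.shape[0]
    L_hat = np.diag(W.sum(axis=1)) - W * X_hat  # Hadamard product W o X_hat
    vals, vecs = np.linalg.eigh(L_hat)          # Hermitian eigendecomposition
    z = np.sqrt(d) * vecs[:, 0]                 # eigenvector of smallest eigenvalue
    return np.where(np.abs(z) > 0, z / np.abs(z), 1.0)  # round onto T^d

# noiseless check on the complete graph: recovery up to a global phase
x = np.exp(1j * np.array([0.3, 1.2, 2.5, 4.0]))
W = np.ones((4, 4)) - np.eye(4)
z = angular_sync_er(W, np.outer(x, x.conj()))
print(abs(np.vdot(x, z)))  # ~ 4 = d
```

In the noiseless case L̂ = L is positive semidefinite with null space spanned by x, so the smallest eigenvector reproduces x up to a global phase, in line with the discussion in Section 2.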
Due to the fact that the coordinates of x̂ and x have modulus 1, a naïve bound for the phase error is given by ‖x̂ − x‖_2 ≤ 2√d. Beyond this threshold, the error bounds provided by the statements are non-informative, which is why we indicate the threshold by a dashed black line in the plots.
Both for unweighted graphs ( Figure 1) and weighted graphs (Figures 2 and 3), we observe that the empirical error performs similarly for both SDP and ER; there is no significant difference between the two methods in terms of the phase error. For the low and medium noise levels, the phase error rises linearly with the angular noise level. Only for very high noise it exhibits faster growth.
For unweighted graphs (Figure 1), the guarantees provided by Corollary 3.2 for ER more or less agree with the bounds given by Theorem 2.2 for the least squares problem. This is remarkable not only because ER is faster than SDP (see Figure 4a below), but also in view of the significantly larger range of tightness of the relaxation, also depicted in Figure 1. Namely, Corollary 3.2 exhibits additional dimensional scaling factors only when the supremum norm of z is very large, which is not the case even for a large noise level. On the other hand, the SDP relaxation is only tight when the solution is of rank one, which already fails for a noise level of a few degrees. Thus for larger noise, the error bounds for LSP no longer apply for the SDP solution.
For weighted graphs (Figures 2 and 3), our bounds for the LSP improve upon previous works for a very large range of noise levels, and again, our guarantees for ER are very close to those of LSP. This is even more relevant than in the unweighted case, as the SDP relaxation is not tight even when the noise level is as small as 10 −3 . The tightness of ER is also inferior to the unweighted case, but we do observe approximate tightness for noise levels of a few degrees.
In terms of the runtime complexity (Figure 4a), ER exhibits almost linear scaling in the dimension of the problem and clearly outperforms SDP, whose runtime exhibits quadratic scaling. This difference is to be expected, as SDP lifts the problem to a d × d-dimensional matrix space and thus estimates d^2 unknowns instead of d in the case of ER. In fact, the large runtime complexity is a crucial bottleneck for SDP relaxations in ptychography, where the dimensions are commonly high. The last simulation (Figure 4b) illustrates how both the empirical error and our error bounds depend on the size of the mask in ptychography (which in turn is related to the connectivity of the graph). Again, up to constants, we observe a similar decay pattern for the error between theory and experiment, with a fast decay for small δ (up to O(log d)) and slower decay for larger values of δ.

Discussion and future work
The main focus of this paper is the eigenvector relaxation of the angular synchronization problem. We derived new, flexible error bounds for this method. Along the way, we established new recovery guarantees for the solution of the weighted least squares problem (3). Our numerical evaluation shows that the obtained recovery guarantees outperform other results in the literature. As compared to semidefinite programming, eigenvector relaxation shows similar performance in terms of empirical error and has a significantly shorter runtime; at the same time, our recovery guarantees are not subject to additional constraints corresponding to the tightness of the relaxation, as they are for semidefinite programming.
Our numerical experiments are based on a simple random angular noise model, which likely does not correspond to the noise arising in applications. Also, minimizing the phase error, as explored in Section 5, is likely not the optimal objective for ptychography.
Rather, a weighted error metric should be used to capture the varying impact of errors arising for coefficients of different magnitude. We consider it an interesting direction for future work to explore more application-driven noise models and error metrics.

(Figure 5, right: angular synchronization as part of the ptychographic reconstruction in [1]; noise decreases as SNR increases.)
A first numerical exploration in this direction is shown in Figure 5. We observe that while for the simplified noise model, unweighted angular synchronization seems most appropriate, for Gaussian noise applied directly to the phaseless measurements [1], weighted angular synchronization is the method of choice.
Another interesting direction of future work is to extend the generalized power method developed by Boumal in [16] to arbitrary sets E. Our current results can be considered as a first step, since the generalized power method uses the solution of the eigenvector relaxation problem as initialization, so good bounds on the error are crucial for estimating the quality of the initialization.