Robust Group Synchronization via Cycle-Edge Message Passing

We propose a general framework for solving the group synchronization problem, where we focus on the setting of adversarial or uniform corruption and sufficiently small noise. Specifically, we apply a novel message passing procedure that uses cycle consistency information in order to estimate the corruption levels of group ratios and consequently solve the synchronization problem in our setting. We first explain why the group cycle consistency information is essential for effectively solving group synchronization problems. We then establish exact recovery and linear convergence guarantees for the proposed message passing procedure under a deterministic setting with adversarial corruption. These guarantees hold as long as the ratio of corrupted cycles per edge is bounded by a reasonable constant. We also establish the stability of the proposed procedure to sub-Gaussian noise. We further establish exact recovery with high probability under a common uniform corruption model.


Introduction
The problem of synchronization arises in important data-related tasks, such as structure from motion (SfM), simultaneous localization and mapping (SLAM), Cryo-EM, community detection and sensor network localization. The underlying setting of the problem includes objects with associated states, where examples of states are locations, rotations and binary labels. The main problem is estimating the states of objects from the relative state measurements between pairs of objects. One example is rotation synchronization, which aims to recover rotations of objects from the relative rotations between pairs of objects. The problem is simple when one has the correct measurements of all relative states. However, in practice the measurements of some relative states can be erroneous or missing. The main goal of this paper is to establish a theoretically-guaranteed solution for general compact group synchronization that can tolerate large amounts of measurement error.
We mathematically formulate the general problem in Section 1.1 and discuss common special cases of this problem in Section 1.2. Section 1.3 briefly mentions the computational difficulties in solving this problem and the disadvantages of the common convex relaxation approach. Section 1.4 non-technically describes our method, and Section 1.5 highlights its contributions. Finally, Section 1.6 provides a roadmap for the rest of the paper.

Problem Formulation
The most common mathematical setting of synchronization is group synchronization, which asks to recover group elements from their noisy group ratios. It assumes a group $G$, a subset of this group $\{g_i^*\}_{i=1}^n$ and a graph $G([n], E)$ with $n$ vertices indexed by $[n] = \{1, \ldots, n\}$. The group ratio between $g_i^*$ and $g_j^*$ is defined as $g_{ij}^* = g_i^* g_j^{*-1}$. We use the star superscript to emphasize original elements of $G$, since the actual measurements can be corrupted or noisy. We remark that since $g_{ji}^* = g_{ij}^{*-1}$, our setting of an undirected graph, $G([n], E)$, is fine. We say that a ratio $g_{ij}^*$ is corrupted when it is replaced by $\tilde{g}_{ij} \in G \setminus \{g_{ij}^*\}$, either deterministically or probabilistically. We partition $E$ into the sets of uncorrupted (good) and corrupted (bad) edges, which we denote by $E_g$ and $E_b$, respectively.
We denote the group identity by $e_G$. We assume a bi-invariant metric $d_G$ on $G$; that is, for any $g_1, g_2, g_3 \in G$, $d_G(g_1, g_2) = d_G(g_3 g_1, g_3 g_2) = d_G(g_1 g_3, g_2 g_3)$.
We further assume that $G$ is bounded with respect to $d_G$, and we thus restrict our theory to compact groups. We appropriately scale $d_G$ so that the diameter of $G$ is at most 1.
Additional noise can be applied to the group ratios associated with edges in $E_g$. For $ij \in E_g$, the noise model replaces $g_{ij}^*$ with $g_{ij}^* g_{ij}^\epsilon$, where $g_{ij}^\epsilon$ is a $G$-valued random variable such that $d_G(g_{ij}^\epsilon, e_G)$ is sub-Gaussian. We denote the corrupted and noisy group ratios by $\{g_{ij}\}_{ij \in E}$ and summarize their form as follows:
$$g_{ij} = \begin{cases} g_{ij}^* g_{ij}^\epsilon, & ij \in E_g; \\ \tilde{g}_{ij}, & ij \in E_b. \end{cases} \quad (1)$$
We refer to the case where $g_{ij}^\epsilon = e_G$ for all $ij \in E_g$ as the noiseless case. We view (1) as an adversarial corruption model, since the corrupted group ratios and the corrupted edges in $E_b$ can be arbitrarily chosen; however, our theory introduces some restrictions on both. The problem of group synchronization asks to recover the original group elements $\{g_i^*\}_{i \in [n]}$ given the graph $G([n], E)$ and the corrupted and noisy group ratios $\{g_{ij}\}_{ij \in E}$. One can only recover, or approximate, the original group elements $\{g_i^*\}_{i \in [n]}$ up to a right group action. Indeed, for any $g_0 \in G$, $g_{ij}^*$ can also be written as $g_i^* g_0 (g_j^* g_0)^{-1}$, and thus $\{g_i^* g_0\}_{i \in [n]}$ is also a solution. It is natural to assume that $G([n], E_g)$ is connected, since in this case the arbitrary right multiplication is the only degree of freedom of the solution.
In the noiseless case, one aims to exactly recover the original group elements under certain conditions on the corruption and the graph. In the noisy case, one aims to nearly recover the original group elements, with recovery error depending on the distribution of $d_G(g_{ij}, g_{ij}^*)$ for $ij \in E_g$.
Finally, we remark that for similar models, where the measurement $g_{ij}$ may not lie in $G$ but in an embedding space, one can first project $g_{ij}$ onto $G$ and then apply our proposed method. Any theory developed for our model extends to the latter one by projecting onto $G$.
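To make the setting concrete, the following sketch simulates the corruption-plus-noise model for the simple case $G = SO(2)$, represented here by angles modulo $2\pi$; the graph (complete), corruption probability, and noise level are illustrative choices, not values from the paper.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, q, sigma = 30, 0.2, 0.01            # nodes, corruption prob., noise level
g_star = rng.uniform(0, 2 * np.pi, n)  # ground-truth group elements (angles)

def d_G(a, b):
    """Bi-invariant metric on SO(2), scaled so the diameter of the group is 1."""
    return abs((a - b + np.pi) % (2 * np.pi) - np.pi) / np.pi

edges = list(combinations(range(n), 2))    # complete graph for simplicity
g_obs, good = {}, {}
for i, j in edges:
    if rng.random() < q:                   # bad edge: arbitrary group ratio
        g_obs[i, j] = rng.uniform(0, 2 * np.pi)
        good[i, j] = False
    else:                                  # good edge: true ratio plus noise
        g_obs[i, j] = (g_star[i] - g_star[j] + sigma * rng.normal()) % (2 * np.pi)
        good[i, j] = True

# corruption level of an edge: distance between observed and true ratio
s_star = {(i, j): d_G(g_obs[i, j], (g_star[i] - g_star[j]) % (2 * np.pi))
          for i, j in edges}
```

Good edges then have corruption levels on the order of the noise, while bad edges are typically far from their true ratios.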

Examples of Group Synchronization
We review the three common instances of group synchronization.

$\mathbb{Z}_2$ Synchronization

This is the simplest and most widely known instance of group synchronization. The underlying group, $\mathbb{Z}_2$, is commonly represented in this setting by $\{-1, 1\}$ with multiplication. A natural motivation for this problem is binary graph clustering, where one wishes to recover the labels in $\{-1, 1\}$ of two clusters of graph nodes from corrupted measurements of signed interactions between pairs of nodes connected by edges. Namely, the signed interaction of two nodes is 1 if they are in the same cluster and $-1$ if they are in different clusters. Note that without any erroneous measurements, the signed interaction is obtained by multiplying the corresponding labels, and thus it corresponds to the group ratio $g_{ij}^* = g_i^* g_j^{*-1}$. Also note that the clusters are determined up to a choice of labels, that is, up to multiplication by an element of $\mathbb{Z}_2$. The $\mathbb{Z}_2$ synchronization problem is directly related to the Max-Cut problem [45] and to a special setting of community detection [1,12]. It was also applied to a specific problem in sensor network localization [13].
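As a small illustration of $\mathbb{Z}_2$ synchronization (not the method of this paper), a standard spectral relaxation already recovers the labels under mild random sign flips; the graph size and flip probability below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 40, 0.1
x = rng.choice([-1, 1], size=n)            # ground-truth cluster labels

# observed signed interactions Y_ij = x_i * x_j, flipped with probability q
Y = np.outer(x, x).astype(float)
flips = np.triu(rng.random((n, n)) < q, 1)
flips = flips | flips.T                    # corrupt symmetrically
Y[flips] *= -1
np.fill_diagonal(Y, 0)

# spectral relaxation: the sign of the top eigenvector estimates the labels
_, v = np.linalg.eigh(Y)
x_hat = np.sign(v[:, -1])

# labels are recoverable only up to a global sign (a right action of Z_2)
err = min(np.mean(x_hat != x), np.mean(x_hat != -x))
```

The minimum over the two global signs reflects the unavoidable right-multiplication ambiguity discussed in Section 1.1.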

Permutation Synchronization
The underlying group of this problem is the symmetric group $S_N$, that is, the discrete group of permutations of $N$ elements. This synchronization problem was proposed in computer vision in order to find globally consistent image matches from relative matches [35]. More specifically, one has a set of images and $N$ feature points that are common to all images, such as distinguished corners of objects that appear in all images (they correspond to a set of $N$ points in the 3D scene). These feature points, often referred to as keypoints, are arbitrarily labeled in each image. For any pair of images, one is given a possibly corrupted version of the relative permutation between their keypoints. One then needs to consistently label all keypoints in the given images; that is, one needs to find the absolute permutations mapping the labels of the keypoints of each image to the fixed labels of the $N$ 3D scene points.
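A minimal sketch of the algebra behind this setup, with permutations represented as $N \times N$ matrices (the values of $N$ and the number of images are arbitrary): the relative permutations $g_{ij} = P_i P_j^{-1}$ multiply to the identity along any cycle.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5                                   # number of keypoints per image
eye = np.eye(N)

def rand_perm():
    """A random N x N permutation matrix (an element of S_N)."""
    return eye[rng.permutation(N)]

# absolute permutations: the (unknown) labeling of each image's keypoints
P = [rand_perm() for _ in range(3)]

# relative permutations between image pairs: g_ij = P_i P_j^{-1} = P_i P_j^T
g01 = P[0] @ P[1].T
g12 = P[1] @ P[2].T
g20 = P[2] @ P[0].T

# uncorrupted relative permutations are cycle-consistent: g01 g12 g20 = I
consistent = bool(np.allclose(g01 @ g12 @ g20, eye))
```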

Rotation Synchronization
The problem of rotation synchronization, or equivalently $SO(3)$ synchronization, asks to recover absolute rotations from corrupted relative rotations, up to a global rotation. Its special case of angular synchronization, or $SO(2)$ synchronization, asks to recover the locations of points on a circle (up to an arbitrary rotation) given corrupted relative angles between pairs of points. More generally, one may consider $SO(d)$ synchronization for any $d \geq 2$. Rotation synchronization is widely used in 3D imaging and computer vision tasks. In particular, [43] applies rotation synchronization to solve for the absolute rotations of molecules, and [3,8,19,21,30,34,45] synchronize the relative rotations of cameras to obtain the global camera rotations in the problem of structure from motion.
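The same cycle structure holds for rotations. A small numerical check in $SO(3)$ (the specific rotation angles are arbitrary): relative rotations of uncorrupted edges multiply to the identity along a 3-cycle, and corrupting a single edge breaks this.

```python
import numpy as np

def rot_z(t):
    """Rotation by angle t about the z-axis, an element of SO(3)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# absolute camera rotations (arbitrary, built from elementary rotations)
R = [rot_z(0.3) @ rot_x(1.1), rot_z(-0.7), rot_x(0.5) @ rot_z(2.0)]

# relative rotations g_ij = R_i R_j^{-1}; for rotations, R^{-1} = R^T
g01, g12, g20 = R[0] @ R[1].T, R[1] @ R[2].T, R[2] @ R[0].T

# uncorrupted relative rotations multiply to the identity along the cycle
consistent = bool(np.allclose(g01 @ g12 @ g20, np.eye(3)))

# corrupting a single edge breaks cycle consistency
g01_bad = rot_z(1.234) @ g01
inconsistency = np.linalg.norm(g01_bad @ g12 @ g20 - np.eye(3))
```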

On the Complexity of the Problem and Its Common Approach
Many groups, such as $\mathbb{Z}_2$, $S_N$ and $SO(d)$, are non-convex, and their synchronization problems are usually NP-hard [6,17,35]. Thus, many classic methods of group synchronization instead solve a relaxed semidefinite programming (SDP) problem (see the review of previous methods and guarantees in Section 2). However, relaxation techniques may change the original problem and may thus fail to recover the original group elements when the group ratios are severely corrupted. Furthermore, the SDP formulations and their analysis are specialized to the different groups. Moreover, their computational time can still be slow in practice.

Short and Non-technical Description of Our Work and Guarantees
The goal of this work is to formulate a universal and flexible framework that can address different groups in a similar way. It exploits cycle consistency, a property shared by all groups. That is, let $L = \{i_1 i_2, i_2 i_3, \ldots, i_m i_1\}$ be any cycle of length $m$ and define $g_L^* = g_{i_1 i_2}^* g_{i_2 i_3}^* \cdots g_{i_m i_1}^*$; then the cycle consistency constraint is
$$g_L^* = e_G. \quad (2)$$
That is, the multiplication of the original group ratios along a cycle yields the group identity. In practice, one may only compute the following approximation of $g_L^*$:
$$g_L = g_{i_1 i_2} g_{i_2 i_3} \cdots g_{i_m i_1}, \quad (3)$$
where for faster computation we prefer using only 3-cycles, so that $g_L = g_{ij} g_{jk} g_{ki}$. One basic idea is that the distances between $g_L$ and $e_G$ for cycles $L$ containing the edge $ij$, which we refer to as cycle inconsistencies, provide information on the distance between $g_{ij}^*$ and $g_{ij}$, which we refer to as the corruption level of the edge $ij$. Our proposed Cycle-Edge Message Passing (CEMP) algorithm thus estimates these corruption levels from the cycle inconsistencies by alternately updating messages between cycles and edges. The edges with high corruption levels can then be confidently removed.
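A drastically simplified sketch of this idea for $G = SO(2)$: for each edge, average the inconsistencies of the 3-cycles through it as a zeroth-order proxy for its corruption level. This is only an illustration of the cycle-edge statistics; it is not the CEMP reweighting procedure, which is described in Section 4.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, q = 25, 0.15
theta = rng.uniform(0, 2 * np.pi, n)       # ground-truth SO(2) elements

def d_G(a):
    """Distance of the angle a to the identity, scaled so the diameter is 1."""
    return abs((a + np.pi) % (2 * np.pi) - np.pi) / np.pi

g, bad = {}, set()                          # observed ratios; corrupted edges
for i, j in combinations(range(n), 2):
    if rng.random() < q:
        g[i, j] = rng.uniform(0, 2 * np.pi)
        bad.add((i, j))
    else:
        g[i, j] = theta[i] - theta[j]       # true ratio (noiseless)
    g[j, i] = -g[i, j]                      # g_ji = g_ij^{-1}

# accumulate the inconsistency d_L of every 3-cycle on each of its edges
cycles = {e: [] for e in g if e[0] < e[1]}
for i, j, k in combinations(range(n), 3):
    d_L = d_G(g[i, j] + g[j, k] + g[k, i])
    for e in [(i, j), (j, k), (i, k)]:
        cycles[e].append(d_L)

# zeroth-order corruption estimate: mean inconsistency over the edge's cycles
s_hat = {e: float(np.mean(v)) for e, v in cycles.items()}
```

Even this crude average typically separates the two edge populations: corrupted edges receive markedly larger estimates than uncorrupted ones.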
In theory, the latter cleaning (or removal) procedure can be used for recovering the original group elements in the noiseless case and for nearly recovering them in the case of sufficiently small noise. In fact, we obtain the strongest theoretical guarantees for general group synchronization with adversarial corruption.
In practice, Section 4.2.8 suggests methods of using the estimated corruption levels for solving the group synchronization problem in general scenarios.
The basic idea of this work was first sketched for the different problem of camera location estimation in a conference paper [38] (we explain this problem in Section 2.1). In addition to extending this idea to the general group synchronization problem and carefully explaining it in the context of message passing, we present nontrivial theoretical guarantees, unlike the very basic and limited ones in [38]. Most importantly, we establish exact and fast recovery of the underlying group elements.

Contribution of This Work
The following are the main contributions of this work:

New insight into group synchronization: We mathematically establish the relevance of cycle consistency information to the group synchronization problem (see Section 3).

Unified framework via message passing: CEMP applies to any compact group. This is due to the careful incorporation of cycle consistency, which is a general property of groups. As later explained in Section 4.3, our algorithm differs from all previous message passing approaches; in particular, it does not require assumptions on the underlying joint distributions.

Strongest theory for adversarial corruption: We claim that CEMP is the first algorithm guaranteed to exactly recover group elements from adversarially corrupted group ratios under reasonable assumptions (see Section 5.2). Previous guarantees for group synchronization assume very special generative models, often asymptotic scenarios, and special groups. We are only aware of somewhat similar guarantees in [20,23,28], but for the different problem of camera location estimation. We claim that our theory is stronger since it only requires a constant uniform upper bound on the local corruption levels, whereas a similar upper bound in [20,28] depends on $n$ and the sparsity of the graph. Moreover, our argument is much simpler than those of [20,28], and we need not assume the restrictive Erdős-Rényi model for generating the graph. While [20,28] suggest a constructive solution and we only estimate the corruption levels, the guarantees of [20,28] only hold for the noiseless case, and in this case correct estimation of the corruption levels by our method is equivalent to correct recovery of the group elements.

Stability to noise: We establish results for approximate recovery by CEMP in the presence of both adversarial corruption and noise (see Sections 5.3 and 5.4). For sub-Gaussian noise, we only require that the variables $d_G(g_{ij}, g_{ij}^*)$ are independent and sub-Gaussian, unlike previous specific assumptions on the noise distribution of $g_{ij}$ [35,36]. For the case where $d_G(g_{ij}, g_{ij}^*)$ is bounded for $ij \in E_g$, we state a deterministic perturbation result.

Recovery under uniform corruption: When the edges of $G([n], E)$ are generated by the Erdős-Rényi model and the corrupted group ratios are i.i.d. sampled from the Haar measure on $G$, we guarantee, with high probability, exact recovery and fast convergence of CEMP for any fixed corruption probability $0 \leq q < 1$ and any edge connection probability $0 < p \leq 1$, as long as the sample size is sufficiently large. Our analysis is no longer restricted by a sufficiently small uniform upper bound on the local corruption levels. Using these results, we derive sample complexity bounds for CEMP with respect to common groups. We point out a gap between these bounds and the information-theoretic ones, as well as bounds for other algorithms. Nevertheless, to the best of our knowledge, there are no other results for continuous groups that hold for any $q < 1$.

Organization of the Paper
Section 2 gives an overview of previous relevant works. Section 3 mathematically establishes the relevance of cycle-based information to the solution of group synchronization. Section 4 describes our proposed method, CEMP, and carefully interprets it as a message passing algorithm. Section 5 establishes exact recovery and fast convergence for CEMP under the adversarial corruption model and shows the stability of CEMP under bounded and sub-Gaussian noise. Section 6 establishes guarantees under a special random corruption model. Section 7 demonstrates the numerical performance of CEMP on artificial datasets generated according to either the adversarial or the uniform corruption model. Section 8 concludes this work and discusses possible extensions. The appendix contains various proofs and technical details; the central ideas of the proofs appear in the main text.

Related Works
This section reviews existing algorithms and guarantees for group synchronization and also reviews methods that share similarity with the proposed approach. Section 2.1 overviews previous works that utilize energy minimization and formulates a general framework for these works. Section 2.2 reviews previous methods for inferring corruption in special group synchronization problems by the use of cycle-consistency information. Section 2.3 reviews message passing algorithms and their applications to group synchronization.

Energy Minimization
Most works on group synchronization require minimizing an energy function. We first describe a general energy minimization framework for group synchronization and then review relevant previous works. This framework uses a metric $d_G$ defined on $G$ and a function $\rho$ from $\mathbb{R}_+^{|E|}$ to $\mathbb{R}_+$. We remark that $\mathbb{R}_+$ denotes the set of nonnegative numbers and $|\cdot|$ denotes the cardinality of a set. The general framework aims to solve
$$\min_{\{g_i\}_{i \in [n]} \subseteq G} \ \rho\left( \left( d_G(g_{ij}, g_i g_j^{-1}) \right)_{ij \in E} \right). \quad (4)$$
Natural examples of $\rho$ include the sum of $p$th powers of the elements, where $p > 0$, the number of nonzero elements, and the maximal element. The elements of $\mathbb{Z}_2$, $S_m$ and $SO(d)$ (that is, the most common groups that arise in synchronization problems) can be represented by orthogonal matrices of sizes $N = 1$, $m$ and $d$, respectively. For these groups, it is common to identify each $g_i$, $i \in [n]$, with its representing matrix, choose $d_G$ as the Frobenius norm of the difference of two group elements (that is, of their representing matrices), take $\rho(\cdot) = \|\cdot\|_\nu^\nu$, where $\nu = 2$ or $\nu = 1$, and consider the following minimization problem:
$$\min_{\{g_i\}_{i \in [n]} \subseteq G} \ \sum_{ij \in E} \left\| g_{ij} - g_i g_j^{-1} \right\|_F^\nu. \quad (5)$$
The best choice of $\nu$ depends on $G$ and the underlying noise and corruption model. For Lie groups, $\nu = 2$ is optimal under Gaussian noise, and $\nu = 1$ is more robust to outliers (i.e., to significantly corrupted group ratios). For some discrete groups, such as $\mathbb{Z}_2$ and $S_N$, $\nu = 2$ is information-theoretically optimal for both Gaussian noise and uniform corruption.
For $\nu = 2$, one can form an equivalent formulation of (5). It uses the block matrix $Y \in \mathbb{R}^{nN \times nN}$, whose $[i,j]$-th $N \times N$ block is the matrix representing $g_{ij}$ for $ij \in E$ and the zero matrix otherwise. One searches for $X \in \mathbb{R}^{nN \times nN}$ whose $[i,j]$-th block, $X_{ij}$, is $g_i g_j^{-1}$, where $\{g_i\}_{i \in [n]}$ is the solution of (5), or equivalently, $X = x x^T$, where $x = (g_i)_{i \in [n]} \in \mathbb{R}^{nN \times N}$ stacks the representing matrices of the solution. In order to obtain this, $X$ needs to be positive semidefinite of rank $N$ and its blocks need to represent elements of $G$, where the diagonal ones need to be identity matrices. For $SO(2)$, it is more convenient to represent $g_i$ and $g_{ij}$ by elements of $U(1)$, the unit circle in $\mathbb{C}$, and thus replace $\mathbb{R}^{2n \times 2}$ and $\mathbb{R}^{2n \times 2n}$ with $\mathbb{C}^{n \times 1}$ and $\mathbb{C}^{n \times n}$. Using these components, the equivalent formulation can be written as
$$\max_{X \in \mathbb{R}^{nN \times nN}} \ \operatorname{Tr}(Y X) \quad \text{s.t.} \quad X \succeq 0, \ X_{ii} = I_N \ \text{for } i \in [n], \ \operatorname{rank}(X) = N, \ \{X_{ij}\}_{i,j=1}^n \subset G. \quad (6)$$
The above formulation is commonly relaxed by removing its two nonconvex constraints: $\operatorname{rank}(X) = N$ and $\{X_{ij}\}_{i,j=1}^n \subset G$. The solution $\hat{X}$ of this relaxed formulation can be found by an SDP solver. One then commonly computes its top $N$ eigenvectors and stacks them as columns to obtain the $n \times 1$ vector of $N \times N$ blocks, $\hat{x}$ (note that $\hat{x}\hat{x}^T$ is the best rank-$N$ approximation of $\hat{X}$ in Frobenius norm). Next, one projects each of the $n$ blocks of $\hat{x}$ (of size $N \times N$) onto $G$. This whole procedure, which we refer to in short as SDP, is typically slow in practice [41]. A faster common method, which we refer to as Spectral, applies a similar procedure while ignoring all constraints in (6). In this case, the highly relaxed solution of (6) is $\hat{X} := Y$, and one only needs to find its top $N$ eigenvectors and project their blocks onto the group elements [41].
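The Spectral procedure just described can be sketched for angular synchronization over $U(1)$; the graph size and corruption rate below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, q = 30, 0.1
theta = rng.uniform(0, 2 * np.pi, n)
z = np.exp(1j * theta)                 # ground-truth elements of U(1)

# matrix of measured ratios z_i * conj(z_j); a fraction q is corrupted
Y = np.outer(z, z.conj())
mask = np.triu(rng.random((n, n)) < q, 1)
Y[mask] = np.exp(1j * rng.uniform(0, 2 * np.pi, int(mask.sum())))
Y = np.triu(Y, 1)
Y = Y + Y.conj().T                     # Hermitian with zero diagonal

# Spectral: top eigenvector of Y, entries projected onto the unit circle
w, v = np.linalg.eigh(Y)
z_hat = v[:, -1] / np.abs(v[:, -1])

# quality up to the global-rotation ambiguity: |<z_hat, z>| / n is 1 for
# perfect recovery and near 0 for an uncorrelated estimate
corr = abs(np.vdot(z_hat, z)) / n
```

The final normalization of each entry plays the role of projecting the blocks of the top eigenvectors onto the group.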
The formulation (6) and its SDP relaxation first appeared in the celebrated work of Goemans and Williamson [18] on the max-cut problem. Their work can be viewed as a formulation for solving Z 2 synchronization. Amit Singer [41] proposed the generalized formulation and its relaxed solutions for group synchronization, in particular, for angular synchronization.
Exact recovery for $\mathbb{Z}_2$ synchronization is studied in [2,5] under the assumption of an Erdős-Rényi graph, where each edge is independently corrupted with probability $q < 1/2$. Abbe et al. [2] specified an information-theoretic lower bound on the average degree of the graph in terms of $q$. Bandeira [5] established asymptotic exact recovery by SDP for $\mathbb{Z}_2$ synchronization w.h.p. (with high probability) under the above information-theoretic regime. Montanari and Sen [32] studied the detection of good edges, instead of their recovery, under i.i.d. additive Gaussian noise.
Asymptotic exact recovery for convex relaxation methods of permutation synchronization appears in [10,35]. In [35], noise is added to the relative permutations in $S_N$. The permutations are represented by $N \times N$ matrices and the elements of the additive $N \times N$ noise matrix are i.i.d. $\mathcal{N}(0, \eta^2)$. In this setting, exact recovery can be guaranteed when $\eta^2 < (n/N)/(1 + 4(n/N)^{-1})$ as $nN \to \infty$. An SDP relaxation, different from (6), is proposed in [10,22]. It is shown in [22] that for fixed $N$ and probability of corruption less than 0.5, their method exactly recovers the underlying permutations w.h.p. as $n \to \infty$. We remark that [22] assumes element-wise corruption of the permutation matrices, which differs from our corruption model. An improved theoretical result, which matches the information-theoretic bound, is given by Chen et al. [10].
Rotation synchronization has been extensively studied [3,8,19,21,30,45]. In order to deal with corruption, it is most common to use $\ell_1$ energy minimization [8,21,45]. For example, Wang and Singer formulated robust $SO(d)$ synchronization, for any $d \geq 2$, as the solution of (5) with $\nu = 1$ and $G = SO(d)$. Inspired by the analysis of [48,27], they established asymptotic and probabilistic exact recovery by the solution of their minimization problem under the following very special probabilistic model: the graph is complete or Erdős-Rényi, edges are corrupted independently with probability less than a critical probability $p_c$ that depends on $d$, and the corrupted rotations are i.i.d. sampled from the Haar distribution on $SO(d)$. They proposed an alternating direction augmented Lagrangian method for practically solving their formulation, but their analysis only applies to the exact minimizer.
A problem somewhat similar to group synchronization is camera location estimation [20,33,34,38]. It uses the non-compact group $\mathbb{R}^3$ with vector addition, and its input includes possibly corrupted measurements of $\{T(g_{ij}^*)\}_{ij \in E}$, where $T(g_{ij}^*) = g_{ij}^*/\|g_{ij}^*\|$ and $\|\cdot\|$ denotes the Euclidean norm. The application of $T$ distorts the group structure and may result in loss of information.
For this problem, other forms of energy minimization have been proposed, which often differ from the framework in (5). The first exact recovery result for a specific energy-minimization algorithm was established by Hand, Lee and Voroninski [20]. The significance of this work lies in the weak assumptions of its corruption model, whereas in the previously mentioned works on exact recovery [2,5,12,22,35,45] the corrupted group ratios follow very specific probability distributions. More specifically, the main model in [20] assumes an Erdős-Rényi graph $G([n], E)$ with parameter $p$ for connecting edges and an arbitrary corrupted set of edges $E_b$, whose corruption is quantified by the maximal degree of $G([n], E_b)$ divided by $n$, which is denoted by $b$. The transformed group ratios $T(g_{ij})$ equal $T(g_{ij}^*)$ for $ij \in E_g$ and are arbitrarily chosen in $S^2$, the unit sphere, for $ij \in E_b$. They established exact recovery under this model with $b = O(p^5/\log^3 n)$. A similar exact recovery theory for another energy-minimization algorithm, the Least Unsquared Deviations (LUD) [33], was established by Lerman, Shi and Zhang [28], but with the stronger corruption bound $b = O(p^{7/3}/\log^{9/2} n)$.
Huang et al. [23] solved an $\ell_1$ formulation for 1D translation synchronization, where $G = \mathbb{R}$ with ordinary addition. They proposed a special version of IRLS and provided a deterministic exact recovery guarantee that depends on $b$ and a quantity involving the graph Laplacian.

Synchronization Methods Based on Cycle Consistency
Previous methods that use the cycle consistency constraint (2) only focus on synchronizing camera rotations; additional methods use a different cycle consistency constraint to synchronize camera locations. Assuming that $G$ lies in a metric space with metric $d_G(\cdot, \cdot)$, the corruption level of a cycle $L$ can be indicated by the cycle inconsistency measure $d_G(g_L, e_G)$, where $g_L$ was defined in (3). A few works exploit such information to identify and remove corrupted edges. A likelihood-based method [47] was proposed to classify the corrupted and uncorrupted edges (relative camera motions) from observations of $d_G(g_L, e_G)$ for many sampled cycles $L$. This work has no theoretical guarantees. It seeks to solve a maximum likelihood problem of the form
$$\max_{\{x_{ij}\}_{ij \in E}} \ \Pr\left( \{x_{ij}\}_{ij \in E} \,\middle|\, \{d_G(g_L, e_G)\}_{L \in C} \right), \quad (7)$$
where the variables $\{x_{ij}\}_{ij \in E}$ provide the assignment of each edge $ij$ in the sense that $x_{ij} = \mathbf{1}_{\{ij \in E_g\}}$, and $\mathbf{1}$ denotes the indicator function. One of the proposed solutions in [47] is a linear programming relaxation of (7). The other proposed solution of (7) uses belief propagation; it is completely different from the message passing approach proposed in this work. Shen et al. [37] find a cleaner subset of edges by searching for consistent cycles. In particular, if a cycle $L$ of length $m$ satisfies $d_G(g_L, e_G) < \epsilon/\sqrt{m}$ for a chosen threshold $\epsilon$, then all the edges in the cycle are treated as uncorrupted. However, this approach lacks theoretical guarantees and may fail in various cases, for example, when edges are maliciously corrupted and some cycles with corrupted edges satisfy $d_G(g_L, e_G) < \epsilon/\sqrt{m}$. An iterative reweighting strategy, referred to as IR-AAB, was proposed in [38] to identify corrupted pairwise directions when estimating camera locations. Experiments on synthetic data showed that IR-AAB was able to detect exactly the set of corrupted pairwise directions when these were uniformly distributed on $S^2$ and the corruption rate was low or moderate.
However, this strategy was restricted to camera location estimation, and no exact recovery guarantees were provided for the reweighting algorithm. We remark that our current work generalizes [38] to compact group synchronization problems. We also provide a message-passing interpretation of the ideas of [38] and stronger mathematical guarantees in our context, but we do not address here the camera location estimation problem.

Message Passing Algorithms
Message passing algorithms are efficient methods for statistical inference on graphical models. The most famous message passing algorithm is belief propagation (BP) [46]. It is an efficient algorithm for computing marginal distributions or maximizing the joint probability density of a set of random variables defined on a Bayesian network. The joint density and the corresponding Bayesian network can be uniquely described by a factor graph that encodes the dependence of the factors on the random variables. In particular, each factor is a function of a small subset of the random variables, and the joint density is assumed to be the product of these factors. The BP algorithm passes messages between the random variables and the factors in the factor graph. When the factor graph is a tree, BP is equivalent to dynamic programming and converges in finitely many iterations. However, when the factor graph contains loops, BP has no general guarantees of convergence or accuracy. The BP algorithm is applied in [47] to solve the maximal likelihood problem (7); however, since the factor graph defined in [47] contains many loops, there are no convergence or accuracy guarantees for the solution.
Another famous class of message passing algorithms is approximate message passing (AMP) [14,36]. AMP can be viewed as a modified version of BP, and it is also used to compute marginal distributions and maximum likelihood estimates. The main advantage of AMP over BP is that it enjoys asymptotic convergence guarantees even on loopy factor graphs. AMP was first proposed by Donoho, Maleki and Montanari [14] to solve the compressed sensing problem; they formulated the convex program for this problem as a maximum likelihood estimation problem and then solved it by AMP. Perry et al. [36] apply AMP to group synchronization over any compact group. However, they do not model corruption and only assume an additive i.i.d. Gaussian noise model, for which they seek an asymptotic solution that is statistically optimal.
Another message passing algorithm was proposed in [12] for $\mathbb{Z}_2$ synchronization. It assigns probabilities of correct labeling to each node and each edge. These probabilities are iteratively passed and updated between nodes and edges until convergence. This method has several drawbacks. First, it cannot be generalized to other group synchronization problems. Second, its performance is worse than that of SDP under high corruption [12]. Finally, no theoretical guarantee of exact recovery was established. We remark that this method is completely different from the one proposed here.

Cycle Consistency is Essential for Group Synchronization
In this section, we establish a fundamental relationship between cycle consistency and group synchronization, while assuming the noiseless case. We recall that $d_G$ is a bi-invariant metric on $G$ and that the diameter of $G$ is at most 1, that is, $d_G(\cdot, \cdot) \leq 1$.
Although the ultimate goal of this paper is to estimate the group elements $\{g_i^*\}_{i \in [n]}$ from the group ratios $\{g_{ij}\}_{ij \in E}$, we primarily focus on a variant of this task: estimating the corruption levels
$$s_{ij}^* := d_G(g_{ij}, g_{ij}^*), \quad ij \in E,$$
from the cycle-inconsistency measures
$$d_L := d_G(g_L, e_G), \quad L \in C,$$
where $C$ is a set of cycles that are either randomly sampled or deterministically selected. We remark that in our setting, exact estimation of $\{s_{ij}^*\}_{ij \in E}$ is equivalent to exact recovery of $\{g_i^*\}_{i \in [n]}$; this equivalence is formally stated in Proposition 1, which is proved in Appendix A.
We first remark that, in practice, shorter cycles are preferable due to faster computation and fewer uncertainties [47]; thus, when establishing the theory for CEMP in Sections 5 and 6, we let $C$ be the set of 3-cycles, $C_3$. However, we keep the general notation for now, since our work extends to the more general case.
We further remark that for corruption estimation, only the set of real numbers $\{d_L\}_{L \in C}$ is needed, which is simpler than the set of given group ratios $\{g_{ij}\}_{ij \in E}$. This may enhance the underlying statistical inference.
We next explain why cycle-consistency information is essential for solving the problems of corruption estimation and group synchronization. Section 3.1 shows that, under a certain condition, the set of cycle-inconsistency measures, $\{d_L\}_{L \in C}$, provides sufficient information for recovering the corruption levels. Section 3.2 shows that cycle consistency is closely related to group synchronization and plays a central role in its solution. It further explains that many previous works implicitly exploit cycle consistency information.

Exact Recovery Relies on a Good-Cycle Condition
In general, it is not obvious that the set $\{d_L\}_{L \in C}$ contains sufficient information for recovering $\{s_{ij}^*\}_{ij \in E}$. Indeed, the former set generally contains less information than the original input of our problem, $\{g_{ij}\}_{ij \in E}$. Nevertheless, Proposition 2 implies that if every edge is contained in a good cycle (see the formal definition below), then $\{d_L\}_{L \in C}$ actually determines $\{s_{ij}^*\}_{ij \in E}$.
Definition 1 (Good-Cycle Condition) $G([n], E)$, $E_g$ and $C$ satisfy the good-cycle condition if for each $ij \in E$ there exists at least one cycle $L \in C$ containing $ij$ such that $L \setminus \{ij\} \subseteq E_g$.

Proposition 2
Assume data generated by the noiseless adversarial corruption model, satisfying the good-cycle condition. Then,
$$s_{ij}^* = d_L \quad \text{for all } ij \in E \text{ and } L \in C \text{ such that } ij \in L \text{ and } L \setminus \{ij\} \subseteq E_g.$$
Proof Fix $ij \in E$ and let $L = \{ij, jk_1, k_1 k_2, k_2 k_3, \ldots, k_m i\}$ be a good cycle with respect to $ij$, i.e., $L \setminus \{ij\} \subseteq E_g$. Applying the definitions of $d_L$ and then $g_L$, next right multiplying by $g_{ij}^*$ while using the bi-invariance of $d_G$, then applying (2), and at last using the definition of $s_{ij}^*$, yields
$$d_L = d_G(g_L, e_G) = d_G(g_{ij} g_{jk_1}^* \cdots g_{k_m i}^*, e_G) = d_G(g_{ij} g_{jk_1}^* \cdots g_{k_m i}^* g_{ij}^*, g_{ij}^*) = d_G(g_{ij}, g_{ij}^*) = s_{ij}^*.$$
We formulate a stronger quantitative version of Proposition 2, which we frequently use in establishing our exact recovery theory. We prove it in Appendix A.2.
Lemma 1 For all $ij \in E$ and any cycle $L$ containing $ij$ in $G([n], E)$,
$$\left| d_L - s_{ij}^* \right| \leq \sum_{ab \in L \setminus \{ij\}} s_{ab}^*.$$
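A bound of this type, $|d_L - s_{ij}^*| \leq \sum_{ab \in L \setminus \{ij\}} s_{ab}^*$, follows from the triangle inequality and bi-invariance. The following sketch checks it numerically for $G = SO(2)$ over random cycles of several lengths with every edge corrupted; the cycle lengths and trial counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def d_G(a):
    """Distance of the angle a to the identity on SO(2), diameter scaled to 1."""
    return abs((a + np.pi) % (2 * np.pi) - np.pi) / np.pi

def bound_holds(m):
    """Random m-cycle: true ratios multiplying to e_G, every edge corrupted."""
    true = rng.uniform(-np.pi, np.pi, m - 1)
    true = np.append(true, -true.sum())          # enforce g*_L = e_G
    obs = true + rng.uniform(-np.pi, np.pi, m)   # corrupt every edge
    s = np.array([d_G(o - t) for o, t in zip(obs, true)])   # levels s*_ab
    d_L = d_G(obs.sum())
    # check |d_L - s*_ij| <= sum of the other edges' corruption levels
    return all(abs(d_L - s[i]) <= s.sum() - s[i] + 1e-12 for i in range(m))

ok = all(bound_holds(m) for m in range(3, 8) for _ in range(200))
```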

A Natural Mapping of Group Elements onto Cycle-Consistent Ratios
Another reason for exploiting the cycle consistency constraint (2) is its crucial connection to group synchronization. Before stating this relationship, we define the following notation. Denote by $(g_i)_{i \in [n]} \in G^n$ and $(g_{ij})_{ij \in E} \in G^{|E|}$ elements of the product spaces $G^n$ and $G^{|E|}$, respectively. We say that $(g_i)_{i \in [n]}$ and $(g_i')_{i \in [n]}$ are equivalent, which we denote by $(g_i)_{i \in [n]} \sim (g_i')_{i \in [n]}$, if there exists $g_0 \in G$ such that $g_i' = g_i g_0$ for all $i \in [n]$. This relation induces an equivalence class $[(g_i)_{i \in [n]}]$ for each $(g_i)_{i \in [n]} \in G^n$; in other words, each $[(g_i)_{i \in [n]}]$ is an element of the quotient space $G^n/\sim$. We define the set of cycle-consistent $(g_{ij})_{ij \in E}$ with respect to $C$ by
$$G_C = \left\{ (g_{ij})_{ij \in E} \in G^{|E|} : g_{i_1 i_2} g_{i_2 i_3} \cdots g_{i_m i_1} = e_G \ \text{for all} \ L = \{i_1 i_2, \ldots, i_m i_1\} \in C \right\}.$$
The following proposition demonstrates a bijection between the group elements and the cycle-consistent group ratios. Its proof is included in Appendix A.3.

Proposition 3
Assume that G([n], E) is connected and any ij ∈ E is contained in at least one cycle in C. Then the map f : G^n/∼ → G_C defined by f([(g_i)_{i∈[n]}]) = (g_i g_j^{-1})_{ij∈E} is a bijection.

Remark 1 The function f is an isomorphism, that is,

f([(g_i)_{i∈[n]}] [(g'_i)_{i∈[n]}]) = f([(g_i)_{i∈[n]}]) f([(g'_i)_{i∈[n]}]),

if and only if G is Abelian. Indeed, if G is Abelian the above equation is obvious. If the above equation holds for all (g_i)_{i∈[n]}, (g'_i)_{i∈[n]} ∈ G^n, then any two elements of G commute, and thus G is Abelian.

Remark 2
The condition on C of Proposition 3 holds under the good-cycle condition.
This proposition signifies that previous works on group synchronization implicitly enforce cycle consistency information. Indeed, consider the formulation in (4) that searches for (g_i)_{i∈[n]} ∈ G^n (more precisely, [(g_i)_{i∈[n]}] ∈ G^n/∼) that minimizes a function of {d_G(g_ij, g_i g_j^{-1})}_{ij∈E}. In view of the explicit expression for the bijection f in Proposition 3, this is equivalent to finding the cycle-consistent group ratios (g'_ij)_{ij∈E} ∈ G_C closest to the given group ratios (g_ij)_{ij∈E}. However, direct solutions of (4) are hard, and proposed algorithms often relax the original minimization problem; thus their relationship with cycle-consistent group ratios may not be clear. A special case that may further demonstrate the implicit use of cycle consistency in group synchronization is when using ρ(·) = ‖·‖_0 (that is, ρ is the number of non-zero elements) in (4). We note that this formulation asks to minimize among g_i ∈ G the number of non-zero elements in (d_G(g_ij, g_i g_j^{-1}))_{ij∈E}. By Proposition 3, it is equivalent to minimizing among (g'_ij)_{ij∈E} ∈ G_C the number of elements in {ij ∈ E : g'_ij ≠ g_ij}, or equivalently, maximizing the number of elements in {ij ∈ E : g'_ij = g_ij}. Thus the problem can be formulated as finding the maximal E' ⊆ E such that {g_ij}_{ij∈E'} is cycle-consistent. If the maximal set is E_g, which makes the problem well-defined, then in view of Proposition 1, its recovery is equivalent to exact recovery of {s*_ij}_{ij∈E}.

Cycle-Edge Message Passing (CEMP)
We describe CEMP and explain the underlying statistical model that motivates the algorithm. Section 4.1 defines the cycle-edge graph (CEG) that will be used to describe the message passing procedure. Section 4.2 describes CEMP and discusses at length its interpretation and some of its properties. Section 4.3 compares CEMP with BP, AMP and IRLS.

Cycle-Edge Graph
We define the notion of a cycle-edge graph (CEG), which is analogous to the factor graph in belief propagation. We also demonstrate it in Figure 1. Given the graph G([n], E) and a set of cycles C, the corresponding cycle-edge graph G CE (V CE , E CE ) is formed in the following way.
1. The set of vertices in G CE is V CE = C ∪ E. All L ∈ C are called cycle nodes and all ij ∈ E are called edge nodes. 2. G CE is a bipartite graph, where the set of edges in G CE is all the pairs (ij, L) such that ij ∈ L in the original graph G([n], E).
For each cycle node L in G_CE, the set of its neighboring edge nodes in G_CE is N_L = {ij ∈ E : ij ∈ L}. We can also describe it as the set of edges contained in L in the original graph G([n], E). We remark that we may treat edges and cycles as elements of either G_CE or G([n], E) depending on the context. For each edge node ij in G_CE, the set of its neighboring cycle nodes in G_CE is N_ij = {L ∈ C : ij ∈ L}. Equivalently, it is the set of cycles containing ij in the original graph G([n], E).
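For C = C_3, the CEG neighborhoods can be assembled directly from the edge set; a minimal Python sketch (the function name is ours, not from the paper):

```python
from itertools import combinations

def triangle_ceg(n, edges):
    """Build the cycle-edge graph (CEG) neighborhoods for C = C_3.

    Returns N_ij (cycle nodes adjacent to each edge node) and
    N_L (edge nodes adjacent to each cycle node)."""
    E = {frozenset(e) for e in edges}
    N_ij = {e: [] for e in E}
    N_L = {}
    for i, j, k in combinations(range(n), 3):
        tri_edges = [frozenset((i, j)), frozenset((j, k)), frozenset((i, k))]
        if all(e in E for e in tri_edges):    # ijk is a 3-cycle of the graph
            L = (i, j, k)
            N_L[L] = tri_edges
            for e in tri_edges:               # bipartite CEG adjacency
                N_ij[e].append(L)
    return N_ij, N_L
```

For instance, on the complete graph on 4 nodes there are four cycle nodes and every edge node neighbors exactly two of them.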

Description of CEMP
Given relative measurements (g_ij)_{ij∈E} with respect to a graph G([n], E), the CEMP algorithm tries to estimate the corruption levels s*_ij, ij ∈ E, defined in (8) by using the inconsistency measures d_L, L ∈ C, defined in (9). It does so iteratively, where we denote by s_ij(t) the estimate of s*_ij at iteration t. Algorithm 1 sketches CEMP and Figure 2 illustrates its main idea. We note that Algorithm 1 has the following stages: 1) generation of CEG (which is described in Section 4.1); 2) computation of the cycle inconsistency measures (see (11)); 3) corruption level initialization for message passing (see (12)); 4) message passing from edges to cycles (see (13)); and 5) message passing from cycles to edges (see (14)).
The first three steps of the algorithm are straightforward. In order to explain the last two steps we introduce some notation in Section 4.2.1. Section 4.2.2 explains the fourth step of CEMP and for this purpose introduces a statistical model. We emphasize that this model and its follow-up extensions are only used for clearer interpretation of CEMP, but are not used in our theoretical guarantees. Section 4.2.3 explains the fifth step of CEMP using this model with two additional assumptions. Section 4.2.4 summarizes the basic insights about CEMP in a simple diagram. Section 4.2.5 interprets the use of two specific reweighting functions in view of the statistical model (while extending it). Section 4.2.6 explains why the exponential reweighting function is preferable in practice. Section 4.2.7 clarifies the computational complexity of CEMP. Section 4.2.8 explains how to post-process CEMP in order to recover the underlying group elements (and not just the corruption levels) in general settings.
We remark that we separate the fourth and fifth steps of CEMP for clarity of presentation; however, one may combine them using a single loop that computes for each ij ∈ E

s_ij(t+1) = (1/Z_ij(t)) Σ_{L∈N_ij} ( ∏_{ab∈N_L\{ij}} f(s_ab(t); β_t) ) d_L.   (15)

For C = C_3, the update rule (15) can be further simplified (see (36) and (37)).

Algorithm 1 Cycle-Edge Message Passing (CEMP)
Input: graph G([n], E), relative measurements (g_ij)_{ij∈E}, choice of metric d_G, the set of sampled/selected cycles C (default: C = C_3), total time step T, increasing parameters {β_t}_{t=0}^T (theoretical choices are discussed in Sections 5 and 6), reweighting function f
Steps:
Generate CEG from G([n], E) and C
for ij ∈ E and L ∈ N_ij do
    compute the cycle inconsistency d_L by (11)
end for
for ij ∈ E do
    initialize s_ij(0) by (12)
end for
for t = 0 : T do
    for ij ∈ E and L ∈ N_ij do
        compute the weight w_ij,L(t) by (13)
    end for
    for ij ∈ E do
        update s_ij(t+1) by (14)
    end for
end for
Output: (s_ij(T))_{ij∈E}
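To make the loop structure concrete, here is a minimal sketch of Algorithm 1 for C = C_3 with the exponential reweighting function (CEMP-B). All argument names are ours; the caller supplies the cycle inconsistencies, and the paper specifies the updates rather than an API:

```python
import numpy as np

def cemp_b(edges, neighbors, d_cycle, betas):
    """Sketch of CEMP for C = C_3 with exponential reweighting (CEMP-B).

    edges:     list of pairs (i, j) with i < j
    neighbors: dict mapping each edge (i, j) to the list of k in N_ij
    d_cycle:   d_cycle(i, j, k) returns the inconsistency d_{ij,k} in [0, 1]
    betas:     increasing parameters beta_0, ..., beta_T"""
    e = lambda a, b: (min(a, b), max(a, b))
    # cycle inconsistencies (11), computed once
    d = {ij: np.array([d_cycle(ij[0], ij[1], k) for k in neighbors[ij]])
         for ij in edges}
    # initialization (12): plain average over N_ij
    s = {ij: d[ij].mean() for ij in edges}
    for beta in betas:
        s_new = {}
        for (i, j) in edges:
            # weights (13): w_{ij,k} proportional to exp(-beta (s_ik + s_jk))
            w = np.array([np.exp(-beta * (s[e(i, k)] + s[e(j, k)]))
                          for k in neighbors[(i, j)]])
            # update (14): weighted average of the inconsistencies d_{ij,k}
            s_new[(i, j)] = float(w @ d[(i, j)] / w.sum())
        s = s_new
    return s
```

For example, on the complete graph on 4 nodes over G = Z_2 with a single corrupted ratio, the estimates converge to s*_ij = 1 on the corrupted edge and to (numerically) 0 elsewhere.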

Notation
Let D_ij = {d_L : L ∈ N_ij} denote the set of inconsistency levels with respect to ij, G_ij = {L ∈ N_ij : N_L \ {ij} ⊆ E_g} denote the set of "good cycles" with respect to ij, and CI_ij = {L ∈ N_ij : d_L = s*_ij} denote the set of cycles with correct information of corruption with respect to ij.

Message Passing from Edges to Cycles and a Statistical Model
Here we explain the fourth step of the algorithm, which estimates w_ij,L(t) according to (13). We remark that Z_ij(t) is the normalization factor assuring that Σ_{L∈N_ij} w_ij,L(t) = 1. In order to better interpret our procedure, we propose a statistical model. We assume that {s*_ij}_{ij∈E} and {s_ij(t)}_{ij∈E} are both i.i.d. random variables and that for any ij ∈ E, s*_ij is independent of s_kl(t) for kl ≠ ij ∈ E. We further assume that Pr(s*_ab = 0 | s_ab(t) = x) = f(x; β_t). Under this model, w_ij,L(t) is an estimate for the probability that L is good given the estimated corruption levels of the edges in N_L \ {ij}. Note that {w_ij,L(t)}_{L∈N_ij} (normalized so that its sum is 1) is a discrete distribution on the set of cycle inconsistencies, {d_L}_{L∈N_ij}. This distribution aims to emphasize good cycles L with respect to the edge ij. For example, for a good cycle L_1 = {ij, ik_1, jk_1} w.r.t. ij, f(s_ik_1(t); β_t) and f(s_jk_1(t); β_t) are expected to be relatively high (since the edges ik_1 and jk_1 are good); thus the weight w_ij,L_1(t) = f(s_ik_1(t); β_t) · f(s_jk_1(t); β_t) is relatively high. On the other hand, for a cycle L_2 = {ij, ik_2, jk_2} containing a bad edge ik_2, f(s_ik_2(t); β_t) is expected to be relatively low and thus the weight w_ij,L_2(t) is relatively low. At last, an estimate of s_ij(t + 1) is obtained by a weighted average that uses the weights {w_ij,L(t)}_{L∈N_ij}.
Unlike common message passing models, we do not need to specify other probabilities, such as joint densities. In view of these assumptions, (13) can be formally rewritten as

w_ij,L(t) = (1/Z_ij(t)) ∏_{ab∈N_L\{ij}} Pr(s*_ab = 0 | s_ab(t)).   (17)

We note that the choices for f(x; β_t) in (10) lead to the following update rules:

w_ij,L(t) = (1/Z_ij(t)) ∏_{ab∈N_L\{ij}} 1{s_ab(t) ≤ 1/β_t},   (18)

w_ij,L(t) = (1/Z_ij(t)) exp( −β_t Σ_{ab∈N_L\{ij}} s_ab(t) ).   (19)

We refer to CEMP with rules A and B as CEMP-A and CEMP-B, respectively. Given this statistical model, in particular, using the i.i.d. property of s*_ij and s_ij(t), the update rule (17) can be rewritten as

w_ij,L(t) = (1/Z_ij(t)) Pr( s*_ab = 0 ∀ ab ∈ N_L \ {ij} | {s_ab(t)}_{ab∈N_L\{ij}} ).   (20)

Finally, we use the above new interpretation of the weights to demonstrate a natural fixed point of the update rules (14) and (20). Theory of convergence to this fixed point is presented later in Section 5. We first note that (14) implies the following ideal weights for good approximation:

w*_ij,L = (1/|G_ij|) 1{L ∈ G_ij}.   (21)

Indeed,

Σ_{L∈N_ij} w*_ij,L d_L = (1/|G_ij|) Σ_{L∈G_ij} d_L = (1/|G_ij|) Σ_{L∈G_ij} s*_ij = s*_ij,   (22)

where the equality before last uses Proposition 2. We further note that

w*_ij,L = (1/Z*_ij) 1{s*_ab = 0 ∀ ab ∈ N_L \ {ij}}.   (23)

This equation follows from the fact that the events L ∈ G_ij and s*_ab = 0 ∀ ab ∈ N_L \ {ij} coincide, and thus (21) and (23) are equivalent, where the normalization factor Z*_ij equals |G_ij|. Therefore, in view of (14) and (22) as well as (20) and (23), ((s*_ij)_{ij∈E}, (w*_ij,L)_{ij∈E, L∈N_ij}) is a fixed point of the system of the update rules (14) and (20).

Message Passing from Cycles to Edges and Two Additional Assumptions
Here we explain the fifth step of the algorithm, which estimates, at iteration t, s*_ij according to (14). We further assume the good-cycle condition and that G_ij = CI_ij. We remark that the first assumption implies that G_ij ⊆ CI_ij according to Proposition 2, but it does not imply that G_ij ⊇ CI_ij. The first assumption, which we can state as G_ij ≠ ∅, also implies that CI_ij ≠ ∅, or equivalently, s*_ij ∈ D_ij. This observation suggests an estimation procedure for {s*_ij}_{ij∈E}. One may greedily search for s*_ij among all elements of D_ij, but this is a hard combinatorial problem. Instead, (14) relaxes this problem and searches over the convex hull of D_ij, using a weighted average.
We further interpret (14) in view of the above statistical model. Applying the assumption G_ij = CI_ij, we rewrite (20) as

w_ij,L(t) = (1/Z_ij(t)) Pr( d_L = s*_ij | {s_ab(t)}_{ab∈N_L\{ij}} ).   (24)

The update rule (14) can thus be interpreted as an iterative voting procedure, in which each cycle L ∈ N_ij votes for the value d_L with the weight w_ij,L(t), an estimate of the probability that d_L equals s*_ij. If a cycle L contains corrupted edges other than possibly ij, then its inconsistency measure d_L is contaminated by corrupted edges in L and we expect its weight to decrease with the amount of corruption. This is demonstrated in the update rules of (18) and (19), where any corrupted edge ab in a cycle L ∈ N_ij, whose corruption is measured by the size of s_ab(t), would decrease the weight w_ij,L(t).
We can also express (14) in terms of the following probability mass function µ_ij(x; t) on D_ij:

µ_ij(x; t) = Σ_{L∈N_ij} w_ij,L(t) 1{d_L = x},  x ∈ D_ij.

This probability mass function can be regarded as the estimated posterior mass function of s*_ij given the estimated corruption levels (s_ab(t))_{ab∈E\{ij}}. The update rule (14) can then be reformulated as follows:

s_ij(t + 1) = Σ_{x∈D_ij} x µ_ij(x; t),

that is, s_ij(t + 1) is the mean of the estimated posterior distribution µ_ij(·; t).

Summarizing Diagram for the Message Passing Procedure
We further clarify the message passing procedure by the simple diagram in Figure 3. The right hand side (RHS) of the diagram expresses two main distributions. The first one is the probability that an edge ij is uncorrupted and the second one is the probability that a cycle L ∈ N_ij provides the correct corruption information for edge ij. We use the term "message passing" since CEMP iteratively updates these two probability distributions by using each other in turn. The update of the second distribution by the first one is more direct. The opposite update requires the estimation of corruption levels.
Figure 3 (schematic): probabilities of edges ij ∈ E being uncorrupted, (Pr(s*_ij = 0 | s_ij(t)))_{ij∈E}, obtained by (10) and (16) =⇒ probabilities that cycles L ∈ N_ij provide the correct corruption information for edges ij ∈ E, obtained by (24) =⇒ estimation of corruption levels by (14), which feeds back into the first stage.

Refined Statistical Model for the Specific Reweighting Functions
The two choices of f (x; β t ) in (10) correspond to a more refined probabilistic model on s * ij and s ij (t), which can also apply to other choices of reweighting functions. In addition to the above assumptions, this model assumes that the edges in E are independently corrupted with probability q.
We denote by F_g(x; t) and F_b(x; t) the probability distributions of s_ij(t) conditioned on the events s*_ij = 0 and s*_ij ≠ 0, respectively. We further denote by p_g(x; t) and p_b(x; t) the respective probability density functions of F_g(x; t) and F_b(x; t) and define r(x; t) = p_b(x; t)/p_g(x; t). By Bayes' rule and the above assumptions,

Pr(s*_ij = 0 | s_ij(t) = x) = q p_g(x; t) / ( q p_g(x; t) + (1 − q) p_b(x; t) ) = 1 / ( 1 + ((1 − q)/q) r(x; t) ).   (26)

One can note that the update rule A in (18) corresponds to (17) with (26) and

r(x; t) ∝ 1{x > 1/β_t}   (27)

(in the extended sense that r(x; t) is a fixed constant on [0, 1/β_t] and infinite on (1/β_t, 1]). Due to the normalization factor and the fact that each cycle has the same length, the update rule A is invariant to the scale of r(x; t), and we thus used the proportionality symbol. Note that there are infinitely many F_g(x; t) and F_b(x; t) that result in such r(x; t). One simple example is uniform F_g(x; t) and F_b(x; t) on [0, 1/β_t] and [0, 1], respectively. One can also note that the update rule B approximately corresponds to (17) with (26) and

r(x; t) = α e^{β_t x}   (28)

for sufficiently large α and x ∈ [0, 1].
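As a numeric sanity check of this correspondence (our sketch, with hypothetical parameter values): taking F_g uniform on [0, 1/β] and F_b uniform on [0, 1] in (26) yields a posterior that is constant on [0, 1/β] and zero beyond it, so after cycle-wise normalization it coincides with the indicator weights of rule A:

```python
import numpy as np

def posterior(x, beta, q):
    """Bayes posterior (26) with uniform F_g on [0, 1/beta] and F_b on [0, 1].

    q is the prior probability that an edge is uncorrupted (hypothetical)."""
    p_g = beta if x <= 1 / beta else 0.0   # uniform density on [0, 1/beta]
    p_b = 1.0                              # uniform density on [0, 1]
    return q * p_g / (q * p_g + (1 - q) * p_b)

beta, q = 5.0, 0.7
f = np.array([posterior(x, beta, q) for x in (0.05, 0.15, 0.6)])
assert f[0] == f[1] > 0 and f[2] == 0.0    # constant on [0, 1/beta], zero beyond
# normalized over cycles of equal length, these match rule A's indicator weights
w = f / f.sum()
assert np.allclose(w, [0.5, 0.5, 0.0])
```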
Indeed, by plugging (28) in (26), we obtain that, for α' = α(1 − q)/q,

Pr(s*_ij = 0 | s_ij(t) = x) = 1 / (1 + α' e^{β_t x}) ≈ (1/α') e^{−β_t x}.   (29)

Since the update rule B is invariant to scale (for the same reason explained above for the update rule A), α can be chosen arbitrarily large to yield a good approximation in (29). One may obtain (28) by choosing F_g(x; t) and F_b(x; t) as exponential distributions restricted to [0, 1], or normal distributions restricted to [0, 1] with the same variance but different means. As explained later in Section 5, β_t needs to approach infinity in the noiseless case. We note that this implies that r(x; t) (in either (27) or (28)) is infinite at x ∈ (0, 1] and finite at x = 0. Therefore, in this case, F_g(x; t) → δ_0. This makes sense since s*_ij = 0 when ij ∈ E_g.

Remark 3
Neither rule A nor rule B makes explicit assumptions on the distributions of s_ij(t) and s*_ij, and thus there are infinitely many valid choices of F_g(x; t) and F_b(x; t), which makes the framework flexible.

The Practical Advantage of the Exponential Reweighting Function
In principle, one may choose any nonincreasing reweighting function f(x; β), with values in [0, 1], that concentrates at x = 0 as β → ∞. In practice, we advocate using f(x; β) = exp(−βx) of CEMP-B due to its nice property of shift invariance, which we formulate next and prove in Appendix A.4.

Proposition 4
Assume that C consists of cycles with equal length l. For any fixed ij ∈ E and s ∈ R, the estimated corruption levels {s_ij(t)}_{ij∈E} and {s_ij(t) + s}_{ij∈E} result in the same cycle weights {w_ij,L(t)}_{L∈N_ij} in CEMP-B.

We demonstrate the advantage of the above shift invariance property with a simple example. Assume that an edge ij is only contained in two cycles L_1 = {ij, ik_1, jk_1} and L_2 = {ij, ik_2, jk_2}. Using the notation s_ij,L_1(t) := s_ik_1(t) + s_jk_1(t) and s_ij,L_2(t) := s_ik_2(t) + s_jk_2(t), we obtain that

w_ij,L_1(t) = e^{−β_t s_ij,L_1(t)} / ( e^{−β_t s_ij,L_1(t)} + e^{−β_t s_ij,L_2(t)} ) = 1 / ( 1 + e^{−β_t (s_ij,L_2(t) − s_ij,L_1(t))} ),

and since w_ij,L_1(t) + w_ij,L_2(t) = 1, w_ij,L_1(t) and w_ij,L_2(t) are determined by s_ij,L_1(t) − s_ij,L_2(t). Therefore, the choice of β_t for CEMP-B only depends on the "corruption variation" for edge ij, s_ij,L_1(t) − s_ij,L_2(t). It is completely independent of the average scale of the corruption levels, which is proportional in this case to s_ij,L_1(t) + s_ij,L_2(t). On the contrary, CEMP-A heavily depends on the average scale of the corruption levels. Indeed, the corresponding expression in CEMP-A is

w_ij,L_1(t) = 1{s_ik_1(t) ≤ 1/β_t} 1{s_jk_1(t) ≤ 1/β_t} / Z_ij(t).

The choice of β_t depends on both values of s_ij,L_1(t) and s_ij,L_2(t) and not on any meaningful variation. One can see that in more general cases, the correct choice of β_t for CEMP-A can be rather restrictive and will depend on different local corruption levels of edges.
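The shift invariance of CEMP-B's weights is easy to verify numerically (a sketch with hypothetical values; shifting every edge estimate by s raises each cycle sum by (l − 1)s, which cancels in the normalization):

```python
import numpy as np

def cemp_b_weights(cycle_sums, beta):
    """CEMP-B cycle weights for one edge, given each cycle's summed corruption
    estimates over its remaining edges (the helper name is ours)."""
    w = np.exp(-beta * np.asarray(cycle_sums))
    return w / w.sum()

s_sums = np.array([0.1, 0.4, 0.9])   # hypothetical sums s_{ij,L}(t)
# shifting every edge estimate by 0.3 raises each triangle sum by 2 * 0.3
w1 = cemp_b_weights(s_sums, beta=4.0)
w2 = cemp_b_weights(s_sums + 0.6, beta=4.0)
assert np.allclose(w1, w2)           # the weights are shift invariant
```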

On the Computational Complexity of CEMP
We note that for each L ∈ C, CEMP needs to compute d_L, and thus the complexity at each iteration is of order O(Σ_{L∈C} |L|). In the case of C = C_3, which we advocate later, this complexity is of order O(|C_3|) and thus O(n³) for sufficiently dense graphs. In practice, one can implement a faster version of CEMP by only selecting a fixed number of 3-cycles per edge, which reduces the complexity per iteration to O(|E|), that is, O(n²) for sufficiently dense graphs; however, we have not discussed the full guarantees for this procedure. In order to obtain the overall complexity, and not the complexity per iteration, one needs to guarantee sufficiently fast convergence. Our later theoretical statements guarantee linear convergence of CEMP under various conditions and consequently guarantee that the overall complexity is practically of the same order as the complexity per iteration. We note that the upper bound O(n³) for the complexity of CEMP with C = C_3 is lower than the complexity of SDP for common group synchronization problems. We thus refer to our method as fast. In general, for CEMP with C = C_l, the complexity is O(l n^l). We exemplify two different scenarios where the complexity of CEMP can be lower than the bounds stated above.
An example of lower complexity due to graph sparsity: We assume the special case where the underlying graph G([n], E) is generated by the Erdős–Rényi model G(n, p) and the group is G = SO(d). We estimate the complexity of CEMP with C = C_3. Note that the number of edges concentrates at order n²p. Each edge is contained in about np² 3-cycles. Thus, the number of d_L's concentrates at order n³p³. The computational complexity of each d_L is d³. Therefore, the computational complexity of initializing CEMP is about n³p³d³. In each iteration of the reweighting stage, one only needs to compute w_ij,L for each 3-cycle and average over ij ∈ E, and thus the complexity is n³p³. We assume, e.g., the noiseless adversarial case, where the convergence is linear. Thus, the total complexity of CEMP is O((npd)³). The complexity of Spectral is of order O(n²d³), so the complexity of CEMP is lower than that of Spectral when p = O(n^{−1/3}). Observe that the upper bound p = n^{−1/3} is higher than the phase transition threshold for the existence of 3-cycles (which is p = n^{−1/2}); thus, this fast regime of CEMP (with C = C_3) is nontrivial.

An example of low complexity with high-order cycles: In [40], a lower complexity of CEMP with C = C_l, l > 3, is obtained for the following special case: d_G(g_1, g_2) is proportional to ‖g_1 − g_2‖_F, where ‖·‖_F denotes the Frobenius norm and we associate g_1 and g_2 with their matrix representations. In this case, it is possible to compute the weights {w_ij}_{ij∈E} by calculating powers of the graph connection weight matrix [44]. Consequently, the complexity of CEMP is reduced to O(ln³). It seems that this example is rather special and we find it difficult to generalize its ideas.

Post-processing: Estimation of the Group Elements
After running CEMP for T iterations, one obtains the estimated corruption levels s_ij(T), ij ∈ E. As a byproduct of CEMP, one also obtains p_ij(T) := f(s_ij(T); β_T), which we interpreted in (16) as the estimated probability that ij ∈ E_g given the value of s_ij(T). Alternatively, one may normalize p_ij(T) as follows: p̃_ij(T) = p_ij(T) / Σ_{k∈[n]: ik∈E} p_ik(T) for ij ∈ E, so that Σ_{j∈[n]: ij∈E} p̃_ij(T) = 1. Using either of these values (s_ij(T), p_ij(T) or p̃_ij(T) for all ij ∈ E), we describe different possible strategies for estimating the underlying group elements {g*_i}_{i=1}^n in more general settings than that of our proposed theory. As we explain below, we find our second proposed method (CEMP+GCW) the most appropriate one in the context of the current paper. Nevertheless, there are settings where other methods will be preferable.

Application of the minimum spanning tree (CEMP+MST): One can assign the weight s_ij(T) to each edge ij ∈ E and find the minimum spanning tree (MST) of the weighted graph. The resulting spanning tree minimizes the average of the estimated corruption levels. Next, one can fix g_1 = e_G and estimate the rest of the group elements by subsequently multiplying group ratios (using the formula g_i = g_ij g_j) along the spanning tree. We refer to this procedure as CEMP+MST. Alternatively, one can assign edge weights {p_ij(T)}_{ij∈E} and find the maximum spanning tree, which aims to maximize the expected number of good edges. These methods can work well when there is a connected inlier graph with little noise, but will generally not perform well in noisy situations. Indeed, when the good edges are noisy, estimation errors will rapidly accumulate with the subsequent applications of the formula g_i = g_ij g_j and the final estimates are expected to be erroneous.

A CEMP-weighted spectral method (CEMP+GCW): Using {p̃_ij(T)}_{ij∈E} obtained by CEMP, one may try to approximately solve the following weighted least squares problem:

min_{g_1,...,g_n ∈ G} Σ_{ij∈E} p̃_ij(T) d_G²(g_ij, g_i g_j^{-1}),   (31)

and use this solution as an estimate of {g*_i}_{i=1}^n.
Note that since G is typically not convex, the solution of this problem is often hard. When G is a subgroup of the orthogonal group O(N), an argument of [4] for the same optimization problem with G = SE(3) suggests the following relaxed spectral solution to (31). First, build a matrix Y_p whose [i, j]-th block is p̃_ij g_ij for ij ∈ E, and 0 otherwise. Next, compute the top N eigenvectors of Y_p to form the block vector x̂, and finally project the i-th block of x̂ onto G to obtain the estimate of g*_i for i ∈ [n]. Note that Y_p is exactly the graph connection weight (GCW) matrix in vector diffusion maps [44], given the edge weights p̃_ij. Thus, we refer to this method as CEMP+GCW. We note that the performance of CEMP+GCW is mainly determined by the accuracy of estimating the corruption levels. Indeed, if the corruption levels {s_ij(T)}_{ij∈E} are sufficiently accurate, then the weights {p̃_ij}_{ij∈E} are sufficiently accurate and (31) is close to a direct least squares solver for the inlier graph. Since the focus of this paper is accurate estimation of {s*_ij}_{ij∈E}, we mainly test CEMP+GCW as a direct CEMP-based group synchronization solver (see Section 7).
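A minimal sketch of this spectral step for G = SO(d) ⊆ O(d) (function and argument names are ours; the projection onto the orthogonal group uses the SVD-based nearest orthogonal matrix):

```python
import numpy as np

def gcw_spectral(n, d, p, g):
    """Sketch of the CEMP+GCW spectral relaxation of (31) for G contained in O(d).

    p: dict edge (i, j) -> weight; g: dict edge (i, j) -> d x d group ratio.
    Builds the GCW matrix with blocks p_ij * g_ij, takes its top d
    eigenvectors, and projects each block onto O(d)."""
    Y = np.zeros((n * d, n * d))
    for (i, j), w in p.items():
        B = w * g[(i, j)]
        Y[i*d:(i+1)*d, j*d:(j+1)*d] = B
        Y[j*d:(j+1)*d, i*d:(i+1)*d] = B.T
    vals, vecs = np.linalg.eigh(Y)       # ascending eigenvalues
    X = vecs[:, -d:]                     # top d eigenvectors
    estimates = []
    for i in range(n):
        U, _, Vt = np.linalg.svd(X[i*d:(i+1)*d, :])
        estimates.append(U @ Vt)         # nearest orthogonal matrix
    return estimates
```

With exact SO(2) ratios and unit weights, the recovered elements reproduce the input ratios, est[i] @ est[j].T ≈ g_ij; the arbitrary global right factor in the eigenvector basis cancels in the ratios, reflecting the equivalence classes of Proposition 3.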

Iterative application of CEMP and weighted least squares (MPLS):
In highly corrupted and noisy datasets, iterative application of CEMP and the weighted least squares solver in (31) may result in a satisfying solution. After the submission of this paper, the authors proposed a special procedure of this kind, which they called Message Passing Least Squares (MPLS) [39].

Combining CEMP with any other solver: CEMP can be used as an effective cleaning procedure for removing some bad edges (those with estimated corruption levels above a chosen threshold). One can then apply any group synchronization solver to the cleaned graph. Indeed, existing solvers often cannot deal with high and moderate levels of corruption and should benefit from an initial application of CEMP. Such a strategy was tested with the AAB algorithm [38], which motivated the development of CEMP.

Comparison of CEMP with BP, AMP and IRLS
CEMP is different from BP [46] in the following ways. First of all, unlike BP, which needs to explicitly define the joint density and the statistical model a priori, CEMP does not use an explicit objective function, but only makes weak assumptions on the corruption model. Second, CEMP is guaranteed (under a certain level of corruption) to handle factor graphs that contain loops. Third, CEMP utilizes the auxiliary variable s_ij(t) that connects the two binary distributions on the RHS of the diagram in Figure 3. Thus, unlike (7) of BP, which only distinguishes the two events ij ∈ E_g and ij ∈ E_b, CEMP also tries to approximate the exact value of the corruption level s*_ij for all ij ∈ E, which can help in inferring corrupted edges. In practice, AMP [36] directly solves for the group elements, but with limited theoretical guarantees for group synchronization. CEMP has two main advantages over AMP when assuming the theoretical setting of this paper. First of all, AMP for group synchronization [36] assumes additive Gaussian noise without additional corruption and thus it is not robust to outliers. In contrast, we guarantee the robustness of CEMP to both adversarial and uniform corruption. We further establish the stability of CEMP to sufficiently small bounded and sub-Gaussian noise. Second, the heuristic argument for deriving AMP for group synchronization (see Section 6 of [36]) provides asymptotic convergence theory, whereas CEMP has convergence guarantees under certain deterministic conditions for finite samples with an attractive convergence rate.
Another related line of work is IRLS, which is commonly used to solve ℓ_1 minimization problems. At each iteration, it utilizes the residual of a weighted least squares solution to quantify the corruption level at each edge. New weights, which are typically inversely proportional to this residual, are assigned for an updated weighted least squares problem, and the process continues till convergence. The IRLS reweighting strategy is rather aggressive, and in the case of high corruption levels, it may wrongly assign extremely high weights to corrupted edges and consequently it can get stuck at local minima. When the group is discrete, some residuals of corrupted edges can be 0 and the corresponding weights can be extremely large. Furthermore, in this case the residuals and the edge weights lie in a discrete space and therefore IRLS can easily get stuck at local minima. For general groups, the ℓ_1 formulation that IRLS aims to solve is statistically optimal for a very special heavy-tailed distribution, and is not optimal, for example, for the corruption model proposed in [45]. Instead of assigning weights to edges, CEMP assigns weights to cycles and uses the weighted cycles to infer the corruption levels of edges. It starts with a conservative reweighting strategy, with β_t small, and gradually makes it more aggressive by increasing β_t. This reweighting strategy is crucial for guaranteeing the convergence of CEMP. CEMP is also advantageous when the groups are discrete, because it estimates conditional expectations whose values lie in a continuous space. This makes CEMP less likely to get stuck at local minima.

Theory for Adversarial Corruption
We show that when the ratio between the size of G ij (defined in Section 4.2.1) and the size of N ij (defined in Section 4.1) is uniformly above a certain threshold and {β t } T t=0 is increasing and chosen in a certain way, then for all ij ∈ E, the estimated corruption level s ij (t) linearly converges to s * ij , and the convergence is uniform over all ij ∈ E. The theory is similar for both update rules A and B. Note that the uniform lower bound on the above ratio is a geometric restriction on the set E b . This is the only restriction we consider in this section; indeed, we follow the adversarial setting, where the group ratios g ij for ij ∈ E b can be arbitrarily chosen, either deterministically or randomly. We mentioned in Section 1.5 that the only other guarantees for such adversarial corruption but for a different problem are in [20,28] and that we found them weaker.
The rest of the section is organized as follows. Section 5.1 presents preliminary notation and background. Section 5.2 establishes the linear convergence of CEMP to the ground truth corruption level under adversarial corruption. Section 5.3 establishes the stability of CEMP to bounded noise, and Section 5.4 extends these results to sub-Gaussian noise.

Preliminaries
For clarity of our presentation, we assume that C = C_3 and thus simplify some of the above notation and claims. Note that L ∈ C_3 contains 3 edges and 3 vertices. Therefore, given i, j ∈ [n] and L ∈ N_ij, we index L by the vertex k, which is not i or j. We thus replace the notation d_L with d_ij,k. We also note that the sets N_ij and G_ij can be expressed as follows:

N_ij = {k ∈ [n] : ik, jk ∈ E} and G_ij = {k ∈ N_ij : ik, jk ∈ E_g}.

If A denotes the adjacency matrix of G([n], E) (with A(i, j) = 1 if ij ∈ E and 0 otherwise) and A_g denotes the adjacency matrix of G([n], E_g), then by the definitions of matrix multiplication, N_ij and G_ij,

|N_ij| = A²(i, j) and |G_ij| = A_g²(i, j).

An upper bound for the parameter λ quantifies our adversarial corruption model. Let us clarify more carefully the "adversarial corruption" model and the parameter λ, while repeating some previous information. This model assumes a graph G([n], E) whose nodes represent group elements and whose edges are assigned group ratios satisfying (1). If s*_ij = 0 for all ij ∈ E_g (that is, there is no noise in (1)), we refer to this model as noiseless, and otherwise, we refer to it as noisy. For the noisy case, we will specify assumptions on the distribution of the noise in g_ij for ij ∈ E_g, or equivalently (since d_G is bi-invariant) the distribution of s*_ij for all ij ∈ E_g. In view of the above observations, we note that the parameter λ, whose upper bound quantifies some properties of this model, can be directly expressed using the adjacency matrices A and A_g as follows:

λ = 1 − min_{ij∈E} A_g²(i, j)/A²(i, j).

Thus an upper bound m on λ is the same as a lower bound 1 − m on min_{ij∈E} A_g²(i, j)/A²(i, j). This lower bound is equivalent to a lower bound on the ratio between the size of G_ij and the size of N_ij. We note that this bound implies basic properties mentioned earlier. First of all, it implies that G_ij is nonempty for all ij ∈ E, and it thus implies that the good-cycle condition holds. This in turn implies, in view of Proposition 2, that exact recovery of {s*_ij}_{ij∈E} is well-posed.

Our proofs frequently use Lemma 1, which can be stated in our special case of C = C_3 as

|d_ij,k − s*_ij| ≤ s*_ik + s*_jk for all ij ∈ E and k ∈ N_ij.   (34)

We recall that d_G(·, ·) ≤ 1 and thus

|d_ij,k − s*_ij| ≤ 1.   (35)

Since C = C_3, the update rule (15) can be further simplified as follows. For CEMP-A,

s_ij(t+1) = (1/Z_ij(t)) Σ_{k∈N_ij} 1{s_ik(t) ≤ 1/β_t} 1{s_jk(t) ≤ 1/β_t} d_ij,k,   (36)

and for CEMP-B,

s_ij(t+1) = Σ_{k∈N_ij} e^{−β_t (s_ik(t)+s_jk(t))} d_ij,k / Σ_{k∈N_ij} e^{−β_t (s_ik(t)+s_jk(t))}.   (37)
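The identity |N_ij| = A²(i, j) is straightforward to check numerically on a small hypothetical graph:

```python
import numpy as np

# adjacency matrix of a small hypothetical graph on 4 nodes
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]])
A2 = A @ A
edges = [(i, j) for i in range(4) for j in range(i + 1, 4) if A[i, j]]
for i, j in edges:
    # |N_ij| = number of k adjacent to both i and j = (A^2)(i, j)
    assert sum(A[i, k] and A[j, k] for k in range(4)) == A2[i, j]
```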
The initial corruption estimate at ij ∈ E in (12) for both versions of CEMP is

s_ij(0) = (1/|N_ij|) Σ_{k∈N_ij} d_ij,k.   (38)

Deterministic Exact Recovery
The following two theorems establish linear convergence of CEMP-A and CEMP-B, assuming adversarial corruption and exponentially increasing β t . The proofs are straightforward.
Theorem 1 Assume data generated by the noiseless adversarial corruption model with parameter λ < 1/4. Assume further that the parameters {β_t}_{t≥0} of CEMP-A satisfy: 1 < β_0 ≤ 1/λ and for all t ≥ 1, β_{t+1} = rβ_t for some 1 < r < 1/(4λ). Then the estimates {s_ij(t)}_{ij∈E} computed by CEMP-A satisfy

ε(t) := max_{ij∈E} |s_ij(t) − s*_ij| ≤ 1/β_t for all t ≥ 0,   (39)

and thus s_ij(t) converges to s*_ij linearly and uniformly over ij ∈ E.

Proof The proof uses the following estimate, which applies first (36) and then (34):

|s_ij(t+1) − s*_ij| ≤ (1/Z_ij(t)) Σ_{k∈N_ij} 1{s_ik(t) ≤ 1/β_t} 1{s_jk(t) ≤ 1/β_t} |d_ij,k − s*_ij| ≤ (1/Z_ij(t)) Σ_{k∈N_ij} 1{s_ik(t) ≤ 1/β_t} 1{s_jk(t) ≤ 1/β_t} (s*_ik + s*_jk).   (40)

Using the notation A_ij(t) := {k ∈ N_ij : s_ik(t), s_jk(t) ≤ 1/β_t} and the fact that s*_ik + s*_jk = 0 for k ∈ G_ij, we can rewrite the estimate in (40) as follows:

|s_ij(t+1) − s*_ij| ≤ (1/|A_ij(t)|) Σ_{k∈A_ij(t)\G_ij} (s*_ik + s*_jk).   (41)

The rest of the proof uses simple induction. For t = 0, (39) is verified as follows:

|s_ij(0) − s*_ij| ≤ (1/|N_ij|) Σ_{k∈N_ij} |d_ij,k − s*_ij| = (1/|N_ij|) Σ_{k∈N_ij\G_ij} |d_ij,k − s*_ij| ≤ |N_ij \ G_ij|/|N_ij| ≤ λ ≤ 1/β_0,   (42)

where the first inequality uses (38), the second equality follows from the fact that d_ij,k = s*_ij for k ∈ G_ij, the second inequality follows from (35) (which implies that |d_ij,k − s*_ij| ≤ 1) and the last two inequalities use the assumptions of the theorem. Next, we assume that 1/β_t ≥ ε(t) for an arbitrary t > 0 and show that 1/β_{t+1} ≥ ε(t+1). We note that the induction assumption implies that, for k ∈ G_ij,

s_ik(t), s_jk(t) ≤ ε(t) ≤ 1/β_t,   (43)

and consequently, for all ij ∈ E,

G_ij ⊆ A_ij(t).   (44)

We further note that, for k ∈ A_ij(t),

s*_ik + s*_jk ≤ s_ik(t) + s_jk(t) + 2ε(t) ≤ 4/β_t.   (45)

Combining (41) and (45) and then applying basic properties of the different sets, in particular, (44) and the fact that A_ij(t) \ G_ij ⊆ N_ij \ G_ij, we obtain that

|s_ij(t+1) − s*_ij| ≤ (4/β_t) |N_ij \ G_ij|/|G_ij|.   (46)

By taking the maximum of the left hand side (LHS) and RHS of (46) over ij ∈ E and using the assumptions λ < 1/4 and 4λβ_{t+1} < β_t, we obtain that ε(t+1) ≤ 1/β_{t+1}, which concludes the induction.

Theorem 2 Assume data generated by the noiseless adversarial corruption model with parameter λ < 1/5. Assume further that the parameters {β_t}_{t≥0} of CEMP-B satisfy: β_0 ≤ 1/(4λ) and for all t ≥ 0, β_{t+1} = rβ_t for some 1 < r ≤ 1/(5λ). Then the estimates {s_ij(t)}_{ij∈E} computed by CEMP-B satisfy ε(t) ≤ 1/(4β_t) for all t ≥ 0, and thus s_ij(t) converges to s*_ij linearly and uniformly over ij ∈ E.

Proof Combining (34) and (37) yields

|s_ij(t+1) − s*_ij| ≤ Σ_{k∈N_ij} e^{−β_t (s_ik(t)+s_jk(t))} (s*_ik + s*_jk) / Σ_{k∈N_ij} e^{−β_t (s_ik(t)+s_jk(t))}.   (47)
Applying (47), the definition of ε_ij(t) := |s_ij(t) − s*_ij|, and the facts that G_ij ⊆ N_ij and s*_ik + s*_jk = 0 for k ∈ G_ij, we obtain that

ε_ij(t+1) ≤ e^{4β_t ε(t)} Σ_{k∈N_ij\G_ij} (s*_ik + s*_jk) e^{−β_t (s*_ik + s*_jk)} / |G_ij|.   (48)

The proof follows by induction. For t = 0, (42) implies that λ ≥ ε(0) and thus 1/(4β_0) ≥ λ ≥ ε(0). Next, we assume that 1/(4β_t) ≥ ε(t) and show that 1/(4β_{t+1}) ≥ ε(t+1). We do this by simplifying and weakening (48) as follows. We first bound each term in the sum on the RHS of (48) by applying the inequality xe^{−ax} ≤ 1/(ea) for x ≥ 0 and a > 0. We let x = s*_ik + s*_jk and a = β_t, and thus each term is bounded by 1/(eβ_t). We then use the induction assumption (ε(t) ≤ 1/(4β_t)) to bound the exponential term in the numerator on the RHS of (48) by e. We therefore conclude that

ε_ij(t+1) ≤ e · (1/(eβ_t)) · |N_ij \ G_ij|/|G_ij| = |N_ij \ G_ij|/(β_t |G_ij|).   (49)

By applying the assumption λ < 1/5 and maximizing over ij ∈ E both the LHS and RHS of (49), we conclude the desired induction as follows:

ε(t+1) ≤ λ/((1−λ)β_t) ≤ 1/(4β_{t+1}).

Stability to Bounded Noise
We assume the noisy adversarial corruption model in (1) and an upper bound on λ. We further assume that there exists δ > 0, such that for all ij ∈ E g , s * ij ≡ d G (g ij , e G ) ≤ δ. This is a general setting of perturbation without probabilistic assumptions. Under these assumptions, we show that CEMP can approximately recover the underlying corruption levels, up to an error of order δ. The proofs of the two theorems below are similar to the proofs of the theorems in Section 5.2 and are thus included in Appendices A.5 and A.6.

Remark 5
The RHSs of (52) and (54) imply that CEMP approximately recovers the corruption levels with error O(δ). Since this bound is only meaningful when its value is at most 1, δ can be at most 1/2 (this bound is obtained when ε = 1 and λ = 0). Furthermore, when λ increases or ε decreases, the bound on δ decreases. The bound on δ limits the applicability of the theorem, especially for discrete groups. For example, in Z_2 synchronization, s*_ij ∈ {0, 1} and thus the above theorem is inapplicable. For S_N synchronization, the gap between nearby values of s*_ij decreases with N, so the theorem is less restrictive as N increases. In order to address noisy situations for Z_2 and S_N with small N, one can assume instead an additive Gaussian noise model [31,35]. When the noise is sufficiently small and the graph is generated from the Erdős–Rényi model with sufficiently large probability of connection, projection of the noisy group ratios onto Z_2 or S_N results in a subset of uncorrupted group ratios whose proportion is sufficiently large (see e.g. [35]), so that Theorem 1 or 2 can be applied to the projected elements.

Extension to Sub-Gaussian Noise
Here we directly extend the bounded noise stability of CEMP to sub-Gaussian noise. We assume the noisy adversarial corruption model satisfying (1). We further assume that {s*_ij}_{ij∈E_g} are independent and that for ij ∈ E_g, s*_ij ∼ sub(µ, σ²), namely, s*_ij is sub-Gaussian with mean µ and variance σ². More precisely, s*_ij = σX_ij, where Pr(X_ij − µ > x) < exp(−x²/2) and Pr(X_ij ≥ 0) = 1. The proof of Theorem 5 is included in Appendix A.7.
Theorem 5 Assume data generated by the adversarial corruption model with independent sub-Gaussian noise having mean µ and variance σ². For any x > 0, if one replaces λ and δ in Theorems 3 and 4 with λ + 2e^{−x²/2} and σµ + σx, respectively, then the conclusions of these theorems hold with probability at least the bound stated in the theorem, which increases with min_{ij∈E} |N_ij| and decreases with λ.

Remark 6
The above probability is sufficiently large when x is sufficiently small and when min_{ij∈E} |N_ij| is sufficiently large. We note that min_{ij∈E} |N_ij| > min_{ij∈E} |G_ij| > 0, where the last inequality follows from the good-cycle condition. We expect min_{ij∈E} |N_ij| to depend on the size of the graph, n, and its density. To demonstrate this claim, we note that if G([n], E) is Erdős-Rényi with probability of connection p, then min_{ij∈E} |N_ij| ≈ np².
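The claim min_{ij∈E} |N_ij| ≈ np² is easy to observe in a quick simulation (a hedged sketch; the helper name and the specific parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def min_common_neighbors(n, p):
    """Sample an Erdos-Renyi graph G(n, p) and return the minimum, over
    edges ij, of |N_ij| = number of common neighbors of i and j (i.e.,
    the number of 3-cycles through ij)."""
    A = rng.random((n, n)) < p
    A = np.triu(A, 1)
    A = (A | A.T).astype(int)        # symmetric 0/1 adjacency, no self-loops
    C = A @ A                        # C[i, j] = # common neighbors of i and j
    iu, ju = np.nonzero(np.triu(A, 1))
    return min(C[i, j] for i, j in zip(iu, ju))

# For n = 300 and p = 0.5, n p^2 = 75; the minimum over edges
# concentrates somewhat below this value.
m = min_common_neighbors(300, 0.5)
```

Each |N_ij| is roughly Binomial(n − 2, p²), so its mean is about np² and the minimum over edges is only slightly smaller.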
Theorem 5 tolerates less corruption than Theorems 3 and 4. This is due to the fact that, unlike bounded noise, sub-Gaussian noise can significantly distort the group ratios. Nevertheless, we show next that in the case of a graph generated by the Erdős-Rényi model, the sub-Gaussian model may still tolerate a similar level of corruption as that in Theorems 3 and 4 by sacrificing the tolerance to noise.

Corollary 1 Assume that G([n], E)
is generated by the Erdős-Rényi model with probability of connection p. If s*_ij ∼ sub(µ, σ²) for ij ∈ E_g, then for any α > 6 and n sufficiently large, Theorems 3 and 4, with λ and δ replaced by n-dependent quantities λ_n and δ_n, respectively, hold with probability at least 1 − O(n^{−α/3+2}).
Note that this corollary is obtained by setting exp(−x²/2) = 6α log(np²)/((1 − λ)np²) in Theorem 5 and noting that in this case min_{ij∈E} |N_ij| ≥ np²/2 with high probability. We note that σ needs to decay with n in order to have a bounded δ_n. In particular, if σ ≲ 1/√(log n) and p is fixed, then δ_n = O(1).

Exact Recovery Under Uniform Corruption
This section establishes exact recovery guarantees for CEMP under the uniform corruption model. Its main challenge is dealing with large values of λ, unlike the strong restriction on λ in Theorems 1 and 2. Section 6.1 describes the uniform corruption model. Section 6.2 reviews exact recovery guarantees of other works under this model and the best information-theoretic asymptotic guarantees possible. Section 6.3 states the main results on the convergence of both CEMP-A and CEMP-B. Section 6.4 clarifies the sample complexity bounds implied by these theorems. Since these bounds are not sharp, Section 6.5 explains how a simpler estimator that uses the cycle inconsistencies obtains sharper bounds. Section 6.6 includes the proofs of all theorems. Section 6.7 exemplifies the technical quantities of the main theorems for specific groups of interest.

Description of the Uniform Corruption Model
We follow the uniform corruption model (UCM) of [45] and apply it for any compact group. It has three parameters: n ∈ N, 0 < p ≤ 1 and 0 ≤ q < 1, and we thus refer to it as UCM(n, p, q).
UCM(n, p, q) assumes a graph G([n], E) generated by the Erdős-Rényi model G(n, p), where p is the probability of connection between each pair of nodes. It further assumes an arbitrary set of group elements {g*_i}_{i=1}^n. Each group ratio is generated by the following model, where g̃_ij is independently drawn from the Haar measure on G (denoted by Haar(G)): g_ij = g*_i g*_j^{−1} with probability 1 − q, and g_ij = g̃_ij with probability q, independently over ij ∈ E. We note that the set of corrupted edges E_b is thus generated in two steps. First, a set of candidates of corrupted edges, which we denote by Ẽ_b, is independently drawn from E with probability q. Next, E_b is independently drawn from Ẽ_b with probability 1 − p_0, where p_0 = Pr(u_G = e_G) for an arbitrarily chosen u_G ∼ Haar(G). It follows from the invariance property of the Haar measure that for any ij ∈ E, p_0 = Pr(u_G = g*_ij). Therefore, the probability that g_ij is uncorrupted, Pr(ij ∈ E_g | ij ∈ E), is q_* = 1 − q + qp_0. We further denote q_min = min(q_*², 1 − q_*²), q_g = 1 − q and z_G = E(d_G(u_G, e_G)), where u_G ∼ Haar(G). For Lie groups, such as SO(d), p_0 = 0, q_* = q_g, Pr(ij ∈ E_b | ij ∈ E) = q and E_b = Ẽ_b.
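For concreteness, UCM(n, p, q) can be sampled as follows for G = SO(2), where Haar(G) is the uniform distribution on angles (a sketch; the function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def ucm_so2(n, p, q):
    """Sample UCM(n, p, q) for G = SO(2), representing group elements by
    angles. Returns ground-truth angles, the edge list of G(n, p), and the
    observed ratios: clean g_i^* g_j^{*-1} w.p. 1-q, Haar otherwise."""
    theta = rng.uniform(-np.pi, np.pi, n)            # arbitrary {g_i^*}
    iu, ju = np.triu_indices(n, 1)
    keep = rng.random(iu.size) < p                   # Erdos-Renyi edges
    edges = list(zip(iu[keep], ju[keep]))
    ratios = {}
    for (i, j) in edges:
        if rng.random() < q:                         # corrupted edge
            ratios[(i, j)] = rng.uniform(-np.pi, np.pi)
        else:                                        # clean ratio, wrapped
            ratios[(i, j)] = (theta[i] - theta[j] + np.pi) % (2*np.pi) - np.pi
    return theta, edges, ratios

theta, edges, ratios = ucm_so2(50, 0.5, 0.3)
```

Since SO(2) is a Lie group, p_0 = 0 here, so the candidate set and the corrupted set coincide (E_b = Ẽ_b).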

Information-theoretic and Previous Results of Exact Recovery for UCM
We note that when 0 ≤ q < 1 and 0 < p ≤ 1, the asymptotic recovery problem for UCM is well-posed since Pr(d ij,k = 0|ij ∈ E g ) is greater than Pr(d ij,k = 0|ij ∈ E b ) and thus good and bad edges are distinguishable. Furthermore, when q = 1 or p = 0 the exact recovery problem is clearly ill-posed. It is thus desirable to consider the full range of parameters, 0 ≤ q < 1 and 0 < p ≤ 1, when studying the asymptotic exact recovery problem of a specific algorithm assuming UCM. It is also interesting to check the asymptotic dependence of the sample complexity (the smallest sample size needed for exact recovery) on q and p when p → 0 and q → 1.
For the special groups of interest in applications, Z_2, S_N, SO(2) and SO(3), it was shown in [2,5], [10,11], [41] and [11], respectively, that exact recovery is information-theoretically possible under UCM whenever (55) holds, where, for simplicity, for S_N we omitted the dependence on N (which is a factor of 1/N). That is, ignoring logarithmic terms (and the dependence on N for S_N), the sample complexity is Ω(p^{−1} q_g^{−2}). There are not many results of this kind for actual algorithms. Bandeira [5] and Cucuringu [12] showed that SDP and Spectral, respectively, achieve the information-theoretic bound in (55) for Z_2 synchronization. Chen and Candès [9] established a similar result for Spectral and the projected power method when G = Z_N. Another similar result was established by [10] for a variant of SDP when G = S_N. After the submission of this work, [29] extended the latter result to Spectral.
When G is a Lie group, methods that relax (5) with ν = 2, such as Spectral and SDP, cannot exactly recover the group elements under UCM. Wang and Singer [45] showed that the global minimizer of the SDP relaxation of (5) with ν = 1 and G = SO(d) achieves asymptotic exact recovery under UCM when q ≡ Pr(ij ∈ E_b | ij ∈ E) < p_c, where p_c depends on d (e.g., p_c ≤ 0.54 and p_c = O(d^{−1})). Due to their limited range of q, they cannot estimate the sample complexity when q → 1. As far as we know, [45] is the only previous work that provides exact recovery guarantees (under UCM) for synchronization on Lie groups.

Main Results
Section 6.3.1 establishes exact recovery guarantees under UCM, which are most meaningful when q * is sufficiently small. Section 6.3.2 sharpens the above theory by considering the complementary region of q * (q * ≥ q c for some q c > 0). The proofs of all theorems are in Section 6.6.

Main Results when q * is Sufficiently Small
The two exact recovery theorems below use different quantities: z_G, P_max(x) and V(x). We define these quantities before each theorem and later exemplify them for common groups in Section 6.7. For simplicity of their already complicated proofs, we use concentration inequalities that are sharper when q_* is sufficiently small. Therefore, the resulting estimates for the simpler case where q_* is large are not satisfactory and are corrected in the next section.
The condition of the first theorem uses the cdf (cumulative distribution function) of the random variable max{s*_ik, s*_jk}, where ij ∈ E and k ∈ B_ij are arbitrarily fixed. We denote this cdf by P_max and note that due to the model assumptions it is independent of i, j and k.
Theorem 6 Let 0 < r < 1, 0 ≤ q < 1, 0 < p ≤ 1, n ∈ N and assume data generated by UCM(n, p, q). If the parameters {β_t}_{t≥0} of CEMP-A satisfy the conditions in (56) for t ≥ 1, then, with the probability stated in the theorem, the estimates of CEMP-A converge linearly to the underlying corruption levels.

The second theorem uses the following notation. Let Y denote the random variable s*_ik + s*_jk for arbitrarily fixed ij ∈ E and k ∈ B_ij. We note that due to the model assumptions, Y is independent of i, j and k. Let P denote the cdf of Y and Q denote the corresponding quantile function, that is, the inverse of P. For any fixed τ, V*(τ) is the variance of f_τ(Y). Since V* might be hard to compute, our theorem below is formulated with any function V that dominates V*, that is, V(x) ≥ V*(x) for all x.

Theorem 7 Let 0 < r < 1, 0 ≤ q < 1, 0 < p ≤ 1, n ∈ N and assume data generated by UCM(n, p, q). Assume further that either s*_ij for ij ∈ E_b is supported on [a, ∞) with a ≥ 1/|B_ij|, or Q is differentiable and Q′(x)/Q(x) ≲ 1/x for x < P(1). If the parameters {β_t}_{t≥0} of CEMP-B satisfy the stated conditions for t ≥ 1, then, with the probability stated in the theorem, the estimates of CEMP-B converge linearly to the underlying corruption levels.

We note that Theorem 6 requires the lower bound on the sample size in (61) in order to have a sufficiently large probability. Similarly, Theorem 7 requires the lower bound on the sample size in (62). We will use these estimates in Section 6.4 to bound the sample complexity.

Main Results when q * is Sufficiently Large
We tighten the estimates established in Section 6.3.1 by considering two different regimes of q_* separated by a fixed value q_c. For CEMP-A we let q_c be any number in (√3/2, 1). For CEMP-B we let q_c be any number in (2/√5, 1). We restrict the results of Theorems 6 and 7 to the case q_* < q_c and formulate below two simpler theorems for the case q_* ≥ q_c.
Theorem 8 Let 0 < r < 1, 0 ≤ q < 1, 0 < p ≤ 1, n ∈ N and assume data generated by UCM(n, p, q). Let q_c be any number in (√3/2, 1) and ∆_q = q_c²/2 − 3/8 ∈ (0, 1/8). For any q_* ≥ q_c, if the parameters {β_t}_{t≥0} of CEMP-A satisfy (63), then the conclusion of Theorem 1 holds with probability at least the expression in (64).

Theorem 9 Let 0 < r < 1, 0 ≤ q < 1, 0 < p ≤ 1, n ∈ N and assume data generated by UCM(n, p, q). Let q_c be any number in (2/√5, 1) and ∆_q = q_c²/2 − 2/5 ∈ (0, 1/10). For any q_* ≥ q_c, if the parameters {β_t}_{t≥0} of CEMP-B satisfy the analogous conditions, then the conclusion of Theorem 2 holds with the analogous probability.

Theorems 6 and 7 for the regime q_* < q_c seem to express different conditions on {β_t}_{t≥0} than those in Theorems 8 and 9 for the regime q_* ≥ q_c. However, after carefully clarifying the corresponding conditions in Theorems 6 and 7 for specific groups of interest (see Section 6.7), one can formulate conditions that apply to both regimes. Consequently, one can formulate unified theorems (with the same conditions for any choice of q_*) for special groups of interest.

Sample Complexity Estimates
Theorems 6 and 7 imply upper bounds for the sample complexity of CEMP. However, these bounds depend on various quantities that are estimated in Section 6.7 for the groups Z_2, S_N, SO(2) and SO(3), which are common in applications. Table 1 below first summarizes the estimates of these quantities (only upper bounds of P_max(x) and V(x) are needed, but for completeness we also include the additional quantity z_G). It then lists the consequent upper bounds of the sample complexities of CEMP-A and CEMP-B, which we denote by SC-A and SC-B, respectively. At last, it lists the information-theoretic sample complexity bounds (discussed in Section 6.2), which we denote by SC-IT.
The derivation of the sample complexity bounds, SC-A and SC-B, requires an asymptotic lower bound on β_1 and an asymptotic upper bound on 1/β_0 − (1 − q_g²)z_G (or equivalently, a lower bound on β_0). One then uses these asymptotic bounds together with (61) or (62) to estimate SC-A or SC-B, respectively. We demonstrate the estimation of SC-A for G = SO(2). Here we assume two such bounds. We first note from Table 1 that P_max(x) = O(x), and consequently the first bound implies the required middle equation of (56). The combination of both bounds with the fact that in this case q_g = q_*, together with the obvious assumption β_0 > 0, yields the first equation of (56). Incorporating both bounds into (61), we obtain that a sufficient sample size n for exact recovery w.h.p. by CEMP-A satisfies n/log(n) = Ω(p^{−2} q_*^{−8}); thus, the minimal sample size for exact recovery w.h.p. by CEMP-A is of order O(p^{−2} q_*^{−8}).

Table 1 Summary of estimates of the main quantities of Theorems 6 and 7 for the common groups in applications, and of the derived sample complexity bounds. SC-A and SC-B denote the sample complexities of CEMP-A and CEMP-B (ignoring log factors) and SC-IT denotes the information-theoretic sample complexity (ignoring log factors).
We remark that these asymptotic bounds were based on estimates for the regime q * < q c , but we can extend them for any q * and p → 0. Indeed, when q * ≥ q c , (64) of Theorem 8 and the equivalent equation of Theorem 9 imply that the minimum sample required for CEMP is of order Ω(1/p 2 ). Clearly, this estimate coincides with all estimates in Table 1 when q * ≥ q c .
Our upper bounds for the sample complexity are far from the information-theoretic ones. Numerical experiments in Section 7 may indicate a lower sample complexity of CEMP than these bounds, but still possibly higher than the information-theoretic one. We expect that one may eventually obtain the optimal dependence on q_g for a CEMP-like algorithm; however, CEMP with 3-cycles is unable to improve the dependence on p from Ω(1/p²) to Ω(1/p). The issue is that when C = C_3, the expected number of good cycles per edge is np²q_g², so that n = Ω(1/(p²q_g²)). Indeed, the expected number of 3-cycles per edge is np² and the expected fraction of good cycles is q_g². The use of higher-order cycles should improve the dependence on p, but may harm the dependence on q_g.
Despite the sample complexity gap, we are unaware of other estimates that hold for q * → 0 (recall that q * → 0 only for continuous groups). The current best result for SO(d) synchronization appears in [45]. It only guarantees exact recovery for the global optimizer (not for an algorithm) for sufficiently large q * (e.g., q * > 0.5 for d = 3 and q * > 1 − O(d −1 ) for large d).

A Simple Estimator with the Optimal Order of q g for Continuous Groups
We present a very simple and naive estimator of the corruption levels that uses cycle inconsistencies and achieves the optimal order of q_g for continuous groups. For ij ∈ E, let D_ij denote the set of cycle inconsistencies {d_ij,k : k ∈ N_ij}, and denote by mode D_ij its most frequent value; the estimator (66) sets the estimate of s*_ij to mode D_ij. Its theoretical guarantee is stated next and proved in Appendix A.8.
Proposition 5 Let 0 ≤ q < 1, 0 < p ≤ 1, n ∈ N such that n/ log n ≥ c/(p 2 q 2 g ) for some absolute constant c ≥ 10. If G is a continuous group and the underlying dataset is generated by UCM(n, p, q), then (66) yields exact estimates of {s * ij } ij∈E with probability at least 1 − n −2/15 .
We remark that although the naive estimator of (66) achieves tighter sample complexity bounds than CEMP in the very special setting of UCM, it suffers from the following limitations, which make it impractical in more general scenarios. First of all, in real applications, all edges are somewhat noisy, so that all the elements in each fixed D_ij are different and finding a unique mode is impossible. Second, the mode statistic is very sensitive to adversarial outliers. In particular, one can maliciously choose the outliers to form peaks in the histogram of each D_ij that are different from s*_ij. We currently cannot prove a similar guarantee for CEMP, but the phase transition plots of Section 7.5 seem to support a similar behavior. Nevertheless, the goal of presenting this estimator was to show that it is possible to obtain sharp estimates in q_g by using cycle inconsistencies.
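In a noiseless simulation of UCM over SO(2), the mode estimator can be sketched as follows; rounding the inconsistency values so that exactly repeated values collapse in floating point is our implementation choice, and all names are ours:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def angle_dist(a):
    """Bi-invariant metric on SO(2): normalized absolute angle in [0, 1]."""
    return np.abs((a + np.pi) % (2*np.pi) - np.pi) / np.pi

# Sample a noiseless instance of UCM(n, p, q) over SO(2).
n, p, q = 60, 0.7, 0.2
theta = rng.uniform(-np.pi, np.pi, n)
A = np.triu(rng.random((n, n)) < p, 1)
A = A | A.T
obs = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        if A[i, j]:
            obs[i, j] = (rng.uniform(-np.pi, np.pi) if rng.random() < q
                         else theta[i] - theta[j])
            obs[j, i] = -obs[i, j]

def mode_estimate(i, j):
    """Estimate s*_ij by the mode of the cycle inconsistencies d_ij,k
    (values rounded to 9 decimals so exact repeats collapse)."""
    ks = np.nonzero(A[i] & A[j])[0]
    d = [angle_dist(obs[i, j] + obs[j, k] + obs[k, i]) for k in ks]
    return Counter(np.round(d, 9)).most_common(1)[0][0]
```

Every 3-cycle whose other two edges are clean has inconsistency exactly s*_ij, while corrupted cycles produce almost surely distinct values, so the mode recovers s*_ij whenever an edge has at least two such clean cycles.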
Proofs of Theorems 6-9
Section 6.6.1 formulates some preliminary results that are used in the main proofs. Section 6.6.2 proves Theorem 6, Section 6.6.3 proves Theorem 7 and Section 6.6.4 proves Theorems 8 and 9.

Preliminary Results
We present some results on the concentration of λ and good initialization. The proofs of all results are in Appendix A.9.
We formulate a concentration property of the ratio of corrupted cycles, λ ij , where ij ∈ E (see (32)), and the maximal ratio λ.
Proposition 6 Let 0 ≤ q < 1, 0 < p ≤ 1, n ∈ N and assume data generated by UCM(n, p, q). Then, for any 0 < η < 1 and ij ∈ E, λ_ij and λ satisfy the stated concentration bounds.

Proposition 6 is not useful when q_* ≈ 1, since then |N_ij| needs to be rather large, and this is counter-intuitive when there is hardly any corruption. On the other hand, this proposition is useful when q_* is sufficiently small. In this case, if |N_ij| is sufficiently large, then λ_ij concentrates around 1 − q_*². In particular, with high probability λ can be sufficiently high. The regime of sufficiently high λ is interesting and challenging, especially as Theorems 1 and 2 do not apply in it.
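The concentration of λ_ij around 1 − q_*² is easy to observe numerically (a sketch; we count a 3-cycle through ij as corrupted when at least one of its two other edges is corrupted, which is our reading of λ_ij):

```python
import numpy as np

rng = np.random.default_rng(3)

n, p, q_star = 200, 0.5, 0.6            # q_* = Pr(an edge is good)
A = np.triu(rng.random((n, n)) < p, 1)
A = A | A.T                             # G(n, p) adjacency
good = np.triu(rng.random((n, n)) < q_star, 1)
good = good | good.T                    # i.i.d. good/bad edge labels

iu, ju = np.nonzero(np.triu(A, 1))
lam = []
for i, j in zip(iu, ju):
    ks = np.nonzero(A[i] & A[j])[0]     # common neighbors: 3-cycles via k
    if ks.size:
        lam.append(np.mean([not (good[i, k] and good[j, k]) for k in ks]))
lam = np.array(lam)                     # lam_ij concentrates near 1 - q_*^2
```

With q_* = 0.6 the empirical mean of λ_ij lands near 1 − q_*² = 0.64, well above the 1/5 bound of the adversarial theory, which is exactly why the regime of high λ matters here.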
The next concentration result is useful when q * is sufficiently large.
Proposition 7 Let 0 ≤ q < 1, 0 < p ≤ 1, n ∈ N and assume data generated by UCM(n, p, q). Then, for any x ∈ (0, 1] with q_*² > 1 − x and any ij ∈ E, the stated concentration bound holds.

Next, we show that the initialization suggested in (38) is good under the uniform corruption model. We first claim that it is good on average, while using the notation z_G of Section 6.1.
Proposition 8 Let 0 ≤ q < 1, 0 < p ≤ 1, n ∈ N and assume data generated by UCM(n, p, q). For any ij ∈ E, s_ij(0) is a scaled and shifted version of s*_ij as follows:

E(s_ij(0)) = q_g² s*_ij + (1 − q_g²) z_G.    (69)

At last, we formulate the concentration of s_ij(0) around its expectation. It follows from a direct application of Hoeffding's inequality, using the fact that the quantities 0 ≤ d_ij,k ≤ 1 are i.i.d.

Proof of Theorem 6
This proof is more involved than previous ones; Figure 4 thus provides a simple roadmap for following it. The proof frequently uses the notation introduced above and relies on the following two lemmas.

Lemma 2 If 1/β_0 ≥ (1 − q_g²)z_G + γ, where γ := max_{ij∈E} |s_ij(0) − E(s_ij(0))|, then the bound (70) on ε(1) holds.

Proof We use the upper bound on ε_ij(1) obtained by plugging t = 0 into (40), which we denote by (71). Denote γ_ij = |s_ij(0) − E(s_ij(0))| for ij ∈ E and γ = max_{ij∈E} γ_ij, so that the condition of the lemma can be written more simply as 1/β_0 ≥ (1 − q_g²)z_G + γ. We use (69) to write s_ij(0) = q_g² s*_ij + (1 − q_g²)z_G + γ_ij. Combining the latter observation with (71), then applying the assumption 1/β_0 ≥ (1 − q_g²)z_G + γ to the resulting bound, while also maximizing its LHS over ij ∈ E, results in (70).

Lemma 3 Assume that ε(1) < 1/β_1 and that the parameters {β_t}_{t≥0} satisfy the stated condition. Then (72) holds; that is, ε(t) < 1/β_t for all t ≥ 1.
Proof We prove (72), equivalently ε(t) < 1/β_t for all t ≥ 1, by induction. We note that ε(1) < 1/β_1 is an assumption of the lemma. We next show that ε(t+1) < 1/β_{t+1} whenever ε(t) < 1/β_t. We note that applying (43) and then (45) results in the two inclusions of (73). Applying first (40), then (73), and at last the definition of λ, we obtain a bound on ε_ij(t+1) for any given ij ∈ E. Combining this bound with the assumption of the lemma, and maximizing its LHS over ij ∈ E, concludes the induction and the lemma.

Proof of Theorem 7
This proof is similar to that of Theorem 6, but it is more difficult since it requires additional tools from empirical risk minimization (see Lemma 6). Figure 5 provides a roadmap for following the proof. The proof of the theorem relies on the following three lemmas.
Lemma 6 If either s*_ij for ij ∈ E_b is supported on [a, ∞) with a ≥ 1/|B_ij|, or Q is differentiable and Q′(x)/Q(x) ≲ 1/x for x < P(1), then there exists an absolute constant c such that the stated bound holds.

The proofs of Lemmas 4 and 5 are similar to those of Lemmas 2 and 3. For completeness, we include them in Appendices A.10 and A.11, respectively. The proof of Lemma 6 requires tools from empirical risk minimization, and we thus provide it in Appendix A.12.

Proof of Theorems 8 and 9
We prove Theorem 8; the proof of Theorem 9 is identical. Note that (63) describes the same conditions as Theorem 1, with λ replaced by 1/4 − ∆_q. Thus, it suffices to prove that λ ≤ 1/4 − ∆_q with the probability specified in (64). This implies the conclusion of Theorem 1 with the latter probability, or equivalently, the conclusion of Theorem 8. Applying Proposition 7 with x = 1/4 − ∆_q and q_* = Ω(1) (since q_* ≥ q_c) yields (84). We note that an application of the Chernoff bound in (80), with µ = p² and m = n (for each fixed ij ∈ E), followed by a union bound, yields that with probability at least 1 − exp(−Ω(n²p)) − n²p·exp(−Ω(np²)), or equivalently 1 − n²p·exp(−Ω(np²)), the following events hold: |E| ≍ n²p and min_{ij∈E} |N_ij| ≳ np². Combining this observation, (84), and a union bound over ij ∈ E results in the desired probability bound of (64) for the event λ ≤ 1/4 − ∆_q.

Clarification of Quantities Used in Theorems 6 and 7
Theorems 6 and 7 use the quantities P_max(x), z_G, V(x) and Q(x). In this section, we provide explicit expressions for these quantities for common group synchronization problems. We also verify that the special condition of Theorem 7 holds in these cases. This special condition is that either s*_ij for ij ∈ E_b is supported on [a, ∞) with a ≥ 1/|B_ij|, or Q is differentiable and Q′(x)/Q(x) ≲ 1/x for x < P(1). When the first part of this condition is used, Q is not needed and we thus do not specify it. We recall that Y denotes the random variable s*_ik + s*_jk for arbitrarily fixed ij ∈ E and k ∈ B_ij.

Permutation Synchronization
In this problem, G = S_N, whose elements are commonly represented by permutation matrices in R^{N×N}. A common bi-invariant metric on S_N is d_G(P_1, P_2) = 1 − Tr(P_1 P_2^{−1})/N, and thus d_ij,k = 1 − Tr(P_ij P_jk P_ki)/N. The cdf of max{s*_ik, s*_jk}, P_max(x), can be complicated, but one can find a more concise formula for an upper bound of it, which is sufficient for verifying the middle inequality in (56). Indeed, the cdf of s*_ij for ij ∈ Ẽ_b gives an upper bound of P_max(x). For N ∈ N, 1 ≤ m ≤ N and ij ∈ Ẽ_b fixed, s*_ij = d_G(P_Haar, I_{N×N}) for P_Haar ∼ Haar(S_N). Moreover, s*_ij = m/N is equivalent to having exactly m elements displaced (and N − m fixed) by P_Haar. Therefore, using the notation [x] for the nearest integer to x, for 1 ≤ m ≤ N, Pr(s*_ij = m/N) = C(N, m)[m!/e]/N!, since the number of permutations displacing exactly m given elements is the number of derangements of m elements, [m!/e]. Since z_G = E(s*_ij) for ij ∈ Ẽ_b, the exact formula for computing z_G is z_G = Σ_{m=1}^N (m/N) C(N, m)[m!/e]/N!. We claim that V(x) can be chosen as in (85). Indeed, if q_m denotes the probability mass of Y at x_m = m/N, then one can bound sup_{τ>x} Var(f_τ(Y)) using the facts that e² e^{−2τx_m} τ² x_m² ≤ 1 for any x_m and τ, and that e^{−2τx} τ² x² achieves its global maximum at x = 1/τ. To conclude (85) we note that for x > 1/x_1 = N (so that x_m > 1/x for all m ≥ 1), the right term on the RHS of (86) is appropriately bounded. Here the first part of the special condition of Theorem 7 applies with a = 1/N, and a ≥ 1/|B_ij| requires |B_ij| ≳ N; since |B_ij| is of order np²(1 − q_*²), the special condition of Theorem 7 holds when n = Ω(N/(p²(1 − q_*²))). As mentioned above, the requirement n = Ω(1/(p²(1 − q_*²))) is necessary so that the third term in (59) is less than 1. The additional dependence on N is specific to this application and makes sense.
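These formulas are easy to check numerically. The sketch below assumes Pr(s*_ij = m/N) = C(N, m)[m!/e]/N! (displacing exactly m elements uses a derangement of those m), and exploits the classical fact that a Haar permutation has one fixed point in expectation, so z_G = 1 − 1/N:

```python
import math

def z_G(N):
    """z_G = E d_G(P_Haar, I) on S_N, with d_G(P1, P2) = 1 - Tr(P1 P2^{-1})/N,
    computed from Pr(s* = m/N) = C(N, m) * D_m / N!, D_m = [m!/e] derangements.
    (The m = 0 term vanishes, so the convention D_0 = 1 is not needed here.)"""
    return sum((m / N) * math.comb(N, m) * round(math.factorial(m) / math.e)
               / math.factorial(N)
               for m in range(1, N + 1))
```

For every N this agrees with 1 − 1/N, since the expected number of displaced elements of a Haar permutation is N − 1.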
Angular Synchronization
In this problem, G = SO(2). We first compute P_max(x) and z_G. We note that if either ik or jk is in E_b, but not both, then the cdf of max(s*_ik, s*_jk) is x for x ∈ [0, 1], and if both are in E_b it is x². Furthermore, z_G = 1/2. We also note that a simple upper bound for P_max(x) is x. We note that V(x) can be chosen as a bound on V*(x) = sup_{τ>x} Var(f_τ(Y)). At last, we verify that the special condition of Theorem 7 holds. Denote by p_1 and p_2 the probabilities that exactly one and that both of the edges ik, jk are corrupted, respectively. By integrating the corresponding pdf, the cdf of Y is P(t) = (p_1 t + p_2 t²/2)·1_{t<1} + (p_1 + p_2(1 − (t − 2)²/2))·1_{t≥1}. We note that Q′(x) = 1/p(Q(x)), where p denotes the pdf of Y, and thus for x < P(1), Q′(x) = 1/(p_1 + p_2 Q(x)). Therefore, for x < P(1), Q′(x)/Q(x) = 1/(Q(x)(p_1 + p_2 Q(x))) ≤ 1/x, where the last inequality follows from the observation p_1 t + p_2 t² > P(t) for t ≤ 1.

Numerical Experiments
We demonstrate the numerical performance of CEMP-B and validate the proposed theory. For comparison, we also test some well-known baseline approaches for group synchronization. We consider the following two representatives of discrete and continuous groups: Z_2 and SO(2). Section 7.1 summarizes various implementation details of CEMP-B and the baseline algorithms we compare with. Section 7.2 numerically verifies our theoretical implications for the choice of {β_t}_{t=1}^T and our convergence estimates for CEMP-B in the setting of adversarial corruption. Sections 7.3 and 7.4 test the recovery of CEMP and other baseline algorithms under adversarial corruption without and with noise. Finally, Section 7.5 demonstrates phase transition plots of different approaches under uniform corruption.

Details of Implementation and Comparison
Our choices of d G for Z 2 and SO(2) are specified in Sections 6.7.1 and 6.7.3, respectively. We represent the elements of SO(2) by the set of angles modulo 2π, or equivalently, by elements of the unit complex circle U (1).
All implemented codes are available in the following supplementary Github page: https://github.com/yunpeng-shi/CEMP. All experiments were performed on a computer with a 3.8 GHz 8-core i7-10700K CPU and 48 GB memory. For CEMP we only implemented CEMP-B, since it is our recommended practical approach. We used the following natural choice of default parameters for CEMP (i.e., CEMP-B) throughout all experiments: β_t = 1.2^t for 0 ≤ t ≤ 20. We justify this choice in Section 7.2. Other choices of parameters are only tested in Section 7.2. We implemented the slower version of CEMP, with C = C_3 (instead of using a subset of C_3 with a fixed number of 3-cycles per edge), since it is fully justified by our theory.
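For reference, the core CEMP-B iteration, as we read it from the paper (reweighted averages of the cycle inconsistencies, with weights exp(−β_t(s_ik(t) + s_jk(t)))), can be sketched as follows; the data structures and the function signature are ours, not the repository's:

```python
import numpy as np

def cemp_b(edges, nbrs, d_cycle, T=20, beta0=1.0, r=1.2):
    """Sketch of CEMP-B with 3-cycles.
    edges: list of pairs (i, j) with i < j;
    nbrs[(i, j)]: common neighbors k of i and j (the 3-cycles of edge ij);
    d_cycle[(i, j, k)]: cycle inconsistency d_ij,k in [0, 1].
    Returns estimated corruption levels s_ij(T)."""
    # Initialization: s_ij(0) is the plain average of the d_ij,k.
    s = {e: float(np.mean([d_cycle[e + (k,)] for k in nbrs[e]]))
         for e in edges}

    def S(i, j):                      # look up an unordered edge estimate
        return s[(i, j)] if (i, j) in s else s[(j, i)]

    beta = beta0
    for _ in range(T):
        s_new = {}
        for (i, j) in edges:
            w = np.array([np.exp(-beta * (S(i, k) + S(j, k)))
                          for k in nbrs[(i, j)]])
            d = np.array([d_cycle[(i, j, k)] for k in nbrs[(i, j)]])
            s_new[(i, j)] = float((w * d).sum() / w.sum())
        s = s_new
        beta *= r                     # geometric schedule beta_t = r^t beta_0
    return s
```

On a dense graph with a few corrupted edges, the estimates of clean edges shrink toward 0 as β_t grows, while a corrupted edge whose 3-cycles all have clean side edges retains its corruption level exactly.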
For G = SO(2), we also compare with an IRLS algorithm that aims to solve (4) with ρ(·) = ‖·‖_1. It first initializes the group elements using Spectral [41] and then iteratively solves a relaxation of the weighted least squares formulation (87), where the weights are updated from the current residuals and an additional regularization term of 10^{−4} in the denominator aims to avoid division by zero. More specifically, the relaxed solution of (87) is practically found by the weighted spectral method described after (31), where p_ij in (31) is replaced by w_ij(t). The weights w_ij(t) are equally initialized, and IRLS is run for at most 100 iterations; it is terminated whenever the mean of the distances d_G(ĝ_i(t)ĝ_j^{−1}(t), ĝ_i(t+1)ĝ_j^{−1}(t+1)) over all ij ∈ E is less than 0.001. We remark that this approach is the ℓ_1 minimization version of [4] for SO(2). Since IRLS is not designed for discrete optimization, we do not apply it to Z_2 synchronization.
In the special noiseless setting of Section 7.6, we also test CEMP+MST. Our code for CEMP+MST follows the description in Section 4.2.6, where the MST is found by Prim's algorithm.
Since we can only recover the group elements up to a global right group action, we use the error metric defined in (88), which overcomes this issue. We use (88) to measure the performance of CEMP+GCW, Spectral, SDP and IRLS. We note that we cannot use (88) to measure the performance of CEMP, as it does not directly solve for the group elements; thus, we evaluate CEMP by the error in its estimates of the corruption levels.

Numerical Implications of the Theoretical Estimates for the Adversarial Case
Theorem 2 suggests that in the noiseless adversarial setting, β_0 should be sufficiently small and β_t should exponentially increase to infinity with a sufficiently small rate r. Theorem 4 suggests that in the adversarial setting with noise level δ, β_0 should be sufficiently small and β_t should start increasing almost exponentially with a small rate r and then slow down and converge to a large number proportional to 1/δ. Nevertheless, one cannot test the algorithm with arbitrarily large t due to numerical instabilities; furthermore, the noise level δ is unknown. Therefore, in practice, we use a simpler strategy: we start with a sufficiently small β_0, then exponentially increase it with a sufficiently small rate r > 1, so that β_t = rβ_{t−1}, and stop when β_t exceeds a large number β_max. Our default values are β_0 = 1, β_max = 40 and r = 1.2. This choice leads to T = 20 and we thus expressed it earlier in Section 7.1 as β_t = 1.2^t for 0 ≤ t ≤ 20. Note that if β_T ≈ 40, then any s_ij(T) = 1 is assigned a negligible weight ≈ exp(−40) ≈ 10^{−17}. Therefore, enlarging the number of iterations cannot help much, and it can worsen the accuracy by accumulating errors. We remark that in some noisy scenarios a lower T may be preferable; we demonstrate below an issue like this when using the "log max" error. We will check whether the above choices for {β_t}_{t=1}^T work sufficiently well under basic corruption models for CEMP-B. We also test two choices that contradict our theory: 1) β_0 = 1 and β_max = 5 (β_max is too small).
2) β 0 = 30 and β max = 40 (β 0 is too large).
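The default geometric schedule described above can be written out explicitly (a small sketch; the helper name is ours):

```python
import math

def beta_schedule(beta0=1.0, r=1.2, beta_max=40.0):
    """Geometric schedule beta_t = r * beta_{t-1}, stopped before beta_t
    would exceed beta_max."""
    betas = [beta0]
    while betas[-1] * r <= beta_max:
        betas.append(betas[-1] * r)
    return betas

betas = beta_schedule()
T = len(betas) - 1                 # number of iterations: T = 20
w_min = math.exp(-betas[-1])       # weight assigned to s_ij = 1 at the end
```

With the defaults, β_T = 1.2^20 ≈ 38.3, so an edge estimated as fully corrupted receives a final weight on the order of 10^{−17}, matching the discussion above.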
We fix the group SO(2) and generate G([n], E) by the Erdős-Rényi model G(n, p) with n = 200 and p = 0.5. The ground truth elements of SO(2), {θ*_i}_{i=1}^n, are i.i.d. ∼ Haar(SO(2)), represented by angles in (−π, π]. We assume additive noise for the uncorrupted group ratios with noise level σ_in. For this purpose we i.i.d. sample {ε_ij}_{ij∈E} from either a standard Gaussian distribution or a uniform distribution on [−√3, √3] (in both cases the variance of ε_ij is 1). We also generate adversarially corrupted elements, {θ^adv_i}_{i=1}^n, that are i.i.d. ∼ Haar(SO(2)). For each ij ∈ E, the observed group ratio is independently corrupted with probability q: with probability q it equals θ^adv_i − θ^adv_j (mod 2π), and otherwise it equals θ*_i − θ*_j + σ_in ε_ij (mod 2π). This setting was adversarially created so that the corrupted group ratios are cycle-consistent. Clearly, the information-theoretic threshold on q is 0.5; that is, exact recovery is impossible if and only if q ≥ 0.5. We thus fix q = 0.45 in our first demonstration so that our setting (especially with noise) is sufficiently challenging.
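The corruption model of this subsection can be sampled as follows (a sketch; the function name and signature are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def corrupt_so2(theta_true, theta_adv, edges, q, sigma_in, noise="gaussian"):
    """Observed ratios for the adversarial model of this section: with
    probability q the ratio comes from the cycle-consistent adversarial
    angles; otherwise it is the clean ratio plus scaled noise."""
    wrap = lambda a: (a + np.pi) % (2 * np.pi) - np.pi
    obs = {}
    for (i, j) in edges:
        if rng.random() < q:
            obs[(i, j)] = wrap(theta_adv[i] - theta_adv[j])
        else:
            eps = (rng.standard_normal() if noise == "gaussian"
                   else rng.uniform(-np.sqrt(3.0), np.sqrt(3.0)))
            obs[(i, j)] = wrap(theta_true[i] - theta_true[j] + sigma_in * eps)
    return obs
```

Because the corrupted ratios are differences of a second set of angles, any 3-cycle all of whose edges are corrupted is itself cycle-consistent, which is what makes this model adversarial.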
We consider three noise regimes: σ_in = 0, σ_in = 0.05 and σ_in = 0.2 (the last two cases include both Gaussian and uniform noise). We test the three different choices of β_0 and β_max described above (one implied by our theory and two contradicting it) with fixed T = 20.

Fig. 6 Scatter plot of the estimated corruption levels vs. the ground truth.

Figure 6 presents scatter plots of the estimated corruption level, s_ij(T), as a function of the ground truth one, s*_ij, for all ij ∈ E. The last column corresponds to the application of our recommended parameters, and the other two columns correspond to choices of parameters that violate our theory. The rows correspond to different noise levels and noise distributions. Ideally, in the case of exact recovery, the points in the scatter plot should lie exactly on the line y = x. However, we note that in the noisy case (σ_in > 0), the exact estimation of s*_ij is impossible. The red lines form a tight region around the main line containing all points; their equations are y = x + ε_+ and y = x − ε_−, where ε_+ = max_{ij∈E}(s_ij(T) − s*_ij) and ε_− = max_{ij∈E}(s*_ij − s_ij(T)). The blue lines indicate variation by 0.6·σ_in from the main line (these are the lines y = x ± 0.6·σ_in). We chose the constant 0.6 since in the third column these lines are close to the red ones.
One can see that CEMP-B with the recommended {β_t}_{t=1}^T achieves exact recovery in the noiseless case. It approximately estimates the corruption levels in the presence of noise, and its maximal error is roughly proportional to the noise level. In contrast, when β_0 and β_max are both small, the algorithm fails to accurately recover the corruption levels even when σ_in = 0. Indeed, with a small β_t, bad cycles are assigned weights sufficiently far from 0, and this results in inaccurate estimates of s*_ij. When β_0 and β_max are both large, the algorithm becomes unstable in the presence of noise. When the noise level is low (σ_in = 0.05), the performance is fine when the distribution is uniform; however, when the distribution is Gaussian, there are already some wrong estimates with large errors. When the noise level is 0.2, the self-consistent bad edges are wrongly recognized as inliers and assigned corruption levels 0.

Figure 7 demonstrates the convergence rate of CEMP and verifies the claimed theory. It uses the above adversarial corruption model with Gaussian noise and the same three noise levels (demonstrated in the three columns), but it tests both q = 0.2 and q = 0.45 (demonstrated in the two rows). Each subplot shows three metrics of estimation: "log max", "log mean" and "log median", which correspond to log_10(max_{ij∈E}|s_ij(t) − s*_ij|), log_10(|E|^{−1} Σ_{ij∈E}|s_ij(t) − s*_ij|) and log_10(median{|s_ij(t) − s*_ij| : ij ∈ E}), respectively. In the noiseless case, the three log errors decrease linearly with respect to t; indeed, Theorem 2 guarantees linear convergence of CEMP in this case. When σ_in > 0, the log errors first demonstrate linear decay (with a smaller rate than above), but the convergence then slows down and seems to approach a constant value in most subfigures; this is consistent with Theorem 4. When q = 0.45 and σ_in = 0.2, the log max error increases at the end.
We believe that the source of the problem in this example is that β_max is slightly larger than what the theory recommends in this setting of high noise. In all plots the mean errors are close to the median errors, and they are only about 1/5 of the maximal errors (a difference of about 0.7 is noticed for the log errors, and 10^0.7 ≈ 5). This indicates that on average CEMP performs much better than its worst case (in terms of edges).
Lastly, we remark that for the data generated for Figures 6 and 7, the maximal ratio of corrupted cycles, λ, exceeds the bound 1/5 of Theorem 4. Indeed, given the underlying model, one may note that λ concentrates around 1 − (1 − q)^2, which is approximately 0.7 and 0.35 for q = 0.45 and q = 0.2, respectively. Nevertheless, CEMP still achieves exact recovery in these cases. Note though that the upper bound on λ, even if tight, is only a sufficient condition for good performance. Furthermore, the adversarial corruption model of this section is very special (with strong assumptions on the generation of E, E_b, the ground-truth ratios and the corrupted ratios), whereas the theory was formulated for the worst-case scenario.
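To make the tested procedure concrete, the following is a minimal sketch of CEMP-B for G = SO(2), assuming the exponential reweighting by s_ik(t) + s_jk(t) discussed later in this paper and the normalized metric that divides absolute angular differences by π. The function and variable names (cemp_b, theta_obs, triangles) are our own illustration, not the authors' reference implementation.

```python
import numpy as np

def cemp_b(theta_obs, edges, triangles, betas):
    """Sketch of CEMP-B for G = SO(2) (angular synchronization).

    theta_obs : dict mapping edge (i, j), i < j, to the observed relative
                angle theta_ij in (-pi, pi].
    edges     : list of (i, j) pairs with i < j.
    triangles : dict mapping each edge to the nodes k forming 3-cycles
                (i, j, k) with that edge.
    betas     : increasing sequence (beta_1, ..., beta_T) of reweighting
                parameters.
    Returns the estimated corruption levels s_ij(T) for every edge.
    """
    def ratio(i, j):  # theta_ij = -theta_ji
        return theta_obs[(i, j)] if i < j else -theta_obs[(j, i)]

    def wrap(x):      # wrap an angle to (-pi, pi]
        return (x + np.pi) % (2 * np.pi) - np.pi

    # Cycle inconsistencies d_{ij,k}, using the metric |angle|/pi in [0, 1].
    d = {e: np.array([abs(wrap(ratio(*e) + ratio(e[1], k) + ratio(k, e[0]))) / np.pi
                      for k in triangles[e]]) for e in edges}

    # Initialization: s_ij(0) = plain average of cycle inconsistencies.
    s = {e: d[e].mean() for e in edges}
    for beta in betas:
        s_new = {}
        for (i, j) in edges:
            # Exponential reweighting by the estimated corruption levels
            # of the two "wing" edges ik and jk of each 3-cycle.
            sik = np.array([s[(min(i, k), max(i, k))] for k in triangles[(i, j)]])
            sjk = np.array([s[(min(j, k), max(j, k))] for k in triangles[(i, j)]])
            w = np.exp(-beta * (sik + sjk))
            s_new[(i, j)] = np.dot(w, d[(i, j)]) / w.sum()
        s = s_new
    return s
```

On a small synthetic instance with a single corrupted edge and increasing β_t, the estimate for the corrupted edge stays near its true corruption level, while the estimates for clean edges decay toward 0.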

Exact Recovery under Adversarial Corruption
We consider a more malicious adversarial corruption model than that of Section 7.2. Let G([n], E) be generated by an Erdős–Rényi model G(n, p) with n = 200 and p = 0.5. We independently draw n_c graph nodes without replacement. Every time we draw a node, we randomly assign 75% of its neighboring edges to the set Ẽ_b of edges selected for corruption. It is possible that an edge is assigned twice to Ẽ_b (when both of its nodes are selected), but Ẽ_b is not a multiset. Note that unlike the previous example of Section 7.2, the elements of Ẽ_b are not independently chosen. We denote Ẽ_g = E \ Ẽ_b. For G = SO(2), we draw i.i.d. adversarial angles θ^adv_i ~ Unif(−π, π], and the observed group ratios on Ẽ_b are the self-consistent ratios determined by these angles; for G = Z_2, we draw i.i.d. adversarial labels z^adv_i ~ Unif{−1, 1}, and the observed group ratios on Ẽ_b are generated analogously. The edges of Ẽ_g retain the ground-truth ratios. Recall that E_b denotes the set of actually corrupted edges, that is, the edges of Ẽ_b whose observed ratios differ from the ground-truth ones.
That is, for G = Z_2, only 50% of the selected edges in Ẽ_b are expected to be corrupted (an adversarial ratio z^adv_i z^adv_j coincides with the ground-truth ratio with probability 1/2).
We note that in the case where G = SO(2) and |E_g| ≤ |E_b|, the exact recovery of {θ*_i}_{i∈[n]} becomes ill-posed, as E_g is no longer the largest cycle-consistent subgraph and {θ^adv_i}_{i∈[n]} should instead be labeled as the ground truth. We remark that this is not an issue for G = Z_2, since at most 50% of the edges in Ẽ_b belong to E_b, and thus |E_b| ≤ |E|/2. Therefore, for SO(2), we need to control n_c so that |E_b| < |E|/2 ≈ n^2 p/4. We argue that n_c/n needs to be at most 0.53. Indeed, since we subsequently corrupt 75% of the neighboring edges of the n_c selected nodes, the probability that an edge of E_b is corrupted twice is 0.6 (we omit the simple calculation). Namely, about 0.6|E_b| edges are corrupted twice and 0.4|E_b| edges are corrupted only once, and thus about 1.6|E_b| corruptions result in |E_b| corrupted edges. Note that the total number of corruptions is 0.75np · n_c for SO(2), and thus we require that 0.75np · n_c/1.6 < n^2 p/4, that is, n_c/n < 1.6/3 ≈ 0.53.

Figure 8 plots error_S for CEMP and error_G for the other algorithms as a function of the fraction of corrupted nodes, n_c/n. Each plotted value is an average of the estimation error over 10 trials; it is accompanied by an error bar, which corresponds to the 10% and 90% percentiles of the estimation errors in the 10 trials. The figure only considers n_c/n ≤ 0.9 for Z_2 and n_c/n ≤ 0.5 for SO(2), as all algorithms performed poorly beyond these regimes. We note that CEMP+GCW outperforms Spectral and SDP for both G = Z_2 and G = SO(2). One interesting phenomenon is that although CEMP and CEMP+GCW use two different error metrics, their errors seem to align nicely. This is strong evidence that the advantage of CEMP+GCW over Spectral is largely due to CEMP. We observe near exact recovery of CEMP+GCW when n_c/n ≤ 0.4. In contrast, the errors of Spectral and SDP clearly deviate from 0 when n_c/n > 0.2, where SDP performs slightly worse.
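The edge-selection step of this corruption model can be sketched as follows. This is a simulation sketch under the stated parameters; the function name and defaults are our own illustration.

```python
import numpy as np

def adversarial_edge_selection(n=200, p=0.5, n_c=60, frac=0.75, seed=0):
    """Sketch of the edge-selection step: generate G(n, p), draw n_c nodes
    without replacement, and assign `frac` of each drawn node's incident
    edges to the set E~_b of edges selected for corruption (a plain set,
    not a multiset, so doubly selected edges are counted once)."""
    rng = np.random.default_rng(seed)
    # Erdos-Renyi graph: keep each pair {i, j} independently with prob. p.
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p]
    nbrs = {i: [] for i in range(n)}
    for e in edges:
        nbrs[e[0]].append(e)
        nbrs[e[1]].append(e)
    tilde_Eb = set()
    for v in rng.choice(n, size=n_c, replace=False):
        m = int(round(frac * len(nbrs[v])))
        if m == 0:
            continue
        for t in rng.choice(len(nbrs[v]), size=m, replace=False):
            tilde_Eb.add(nbrs[v][t])
    return edges, tilde_Eb
```

Under this selection, roughly 0.75np·n_c corruptions produce about 0.75np·n_c/1.6 corrupted edges, and requiring that this count stay below n^2 p/4 recovers the constraint n_c/n < 1.6/3 ≈ 0.53 discussed above.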
In angular synchronization, CEMP+GCW is somewhat comparable to IRLS. It performs better than IRLS when n_c/n = 0.4, and worse when n_c/n = 0.5. We remark that, unlike IRLS, which solves a weighted least squares problem in each iteration, CEMP+GCW solves only a single weighted least squares problem.

Stability to Noise under Adversarial Corruption
We use the same corruption model as in Section 7.3, while adding noise to both good and bad edges. We only consider G = SO(2). We do not consider the noisy model for Z_2, since adding noise to a group ratio z_ij ∈ Z_2 and then projecting back onto Z_2 results in either z_ij or −z_ij, which is equivalent to corrupting z_ij with a certain probability.
Let σ_in and σ_out denote the noise levels of the inlier and outlier edges, respectively. The observed group ratios are generated as in Section 7.3, except that Gaussian noise is added: θ̃_ij = θ*_i − θ*_j + σ_in ε_ij mod 2π for ij ∈ E_g, and θ̃_ij = θ^adv_i − θ^adv_j + σ_out ε_ij mod 2π for the selected edges, where θ^adv_i ~ i.i.d. Unif(−π, π] as before and the noise variables ε_ij are i.i.d. N(0, 1). Figure 9 plots error_S of CEMP and error_G of the other algorithms as a function of the fraction of corrupted nodes. As in Figure 8, each plotted value is an average of the estimation error over 10 trials and is accompanied by an error bar corresponding to the 10% and 90% percentiles. Four different scenarios are demonstrated in the four rows of this figure.
Its first row corresponds to a very malicious case where σ_in > σ_out = 0. In this case, the bad edges in E_b are cycle-consistent (since σ_out = 0) and the good edges in E_g are only approximately cycle-consistent (due to noise). As explained in Section 7.3, when σ_in = σ_out = 0 the information-theoretic bound on n_c/n is 0.53. However, when σ_in > σ_out = 0 the information-theoretic bound is expected to be smaller. Since we do not know this theoretical bound, we first focus on two simpler regions. The first is when n_c/n ≤ 0.3, so that E_g is much larger than E_b. In this case, CEMP and IRLS mainly select the edges in E_g for the inlier graph, and CEMP+GCW is comparable to IRLS and outperforms Spectral and SDP. The second region is when n_c/n = 0.7 (the same result was noted when n_c/n ≥ 0.7). Here, both CEMP and IRLS recognize E_b as the edges of the inlier graph and completely ignore E_g; consequently they exactly recover {θ^adv_i}_{i∈[n]}, which results in an estimation error of about 0.5 (indeed, {θ^adv_i}_{i∈[n]} is independent of the ground truth {θ*_i}_{i∈[n]}). However, Spectral and SDP cannot recover either {θ*_i}_{i∈[n]} or {θ^adv_i}_{i∈[n]} in this regime. In the middle region between these two regions, CEMP seems to weigh the cycle-consistency of E_b more than IRLS. On the other hand, IRLS seems to mainly weigh the relative sizes of E_g and E_b. In particular, the transition of IRLS from mainly using E_g to mainly using E_b for the inlier graph occurs around the previously mentioned value of n_c/n = 0.53 for the noiseless case, whereas CEMP transitions earlier. It seems that Spectral has a similar late transition to IRLS, and that SDP has an earlier transition than IRLS, though it is hard to locate due to the poor performance of SDP.
The second row of Figure 9 corresponds to a less adversarial case where σ out > σ in = 0. Thus, good edges are exactly cycle-consistent (σ in = 0), and the bad edges in E b are only approximately cycle-consistent. The information-theoretic bound of n c /n should be above 0.53 in this case. Indeed, CEMP+GCW is able to almost exactly recover the ground truth when n c /n = 0.5. Its estimation error is smaller than 0.1 even when n c /n = 0.7, and all other algorithms perform poorly in this regime.
The third row of Figure 9 corresponds to the case where inlier and outlier edges have the same level of noise. Similarly to the results of Section 7.3, CEMP+GCW is comparable to IRLS and performs better than other methods. Its performance starts to degrade when n c /n approaches the information-theoretic bound of the noiseless case, 0.53. In the last row of Figure 9, both good and bad edges are noisy, and bad edges have higher noise levels. This case is somewhat similar to the one in the second row of the figure. CEMP+GCW performs better than all other methods, especially in the high corruption regime where n c /n > 0.5.

Phase Transition under Uniform Corruption
We demonstrate phase transition plots of the different algorithms under the uniform corruption model. For Z_2, the observed group ratios are generated by (2); for SO(2), the observed group ratios θ̃_ij, ij ∈ E, equal θ*_i − θ*_j with probability 1 − q and are otherwise drawn i.i.d. from Unif(−π, π]. Figures 10 and 11 show the phase transition plots for Z_2 and SO(2) synchronization, respectively. They include plots of the averaged error_S (for CEMP) and averaged error_G (for the other algorithms) over 10 different random runs for various values of p, q and n (p appears on the y-axis, q on the x-axis and n varies between subfigures). The darker the color, the smaller the error. The red and blue curves correspond to possible phase transition thresholds, which we explain below.
For Z_2, recall that [2] establishes the information-theoretic bound in (55), and that [5] and [12] show that it also holds for SDP and Spectral, respectively. Using this bound, while ignoring its log factor and trying to fit its unknown constant to the phase transition plots for Spectral and SDP, we set the red line as p = 12/(n(1 − q)^2). Clearly, this is not the exact theoretical lower bound. For SO(2), recall that the information-theoretic bound is the same as the above one for Z_2 [11]. We cannot fit a curve to Spectral and SDP since they do not exactly recover the group ratios. Instead, we try to fit a phase transition curve to IRLS (even though there is no theory for this). We fit the red curve defined by p = 10/(n(1 − q)^2).
We note that the red curves (which were determined by the above-mentioned algorithms) do not align well with the phase transition plots of CEMP when p is sufficiently small (that is, when the underlying graph is sparse). Indeed, Section 6.4 explains the limitation of CEMP (and of any method based on 3-cycle consistency) when p is small. On the other hand, the sample complexity of CEMP might be tight as a function of q_g = 1 − q, as opposed to our current theoretical estimate (see the discussion in Section 6.4). If this assumption is correct, then the red curve can align well with the phase transition when p is not very small, as the figures may suggest. For small p, we followed the (necessary) dependence on p of our estimates in Section 6.4 and further experimented with different powers of (1 − q), and consequently fit the following blue curves: p = 3/(√n (1 − q)) and p = 1.8/(√n (1 − q)) for CEMP and CEMP+GCW, respectively. We used the same curves for both Z_2 and SO(2) synchronization. It is evident from Figure 10 that the phase transition plots of Spectral and SDP align well with the red curve. For CEMP and CEMP+GCW, the exact recovery region (dark area) seems to approximately lie in the area enclosed by the red and blue curves. The blue curve of CEMP+GCW is slightly closer to the x-axis than that of CEMP. This suggests that combining CEMP with GCW can partially help in dealing with sparse graphs.
In Figure 11, Spectral and SDP do not seem to exactly recover group elements in the presence of any corruption, and thus a phase transition region is not noticed for them. The phase transition plots of IRLS align well with the red curve. The exact recovery regions of both CEMP and CEMP+GCW seem to approximately lie in the area enclosed by the red and blue curves. Again, CEMP+GCW seems to slightly improve the required bound (its blue curve is lower). Nevertheless, more careful research is needed to determine in theory the correct phase transition curves of CEMP and CEMP+GCW.

Testing the Speed of the Algorithms
We compare the speed of the algorithms under different parameters. We assume the uniform corruption model for SO(d) without noise and with q = 0.2. We test the values n = 100, 300, 1000, with p = 50/n, and dimensions d = 2, 10, 50. To be consistent with the underlying metric of the other algorithms and with the different choices of dimension d, we use the scaled Frobenius metric ‖g_1 − g_2‖_F/(2√d), whose range is [0, 1]. We test the same algorithms of Section 7.5 and also CEMP+MST (it can be applied here since there is no noise, though in general we do not recommend it).
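A natural scaled Frobenius metric with range [0, 1], which we assume here to be ‖g_1 − g_2‖_F/(2√d), can be sketched as follows, together with a standard way of sampling rotations; the helper names are our own illustration.

```python
import numpy as np

def frob_dist(g1, g2):
    """Assumed form of the scaled Frobenius metric: ||g1 - g2||_F / (2*sqrt(d)).

    For d x d orthogonal matrices, ||g1 - g2||_F^2 = 2d - 2*tr(g1^T g2) <= 4d,
    so the scaled distance always lies in [0, 1].
    """
    d = g1.shape[0]
    return np.linalg.norm(g1 - g2, "fro") / (2.0 * np.sqrt(d))

def random_rotation(d, rng):
    """Draw a rotation in SO(d) via QR of a Gaussian matrix (Haar-like)."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    q *= np.sign(np.diag(r))          # fix column signs for a well-defined Q
    if np.linalg.det(q) < 0:          # flip one column to land in SO(d)
        q[:, 0] = -q[:, 0]
    return q
```

The normalizer 2√d is the diameter of the orthogonal group in the Frobenius norm, which makes distances comparable across the tested dimensions d = 2, 10, 50.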
Tables 2–4 report the runtimes of the different algorithms, where each table corresponds to a different value of d. In order to account for a possible tradeoff between runtime and accuracy, they also report the normalized root mean squared error (NRMSE) of the estimated group elements. These tables use "NAp" (not applicable) when NRMSE is not defined (for CEMP) and "NA" (not available) when the memory usage exceeds the 48 GB limit. They report "0" NRMSE whenever the error is smaller than 10^−15.

Table 4 Runtime (in seconds) and accuracy for G = SO(50) and np = 50.
We note that SDP and Spectral have the lowest accuracy since they minimize a least squares objective function, which is not robust to outliers. SDP is the slowest algorithm in all experiments, and we could not run it on our computer (whose specifications are detailed in Section 7.1) when n ≥ 1000 or d ≥ 50. Spectral seems to be the fastest method when either d or n is sufficiently small. However, for n = 1000, CEMP and CEMP+MST are faster than Spectral when d = 10 and d = 50; in the latter case they are more than 6 times faster. The recovery error of IRLS is small in all experiments. However, it is the second slowest algorithm. It also doubles the memory usage of Spectral, since it needs to store both the original Y in (6) and the reweighted Y in each iteration. Due to this issue, IRLS exceeds the memory limit when n = 1000 and d = 50. We note that in most of the experiments for d = 10, 50, CEMP+GCW achieves NRMSE three orders of magnitude lower than that of IRLS. We also note that CEMP+MST is completely accurate in all experiments and is the fastest method when d ≥ 10 and n = 1000. Since we fix np = 50, and since the complexity of CEMP is of order O((npd)^3), the runtime of CEMP does not change much within each table, whereas the runtimes of the other algorithms clearly increase with n. We observe that the runtime of the MST post-processing is almost negligible in comparison to CEMP. Indeed, the time complexity of building the MST is O(pn^2 log n) and that of computing g_i = g_ij g_j along the spanning tree is O(nd^3).
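The MST post-processing just described can be sketched as follows for G = SO(2), under our reading of CEMP+MST: the spanning tree is taken to be minimal with respect to the estimated corruption levels s_ij, and states are then propagated by g_i = g_ij g_j from a root. The function names are our own illustration.

```python
import numpy as np
from collections import deque

def mst_states(n, edges, s, theta_obs):
    """Build a spanning tree minimizing the estimated corruption levels s_ij
    (Kruskal with union-find), then propagate theta_i = theta_ij + theta_j
    along the tree from node 0 (G = SO(2), states are angles).

    edges: list of (i, j) with i < j; s: dict edge -> corruption level;
    theta_obs: dict edge -> observed relative angle theta_ij.
    """
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = {i: [] for i in range(n)}
    for (i, j) in sorted(edges, key=lambda e: s[e]):
        ri, rj = find(i), find(j)
        if ri != rj:                   # keep the edge iff it joins components
            parent[ri] = rj
            tree[i].append(j)
            tree[j].append(i)
    # BFS: whenever j is visited before i, set theta_i = theta_j + theta_ij.
    theta = np.zeros(n)
    seen = {0}
    queue = deque([0])
    while queue:
        j = queue.popleft()
        for i in tree[j]:
            if i not in seen:
                tij = theta_obs[(i, j)] if i < j else -theta_obs[(j, i)]
                theta[i] = theta[j] + tij
                seen.add(i)
                queue.append(i)
    return theta
```

If CEMP assigns a large corruption level to the corrupted edges, the tree avoids them, and the propagation recovers the states exactly (up to a global shift) in the noiseless case.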

Conclusion
We proposed a novel message passing framework for robustly solving group synchronization problems with any compact group under adversarial corruption and sufficiently small noise. We established a deterministic exact recovery theory for finite sample size with weak assumptions on the adversarial corruption (the ratio of corrupted cycles per edge needs to be bounded by a reasonable constant). Previous works on group synchronization assumed very special generative models. Some of them only considered asymptotic recovery and they were often restricted to special groups. Somewhat similar guarantees exist for the different problem of camera location estimation, but we already mentioned their weaknesses in view of our guarantees. We also established the stability of CEMP to bounded and sub-Gaussian noise. We further guaranteed exact recovery under a previous uniform corruption model, while considering the full range of model parameters.
There are different theoretical directions that may help in improving this work. First of all, the theory for adversarial corruption assumes a uniform bound on the corruption ratio per edge, whereas in practice one should allow a small fraction of edges to be contained in many corrupted cycles. We believe that it is possible to address the latter setting with CEMP by adaptively choosing β_t for different edges. This way, instead of the current ℓ_∞ bound on the convergence, one can establish an ℓ_1 or ℓ_2 convergence bound. Nevertheless, the mathematical ideas behind guaranteeing an adaptive reweighting strategy are highly complicated and hard to verify. Instead, we prefer to clearly explain our theory with a simpler procedure.
Another future direction is to extend the theory to other classes of reweighting functions, in addition to the indicator and exponential functions. In particular, one may further consider finding an optimal sequence of reweighting functions under certain statistical models. This direction will be useful once an adaptive reweighting strategy is developed. On the other hand, when β t is the same for all edges, then Section 4.2.6 advocates for the exponential reweighting function.
We emphasized the validity of the exact recovery guarantees under UCM for any q < 1 and for various groups. However, as we clarified in Section 6.4, the sample complexity bound implied by our estimates does not match the information-theoretic one. It will be interesting to see if a more careful analysis of a CEMP-type method can fill the gap. We believe that this might be doable for the dependence of the sample complexity on q_g, but not on p (see our discussion in Section 6.4 and our numerical results in Section 7.5). We expect that the use of higher-order cycles will improve the dependence on p but worsen the dependence on q_g.
The framework of CEMP can be relevant to other settings that exploit cycle consistency information, but with some limitations. First of all, in the case of non-compact groups, one can scale the given group elements. In particular, if both the ground-truth and corrupted group ratios lie in a ball of fixed radius, then by appropriate scaling one can assume that s*_ij ≤ 1 for all ij ∈ E. The theory thus extends to non-compact groups with finite corruption models and bounded noise. If the distribution of the corruption or noise has infinite support, then our theory is invalid when the sample size approaches infinity, though it is still valid for a finite sample size.
We also claim that CEMP can be extended to the problem of camera location estimation. Since the scale information of the group ratios is missing, one should define alternative notions of cycle consistency, inconsistency measure and corruption level, such as the ones we proposed in [38]. In fact, using such notions, the AAB algorithm of the conference paper [38] is CEMP-B with s_ik(t) + s_jk(t) replaced by max{s_ik(t), s_jk(t)}. We remark that there is no significant difference between these two comparable choices. We can develop a similar, though weaker, theory for exact recovery by CEMP (or AAB) for camera location estimation. In order to keep the current work focused, we exclude this extension. The main obstacle in establishing this theory is that the metric is no longer bi-invariant, and thus d_ij,k may not equal s*_ij, even for uncorrupted cycles. A different notion of cycle consistency is also used in a histogram-based method for the identification of common lines in cryo-EM imaging [42]. We believe that the reweighting procedure of CEMP can be incorporated in [42] to reduce the rate of false positives.
We claim that cycle consistency is also essential within each cluster of vector diffusion maps (VDM) [44], which aims to solve a different problem of clustering graph nodes, for example, clustering cryo-EM images with different viewing directions [16,44]. Indeed, in VDM, powers of the connection adjacency matrix give rise to "higher-order connection affinities" between nodes i and j, obtained by the squared norm of a weighted sum of the products of group ratios g_{L_ij} along paths L_ij from i to j (see, e.g., the demonstration in Figure 4(a) of [15]). For i and j in the same cluster, cycle consistency implies that each product of group ratios g_{L_ij} is approximately g_ij (or exactly g_ij if there is no corruption). Consequently, for each ij ∈ E, the sum of g_{L_ij} over paths L_ij of fixed length (depending on the power used) is approximately a large number times g_ij and thus has a large norm; that is, the higher-order connection affinity is large. On the other hand, if i and j belong to different clusters, then the different g_{L_ij}'s may cancel or decrease the effect of each other (due to the different properties of the clusters). Consequently, the higher-order connection affinity is typically small. We note that these affinities are somewhat similar to our weighted average of cycle inconsistencies, Σ_L w_L d_G(g_L, e_G). However, unlike our reweighting strategy, VDM weighs cycles in a single step using Gaussian kernels (see (3) and (4) in [16]). We believe that a suitable reweighting strategy can be applied to VDM to improve its classification accuracy. After the submission of this work, [40] showed that for the permutation group with a very special metric, CEMP is equivalent to an iterative application of the graph connection weight (GCW) matrix, which is used in VDM in a different way (see also Section 4.2.7). Unfortunately, we find it unlikely that the ideas of [40] extend to other groups and metrics.
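The mechanism described above can be illustrated numerically for G = Z_2, where each "block" of the connection adjacency matrix is a scalar ±1. The two-cluster construction below (cycle-consistent ratios within clusters, random ratios across) is our own toy illustration of the affinity mechanism, not the VDM algorithm of [16,44].

```python
import numpy as np

def higher_order_affinity(A, t=2):
    """Entrywise squared t-th power of the connection adjacency matrix:
    a scalar stand-in for VDM's higher-order connection affinity when
    G = Z2 (each block of the connection matrix is the scalar z_ij)."""
    return np.linalg.matrix_power(A, t) ** 2

# Two clusters of 10 nodes; ratios are cycle-consistent within a cluster
# (z_ij = z_i * z_j) and random across clusters.
rng = np.random.default_rng(1)
n, half = 20, 10
z = rng.choice([-1, 1], size=n)
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        same = (i < half) == (j < half)
        A[i, j] = z[i] * z[j] if same else rng.choice([-1, 1])
        A[j, i] = A[i, j]

aff = higher_order_affinity(A, t=2)
within = np.mean([aff[i, j] for i in range(half) for j in range(half) if i != j])
across = np.mean([aff[i, j] for i in range(half) for j in range(half, n)])
# Within-cluster entries of A^2 sum ~8 coherent terms z_i z_j plus noise,
# while across-cluster entries sum only incoherent +-1 terms, so the
# within-cluster affinities dominate.
```

This mirrors the argument in the text: within a cluster the path products add coherently, across clusters they tend to cancel.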
A relevant question is the possibility of extending all results to higher-order cycles, and the usefulness of such an extension. We believe that such an extension is not difficult. As mentioned above, we expect higher-order cycles to help with sparse graphs, but possibly to degrade the ability to handle very high corruption and to significantly enlarge the computational complexity. We did not find it necessary to explore this, since datasets of structure from motion seem to have enough 3-cycles per edge to guarantee that CEMP can reliably estimate the corruption levels of most edges. In this case, the combination of CEMP with other methods can improve results for edges that may not have enough 3-cycles (see, e.g., [38,39]).
Finally, we mention that while the theory is very general and seems to apply well to the common compact groups Z_2, S_N and SO(d), specific considerations need to be addressed for special groups and special instances. For example, the problem of recovering orientations of jigsaw puzzles [24] can be solved by Z_4 synchronization, where the ideal graph is a two-dimensional lattice. In this setting, each edge is contained in at most two cycles of length at most 4. Thus, effective inference of corruption requires cycles of length greater than 4.

A.3 Proof of Proposition 3
For any ij ∈ E, let L_ij denote a path between nodes i and j. We claim that h is invertible and its inverse is h^{-1}((g_ij)_{ij∈E}) = (g_{L_ik})_{i∈[n]}, where k ∈ [n] is fixed (due to the equivalence relationship, this definition is independent of the choice of k). Using the definitions of h^{-1} and then h, basic group properties and the cycle consistency constraints for cycles in C, we obtain that h ∘ h^{-1} is the identity map. For ij ∈ E, define ĝ_ij = g_i g_j^{-1} and note that (ĝ_ij)_{ij∈E} is cycle-consistent for any cycle in G([n], E). Thus, ĝ_{L_ik} = ĝ_ik = g_i g_k^{-1}. Using this observation and the definitions of h and h^{-1}, we obtain that h^{-1} ∘ h is the identity map up to the equivalence relationship. The combination of the above two equations concludes the proof.

A.4 Proof of Proposition 4
For each ij ∈ E and L ∈ N_ij, the cycle weight computed from the shifted corruption levels {s_ij(t) + s}_{ij∈E} equals the original cycle weight: since all cycles in N_ij have equal length, the shift multiplies each unnormalized weight by the same factor, which cancels in the normalization.
1/β_{t+1}. We use similar notation and arguments as in the proof of Theorem 1. We note that 1/β_t ≥ ε(t) + δ ≥ max_{ij∈E_g} ε_ij(t) + δ ≥ max_{ij∈E_g} s_ij(t), and thus for any ij ∈ E, G_ij ⊆ A_ij(t).
The combination of (96) and (97) results in the following probability bound.
Applying the inequality 1 − (1 − λ)(1 − 2e^{−x²/2}) < λ + 2e^{−x²/2} for 0 < λ < 1/4 to the above equation yields that, with the probability indicated on the RHS of (98), for any ij ∈ E there is a subset of N_ij whose proportion is at least 1 − λ − 2exp(−x²/2), and for any element indexed by k in this subset, both s*_ik and s*_jk are bounded above by σμ + σx. We thus conclude the proof by applying Theorems 3 and 4, while replacing their parameters δ and λ with the current parameters σμ + σx and λ + 2exp(−x²/2), respectively.
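The elementary inequality invoked here can be verified by direct expansion:

```latex
1-(1-\lambda)\left(1-2e^{-x^2/2}\right)
  \;=\; \lambda + 2e^{-x^2/2} - 2\lambda\, e^{-x^2/2}
  \;<\; \lambda + 2e^{-x^2/2},
```

which holds whenever λ > 0 and e^{−x²/2} > 0, and in particular for 0 < λ < 1/4.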

A.9.3 Proof of Proposition 8
We consider three disjoint cases of k in the sum of (38). Since E(s_ij(0)) = E(d_ij,k), we compute, for each case, the conditional expectation of d_ij,k given that case.
The first case is when k ∈ G_ij, so d_ij,k = s*_ij, and thus the corresponding elements in (38) equal s*_ij. This case occurs with probability q_g^2. The second case is when k ∈ B_ij and either ik or jk (but not both) is corrupted; it occurs with probability 2q_g(1 − q_g). Without loss of generality, we assume that ik ∈ E_g and jk ∈ E_b. Using the bi-invariance of d_G, we obtain that in this case d_ij,k = d_G(g_ij g_jk g*_ki, e_G) = d_G(g*_ki g_ij g_jk, e_G). For any given g*_ki and g_ij, g*_ki g_ij g_jk ∼ Haar(G), due to the fact that g_jk ∼ Haar(G) and the definition of the Haar measure. Thus, in this case E(d_ij,k | ik ∈ E_g, jk ∈ E_b) = z_G.
The last case is when k ∈ B_ij and both ik and jk are corrupted. This case occurs with probability (1 − q_g)^2. We claim that since g_jk, g_ki ∼ Haar(G) and g_jk and g_ki are independent, g_jk g_ki ∼ Haar(G). Indeed, for any g ∈ G, g g_jk ∼ Haar(G), and furthermore, g_ki is independent of both g_jk and g g_jk. Thus, g_jk g_ki and g g_jk g_ki are identically distributed for any g ∈ G, and consequently g_jk g_ki ∼ Haar(G). Therefore, for fixed g_ij, g_ij g_jk g_ki ∼ Haar(G), and thus E(d_ij,k | ik, jk ∈ E_b) = z_G. Combining all three cases, we conclude (69).
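Weighting the three conditional expectations by the probabilities of their cases and summing, we obtain

```latex
\mathbb{E}\left(d_{ij,k}\right)
  = q_g^2\, s_{ij}^{*} \;+\; 2q_g(1-q_g)\, z_G \;+\; (1-q_g)^2\, z_G
  = q_g^2\, s_{ij}^{*} + \left(1-q_g^2\right) z_G ,
```

which, by the structure of the proof, is the identity asserted in (69).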
By first applying the obvious facts that |γ_ik|, |γ_jk| ≤ γ and s*_ik = s*_jk = 0 for k ∈ G_ij, then applying the assumption 1/(4β_0) ≥ γ, and finally the inequality x e^{−ax} ≤ 1/(ea) for x, a > 0 with x = s*_ik + s*_jk and a = β_0 q_g^2, we obtain the asserted inequality. The lemma is concluded by maximizing over ij ∈ E both the LHS and the RHS of the above inequality.

A.11 Proof of Lemma 5
We prove (82), or equivalently, ε(t) < 1/(4β_t) for all t ≥ 1, by induction. We note that ε(1) < 1/(4β_1) is an assumption of the lemma. We next show that ε(t + 1) < 1/(4β_{t+1}) if ε(t) < 1/(4β_t). By combining (48) and the induction assumption ε(t) < 1/(4β_t), and then using the definition of λ, we obtain the first displayed inequality. Combining (81) with this inequality, then applying the definition of M, and finally using β_{t+1} = β_t/r, we conclude the induction step.

A.12 Proof of Lemma 6

We arbitrarily fix ij ∈ E and β > 0. We denote m = |B_ij| and assume that k = 1, ..., m index the elements of B_ij. We use the i.i.d. random variables X_k = s*_ik + s*_jk, k = 1, ..., m, with cdf denoted (as earlier) by P. Let P and P_m denote the functionals that provide the expectation with respect to the probability and empirical measures of {X_k}_{k=1}^m, respectively. That is, Pf = ∫ f(x) dP(x) and P_m f = (1/m) Σ_{k=1}^m f(X_k). For any functional Y : F(β) → R, let ‖Y‖_{F(β)} = sup_{f∈F(β)} |Y(f)|. Given this notation, we can rewrite (83), which we need to prove, as follows.
The above formulation is similar to the following uniform version of Bennett's inequality in our setting (see Theorem 2.3 of [7]): for any t > 0, the probability bound (101) holds, where h(x) = (x + 1) log(x + 1) − x and v = V(β) + 2E‖P_m − P‖_{F(β)} (V is the same as ours). We remark that (101) holds under the condition that sup_{f_τ∈F(β)} ‖f_τ − P f_τ‖_∞ ≤ 1. This condition holds in our setting since 0 ≤ f_τ(x) ≤ 1 for any τ ≥ 0 and x ≥ 0. In order to conclude (100) from (101), we formulate the following lemma, which provides an upper bound for E‖P_m − P‖_{F(β)} in (101). We prove it in Section A.12.1 below.
By letting t = V(β) + 2c_1 log m/m in (101) and c = 3c_1 in (100), and applying Lemma 7, we conclude that the event of (100) contains the event of (101). It thus remains to show that the probability bound in (100) controls the one in (101). This follows from the facts that t/v > 1 (which follows by direct application of Lemma 7) and that h(x) > x/3 when x ≥ 1 (a direct calculus exercise).
We note that the cdf of min_{1≤k≤m} X_k is 1 − (1 − P(x))^m. Combining this observation, the fact that ε < 1 and (106), then applying basic inequalities, using the notation a_+ := max(a, 0), and in particular a final application of Jensen's inequality with the concave function √x, we obtain that E ∫_0^{2σ_m} √(log N(F(β); L_2(P_m); ε)) dε is bounded by the RHS of (107). Next, we give an upper bound for the first term in the RHS of (107), while considering the two cases of Theorem 7. If X_k, 1 ≤ k ≤ m, is supported on [a, ∞) with a ≳ 1/m, then the first bound of (108) follows. If, on the other hand, the quantile function Q(x) is differentiable and Q′(x)/Q(x) ≲ 1/x for x < P(1), then we substitute u = 1 − P(x) and obtain the bound of (109). Combining (107)–(109), we conclude (104) and thus Lemma 7.