
Localization from Incomplete Noisy Distance Measurements


Abstract

We consider the problem of positioning a cloud of points in the Euclidean space ℝ^d, using noisy measurements of a subset of pairwise distances. This task has applications in various areas, such as sensor network localization and reconstruction of protein conformations from NMR measurements. It is also closely related to dimensionality reduction problems and manifold learning, where the goal is to learn the underlying global geometry of a data set using local (or partial) metric information. Here we propose a reconstruction algorithm based on semidefinite programming. For a random geometric graph model and uniformly bounded noise, we provide a precise characterization of the algorithm’s performance: in the noiseless case, we find a radius r_0 beyond which the algorithm reconstructs the exact positions (up to rigid transformations). In the presence of noise, we obtain upper and lower bounds on the reconstruction error that match up to a factor depending only on the dimension d and the average degree of the nodes in the graph.
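For readers who want to experiment, the following is a minimal sketch of an SDP-based reconstruction in the spirit of the relaxations of [1, 6]; it is not the paper’s Algorithm 1. The unknown Gram matrix of the centered positions is constrained to be positive semidefinite, the observed squared distances are fitted in ℓ1 norm, and coordinates are recovered from the top d eigenvectors. The instance size, radius r, and solver choice are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Synthetic instance: n points in [-0.5, 0.5]^d, distances observed only for pairs within radius r.
n, d, r = 40, 2, 0.35
rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(n, d))
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if np.linalg.norm(X[i] - X[j]) <= r]

# Gram matrix of the (centered) configuration: Q is PSD and annihilates the all-ones vector.
Q = cp.Variable((n, n), PSD=True)
constraints = [cp.sum(Q) == 0]          # u^T Q u = 0 together with PSD forces Q u = 0

# Fit the observed squared distances ||x_i - x_j||^2 = Q_ii + Q_jj - 2 Q_ij in l1 norm.
residuals = [cp.abs(Q[i, i] + Q[j, j] - 2 * Q[i, j] - np.linalg.norm(X[i] - X[j]) ** 2)
             for (i, j) in edges]
cp.Problem(cp.Minimize(cp.sum(cp.hstack(residuals))), constraints).solve()

# Recover d-dimensional coordinates from the top-d spectral components of the solution.
w, V = np.linalg.eigh(Q.value)
X_hat = V[:, -d:] * np.sqrt(np.maximum(w[-d:], 0.0))
print("largest eigenvalues of Q:", np.round(w[-4:], 4))
```

The estimate X_hat agrees with the true configuration only up to a rigid transformation, so any error metric should first align the two point clouds (e.g., by a Procrustes step).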


Notes

  1. Estimates of this type will be repeatedly proved in what follows.

References

  1. A.Y. Alfakih, A. Khandani, H. Wolkowicz, Solving Euclidean distance matrix completion problems via semidefinite programming, Comput. Optim. Appl. 12, 13–30 (1999).


  2. L. Asimow, B. Roth, The rigidity of graphs, Trans. Am. Math. Soc. 245, 279–289 (1978).


  3. J. Aspnes, T. Eren, D.K. Goldenberg, A.S. Morse, W. Whiteley, Y.R. Yang, B.D.O. Anderson, P.N. Belhumeur, A theory of network localization, IEEE Trans. Mob. Comput. 5(12), 1663–1678 (2006).


  4. M. Belkin, P. Niyogi, Laplacian eigenmaps for dimensionality reduction and data representation, Neural Comput. 15, 1373–1396 (2002).


  5. M. Bernstein, V. de Silva, J. Langford, J. Tenenbaum, Graph approximations to geodesics on embedded manifolds. Technical Report, Stanford University, Stanford, CA, 2000.

  6. P. Biswas, Y. Ye, Semidefinite programming for ad hoc wireless sensor network localization, in Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks, IPSN ’04 (ACM, New York, 2004), pp. 46–54.


  7. S.P. Boyd, A. Ghosh, B. Prabhakar, D. Shah, Mixing times for random walks on geometric random graphs, in Proceedings of the 7th Workshop on Algorithm Engineering and Experiments and the 2nd Workshop on Analytic Algorithmics and Combinatorics, ALENEX/ANALCO 2005, Vancouver, BC, Canada, 22 January 2005 (SIAM, Philadelphia, 2005), pp. 240–249.


  8. S. Butler, Eigenvalues and structures of graphs. Ph.D. thesis, University of California, San Diego, CA, 2008.

  9. R. Connelly, Generic global rigidity, Discrete Comput. Geom. 33, 549–563 (2005).


  10. T. Cox, M. Cox, Multidimensional Scaling, Monographs on Statistics and Applied Probability, vol. 88 (Chapman & Hall, London, 2001).


  11. P. Diaconis, L. Saloff-Coste, Comparison theorems for reversible Markov chains, Ann. Appl. Probab. 3(3), 696–730 (1993).


  12. D.L. Donoho, C. Grimes, Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data, Proc. Natl. Acad. Sci. USA 100(10), 5591–5596 (2003).


  13. S.J. Gortler, A.D. Healy, D.P. Thurston, Characterizing generic global rigidity, Am. J. Math. 132, 897–939 (2010).


  14. F. Lu, S.J. Wright, G. Wahba, Framework for kernel regularization with application to protein clustering, Proc. Natl. Acad. Sci. USA 102(35), 12332–12337 (2005).


  15. G. Mao, B. Fidan, B.D.O. Anderson, Wireless sensor network localization techniques, Comput. Netw. ISDN Syst. 51, 2529–2553 (2007).


  16. S. Oh, A. Karbasi, A. Montanari, Sensor network localization from local connectivity: performance analysis for the MDS-MAP algorithm, in IEEE Information Theory Workshop 2010 (ITW 2010) (2010).


  17. N. Patwari, J.N. Ash, S. Kyperountas, R. Moses, N. Correal, Locating the nodes: cooperative localization in wireless sensor networks, IEEE Signal Process. Mag. 22, 54–69 (2005).


  18. M. Penrose, Random Geometric Graphs (Oxford University Press, Oxford, 2003).


  19. L.K. Saul, S.T. Roweis, Y. Singer, Think globally, fit locally: unsupervised learning of low dimensional manifolds, J. Mach. Learn. Res. 4, 119–155 (2003).


  20. D. Shamsi, Y. Ye, N. Taheri, On sensor network localization using SDP relaxation. arXiv:1010.2262 (2010).

  21. A. Singer, A remark on global positioning from local distances, Proc. Natl. Acad. Sci. USA 105(28), 9507–9511 (2008).


  22. A.M.-C. So, Y. Ye, Theory of semidefinite programming for sensor network localization, in Symposium on Discrete Algorithms (2005), pp. 405–414.


  23. J.B. Tenenbaum, V. Silva, J.C. Langford, A global geometric framework for nonlinear dimensionality reduction, Science 290(5500), 2319–2323 (2000).


  24. K.Q. Weinberger, L.K. Saul, An introduction to nonlinear dimensionality reduction by maximum variance unfolding, in Proceedings of the 21st National Conference on Artificial Intelligence, 2 (AAAI Press, Menlo Park, 2006), pp. 1683–1686.


  25. Z. Zhu, A.M.-C. So, Y. Ye, Universal rigidity: towards accurate and efficient localization of wireless networks, in IEEE International Conference on Computer Communications (2010), pp. 2312–2320.



Acknowledgements

Adel Javanmard is supported by the Caroline and Fabian Pease Stanford Graduate Fellowship. This work was partially supported by NSF CAREER award CCF-0743978, NSF grant DMS-0806211, and AFOSR grant FA9550-10-1-0360. The authors thank the anonymous reviewers for their insightful comments.

Author information


Corresponding author

Correspondence to Andrea Montanari.

Additional information

Communicated by Herbert Edelsbrunner.

Appendices

Appendix A: Proof of Remark 2.1

For 1 ≤ j ≤ n, let the random variable z_j be 1 if node j is in region \(\mathcal{R}\) and 0 otherwise. The variables {z_j} are i.i.d. Bernoulli with success probability \(V(\mathcal{R})\). Also, \(n(\mathcal{R}) = \sum_{j =1}^{n} z_{j}\).

By application of the Chernoff bound we obtain

$$ \mathbb{P}\Biggl( \Biggl| \sum_{j =1}^n z_j - n V(\mathcal{R}) \Biggr| \geq \delta nV(\mathcal{R}) \Biggr) \leq 2 \exp \biggl( {-} \frac{\delta^2 n V(\mathcal{R})}{2} \biggr). $$

Choosing \(\delta= \sqrt{\frac{2c\log n}{n V(\mathcal {R})}}\), the right-hand side becomes \(2\exp(-c\log n)=2/n^{c}\). Therefore, with probability at least \(1-2/n^{c}\),

$$ n(\mathcal{R}) \in nV(\mathcal{R}) + \bigl[- \sqrt{2c n V( \mathcal {R})\log n}, \sqrt{2c n V(\mathcal{R}) \log n}\,\bigr]. $$
(35)
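As a quick numerical illustration of Eq. (35), the hypothetical snippet below draws n(\mathcal{R}) from its Binomial(n, V(\mathcal{R})) distribution and estimates how often it leaves the stated interval; the values of n, V(\mathcal{R}), and c are illustrative and the check is not part of the proof.

```python
import numpy as np

# Illustrative values; n(R) ~ Binomial(n, V(R)).
n, vol, c, trials = 2000, 0.05, 2.0, 100_000
rng = np.random.default_rng(1)

counts = rng.binomial(n, vol, size=trials)
half_width = np.sqrt(2 * c * n * vol * np.log(n))       # interval half-width from Eq. (35)
outside = np.mean(np.abs(counts - n * vol) > half_width)
print(f"empirical exceedance: {outside:.1e}   vs   bound 2/n^c = {2 / n ** c:.1e}")
```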

Appendix B: Proof of Proposition 5.1

We apply the bin-covering technique. Cover the space [−0.5,0.5]^d with a set of nonoverlapping hypercubes (bins) whose side lengths are δ. Thus, there is a total of m = ⌈1/δ⌉^d bins, each of volume δ^d. In formula, bin (j_1,…,j_d) is the hypercube [(j_1−1)δ, j_1δ)×⋯×[(j_d−1)δ, j_dδ), for j_k ∈ {1,…,⌈1/δ⌉} and k ∈ {1,…,d}. Denote the set of bins by {B_k}_{1≤k≤m}. Assume n nodes are deployed uniformly at random in [−0.5,0.5]^d. We claim that if δ ≥ (c log n/n)^{1/d}, where c>1, then w.h.p., every bin contains at least d+1 nodes.

Fix k and let the random variable ξ_l be 1 if node l is in bin B_k and 0 otherwise. The variables {ξ_l}_{1≤l≤n} are i.i.d. Bernoulli with success probability 1/m. Also, \(\xi= \sum_{l=1}^{n} \xi_{l}\) is the number of nodes in bin B_k. By the Markov inequality, \(\mathbb{P}(\xi\le d) \le \mathbb{E}\{Z^{\xi-d}\}\), for any 0 ≤ Z ≤ 1. Choosing Z = md/n, we have

By applying a union bound over all the m bins, we get the desired result.

Now take \(\delta= r/(4\sqrt{d})\). Given that \(r \ge4c \sqrt{d} (\log n / n)^{1/d}\), for some c>1, every bin contains at least d+1 nodes, w.h.p. Note that for any two nodes x_i, x_j ∈ [−0.5,0.5]^d with ∥x_i − x_j∥ ≤ r/2, the point (x_i + x_j)/2 (the midpoint of the line segment between x_i and x_j) is contained in one of the bins, say B_k. For any point s in this bin,

$$ \|s - x_i\| \leq \biggl\|s - \frac{x_i + x_j}{2} \biggr\|+ \biggl\|\frac {x_i + x_j}{2} - x_i \biggr\| \leq \frac{r}{4} + \frac{r}{4} = \frac{r}{2}. $$

Similarly, ∥s − x_j∥ ≤ r/2. Since s ∈ B_k was arbitrary, \(\mathcal{C}_{i}\cap \mathcal{C}_{j}\) contains all the nodes in B_k. This implies the thesis, since B_k contains at least d+1 nodes.
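A minimal numerical sketch of the bin-covering argument (the values of n, d, and c below are illustrative assumptions): it partitions [−0.5,0.5]^d into bins of side roughly δ and reports the least populated bin, which should contain at least d+1 nodes.

```python
import numpy as np

# Illustrative parameters; Proposition 5.1 requires delta >= (c log n / n)^(1/d) with c > 1.
n, d, c = 5000, 2, 3.0
delta = (c * np.log(n) / n) ** (1 / d)
rng = np.random.default_rng(2)
X = rng.uniform(-0.5, 0.5, size=(n, d))

# Tile [-0.5, 0.5]^d with m bins per axis of side ~delta (side rounded so the bins tile exactly).
m = int(np.floor(1 / delta))
idx = np.minimum(((X + 0.5) * m).astype(int), m - 1)   # bin index of each point along each axis
counts = np.bincount(idx @ (m ** np.arange(d)), minlength=m ** d)
print("bins:", m ** d, " least populated bin:", counts.min(), " target: at least d+1 =", d + 1)
```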

Appendix C: Proof of Proposition 5.2

Let \(m_{k} = |\mathcal{Q}_{k}|\) and define the matrix R k as follows:

$$ R_k = \bigl[ x_{\mathcal{Q}_k}^{(1)} \bigl| \cdots\bigr| x_{\mathcal{Q}_k}^{(d)} \bigl| u_{\mathcal{Q}_k} \bigr]^T \in {\mathbb{R}}^{(d+1) \times m_k}. $$

Compute an orthonormal basis \(w_{k,1},\ldots,w_{k,m_{k} - d-1} \in {\mathbb{R}}^{m_{k}}\) for the null space of R k . Then

$$ \varOmega_k = P^{\perp}_{ \langle u_{\mathcal{Q}_k},x^{(1)}_{\mathcal{Q}_k},\ldots ,x^{(d)}_{\mathcal{Q}_k}\rangle} = \sum _{l=1}^{m_k -d-1} w_{k,l} w_{k,l}^T. $$

Let \(\hat{w}_{k,l} \in {\mathbb{R}}^{n}\) be the vector obtained from w k,l by padding it with zeros. Then, \(\hat{\varOmega}_{k} = \sum_{l=1}^{m_{k} -d-1} \hat{w}_{k,l} \hat{w}_{k,l}^{T}\). In addition, the (i,j) entry of \(\hat{\varOmega}_{k}\) is nonzero only if \(i,j \in \mathcal{Q}_{k}\). Any two nodes in \(\mathcal{Q}_{k}\) are connected in G (recall that \(\mathcal{Q}_{k}\) is a clique of G). Hence, \(\hat{\varOmega}_{k}\) is zero outside E. Since \(\varOmega= \sum_{\mathcal{Q}_{k} \in \textsf{cliq}(G)} \hat{\varOmega}_{k}\), the matrix Ω is also zero outside E.
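The clique contributions are straightforward to form numerically. The sketch below uses illustrative random clique coordinates and SciPy’s null_space routine to build Ω_k as in the display above, and checks that it annihilates the clique’s coordinate vectors and the all-ones vector.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
d, m_k = 2, 7                                  # dimension and clique size (illustrative)
Xq = rng.uniform(-0.5, 0.5, size=(m_k, d))     # coordinates of the nodes in the clique Q_k
u = np.ones(m_k)

# R_k stacks the coordinate vectors x^(1),...,x^(d) restricted to Q_k and the all-ones vector.
R_k = np.vstack([Xq.T, u])                     # shape (d+1, m_k)
W = null_space(R_k)                            # orthonormal basis of its null space
Omega_k = W @ W.T                              # projector onto the orthogonal complement

# Omega_k kills the clique's coordinate vectors and u, and has rank m_k - d - 1.
print(np.allclose(Omega_k @ Xq, 0), np.allclose(Omega_k @ u, 0))
print("rank(Omega_k) =", np.linalg.matrix_rank(Omega_k), " expected:", m_k - d - 1)
```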

Notice that for any v∈〈x (1),…,x (d),u〉,

$$ \varOmega v = \biggl(\sum_{\mathcal{Q}_k \in \textsf{cliq}(G)} \hat{ \varOmega}_k\biggr) v =\sum_{\mathcal{Q}_k \in \textsf{cliq}(G)} \varOmega_k v_{\mathcal{Q}_k} = 0. $$

So far we have proved that Ω is a stress matrix for the framework. Clearly, Ω⪰0, since \(\hat{\varOmega}_{k} \succeq0\) for all k. We only need to show that \({\rm rank}(\varOmega) = n-d-1\). Since Ker(Ω)⊇〈x (1),…,x (d),u〉, we have rank(Ω)≤nd−1. Define

$$ \tilde{\varOmega} = \sum_{\mathcal{Q}_k \in\{\mathcal{C}_1,\ldots,\mathcal{C}_n\}} \hat { \varOmega}_k. $$

Since \(\varOmega\succeq\tilde{\varOmega} \succeq0\), it suffices to show that \({\rm rank}(\tilde{\varOmega}) \ge n -d -1\). For an arbitrary vector \(v \in\text{Ker}(\tilde{\varOmega})\),

$$ v^T \tilde{\varOmega} v = \sum_{i=1}^n \bigl\| P^{\perp}_{\langle u_{\mathcal{C}_i},x^{(1)}_{\mathcal{C}_i},\ldots,x^{(d)}_{\mathcal{C}_i}\rangle} v_{\mathcal{C}_i} \bigr\|^2 = 0, $$

which implies that \(v_{\mathcal{C}_{i}} \in\langle u_{\mathcal{C}_{i}},x^{(1)}_{\mathcal{C}_{i}},\ldots,x^{(d)}_{\mathcal{C}_{i}} \rangle\). Hence, the vector \(v_{\mathcal{C}_{i}}\) can be written as

$$ v_{\mathcal{C}_i} = \sum_{\ell=1}^d \beta^{(\ell)}_{i} x^{(\ell )}_{\mathcal{C}_i} + \beta^{(d+1)}_{i} u_{\mathcal{C}_{i}}, $$

for some scalars \(\beta^{(\ell)}_{i}\). Note that for any two nodes i and j, the vector \(v_{\mathcal{C}_{i} \cap\mathcal{C}_{j}}\) has the following two representations:

$$ v_{\mathcal{C}_i \cap\mathcal{C}_j} = \sum_{\ell=1}^d \beta^{(\ell)}_{i} x^{(\ell)}_{\mathcal{C}_i \cap \mathcal{C}_j} + \beta^{(d+1)}_{i} u_{\mathcal{C}_{i} \cap\mathcal{C}_j}= \sum _{\ell=1}^d \beta^{(\ell)}_{j} x^{(\ell)}_{\mathcal{C}_i \cap \mathcal{C}_j} + \beta^{(d+1)}_{j} u_{\mathcal{C}_{i} \cap\mathcal{C}_j}. $$

Therefore,

$$ \sum_{\ell=1}^d \bigl( \beta^{(\ell)}_{i} - \beta^{(\ell)}_{j}\bigr) x^{(\ell)}_{\mathcal{C}_i \cap\mathcal{C}_j} + \bigl(\beta^{(d+1)}_{i} - \beta^{(d+1)}_{j}\bigr) u_{\mathcal{C}_{i} \cap \mathcal{C}_j} = 0. $$
(36)

According to Proposition 5.1, w.h.p., for any two nodes i and j with ∥x_i − x_j∥ ≤ r/2, we have \(|\mathcal {C}_{i} \cap\mathcal{C}_{j}| \geq d+1\). Thus, the vectors \(x^{(\ell)}_{\mathcal{C}_{i} \cap\mathcal{C}_{j}}\), \(u_{\mathcal{C}_{i} \cap \mathcal{C}_{j}}\), 1 ≤ ℓ ≤ d, are linearly independent, since the configuration is generic. More specifically, let Y be the matrix with d+1 columns \(\{x^{(\ell)}_{\mathcal{C}_{i} \cap\mathcal{C}_{j}}\}_{\ell=1}^{d}\), \(u_{\mathcal{C}_{i} \cap\mathcal{C}_{j}}\). Then, det(Y^T Y) is a nonzero polynomial in the coordinates \(x^{(\ell)}_{k}\), \(k \in\mathcal{C}_{i} \cap\mathcal{C}_{j}\), with integer coefficients. Since the configuration of the points is generic, det(Y^T Y) ≠ 0, yielding the linear independence of the columns of Y. Consequently, Eq. (36) implies that \(\beta^{(\ell)}_{i} = \beta^{(\ell)}_{j}\) for any two adjacent nodes in G(n,r/2). Given that \(r > 10 \sqrt{d} (\log n / n)^{1/d}\), the graph G(n,r/2) is connected w.h.p., and thus the coefficients \(\beta^{(\ell)}_{i}\) are the same for all i. Dropping the subscript i, we obtain

$$ v = \sum_{\ell=1}^d \beta^{(\ell)} x^{(\ell)} + \beta^{(d+1)} u, $$

proving that \(\text{Ker}(\tilde{\varOmega}) \subseteq\langle u, x^{(1)},\ldots, x^{(d)}\rangle\), and thus \(\text{rank}(\tilde {\varOmega}) \ge n -d -1\).

Appendix D: Proof of Claim 5.2

Let \(\tilde{G} = (V,\tilde{E})\), where \(\tilde{E} = \{(i,j): d_{ij} \leq r/2\}\). The Laplacian of \(\tilde{G}\) is denoted by \(\tilde {\mathcal{L}}\). We first show that, for some constant C,

$$ \tilde{\mathcal{L}} \preceq C \sum _{k=1}^n P_{u_{\mathcal{C}_k}}^{\perp}. $$
(37)

Note that

$$ \sum_{k=1}^{n} P_{u_{\mathcal{C}_k}}^{\perp} = \sum_{k=1}^{n} \frac{1}{|\mathcal{C}_k|} \sum_{i<j:\, i,j \in \mathcal{C}_k} M_{ij} \succeq \sum_{k=1}^{n} \frac{1}{|\mathcal{C}_k|} \sum_{(i,j) \in \tilde{E}:\, i,j \in \mathcal{C}_k} M_{ij}. $$

The inequality follows from the fact that \(M_{ij} \succeq 0\), ∀i,j. By applying Remark 2.1, we have \(|\mathcal{C}_{k}| \leq C_{1}(nr^{d})\) and \(|\mathcal{C}_{i} \cap \mathcal{C}_{j}| \geq C_{2}nr^{d}\), for some constants C_1 and C_2 (depending on d) and ∀k,i,j. Therefore,

$$ \sum_{k=1}^{n} P_{u_{\mathcal{C}_k}}^{\perp} \succeq\sum_{(i,j) \in\tilde {E}} \frac{C_2}{C_1} M_{ij} = \frac{C_2}{C_1} \tilde{\mathcal{L}}. $$

Next we prove that, for some constant C,

$$ \mathcal{L}\preceq C \tilde{\mathcal{L}}. $$
(38)

To this end, we use the Markov chain comparison technique.

A path between two nodes i and j, denoted by γ ij , is a sequence of nodes (i,v 1,…,v t−1,j), such that the consecutive pairs are connected in \(\tilde{G}\). Let γ=(γ ij )(i,j)∈E denote a collection of paths for all pairs connected in G, and let Γ be the collection of all possible γ. Consider the probability distribution induced on Γ by choosing paths between all connected pairs in G in the following way.

Cover the space [−0.5,0.5]d with bins of side length \(r/(4\sqrt {d})\) (similar to the proof of Proposition 5.1. As discussed there, w.h.p., every bin contains at least one node). Paths are selected independently for different node pairs. Consider a particular pair (i,j) connected in G. Select γ ij as follows. If i and j are in the same bin or in the neighboring bins, then γ ij =(i,j). Otherwise, consider all bins intersecting the line joining i and j. From each of these bins, choose a node v k uniformly at random. Then the path γ ij is (i,v 1,…,j).
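A sketch of this path-selection rule for a single pair connected in G (the parameters are illustrative, and the bins intersecting the segment are approximated by sampling points along it):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, r = 2000, 2, 0.2
delta = r / (4 * np.sqrt(d))                  # bin side length, as in the proof of Proposition 5.1
X = rng.uniform(-0.5, 0.5, size=(n, d))
bin_of = lambda p: tuple(((p + 0.5) / delta).astype(int))

def select_path(i, j):
    """Path from i to j through one random node of each bin met by the segment x_i -> x_j."""
    bi, bj = bin_of(X[i]), bin_of(X[j])
    if max(abs(a - b) for a, b in zip(bi, bj)) <= 1:     # same or neighboring bins
        return [i, j]
    bins = []                                            # bins intersecting the segment,
    for t in np.linspace(0.0, 1.0, 200):                 # approximated by a fine sampling of it
        b = bin_of((1 - t) * X[i] + t * X[j])
        if b not in bins:
            bins.append(b)
    path = [i]
    for b in bins[1:-1]:                                 # skip the endpoints' own bins
        members = [k for k in range(n) if bin_of(X[k]) == b]
        if members:
            path.append(int(rng.choice(members)))
    return path + [j]

i, j = next((i, j) for i in range(n) for j in range(i + 1, n)
            if np.linalg.norm(X[i] - X[j]) <= r)         # a pair connected in G
print("pair distance:", round(float(np.linalg.norm(X[i] - X[j])), 3), " path:", select_path(i, j))
```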

In the following, we compute the average number of paths passing through each edge in \(\tilde{E}\). The total number of paths is |E| = Θ(n^2 r^d). Also, since any connected pair in G is within distance r and the side length of the bins is O(r), there are O(1) bins intersecting a straight line joining a pair (i,j) ∈ E. Consequently, each path contains O(1) edges. The total number of bins is Θ(r^{−d}). Hence, by symmetry, the number of paths passing through each bin is Θ(n^2 r^{2d}). Consider a particular bin B and the paths passing through it. All these paths are equally likely to choose any of the nodes in B. Therefore, the average number of paths containing a particular node in B, say i, is Θ(n^2 r^{2d}/(nr^d)) = Θ(nr^d). In addition, the average number of edges between i and the neighboring bins is Θ(nr^d). Due to symmetry, the average number of paths containing an edge incident on i is Θ(1). Since this is true for all nodes i, the average number of paths containing an edge is Θ(1).

Now, let v ∈ ℝ^n be an arbitrary vector. For a directed edge \(e \in\tilde{E}\) from i to j, define v(e) = v_i − v_j. Also, let |γ_ij| denote the length of the path γ_ij. We have

$$ v^T \mathcal{L}v = \sum_{(i,j) \in E} (v_i - v_j)^2 = \sum_{(i,j) \in E} \biggl(\sum_{e \in \gamma_{ij}} v(e)\biggr)^2 \leq \sum_{(i,j) \in E} |\gamma_{ij}| \sum_{e \in \gamma_{ij}} v(e)^2 \leq \gamma_{\max} \sum_{e \in \tilde{E}} b(\gamma,e)\, v(e)^2, $$
(39)

where γ_max is the maximum path length and b(γ,e) denotes the number of paths passing through e under γ = (γ_ij). The first inequality follows from the Cauchy–Schwarz inequality. Since all paths have length O(1), we have γ_max = O(1). Also, note that in Eq. (39), b(γ,e) is the only term that depends on the paths. Therefore, we can replace b(γ,e) with its expectation under the distribution on Γ, i.e., b(e) = ∑_{γ∈Γ} ℙ(γ)b(γ,e). We proved above that the average number of paths passing through an edge is Θ(1). Hence, \(\max_{e \in\tilde{E}} b(e) = \varTheta(1)\). Using these bounds in Eq. (39), we obtain

$$ v^T \mathcal{L}v \leq C \sum _{e \in\tilde{E}} v(e)^2 = C v^T \tilde{ \mathcal{L}}v, $$
(40)

for some constant C and all vectors v∈ℝn. Combining Eqs. (37) and (40) implies the thesis.

Appendix E: Proof of Claim 5.3

In Remark 2.1, let region \(\mathcal{R}\) be the r/2-neighborhood of node i, and take c=2. Then, with probability at least 1 − 2/n^2,

$$ |\mathcal{C}_i| \in np_d + [- \sqrt{4np_d \log n}, \sqrt{4np_d \log n}], $$
(41)

where p_d = K_d (r/2)^d.

Similarly, with probability at least 1 − 2/n^2,

$$ |\tilde{\mathcal{C}_i}| \in n\tilde{p}_d + [- \sqrt{4n\tilde{p}_d \log n}, \sqrt{4n\tilde{p}_d \log n}], $$
(42)

where \(\tilde{p}_{d} = K_{d}(\frac{r}{2})^{d} (\frac{1}{2}+\frac {1}{100})^{d}\). By applying a union bound over all 1 ≤ i ≤ n, Eqs. (41) and (42) hold for any i, with probability at least 1 − 4/n. Given that \(r > 10 \sqrt{d} (\log n / n)^{\frac{1}{d}}\), the result follows after some algebraic manipulations.

Appendix F: Proof of Claim 5.4

Part (i)

Let \(\tilde{G} = (V,\tilde{E})\), where \(\tilde{E} = \{ (i,j): d_{ij} \leq r/2\}\). Also, let \(A_{\tilde{G}}\) and \(A_{G^{*}}\) respectively denote the adjacency matrices of the graphs \(\tilde{G}\) and G^*. Therefore, \(A_{\tilde{G}} \in {\mathbb{R}}^{n \times n}\) and \(A_{G^{*}} \in {\mathbb{R}}^{N \times N}\), where N = |V(G^*)| = n(m+1). From the definition of G^*, we have

$$ A_{G^*} = A_{\tilde{G}} \otimes B, $$
(43)

where ⊗ stands for the Kronecker product. Hence,

$$ \max_{i \in V(G^*)} \text{deg}_{G^*}(i) = (m+1) \max_{i \in V(\tilde {G})} \text{deg}_{\tilde{G}}(i). $$

Since the degrees of the nodes in \(\tilde{G}\) are bounded by C(nr^d) for some constant C, and m ≤ C(nr^d) (by definition of m in Claim 5.3), we have that the degrees of the nodes in G^* are bounded by C(nr^d)^2, for some constant C.

Part (ii)

Let \(D_{\tilde{G}} \in {\mathbb{R}}^{n \times n}\) be the diagonal matrix with the degrees of the nodes in \(\tilde{G}\) on its diagonal. Define \(D_{G^{*}} \in {\mathbb{R}}^{N \times N}\) analogously. From Eq. (43), it is easy to see that

$$ \bigl(D_{\tilde{G}}^{-1/2} A_{\tilde{G}} D_{\tilde{G}}^{-1/2} \bigr) \otimes \biggl(\frac{1}{m+1} B\biggr) = D^{-1/2}_{G^*} A_{G^*} D^{-1/2}_{G^*}. $$

Now for any two matrices \(\mathcal{A}\) and \(\mathcal{B}\), the eigenvalues of \(\mathcal{A} \otimes\mathcal{B}\) are all products of eigenvalues of \(\mathcal{A}\) and \(\mathcal{B}\). The matrix 1/(m+1)B has eigenvalues 0, with multiplicity m, and 1, with multiplicity one. Thereby,

$$ \sigma_{\min}\bigl(I - D^{-1/2}_{G^*} A_{G^*}D^{-1/2}_{G^*}\bigr) \geq\min\bigl\{ \sigma_{\min}\bigl(I - D_{\tilde{G}}^{-1/2} A_{\tilde{G}} D_{\tilde {G}}^{-1/2}\bigr), 1\bigr\} \geq Cr^2, $$

where the last step follows from Remark 2.2. Due to the result of [8] (Theorem 4), we obtain

$$ \sigma_{\min} (\mathcal{L}_{G^*}) \geq d_{\min,G^*} \sigma_{\min}(\mathcal{L}_{n,G^*}), $$

where \(d_{\min,G^{*}}\) denotes the minimum degree of the nodes in G^*, and \(\mathcal{L}_{n,G^{*}} = I - D^{-1/2}_{G^{*}} A_{G^{*}}D^{-1/2}_{G^{*}}\) is the normalized Laplacian of G^*. Since \(d_{\min,G^{*}} = (m+1) d_{\min ,\tilde{G}} \geq C(nr^{d})^{2}\), for some constant C, we obtain

$$ \sigma_{\min}(\mathcal{L}_{G^*}) \geq C\bigl(nr^d \bigr)^2r^2, $$

for some constant C.
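The spectral fact invoked above, that the eigenvalues of a Kronecker product are the pairwise products of the factors’ eigenvalues, is easy to confirm numerically on small symmetric matrices (illustrative sizes below):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
B = rng.standard_normal((3, 3)); B = (B + B.T) / 2

eig_kron = np.sort(np.linalg.eigvalsh(np.kron(A, B)))
eig_prod = np.sort(np.multiply.outer(np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)).ravel())
print(np.allclose(eig_kron, eig_prod))   # True: spec(A (x) B) = {a*b : a in spec(A), b in spec(B)}
```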

Appendix G: Proof of Claim 5.5

Fix a pair (i,j) ∈ E(G^*). Let \(m_{ij} = |\mathcal{Q}_{i} \cap \mathcal{Q}_{j}|\), and without loss of generality assume that the nodes in \(\mathcal{Q}_{i} \cap \mathcal{Q}_{j}\) are labeled with {1,…,m_ij}. Let \(z^{(\ell)} = \tilde{x}^{(\ell)}_{\mathcal{Q}_{i} \cap \mathcal{Q}_{j}}\), for 1 ≤ ℓ ≤ d, and let \(z_{k} = (z_{k}^{(1)},\ldots, z_{k}^{(d)})\), for 1 ≤ k ≤ m_ij. Define the matrix M^{(ij)} ∈ ℝ^{d×d} as \(M^{(ij)}_{\ell,\ell'} = \langle z^{(\ell)}, z^{(\ell')} \rangle\), for 1 ≤ ℓ, ℓ' ≤ d. Finally, let \(\beta_{ij} = (\beta^{(1)}_{j} - \beta^{(1)}_{i},\ldots,\beta^{(d)}_{j} - \beta^{(d)}_{i}) \in {\mathbb{R}}^{d}\). Then,

$$ \Biggl\|\sum_{\ell=1}^d \bigl(\beta^{(\ell)}_j - \beta^{(\ell)}_i \bigr) \tilde{x}^{(\ell)}_{\mathcal{Q}_i \cap \mathcal{Q}_j}\Biggr\|^2 = \beta_{ij}^T M^{(ij)} \beta_{ij} \ge\sigma_{\min} \bigl(M^{(ij)}\bigr) \| \beta_{ij}\|^2. $$
(44)

In the following, we lower bound σ_min(M^{(ij)}). Notice that

$$ M^{(ij)} = \sum _{k=1}^{m_{ij}} z_k z_k^T = \sum_{k=1}^{m_{ij}} \bigl\{ z_kz_k^T - \mathbb{E}\bigl(z_kz_k^T \bigr)\bigr\} + \sum_{k=1}^{m_{ij}} \mathbb{E}\bigl(z_kz_k^T\bigr). $$
(45)

We first lower bound the quantity \(\sigma_{\min}(\sum_{k=1}^{m_{ij}} \mathbb{E}(z_{k} z_{k}^{T}))\). Let S ∈ ℝ^{d×d} be an orthogonal matrix that aligns the line segment between x_i and x_j with e_1. Now, let \(\hat{z}_{k} = S z_{k}\) for 1 ≤ k ≤ m_ij. Then,

$$ \sum_{k=1}^{m_{ij}} \mathbb{E}\bigl(z_k z_k^T\bigr) = \sum_{k=1}^{m_{ij}} S^T \mathbb{E}\bigl(\hat {z}_k \hat{z}_k^T \bigr) S. $$

The matrix \(\mathbb{E}(\hat{z}_{k} \hat{z}_{k}^{T})\) is the same for all 1 ≤ k ≤ m_ij. Further, it is a diagonal matrix whose diagonal entries are bounded from below by C_1 r^2, for some constant C_1. Therefore, \(\sigma_{\min}(\sum_{k=1}^{m_{ij}} \mathbb{E}(\hat{z}_{k} \hat{z}_{k}^{T})) \ge m_{ij} C_{1}r^{2}\). Consequently,

$$ \sigma_{\min} \Biggl(\sum _{k=1}^{m_{ij}} \mathbb{E}\bigl(z_kz_k^T \bigr)\Biggr) = \sigma_{\min }\Biggl(\sum_{k=1}^{m_{ij}} \mathbb{E}\bigl(\hat{z}_k \hat{z}_k^T\bigr) \Biggr) \ge m_{ij} C_1r^2. $$
(46)

Let \(Z^{(k)} = z_{k}z_{k}^{T} - \mathbb{E}(z_{k}z_{k}^{T})\), for 1 ≤ k ≤ m_ij. Next, we upper bound the quantity \(\sigma_{\max}(\sum_{k=1}^{m_{ij}} Z^{(k)})\). Note that for any matrix A ∈ ℝ^{d×d}, \(\sigma_{\max}(A) \leq d \max_{1\leq p,q\leq d} |A_{pq}|\).

Taking \(A = \sum_{k=1}^{m_{ij}} Z^{(k)}\), we have

$$ \mathbb{P}\Biggl(\sigma_{\max}\Biggl(\sum_{k=1}^{m_{ij}} Z^{(k)}\Biggr) \geq \epsilon\Biggr) \leq \mathbb{P}\Biggl(\max_{1\leq p,q\leq d}\Biggl|\sum_{k=1}^{m_{ij}} Z^{(k)}_{pq}\Biggr| \geq \frac{\epsilon}{d}\Biggr) \leq \sum_{p,q=1}^{d} \mathbb{P}\Biggl(\Biggl|\sum_{k=1}^{m_{ij}} Z^{(k)}_{pq}\Biggr| \geq \frac{\epsilon}{d}\Biggr), $$
(47)

where the last inequality follows from the union bound. Take ϵ = C_1 m_{ij} r^2/2. Note that \(\{Z^{(k)}_{pq}\}_{1\le k \le m_{ij}}\) is a sequence of independent random variables with \(\mathbb{E}(Z^{(k)}_{pq}) = 0\) and \(|Z^{(k)}_{pq}| \le r^{2}/4\), for 1 ≤ k ≤ m_ij. Applying Hoeffding’s inequality,

(48)

Combining Eqs. (47) and (48), we obtain

(49)

Using Eqs. (45), (46), and (49), we have

with probability at least 1 − 2d^2 n^{−3}. Applying a union bound over all pairs (i,j) ∈ E(G^*), we obtain that, w.h.p., σ_min(M^{(ij)}) ≥ C_1 m_{ij} r^2/2 ≥ C(nr^d)r^2, for all (i,j) ∈ E(G^*). Invoking Eq. (44),

$$ \Biggl\|\sum_{\ell=1}^d \bigl( \beta^{(\ell)}_j - \beta^{(\ell)}_i\bigr) \tilde{x}^{(\ell)}_{\mathcal{Q}_i \cap \mathcal{Q}_j}\Biggr\|^2 \ge C\bigl(nr^d\bigr) r^2 \|\beta_{ij}\|^2 = C\bigl(nr^d \bigr)r^2 \sum_{\ell=1}^d \bigl( \beta_j^{(\ell)} - \beta_i^{(\ell)} \bigr)^2. $$

Appendix H: Proof of Claim 5.6

Proof

Let N = |V(G^*)| = n(m+1). Define \(\bar{\beta}^{(\ell)} = (1/N) \sum_{i=1}^{N} \beta_{i}^{(\ell)}\) and let \(\tilde{v} = v - \sum_{\ell =1}^{d} \bar{\beta}^{(\ell)} x^{(\ell)}\). Then, the vector \(\tilde {v}\) has the following local decompositions:

$$ \tilde{v}_{\mathcal{Q}_i} = \sum_{\ell=1}^{d} \bigl(\beta^{(\ell)}_i - \bar{\beta}^{(\ell )}\bigr) \tilde{x}^{(\ell)}_{\mathcal{Q}_i} + \tilde{\gamma}_i u_{\mathcal{Q}_i} + w^{(i)}, $$

where \(\tilde{\gamma}_{i} = \gamma_{i} - \sum_{\ell=1}^{d} \bar{\beta}^{(\ell)} \frac{1}{|\mathcal{Q}_{i}|} \langle x^{(\ell)}_{\mathcal{Q}_{i}}, u_{\mathcal{Q}_{i}} \rangle\). For convenience, we establish the following definitions.

M ∈ ℝ^{d×d} is the matrix with \(M_{\ell,\ell'} = \langle x^{(\ell)}, x^{(\ell')}\rangle\). Also, for any 1 ≤ i ≤ N, define the matrix M^{(i)} ∈ ℝ^{d×d} as \(M^{(i)}_{\ell , \ell'} = \langle \tilde{x}^{(\ell)}_{\mathcal{Q}_{i}},\tilde{x}^{(\ell')}_{\mathcal{Q}_{i}} \rangle \). Let \(\hat{\beta}^{(\ell)}_{i} := \beta^{(\ell)}_{i} - \bar{\beta}^{(\ell)}\) and \(\eta^{(\ell)}_{i} = \sum_{\ell'} M^{(i)}_{\ell,\ell'} \hat{\beta}^{(\ell')}_{i} \). Finally, for any 1 ≤ ℓ ≤ d, define the matrix B^{(ℓ)} ∈ ℝ^{N×n} as follows:

$$ B^{(\ell)}_{i,j} = \begin{cases} \tilde{x}^{(\ell)}_{\mathcal{Q}_i,j} & \text{if} \ j \in \mathcal{Q}_i,\\ 0 & \text{if}\ j \notin \mathcal{Q}_i. \end{cases} $$

Now, note that \(\langle\tilde{v}_{\mathcal{Q}_{i}}, \tilde{x}^{(\ell)}_{\mathcal{Q}_{i}}\rangle= \sum_{\ell'=1}^{d} M^{(i)}_{\ell,\ell'} \hat{\beta}^{(\ell')}_{i} = \eta^{(\ell )}_{i} \). Writing it in matrix form, we have \(B^{(\ell)} \tilde{v} = \eta^{(\ell)}\).

Our first lemma provides a lower bound for σ min(B ()). For its proof, we refer to Sect. H.1.

Lemma H.1

Let \(\tilde{G} = (V,\tilde{E})\), where \(\tilde{E} = \{(i,j):d_{ij} \leq r/2\}\) and denote by \(\tilde{\mathcal{L}}\) the Laplacian of \(\tilde{G}\). Then, there exists a constant C=C(d), such that, w.h.p.,

$$ B^{(\ell)} \bigl(B^{(\ell)}\bigr)^T \succeq C \bigl(nr^d\bigr)^{-1} r^2\tilde{\mathcal{L}}, \quad \forall1\leq\ell\leq d. $$

The next lemma establishes some properties of the spectra of the matrices M and M (i). Its proof is deferred to Sect. H.2.

Lemma H.2

There exist constants C 1 and C 2 such that, w.h.p.,

$$ \sigma_{\min}(M) \geq C_1 n, \qquad \sigma_{\max}\bigl(M^{(i)}\bigr) \leq C_2 \bigl(nr^d\bigr)r^2, \quad\forall\ 1\leq i \leq N. $$

Now, we are in position to prove Claim 5.6. Using Lemma H.1 and since \(\langle\tilde{v}, u\rangle = 0\),

$$ \bigl\|\eta^{(\ell)}\bigr\|^2 \geq\sigma_{\min} \bigl(B^{(\ell)}\bigl(B^{(\ell)}\bigr)^T\bigr) \|\tilde{v} \|^2 \ge C\bigl(nr^d\bigr)^{-1} r^2 \sigma_{\min}(\tilde{\mathcal{L}}) \geq C r^4 \|\tilde{v} \|^2, $$

for some constant C. The last inequality follows from the lower bound on \(\sigma_{\min}(\tilde{\mathcal{L}})\) provided by Remark 2.2. Moreover,

Summing both sides over ℓ and using \(\|x^{(\ell)}\|^{2} \leq Cn\), we obtain

$$ \sum_{\ell=1}^{d} \Biggl[ \sum _{\ell'=1}^d M_{\ell,\ell'} \bar { \beta}^{(\ell')} \Biggr]^2 \leq C \bigl(nr^{-4}\bigr) \sum_{\ell=1}^{d} \bigl\|\eta^{(\ell)} \bigr\|^2. $$

Equivalently,

$$ \sum_{\ell=1}^{d} \langle M_{\ell,\cdot}, \bar{\beta}\rangle^2 \leq C \bigl(nr^{-4}\bigr) \sum _{\ell=1}^{d} \sum _{i=1}^{N} \bigl\langle M^{(i)}_{\ell,\cdot}, \hat{\beta}_i\bigr\rangle^2. $$

Here, \(\bar{\beta} = (\bar{\beta}^{(1)},\ldots, \bar{\beta }^{(d)}) \in {\mathbb{R}}^{d}\) and \(\hat{\beta}_{i} = (\hat{\beta}_{i}^{(1)},\ldots,\hat{\beta}_{i}^{(d)}) \in {\mathbb{R}}^{d}\). Writing this in matrix form,

$$ \|M \bar{\beta}\|^2 \leq C \bigl(nr^{-4}\bigr) \sum _{i=1}^{N} \bigl\|M^{(i)} \hat{\beta}_i\bigr\|^2. $$

Therefore,

$$ \sigma_{\min}^2(M) \|\bar{\beta}\|^2 \leq C \bigl(nr^{-4}\bigr) \Bigl[\max_{1 \leq i \leq N} \sigma_{\max}^2 \bigl(M^{(i)}\bigr) \Bigr] \sum_{i=1}^N \|\hat{\beta}_i\|^2. $$

Using the bounds on σ min(M) and σ max(M (i)) provided in Lemma H.2, we obtain

$$ \|\bar{\beta} \|^2 \leq\frac{C}{n} \bigl(nr^d\bigr)^2 \sum_{i=1}^N \|\hat{\beta}_i \|^2. $$
(50)

Now, note that

(51)
(52)

Consequently,

Here, (a) follows from Eq. (51), (b) follows from Eq. (50), and (c) follows from Eq. (52). The result follows. □

H.1 Proof of Lemma H.1

Recall that e_ij ∈ ℝ^n is the vector with +1 at the ith position, −1 at the jth position, and zero everywhere else. For any two nodes i and j with ∥x_i − x_j∥ ≤ r/2, choose a node \(k \in\tilde{\mathcal{C}}_{i} \cap\tilde{\mathcal{C}}_{j}\) uniformly at random and consider the cliques \(\mathcal{Q}_{1} = \mathcal{C}_{k}\), \(\mathcal{Q}_{2} = \mathcal{C}_{k} \backslash\{i\}\), and \(\mathcal{Q}_{3} = \mathcal{C}_{k} \backslash\{j\}\). Define \(S_{ij} = \{ \mathcal{Q}_{1},\mathcal{Q}_{2},\mathcal{Q}_{3}\}\). Note that \(S_{ij} \subseteq \textsf{cliq}(G)\).

Let a 1, a 2, and a 3 respectively denote the center of mass of the points in cliques \(\mathcal{Q}_{1}\), \(\mathcal{Q}_{2}\), and \(\mathcal{Q}_{3}\). Find scalars \(\xi^{(ij)}_{1}\), \(\xi^{(ij)}_{2}\), and \(\xi^{(ij)}_{3}\), such that

$$ \begin{cases} \xi^{(ij)}_1 + \xi^{(ij)}_2 + \xi^{(ij)}_3 = 0,\\[3pt] \xi^{(ij)}_1 a^{(\ell)}_1 + \xi^{(ij)}_2 a^{(\ell)}_2 + \xi^{(ij)}_3 a^{(\ell)}_3 = 0,\\[3pt] \xi^{(ij)}_1 \bigl(x^{(\ell)}_i - a^{(\ell)}_1\bigr) + \xi^{(ij)}_3 \bigl(x^{(\ell )}_i - a^{(\ell)}_3\bigr) = 1. \end{cases} $$
(53)

Note that the space of the solutions of this linear system of equations is invariant to translation of the points. Hence, without loss of generality, assume that \(\sum_{{l \in \mathcal{Q}_{1}, l \neq i,j}} x_{l} = 0\). Also, let \(m = |\mathcal{C}_{k}|\). Then, it is easy to see that

and the solution of equations (53) is given by

First, observe that

  • \(\xi^{(ij)}_{1} (x^{(\ell)}_{i} - a^{(\ell)}_{1}) + \xi^{(ij)}_{3} (x^{(\ell)}_{i} - a^{(\ell)}_{3}) = 1\).

  • \(\xi^{(ij)}_{1} (x^{(\ell)}_{j} - a^{(\ell)}_{1}) + \xi^{(ij)}_{2} (x^{(\ell)}_{j} - a^{(\ell)}_{2}) = -1\).

  • For \(t \in \mathcal{C}_{k}\) and ti,j:

Therefore,

$$ \xi^{(ij)}_1 \tilde{x}^{(\ell)}_{\mathcal{Q}_1,t} +\xi^{(ij)}_2 \tilde{x}^{(\ell)}_{\mathcal{Q}_2,t} + \xi^{(ij)}_3 \tilde{x}^{(\ell)}_{\mathcal{Q}_3,t} = \begin{cases} 1 & \text{if } t =i,\\ -1 & \text{if } t = j,\\ 0 & \text{if } t \in \mathcal{C}_k, t\neq i,j. \end{cases} $$
(54)

Let \(\xi^{(ij)} \in {\mathbb{R}}^{N}\) be the vector with \(\xi^{(ij)}_{1}\), \(\xi^{(ij)}_{2}\), and \(\xi^{(ij)}_{3}\) at the positions corresponding to the cliques \(\mathcal{Q}_{1}\), \(\mathcal{Q}_{2}\), \(\mathcal{Q}_{3}\), and zero everywhere else. Then, Eq. (54) gives \((B^{(\ell)})^{T} \xi^{(ij)} = e_{ij}\).

Second, note that \(\|\xi^{(ij)}\|^{2} = (\xi^{(ij)}_{1})^{2} + (\xi^{(ij)}_{2})^{2} +(\xi^{(ij)}_{3})^{2} \leq\frac{C}{r^{2}}\), for some constant C.

Now, we are in position to prove Lemma H.1.

For any vector z∈ℝn, we have

Hence, \(B^{(\ell)} (B^{(\ell)})^{T} \succeq C(nr^{d})^{-1} r^{2} \tilde{\mathcal{L}}\).

H.2 Proof of Lemma H.2

First, we prove that σ min(M)≥Cn, for some constant C.

By definition, \(M = \sum_{i=1}^{n} x_{i} x_{i}^{T}\). Let \(Z_{i} = x_{i} x_{i}^{T} \in {\mathbb{R}}^{d \times d}\), and \(\bar{Z} = 1/n \sum_{i=1}^{n} Z_{i}\). Note that {Z i }1≤in is a sequence of i.i.d. random matrices with \(Z = \mathbb{E}(Z_{i}) = 1/12 I_{d \times d}\). By the law of large numbers we have \(\bar{Z} \to Z\), almost surely. In addition, since σ max(.) is a continuous function of its argument, we obtain \(\sigma_{\max}(\bar{Z} - Z) \to0\), almost surely. Therefore,

whence we obtain σ min(M)≥n/12, w.h.p.
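Numerically, this law-of-large-numbers step is immediate: for points drawn uniformly from [−0.5,0.5]^d, the matrix (1/n)M is close to (1/12)I, so σ_min(M) grows linearly in n (the values of n and d below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 20000, 3
X = rng.uniform(-0.5, 0.5, size=(n, d))

M = X.T @ X                      # M = sum_i x_i x_i^T
Z_bar = M / n                    # empirical second-moment matrix; E[x x^T] = I/12
print("||Z_bar - I/12||_2 =", np.linalg.norm(Z_bar - np.eye(d) / 12, 2))
print("sigma_min(M) / n   =", np.linalg.eigvalsh(M)[0] / n, " (compare with 1/12 =", 1 / 12, ")")
```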

Now we prove the second part of the claim.

Let \(m_{i} = |\mathcal{Q}_{i}|\), for 1 ≤ i ≤ N. Since M^{(i)} ⪰ 0, we have

$$ \sigma_{\max}\bigl(M^{(i)}\bigr) \le {\rm Tr}\bigl(M^{(i)} \bigr) = \sum_{\ell=1}^{d} \bigl\|\tilde{x}^{(\ell)}_{\mathcal{Q}_i} \bigr\|^2 \leq Cm_ir^2. $$

With high probability, \(m_{i} \leq C(nr^{d})\), for all 1 ≤ i ≤ N and some constant C. Hence,

$$ \max_{1 \leq i \leq N} \sigma_{\max}\bigl(M^{(i)}\bigr) \leq C \bigl(nr^d\bigr)r^2, $$

w.h.p. The result follows.

Appendix I: Proof of Proposition 6.1

Proof

Recall that \(\tilde{R}= XY^{T} + Y X^{T}\) with X,Y∈ℝn×d and Y T u=0. By the triangle inequality, we have

Therefore,

$$ \sum_{i,j} \bigl|\langle x_i - x_j,y_i -y_j \rangle\bigr| \geq\sum_{i,j} |\tilde{R}_{ij}| - n \sum_{i} |\tilde{R}_{ii}|. $$
(55)

Again, by the triangle inequality,

(56)

where the last equality follows from Y T u=0 and X T u=0.

Remark 8.1

For any n real values ξ 1,…,ξ n , we have

$$ \sum_{i} |\xi_i + \bar{\xi}| \geq \frac{1}{2} \sum_{i} |\xi_i|, $$

where \(\bar{\xi} = (1/n) \sum_{i} \xi_{i}\).

Proof of Remark 8.1

Without loss of generality, we assume \(\bar{\xi} \geq0\). Then,

$$ \sum_{i} |\xi_i + \bar{\xi}| \geq \sum_{i: \xi_i \geq0} \xi_i \geq \frac{1}{2} \biggl(\sum_{i: \xi_i \geq0} \xi_i - \sum _{i: \xi_i < 0} \xi_i\biggr) = \frac{1}{2} \sum _{i} |\xi_i|, $$

where the second inequality follows from \(\sum_{i} \xi_{i} = n \bar {\xi} \geq0\). □

Using Remark 8.1 with ξ_i = 〈x_i, y_i〉, Eq. (56) yields

$$ \sum_{ij} \bigl|\langle x_i -x_j, y_i -y_j \rangle\bigr| \geq \frac{n}{2} \sum_{i} \bigl|\langle x_i,y_i\rangle\bigr| = \frac{n}{4} \sum _{i} |\tilde{R}_{ii}|. $$
(57)

Combining Eqs. (55) and (57), we obtain

$$ \bigl\|\mathcal{R}_{K_n,X}(Y)\bigr\|_1 = \sum_{ij} \bigl|\langle x_i -x_j, y_i -y_j \rangle\bigr| \geq\frac{1}{5} \|\tilde{R}\|_1, $$
(58)

which proves the desired result. □

Appendix J: Proof of Lemma 6.1

We will compute the average number of chains passing through a particular edge in the order notation. Notice that the total number of chains is Θ(n^2), since there are \(n \choose2\) node pairs. Each chain has O(1/r) vertices and thus intersects O(1/r) bins. The total number of bins is Θ(1/r^d). Hence, by symmetry, the number of chains intersecting each bin is Θ(n^2 r^{d−1}). Consider a particular bin B, and the chains intersecting it. Such chains are equally likely to select any of the nodes in B. Since the expected number of nodes in B is Θ(nr^d), the average number of chains containing a particular node, say i, in B is Θ(n^2 r^{d−1}/(nr^d)) = Θ(nr^{−1}). Now consider node i and one of its neighbors in the chain, say j. Denote by B′ the bin containing node j. The number of edges between i and B′ is Θ(nr^d). Hence, by symmetry, the average number of chains containing an edge incident on i will be Θ(nr^{−1}/(nr^d)) = Θ(r^{−d−1}). This is true for all nodes. Therefore, the average number of chains containing any particular edge is O(r^{−d−1}). In other words, on average, no edge belongs to more than O(r^{−d−1}) chains.

Appendix K: The Two-Part Procedure for General d

In the proof of Lemma 6.2, we stated a two-part procedure to find the values \(\{\lambda_{lk}\}_{(l,k) \in E(G_{ij})}\) that satisfy Eq. (21). Part (i) of the procedure was demonstrated for the special case d=2. Here, we discuss this part for general d.

Let G_ij = {i} ∪ {j} ∪ H_1 ∪ ⋯ ∪ H_k be the chain between nodes i and j. Let \(\mathcal{F}_{p} = H_{p} \cap H_{p+1}\). Without loss of generality, assume \(V(\mathcal{F}_{p}) = \{1,2,\ldots,q\}\), where q = 2^{d−1}. The goal is to find a set of forces, namely f_1,…,f_q, such that

(59)

It is more convenient to write this problem in matrix form. Let X = [x_1, x_2,…, x_q] ∈ ℝ^{d×q} and Φ = [f_1, f_2,…, f_q] ∈ ℝ^{d×q}. Then, the problem can be recast as finding a matrix Φ ∈ ℝ^{d×q} such that

$$ \varPhi u = x_m, \qquad X \varPhi^T = \varPhi X^T, \qquad \|\varPhi\|_F \leq C\|x_m\|. $$
(60)

Define \(\tilde{X} = X (I - 1/q u u^{T})\), where I∈ℝq×q is the identity matrix and u∈ℝq is the all-ones vector. Let

$$ \varPhi= \frac{1}{q} x_m u^T + \biggl(\frac{1}{q} Xux_m^T + S\biggr) \bigl(\tilde {X}\tilde{X}^T \bigr)^{-1} \tilde{X}, $$
(61)

where S∈ℝd×d is an arbitrary symmetric matrix. Observe that

$$ \varPhi u = x_m, \qquad X \varPhi^T = \varPhi X^T. $$
(62)
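A quick numerical check of the construction (61): for generic points and an arbitrary symmetric S (here S = 0, an illustrative choice), the resulting Φ satisfies both identities in Eq. (62).

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3
q = 2 ** (d - 1)                                   # number of nodes in the overlap
X = rng.uniform(-0.5, 0.5, size=(d, q))            # columns x_1, ..., x_q (generic positions)
x_m = rng.standard_normal(d)
u = np.ones(q)

X_tilde = X @ (np.eye(q) - np.outer(u, u) / q)     # centered coordinates, as in the text
S = np.zeros((d, d))                               # arbitrary symmetric matrix; 0 for simplicity
Phi = (np.outer(x_m, u) / q
       + (np.outer(X @ u, x_m) / q + S) @ np.linalg.inv(X_tilde @ X_tilde.T) @ X_tilde)

print(np.allclose(Phi @ u, x_m))                   # Phi u = x_m
print(np.allclose(X @ Phi.T, (X @ Phi.T).T))       # X Phi^T is symmetric
```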

Now, we only need to find a symmetric matrix S ∈ ℝ^{d×d} such that the matrix Φ given by Eq. (61) satisfies \(\|\varPhi\|_F \leq C\|x_m\|\). Without loss of generality, assume that the vector x_m is in the direction e_1 = (1,0,…,0) ∈ ℝ^d. Let \(x_{c} = \frac{1}{q} X u\) be the center of the nodes \(\{x_{i}\}_{i=1}^{q}\), and let \(x_{c} = (x_{c}^{(1)},\ldots,x_{c}^{(d)})\). Take \(S = - \|x_{m}\| x_{c}^{(1)} e_{1} e_{1}^{T}\). From the construction of the chain G_ij, the nodes \(\{x_{i}\}_{i=1}^{q}\) are obtained by wiggling the vertices of a hypercube aligned in the direction x_m/∥x_m∥ = e_1, and with side length \(\tilde{r} = 3r/4\sqrt{2}\) (each node wiggles by at most \(\frac{r}{8}\)). Therefore, x_c is almost aligned with e_1, and has small components in the other directions. Formally, \(|x_{c}^{(i)}| \leq\frac{r}{8}\), for 2 ≤ i ≤ d. Therefore,

Hence, \(\frac{1}{q} Xu x_{m}^{T} + S \in {\mathbb{R}}^{d \times d}\) has entries bounded by \(\frac{r}{8} \|x_{m}\|\). In the following we show that there exists a constant C=C(d) such that all entries of \((\tilde{X}\tilde {X}^{T})^{-1} \tilde{X}\) are bounded by C/r. Once we show this, it follows that

$$ \biggl\|\biggl(\frac{1}{q} X u x_m^T + S\biggr) \bigl(\tilde{X}\tilde{X}^T\bigr)^{-1} \tilde {X} \biggr\|_F \leq C \|x_m\|, $$

for some constant C=C(d). Therefore,

$$ \|\varPhi\|_F \leq\biggl\|\frac{1}{q} x_m u^T\biggr\|_F + \biggl\| \biggl(\frac{1}{q} X u x_m^T + S\biggr) \bigl(\tilde{X}\tilde{X}^T \bigr)^{-1} \tilde{X}\biggr\|_F \leq C\|x_m\|, $$

for some constant C.

We are now left with the task of showing that all entries of \((\tilde {X}\tilde{X}^{T})^{-1} \tilde{X}\) are bounded by C/r, for some constant C.

The nodes x i were obtained by wiggling the vertices of a hypercube of side length \(\tilde{r} = 3r / 4 \sqrt{2}\) (each node wiggles by at most r/8). Let \(\{z_{i}\}_{i=1}^{q}\) denote the vertices of this hypercube, and thus \(\|x_{i} - z_{i}\| \leq\frac{r}{8}\). Define

$$ Z = \frac{1}{\tilde{r}} [z_1,\ldots, z_q], \quad \delta Z = \frac{1}{\tilde{r}}\tilde{X} - Z. $$

Then, \(\tilde{X} \tilde{X}^{T} = \tilde{r}^{2} (Z+ \delta Z) (Z +\delta Z )^{T} = \tilde{r}^{2} (ZZ^{T}+ \bar{Z})\), where \(\bar{Z} = Z (\delta Z)^{T} + (\delta Z) Z^{T} + (\delta Z)(\delta Z)^{T}\). Consequently,

$$ \bigl(\tilde{X} \tilde{X}^T\bigr)^{-1} \tilde{X}= \frac{1}{\tilde{r}} \bigl(ZZ^T+\bar{Z}\bigr)^{-1} (Z+ \delta Z). $$

Now notice that the columns of Z represent the vertices of a unit (d−1)-dimensional hypercube. Also, the norm of each column of δZ is bounded by \(\frac{r}{8\tilde{r}} < \frac{1}{4}\). Therefore, \(\sigma_{\min} (ZZ^{T} + \bar{Z}) \geq C\), for some constant C=C(d). Hence, for every 1≤iq,

$$ \bigl\|\bigl(\tilde{X} \tilde{X}^T\bigr)^{-1} \tilde{X} e_i\bigr\| \leq\frac{1}{\tilde {r}} \sigma_{\min}^{-1} \bigl(ZZ^T+ \bar{Z}\bigr) \bigl\|(Z + \delta Z)e_i\bigr\| \leq \frac{C}{r}, $$

for some constant C. Therefore, all entries of \((\tilde{X} \tilde {X}^{T})^{-1} \tilde{X}\) are bounded by C/r.

Appendix L: Proof of Remark 7.1

Let θ be the angle between a and b and define \(a_{\perp} = \frac{b - \cos(\theta)a}{\|b - \cos(\theta)a\|}\). Therefore, b=cos(θ)a+sin(θ)a . In the basis (a,a ), we have

Therefore,

Appendix M: Proof of Remark 7.2

Proof

Let \(\{\tilde{\lambda}_{i}\}\) be the eigenvalues of \(\tilde{A}\) such that \(\tilde{\lambda}_{1} \geq\tilde{\lambda}_{2} \geq\cdots\geq \tilde{\lambda}_{p}\). Notice that

Therefore,

$$ \bigl(v^T \tilde{v}\bigr)^2 \geq\frac{\tilde{\lambda}_{p-1} - \lambda_p - \| A -\tilde{A}\|_2}{\tilde{\lambda}_{p-1} - \tilde{\lambda}_p}. $$

Furthermore, due to Weyl’s inequality, \(|\tilde{\lambda}_{i} - \lambda_{i}| \leq\|A - \tilde{A}\|_{2}\). Therefore,

$$ \bigl(v^T \tilde{v}\bigr)^2 \geq\frac{\lambda_{p-1} - \lambda_p - 2 \|A -\tilde{A}\|_2}{\lambda_{p-1} - \lambda_p + 2 \|A - \tilde{A}\|_2}, $$
(63)

which implies the thesis after some algebraic manipulations. □
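A numerical sanity check of the bound (63), assuming (as in the proof) that v and ṽ are unit eigenvectors associated with the smallest eigenvalues λ_p and λ̃_p of A and Ã; the matrices below are illustrative random instances.

```python
import numpy as np

rng = np.random.default_rng(8)
p = 6
A = rng.standard_normal((p, p)); A = (A + A.T) / 2
E = rng.standard_normal((p, p)); E = 0.05 * (E + E.T) / 2
A_tilde = A + E

lam, V = np.linalg.eigh(A)            # eigenvalues in ascending order: lam[0] = lambda_p
lam_t, V_t = np.linalg.eigh(A_tilde)
v, v_t = V[:, 0], V_t[:, 0]           # eigenvectors of the smallest eigenvalues
eps = np.linalg.norm(A - A_tilde, 2)

gap = lam[1] - lam[0]                 # lambda_{p-1} - lambda_p
lhs = float(v @ v_t) ** 2
rhs = (gap - 2 * eps) / (gap + 2 * eps)
print(f"(v^T v~)^2 = {lhs:.4f}  >=  bound (63) = {rhs:.4f} : {lhs >= rhs}")
```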

Appendix N: Table of Symbols

Table 1 Table of symbols


Cite this article

Javanmard, A., Montanari, A. Localization from Incomplete Noisy Distance Measurements. Found Comput Math 13, 297–345 (2013). https://doi.org/10.1007/s10208-012-9129-5

