Boundary Estimation from Point Clouds: Algorithms, Guarantees and Applications


Abstract

We investigate identifying the boundary of a domain from sample points in the domain. We introduce new estimators for the normal vector to the boundary, the distance of a point to the boundary, and a test for whether a point lies within a boundary strip. The estimators can be efficiently computed and are more accurate than those present in the literature. We provide rigorous error estimates for the estimators. Furthermore, we use the detected boundary points to solve boundary-value problems for PDE on point clouds. We prove error estimates for the Laplace and eikonal equations on point clouds. Finally, we provide a range of numerical experiments illustrating the performance of our boundary estimators, applications to PDE on point clouds, and tests on image data sets.


Data Availability

The datasets generated during and/or analyzed during the current study are available in the BoundaryTest repository, https://github.com/sangmin-park0/BoundaryTest.

Notes

  1. https://github.com/sangmin-park0/BoundaryTest.

Abbreviations

\(\Omega \) :

bounded domain in \(\mathbb {R}^d\). We denote the volume of \(\Omega \) by \(|\Omega |\).

R :

lower bound for the reach of \(\partial \Omega \).

\(d_{\Omega }\) :

the distance function to the boundary, \(d_{\Omega }(x)={{\,\mathrm{dist}\,}}(x,\partial \Omega )\), \(d_{\Omega }:\Omega \rightarrow \mathbb {R}_+\).

\(\partial _{a}\Omega \) :

boundary region \(\partial _a\Omega = \{x\in \Omega : {{\,\mathrm{dist}\,}}(x,\partial \Omega )\le a\}\) for \(a>0\).

\(\omega _{d}\) :

volume of the unit ball in \(\mathbb {R}^{d}\).

\(\rho \) :

probability density function \(\rho :\Omega \rightarrow [\rho _{\min },\rho _{\max }]\) where \(\rho _{\min }\) and \(\rho _{\max }\) satisfy \(0<\rho _{\min }\le \rho _{\max }<\infty \).

L :

upper bound for the Lipschitz constant of \(\rho \).

\(\mathcal {X}\) :

set \(\mathcal {X} =\{x^1,\cdots ,x^n\}\) of i.i.d. sample points drawn from the density \(\rho \).

n :

total number of sample points considered.

\( r \) :

neighborhood radius.

\(\varepsilon \) :

thickness of the boundary region we seek to identify.

\(\nu \) :

inward unit normal vector to \(\partial \Omega \), extended to \(\partial _R \Omega \) by (1.1).

\({\bar{v}}_{ r },\,{\bar{\nu }}_{ r }\) :

population-based estimator of the normal vector, and its unit normalization, (1.3).

\({\hat{v}}_{ r },\,{\hat{\nu }}_{ r }\) :

first-order empirical estimator of the normal vector, and its unit normalization, (1.2).

\({\hat{v}}^{2}_{ r },\,{\hat{\nu }}^{2}_{ r }\) :

second-order empirical estimator of the normal vector, and its unit normalization, (1.5).

\({\hat{d}}_ r ^1(x^0), {\hat{d}}_ r ^2(x^0)\) :

first and second-order estimators of the distance to boundary of \(\Omega \), (1.12) and (1.17).

\(C_x, C_y, C_r\) :

dimensionless constants explicitly stated in Appendix D.
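
As a rough illustration of the kind of computation the estimators above involve (the precise definitions (1.2), (1.5), (1.12), (1.17) are given in the body of the paper, and a reference implementation is available in the BoundaryTest repository cited above), the following Python sketch computes a neighborhood-average direction and a crude distance-to-boundary proxy for a point cloud. The function names, thresholds, and exact formulas below are illustrative assumptions, not the estimators of the paper.

```python
# Illustrative sketch only (not the estimators (1.2)/(1.12) of the paper):
# a neighborhood-average direction estimate and a crude distance-to-boundary
# proxy for a point cloud X; see the BoundaryTest repository for the
# reference implementation.
import numpy as np
from scipy.spatial import cKDTree

def directions_and_depth(X, r):
    """For each point, average the offsets to its r-neighbors and measure how
    far neighbors extend along the resulting direction (a boundary proxy)."""
    tree = cKDTree(X)
    n, d = X.shape
    direction = np.zeros((n, d))
    depth = np.zeros(n)
    for i, idx in enumerate(tree.query_ball_point(X, r)):
        v = (X[i] - X[idx]).mean(axis=0)   # points roughly away from the data mass
        norm = np.linalg.norm(v)
        direction[i] = v / norm if norm > 0 else 0.0
        # how far neighbors extend from x^i along the estimated direction
        depth[i] = np.max((X[idx] - X[i]) @ direction[i], initial=0.0)
    return direction, depth

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(2000, 2))   # uniform samples from a square
    _, depth = directions_and_depth(X, r=0.2)
    print("points flagged near the boundary:", int(np.sum(depth < 0.05)))
```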

References

  1. Aamari, E., Aaron, C., Levrard, C.: Minimax boundary estimation and estimation with boundary. arXiv:2108.03135 (2021)

  2. Aamari, E., Levrard, C.: Nonasymptotic rates for manifold, tangent space and curvature estimation. Ann. Stat. 47, 177–204 (2019)

  3. Aaron, C., Cholaquidis, A.: On boundary detection. Ann. Inst. Henri Poincaré Probab. Stat. 56, 2028–2050 (2020)

  4. DePavia, A., Steinerberger, S.: Spectral clustering revisited: information hidden in the Fiedler vector. Found. Data Sci. 3, 225–249 (2021)

  5. Bardi, M., Capuzzo-Dolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Springer Science & Business Media (2008)

  6. Barnett, V.: The ordering of multivariate data. J. Royal Stat. Soc. Ser. A (General) 139, 318–344 (1976)

  7. Bellock, K.: Alpha shape toolbox. https://github.com/bellockk/alphashape (2021). Accessed 22 Oct 2021

  8. Bentley, J.L.: Multidimensional divide-and-conquer. Commun. ACM 23, 214–229 (1980)

  9. Bentley, J.L.: Multidimensional divide-and-conquer. Discrete Comput. Geom. 4, 101–115 (1989)

  10. Bernhardsson, E.: Annoy: approximate nearest neighbors in C++/Python. https://pypi.org/project/annoy/ (2018). Accessed 19 Oct 2020

  11. Berry, T., Sauer, T.: Density estimation on manifolds with boundary. Comput. Stat. Data Anal. 107, 1–17 (2017)

  12. Birbrair, L., Denkowski, M.P.: Medial axis and singularities. J. Geom. Anal. 27, 2339–2380 (2017)

  13. Bou-Rabee, A., Morfe, P.S.: Hamilton-Jacobi scaling limits of Pareto peeling in 2d. arXiv:2110.06016 (2021)

  14. Boucheron, S., Lugosi, G., Massart, P.: Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press (2013)

  15. Calder, J.: The game theoretic p-Laplacian and semi-supervised learning with few labels. Nonlinearity 32, 301–330 (2018)

  16. Calder, J.: Lecture notes on viscosity solutions. Online lecture notes: http://www-users.math.umn.edu/~jwcalder/viscosity_solutions.pdf (2018)

  17. Calder, J.: Consistency of Lipschitz learning with infinite unlabeled data and finite labeled data. SIAM J. Math. Data Sci. 1, 780–812 (2019)

  18. Calder, J.: The calculus of variations. Online lecture notes: http://www-users.math.umn.edu/~jwcalder/CalculusOfVariations.pdf (2020)

  19. Calder, J.: Graph-based clustering and semi-supervised learning. https://github.com/jwcalder/GraphLearning (2020). Accessed 19 Oct 2020

  20. Calder, J., Esedoḡlu, S., Hero, A.O.: A Hamilton-Jacobi equation for the continuum limit of non-dominated sorting. SIAM J. Math. Anal. 46, 603–638 (2014)

  21. Calder, J., Ettehad, M.: Hamilton-Jacobi equations on graphs with applications to semi-supervised learning and data depth. In preparation (2021)

  22. Calder, J., García Trillos, N.: Improved spectral convergence rates for graph Laplacians on \(\varepsilon \)-graphs and k-NN graphs. arXiv:1910.13476 (2019)

  23. Calder, J., García Trillos, N., Lewicka, M.: Lipschitz regularity of graph Laplacians on random data clouds. arXiv:2007.06679 (2020)

  24. Calder, J., Slepčev, D., Thorpe, M.: Rates of convergence for Laplacian semi-supervised learning with low labeling rates. arXiv:2006.02765 (2020)

  25. Calder, J., Smart, C.K.: The limit shape of convex hull peeling. Duke Math. J. 169, 2079–2124 (2020)

  26. Cannarsa, P., Sinestrari, C.: Semiconcave Functions, Hamilton-Jacobi Equations, and Optimal Control, vol. 58. Springer Science & Business Media (2004)

  27. Carrizosa, E.: A characterization of halfspace depth. J. Multivar. Anal. 58, 21–26 (1996)

  28. Chen, J.-S., Hillman, M., Chi, S.-W.: Meshfree methods: progress made after 20 years. J. Eng. Mech. 143, 04017001 (2017)

  29. Chen, Y.-C., Genovese, C.R., Wasserman, L.: Density level sets: asymptotics, inference, and visualization. J. Amer. Statist. Assoc. 112, 1684–1696 (2017)

  30. Xia, C., Hsu, W., Lee, M.L., Ooi, B.C.: BORDER: efficient computation of boundary points. IEEE Trans. Knowl. Data Eng. 18, 289–303 (2006)

  31. Chernozhukov, V., Galichon, A., Hallin, M., Henry, M.: Monge-Kantorovich depth, quantiles, ranks and signs. Ann. Stat. 45, 223–256 (2017)

  32. Costa, J.A., Hero, A.O.: Determining intrinsic dimension and entropy of high-dimensional shape spaces. In: Statistics and Analysis of Shapes, pp. 231–252. Springer (2006)

  33. Cuevas, A., Fraiman, R.: A plug-in approach to support estimation. Ann. Stat. 25, 2300–2312 (1997)

  34. Cuevas, A., Fraiman, R., Györfi, L.: Towards a universally consistent estimator of the Minkowski content. ESAIM Probab. Stat. 17, 359–369 (2013)

  35. Cuevas, A., Fraiman, R., Rodríguez-Casal, A.: A nonparametric approach to the estimation of lengths and surface areas. Ann. Stat. 35, 1031–1051 (2007)

  36. Cuevas, A., Rodríguez-Casal, A.: On boundary estimation. Adv. Appl. Probab. 36, 340–354 (2004)

  37. de Micheaux, P.L., Mozharovskyi, P., Vimond, M.: Depth for curve data and applications. J. Amer. Statist. Assoc., 1–17 (2020)

  38. Devroye, L., Wise, G.L.: Detection of abnormal behavior via nonparametric estimation of the support. SIAM J. Appl. Math. 38, 480–488 (1980)

  39. Dong, W., Moses, C., Li, K.: Efficient k-nearest neighbor graph construction for generic similarity measures. In: Proceedings of the 20th International Conference on World Wide Web (WWW '11), pp. 577–586. Association for Computing Machinery, New York (2011)

  40. Edelsbrunner, H.: Alpha shapes - a survey. Tessellations in the Sciences (2010)

  41. Edelsbrunner, H., Kirkpatrick, D., Seidel, R.: On the shape of a set of points in the plane. IEEE Trans. Inf. Theory 29, 551–559 (1983)

  42. Edelsbrunner, H., Mücke, E.P.: Three-dimensional alpha shapes. ACM Trans. Graph. 13, 43–72 (1994)

  43. Finlay, C., Oberman, A.: Improved accuracy of monotone finite difference schemes on point clouds and regular grids. SIAM J. Sci. Comput. 41, A3097–A3117 (2019)

  44. Flores, M., Calder, J., Lerman, G.: Analysis and algorithms for Lp-based semi-supervised learning on graphs. arXiv:1901.05031 (2019)

  45. Flyer, N., Wright, G.B.: A radial basis function method for the shallow water equations on a sphere. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 465, 1949–1976 (2009)

  46. Foote, R.L.: Regularity of the distance function. Proc. Amer. Math. Soc. 92, 153–155 (1984)

  47. Froese, B.D.: Meshfree finite difference approximations for functions of the eigenvalues of the Hessian. Numer. Math. 138, 75–99 (2018)

  48. Fuselier, E., Wright, G.B.: Scattered data interpolation on embedded submanifolds with restricted positive definite kernels: Sobolev error estimates. SIAM J. Numer. Anal. 50, 1753–1776 (2012)

  49. García Trillos, N., Gerlach, M., Hein, M., Slepčev, D.: Error estimates for spectral convergence of the graph Laplacian on random geometric graphs toward the Laplace-Beltrami operator. Found. Comput. Math. 20, 827–887 (2020)

  50. García Trillos, N., Murray, R.W.: A maximum principle argument for the uniform convergence of graph Laplacian regressors. SIAM J. Math. Data Sci. 2, 705–739 (2020)

  51. Hein, M., Audibert, J.-Y.: Intrinsic dimensionality estimation of submanifolds in \(\mathbb {R}^d\). In: Proceedings of the 22nd International Conference on Machine Learning, pp. 289–296 (2005)

  52. Lachièze-Rey, R., Vega, S.: Boundary density and Voronoi set estimation for irregular sets. Trans. Amer. Math. Soc. 369, 4953–4976 (2017)

  53. Lai, R., Liang, J., Zhao, H.-K.: A local mesh method for solving PDEs on point clouds. Inverse Probl. Imaging 7, 737 (2013)

  54. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998)

  55. Li, Z., Shi, Z., Sun, J.: Point integral method for solving Poisson-type equations on manifolds from point clouds with convergence guarantees. Commun. Comput. Phys. 22, 228–258 (2017)

  56. Liang, J., Zhao, H.: Solving partial differential equations on point clouds. SIAM J. Sci. Comput. 35, A1461–A1486 (2013)

  57. Liang, S., Jiang, S.W., Harlim, J., Yang, H.: Solving PDEs on unknown manifolds with machine learning. arXiv:2106.06682 (2021)

  58. Liu, R.Y., Parelius, J.M., Singh, K.: Multivariate analysis by data depth: descriptive statistics, graphics and inference (with discussion and a rejoinder by Liu and Singh). Ann. Stat. 27, 783–858 (1999)

  59. McMullen, P.: The maximum numbers of faces of a convex polytope. Mathematika 17, 179–184 (1970)

  60. Molina-Fructuoso, M., Murray, R.: Eikonal depth: an optimal control approach to statistical depths. In preparation (2021)

  61. Molina-Fructuoso, M., Murray, R.: Tukey depths and Hamilton-Jacobi differential equations. arXiv:2104.01648 (2021)

  62. Oberman, A.M.: Wide stencil finite difference schemes for the elliptic Monge-Ampère equation and functions of the eigenvalues of the Hessian. Discrete Contin. Dyn. Syst. Ser. B 10, 221 (2008)

  63. Piret, C.: The orthogonal gradients method: a radial basis functions method for solving partial differential equations on arbitrary surfaces. J. Comput. Phys. 231, 4662–4675 (2012)

  64. Piret, C., Dunn, J.: Fast RBF OGr for solving PDEs on arbitrary surfaces. In: AIP Conference Proceedings, vol. 1776, p. 070005. AIP Publishing (2016)

  65. Qiao, W., Polonik, W.: Nonparametric confidence regions for level sets: statistical properties and geometry. Electron. J. Stat. 13, 985–1030 (2019)

  66. Qiu, B.-Z., Yue, F., Shen, J.-Y.: BRIM: an efficient boundary points detecting algorithm. In: Zhou, Z.-H., Li, H., Yang, Q. (eds.) Advances in Knowledge Discovery and Data Mining, pp. 761–768. Springer, Berlin, Heidelberg (2007)

  67. Rodríguez Casal, A.: Set estimation under convexity type assumptions. Ann. Inst. Henri Poincaré Probab. Stat. 43, 763–774 (2007)

  68. Sethian, J.A., Vladimirsky, A.: Fast methods for the eikonal and related Hamilton-Jacobi equations on unstructured meshes. Proc. Natl. Acad. Sci. 97, 5699–5703 (2000)

  69. Shi, Z.: Enforce the Dirichlet boundary condition by volume constraint in point integral method. Commun. Math. Sci. 15, 1743–1769 (2017)

  70. Small, C.G.: Multidimensional medians arising from geodesics on graphs. Ann. Stat. 25, 478–494 (1997)

  71. Suchde, P., Kuhnert, J.: A fully Lagrangian meshfree framework for PDEs on evolving surfaces. J. Comput. Phys. 395, 38–59 (2019)

  72. Suchde, P., Kuhnert, J.: A meshfree generalized finite difference method for surface PDEs. Comput. Math. Appl. 78, 2789–2805 (2019)

  73. The MathWorks Inc.: alphaShape: MATLAB documentation. https://www.mathworks.com/help/matlab/ref/alphashape.html. Accessed 17 Oct 2021

  74. Wu, H.-T., Wu, N.: When locally linear embedding hits boundary. arXiv:1811.04423 (2019)

  75. Trask, N., Kuberry, P.: Compatible meshfree discretization of surface PDEs. Comput. Part. Mech. 7, 271–277 (2020)

  76. Tukey, J.W.: Mathematics and the picturing of data. In: Proceedings of the International Congress of Mathematicians, Vancouver, vol. 2, pp. 523–531 (1975)

  77. Vaughn, R., Berry, T., Antil, H.: Diffusion maps for embedded manifolds with boundary with applications to PDEs. arXiv:1912.01391 (2019)

  78. Wang, M., Leung, S., Zhao, H.: Modified virtual grid difference for discretizing the Laplace-Beltrami operator on point clouds. SIAM J. Sci. Comput. 40, A1–A21 (2018)

  79. Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747 (2017)

  80. Yuan, A., Calder, J., Osting, B.: A continuum limit for the PageRank algorithm. Eur. J. Appl. Math., 1–33 (2020)


Acknowledgements

The authors would like to thank Eddie Aamari for valuable comments, and the anonymous referees for their helpful suggestions. The authors are also grateful to the CNA at Carnegie Mellon University, the IMA at the University of Minnesota, and the Simons Institute at UC Berkeley for their hospitality.

Funding

JC was supported by NSF grant DMS 1944925, the Alfred P. Sloan Foundation, and a McKnight Presidential Fellowship. SP and DS were supported by NSF grant DMS 1814991.

Author information

Corresponding author

Correspondence to Sangmin Park.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest/competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A. Proof of Lemma 3.1

The following lemma will be useful in proving Lemma 3.1.

Lemma A.1

(Covering with spherical segments) Let \(r\le 1\) and \(0<a<b\le r\). For \(u\in \mathbb {S}^{d-1}\) define the spherical segment by

$$\begin{aligned}S_{a,b}^u = \{x\in B(0,r) \, : \, a \le x\cdot u\le b\}.\end{aligned}$$

Suppose \(\Sigma \subset \mathbb {S}^{d-1}\) is a finite set satisfying the following property:

$$\begin{aligned} \text { for all } u\in \mathbb {S}^{d-1} \text { there exists } v\in \Sigma \text { such that } |u-v|\le \delta . \end{aligned}$$
(A.1)

Then, for any \(u\in \mathbb {S}^{d-1}\) we can find \(v\in \Sigma \) such that

$$\begin{aligned}S^{v}_{a+\delta b,b-\delta b}\subset S^u_{a,b}.\end{aligned}$$

Proof

Let \(u\in \mathbb {S}^{d-1}\) and, using (A.1), fix \(v\in \Sigma \) with \(|u-v|\le \delta \). Suppose that \(x\in S_{a+\delta b,b-\delta b}^v\). Then we have

$$\begin{aligned}a+\delta b \le x\cdot v \le b-\delta b.\end{aligned}$$

We have

$$\begin{aligned}|x\cdot v - x\cdot u| =| x\cdot (v-u)| \le |x| |u-v| \le \delta |x| \le \delta b,\end{aligned}$$

since \(|x|\le b-\delta \le b\). Therefore

$$\begin{aligned}x \cdot u \le b-\delta b + \delta b=b \ \ \text {and} \ \ x\cdot u \ge a + \delta b - \delta b=a .\end{aligned}$$

Therefore \(x\in S_{a,b}^u\), which shows that for each \(u\in \mathbb {S}^{d-1}\) there exists \(v\in \Sigma \) such that

$$\begin{aligned}S_{a,b}^u \supset S_{a+\delta b,b-\delta b}^v.\end{aligned}$$

Hence, the event that \(S_{a,b}^u \) is empty for some \(u\in \mathbb {S}^{d-1}\) is contained in the event that \(S_{a+\delta b,b-\delta b}^v\) is empty for some \(v\in \Sigma \), which is a union of finitely many events.

\(\square \)
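
As a quick numerical sanity check of the inclusion just proved (purely illustrative, not part of the argument), one can sample points of the shrunken segment and verify that they land in the larger one; the parameter values in the sketch below are arbitrary.

```python
# Numerical sanity check (illustrative only) that points of the shrunken
# segment S^v_{a + delta*b, b - delta*b} lie in S^u_{a,b} when |u - v| <= delta.
import numpy as np

rng = np.random.default_rng(1)
d, r, a, b, delta = 3, 1.0, 0.2, 0.5, 0.1
failures = 0
for _ in range(1000):
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    # perturb u slightly and renormalize; this keeps |u - v| <= delta
    v = u + rng.uniform(-1.0, 1.0, size=d) * delta / (2.0 * np.sqrt(d))
    v /= np.linalg.norm(v)
    assert np.linalg.norm(u - v) <= delta
    # rejection-sample a point of the shrunken segment around v ...
    while True:
        x = rng.uniform(-r, r, size=d)
        if np.linalg.norm(x) <= r and a + delta * b <= x @ v <= b - delta * b:
            break
    # ... and check that it lies in the larger segment around u
    if not (a <= x @ u <= b):
        failures += 1
print("failures:", failures)   # expected: 0
```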

Remark A.2

(\(\varepsilon \)-nets and upper bound on \(|\Sigma |\)) Recall that an \(\varepsilon \)-net of \(\mathbb {S}^{d-1}\) is a set of points in \(\mathbb {S}^{d-1}\) whose pairwise distances are at least \(\varepsilon \). We define a maximal \(\varepsilon \)-net of the sphere to be an \(\varepsilon \)-net to which no point of \(\mathbb {S}^{d-1}\) can be added while preserving this lower bound on the pairwise distances.

Observe that any maximal \(\varepsilon \)-net of the unit sphere satisfies the condition of Lemma A.1: if \(\Sigma _\varepsilon =\{x^1,\cdots ,x^{N_\varepsilon }\}\) is a maximal \(\varepsilon \)-net of \(\mathbb {S}^{d-1}\), then for each \(x^*\in \mathbb {S}^{d-1}\) there exists \(x^i\in \Sigma _{\varepsilon }\) such that \(|x^*-x^i|\le \varepsilon \). To see this, suppose instead that \(|x^*- x^i|>\varepsilon \) for all \(i=1,\cdots ,N_\varepsilon \). Then

$$\begin{aligned}B(x^*,\varepsilon /2)\cap B(x^i,\varepsilon /2) =\emptyset \text { for all } x^i\in \Sigma _\varepsilon .\end{aligned}$$

Thus \(\Sigma _{\varepsilon }\cup \{x^*\}\) would also be an \(\varepsilon \)-net, which contradicts the maximality of \(\Sigma _\varepsilon \).

Now, let \(\Sigma _\delta \) be any \(\delta \)-net, that is, an \(\varepsilon \)-net with \(\varepsilon =\delta \). Then \(\{B(v^i,\delta /2):\,v^i\in \Sigma _\delta \}\) is a collection of disjoint balls, all contained in \(B(0,1+\delta /2)\setminus B(0,1-\delta /2)\). Thus, based on a simple volumetric argument, we deduce

$$\begin{aligned} |\Sigma _\delta | \le 2d\left( 1+\frac{2}{\delta }\right) ^{d-1}. \end{aligned}$$
(A.2)
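
For intuition, the bound (A.2) can be compared with the size of a greedily constructed, approximately maximal \(\delta \)-net; the following sketch is illustrative only, and the parameter values are arbitrary.

```python
# Illustrative check of the cardinality bound (A.2): greedily grow a
# delta-separated set on S^{d-1} from many random candidates (approximately
# a maximal delta-net) and compare its size with 2*d*(1 + 2/delta)**(d-1).
import numpy as np

rng = np.random.default_rng(2)
d, delta = 3, 0.5
net = []
for _ in range(20000):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)
    # keep x only if it is delta-separated from the current net
    if not net or np.min(np.linalg.norm(np.asarray(net) - x, axis=1)) >= delta:
        net.append(x)
bound = 2 * d * (1 + 2 / delta) ** (d - 1)
print(f"greedy net size: {len(net)}, bound (A.2): {bound:.0f}")
```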

Proof of Lemma 3.1

  1.

    Let \(\{v_i\}_{i=1}^M=\Sigma \subset \mathbb {S}^{d-1}\) be a maximal \(\delta \)-net. By Lemma A.1 and Remark A.2, for any \(u\in \mathbb {S}^{d-1}\) we can find \(v_k\in \Sigma \) such that

    $$\begin{aligned}S_{a+b\delta ,b-b\delta }^{v_k}\subset S_{a,b}^u.\end{aligned}$$

    This means that if all of the \(S_{a+b\delta ,b-b\delta }^{v_i}\) are nonempty, then \(S_{a,b}^u\) is nonempty for every \(u\in \mathbb {S}^{d-1}\), and hence

    $$\begin{aligned}{{\hat{d}}}_r^1(x^0)\ge a.\end{aligned}$$

    Without loss of generality, assume \(x^0\in \mathbb {R}^d\) is the origin, and let \(\alpha =d_\Omega (x^0)\wedge \frac{ r }{2}\). Denote by \(K_{a,b}^u\subset S_{a,b}^u\) the cone of maximal height sharing the base with \(S_{a,b}^u\). Note that \(b\le \alpha \) implies \(K_{a,b}^u\subset {{\overline{B}}}(x^0, r )\cap \Omega \). On the other hand, we need \(a\ge (1-\lambda )\alpha -t\) to deduce the desired lower bound on \({{\hat{d}}}_ r ^1\). Thus we choose

    $$\begin{aligned}a=(1-\lambda )\alpha -t,\,b=\alpha .\end{aligned}$$

    Further, we need the height of \(S_{a+b\delta ,b-b\delta }^{v_i}\) to scale like t, in order to lower bound the volume. Thus we need

    $$\begin{aligned} b-b\delta -(a+b\delta )=(1-2\delta )b-a =(1-2\delta )\alpha -(1-\lambda )\alpha +t =(\lambda -2\delta )\alpha +t. \end{aligned}$$

    As we are interested in \(t\lesssim r ^2\ll \alpha \sim \varepsilon \), we need \(\lambda -2\delta \ge 0\), hence

    $$\begin{aligned}\delta \le \frac{\lambda }{2}.\end{aligned}$$
  2.

    Following the discussion in the previous step, let \(\Sigma =\{v^1,\cdots ,v^{N_\lambda }\}\) be a maximal \(\frac{\lambda }{2}\)-net of \(\mathbb {S}^{d-1}\), and write

    $$\begin{aligned}S^i = S^{v^i}_{a+b\lambda /2,\,b-b\lambda /2} \quad \text { where } a=(1-\lambda )\alpha -t \text { and } b=\alpha . \end{aligned}$$

    Thus, to show that (3.2) holds with probability at least \(1-n^{-\gamma }\), it suffices, by a union bound, to show that

    $$\begin{aligned}\mathbb {P}(\text { No point in } S^i)\le (1-\rho _{\min }|S^i\cap \Omega |)^{n}\le N_\lambda ^{-1}n^{-\gamma } \text { for all } i=1,\cdots ,N_\lambda .\end{aligned}$$
  3.

    We first compute a lower bound for \(|S^i\cap \Omega |\). Temporarily write \(a'=a+b\lambda /2,\,b'=b-b\lambda /2\). Let \(K_{a',b'}^i\) be the cone of height \(b'-a'=t\) sharing the base of \(S^i\). Note that \(K_{a',b'}^i\subset S^i\cap \Omega \) and its base has radius \(\sqrt{r^2-(a')^2}= r \sqrt{1-(a'/ r )^2}\). Since \(|K_{a',b'}^i|\) does not depend on i, we may drop the superscript and deduce

    $$\begin{aligned} |S^i\cap \Omega |\ge |K_{a',b'}|=\int _0^t \omega _{d-1}\left( r \sqrt{1-(a'/ r )^2}\,\frac{s}{t}\right) ^{d-1} ds =\frac{1}{d}\,\omega _{d-1}\, t\, r ^{d-1}\left( 1-(a'/ r )^2\right) ^{\frac{d-1}{2}}. \end{aligned}$$

    As \(a'\le b\le \alpha \le r /2\), we have \((1-(a'/r)^2)^{(d-1)/2}\ge 2^{-(d-1)/2}\). Hence, for each \(i=1,\cdots ,N_\lambda \)

    $$\begin{aligned}\mathbb {P}(\text { No point in } S^i)\le (1-\rho _{\min }|K_{a',b'}|)^{n}\le \left( 1-\frac{\rho _{\min }\omega _{d-1}}{d2^{(d-1)/2}}\,t r ^{d-1}\right) ^{n}.\end{aligned}$$

    The expression on the right is less than \(N_\lambda ^{-1}n^{-\gamma }\) if

    $$\begin{aligned}n\log \left( 1-\frac{\rho _{\min }\omega _{d-1}}{d2^{(d-1)/2}}\,t r ^{d-1}\right) \le -\gamma \log n-\log N_\lambda ,\end{aligned}$$

    or equivalently

    $$\begin{aligned}t r ^{d-1}\ge \frac{d2^{(d-1)/2}(1-e^{-\frac{\gamma \log n+\log N_\lambda }{n}})}{\rho _{\min }\omega _{d-1}}.\end{aligned}$$

    As \(1-e^{-x}\le x\), it suffices for \(t, r \) to satisfy

    $$\begin{aligned}t r ^{d-1}\ge \frac{d2^{(d-1)/2}}{\rho _{\min }\omega _{d-1}}\left( \frac{\gamma \log n+\log N_\lambda }{n}\right) .\end{aligned}$$
  4.

    We claim that \(\log N_\lambda \le \gamma (d-1)\log n\). By setting \(\delta =\frac{\lambda }{2}\) in (A.2), we know

    $$\begin{aligned}N_\lambda \le 2d\left( 1+\frac{4}{\lambda }\right) ^{d-1}=2d\left( \frac{\lambda +4}{\lambda }\right) ^{d-1}.\end{aligned}$$

    Since, by hypothesis, \(n\ge d\vee \frac{\lambda +4}{\lambda }\) and \(\gamma >2\), we see that

    $$\begin{aligned}n^{\gamma (d-1)}\ge n^{d-1} n^{d-1}\ge 2d \left( \frac{\lambda +4}{\lambda }\right) ^{d-1}\ge N_\lambda .\end{aligned}$$

    Thus \(\gamma \log n +\log N_\lambda \le d\gamma \log n\), and it suffices for \(t, r \) to satisfy

    $$\begin{aligned}t r ^{d-1}\ge \frac{d^2 2^{(d-1)/2}\gamma }{\rho _{\min }\omega _{d-1}}\left( \frac{\log n}{n}\right) .\end{aligned}$$

    This completes the proof. \(\square \)
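
To get a feel for the scaling, the sufficient condition derived in step (4) can be evaluated numerically; the sketch below simply transcribes the final inequality, and the parameter values are placeholders.

```python
# Evaluate the sufficient condition from step (4) of the proof of Lemma 3.1:
#   t * r^{d-1} >= d^2 * 2^{(d-1)/2} * gamma / (rho_min * omega_{d-1}) * log(n)/n.
# Purely illustrative; the parameter values below are made up.
import math

def unit_ball_volume(d):
    """Volume of the unit ball in R^d: pi^{d/2} / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def min_t_r_product(d, n, gamma=3.0, rho_min=1.0):
    omega = unit_ball_volume(d - 1)          # omega_{d-1}
    return d**2 * 2 ** ((d - 1) / 2) * gamma / (rho_min * omega) * math.log(n) / n

for n in (10**3, 10**4, 10**5):
    print(f"d=2, n={n:>6}: need t*r >= {min_t_r_product(2, n):.2e}")
```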

Appendix B. Proof of Proposition 6.3

Proof

The proof is split into several steps.

1. Let \(y\in \partial \Omega \) satisfy \(d_\Omega (x_*)=|x_*-y|\), and let \(z\in \partial B(x^0,\varepsilon )\) lie on the segment from \(x_*\) to \(y\). Then we have

$$\begin{aligned}d_{\Omega }(z) \le d_\Omega (x_*) - |x_*-z|\end{aligned}$$

and so by the property defining \(x_*\) we have \(x_*=z\); that is, \(x_*\in \partial B(x^0,\varepsilon )\). Since \(d_\Omega \) is 1-Lipschitz, we have \(d_\Omega (x_*) \ge d_\Omega (x^0)-\varepsilon \). By an argument similar to the one above, we also have \(d_\Omega (x_*) \le d_\Omega (x^0) - \varepsilon \), and so

$$\begin{aligned}d_\Omega (x_*) = d_\Omega (x^0) - \varepsilon .\end{aligned}$$

Now, note that the function

$$\begin{aligned}g(r) = d_\Omega (x_* + r p)\end{aligned}$$

is 1-Lipschitz and satisfies \(g(\varepsilon ) = d_\Omega (x^0) = g(0) + \varepsilon \). It follows that \(g(r) = g(0) + r\) for \(0\le r\le \varepsilon \), and so

$$\begin{aligned} d_\Omega (x_* + r p) = d_\Omega (x_*) + r \ \ \text { for }0 \le r \le \varepsilon . \end{aligned}$$
(B.1)

2. Since the function \(x\mapsto d_\Omega (x) - \frac{1}{R}|x-x_*|^2\) is concave, there exists \(q\in \mathbb {R}^d\) such that

$$\begin{aligned}d_\Omega (x) - d_\Omega (x_*) \le q\cdot (x-x_*)+ \frac{1}{R}|x-x_*|^2 \end{aligned}$$

for all \(x\in \Omega \). By (B.1) we have

$$\begin{aligned}r = d_\Omega (x_* + rp) - d_\Omega (x_*) \le r q\cdot p + \frac{r^2}{R}\end{aligned}$$

for \(0 \le r \le \varepsilon \). Therefore

$$\begin{aligned}q\cdot p \ge 1 - \frac{r}{R}.\end{aligned}$$

Sending \(r\rightarrow 0^+\) we find that \(p\cdot q \ge 1\).

3. We now claim that \(|q|\le 1\), which combined with \(p\cdot q \ge 1\) from part 2 implies that \(p=q\) and completes the proof. To see this, since \(B(x^0,\varepsilon )\subset \Omega \), we have \(B(x_*,r)\subset \Omega \) for \(r>0\) sufficiently small. Now, the dynamic programming principle gives

$$\begin{aligned}0 = \min _{x\in B(x_*,r)}\left\{ d_\Omega (x) - d_\Omega (x_*)+|x-x_*| \right\} \le \min _{x\in B(x_*,r)}\left\{ q\cdot (x-x_*)+|x-x_*| \right\} + \frac{r^2}{R}.\end{aligned}$$

Setting \(x-x_*=-|x-x_*|q/|q|\) we have

$$\begin{aligned}0 \le \min _{x\in B(x_*,r)}\left\{ |x-x_*|(1-|q|) \right\} + \frac{r^2}{R} = -r(|q|-1)_+ + \frac{r^2}{R}.\end{aligned}$$

Sending \(r\rightarrow 0^+\) we obtain \(|q|\le 1\), which completes the proof. \(\square \)

Appendix C. Concentration Inequalities

For reference, we state the Chernoff bounds, the Hoeffding inequality, and the Bernstein inequality, which are concentration of measure inequalities used to control the fluctuations of our normal and distance estimators. We refer the reader to [14] for a general reference on concentration inequalities. Proofs of the exact inequalities below can also be found in [18, Chapter 5].

Theorem C.1

(Chernoff bounds) Let \(X_1,X_2,\dots ,X_n\) be a sequence of i.i.d. Bernoulli random variables with parameter \(p\in [0,1]\) (i.e., \(\mathbb {P}(X_i=1)=p\) and \(\mathbb {P}(X_i=0)=1-p\)). Then for any \(\varepsilon >0\) we have

$$\begin{aligned} \mathbb {P}\left( \sum _{i=1}^n X_i \ge (1+\varepsilon )np\right) \le \exp \left( -\frac{np\,\varepsilon ^2}{2(1+\tfrac{1}{3} \varepsilon )} \right) , \end{aligned}$$
(C.1)

and for any \(0 \le \varepsilon < 1\) we have

$$\begin{aligned} \mathbb {P}\left( \sum _{i=1}^n X_i \le (1-\varepsilon )np\right) \le \exp \left( -\frac{1}{2}np\,\varepsilon ^2 \right) . \end{aligned}$$
(C.2)

Theorem C.2

(Hoeffding inequality) Let \(X_1,X_2,\dots ,X_n\) be a sequence of i.i.d. real-valued random variables with finite expectation \(\mu =\mathbb {E}[X_i]\), and write \(S_n=\frac{1}{n}\sum _{i=1}^n X_i\). Assume there exists \(b>0\) such that \(|X_i-\mu |\le b\) almost surely. Then for any \(t>0\) we have

$$\begin{aligned} \mathbb {P}(S_n-\mu \ge t)\le \exp \left( -\frac{nt^2}{2b^2}\right) . \end{aligned}$$
(C.3)

Theorem C.3

(Bernstein inequality) Let \(X_1,X_2,\dots ,X_n\) be a sequence of i.i.d. real-valued random variables with finite expectation \(\mu =\mathbb {E}[X_i]\) and variance \(\sigma ^2=\text {Var}(X_i)\), and write \(S_n=\frac{1}{n}\sum _{i=1}^n X_i\). Assume there exists \(b>0\) such that \(|X_i-\mu |\le b\) almost surely. Then for any \(t>0\) we have

$$\begin{aligned} \mathbb {P}(S_n-\mu \ge t)\le \exp \left( -\frac{nt^2}{2(\sigma ^2 + \tfrac{1}{3} bt)} \right) . \end{aligned}$$
(C.4)
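
As an illustration of how these bounds behave (not used anywhere in the analysis), the lower Chernoff bound (C.2) can be compared with an empirical tail probability for Bernoulli sums; the parameter values in the sketch below are arbitrary.

```python
# Monte Carlo illustration of the lower Chernoff bound (C.2):
#   P( sum X_i <= (1 - eps) n p ) <= exp(-n p eps^2 / 2)  for i.i.d. Bernoulli(p).
import numpy as np

rng = np.random.default_rng(3)
n, p, eps, trials = 500, 0.3, 0.2, 200_000

sums = rng.binomial(n, p, size=trials)           # each draw is a sum of n Bernoulli(p)
empirical = np.mean(sums <= (1 - eps) * n * p)   # empirical tail probability
bound = np.exp(-0.5 * n * p * eps**2)            # right-hand side of (C.2)
print(f"empirical tail: {empirical:.4f}   Chernoff bound: {bound:.4f}")
# The empirical frequency stays below the bound (up to Monte Carlo error).
```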

Appendix D. List of Constants

We list the explicit constants that appear in Sects. 2 and 3. Below, \(\omega _d\) is the volume of the unit ball in \(d\) dimensions, and \(\gamma >2\) is a user-chosen parameter that controls the error rate via \(\mathbb {P}(\text {Boundary test fails})=O(n^{-\gamma })\).

$$\begin{aligned} C_{x}&= 2\omega _{d-1}+\frac{LR\omega _{d}}{\rho _{\min }}, \\ C_{y}&= \frac{\omega _{d-1}}{2(d+1)}, \\ C_ r&= \frac{1}{R}\max \left[ \left( \frac{3\gamma \rho _{\max }d^2\omega _{d}R^2}{{C_{x}}^2 \rho _{\min }^2 }\right) ^{\frac{1}{d+2}},\left( \frac{4\gamma C_yd^2 2^{(d-1)/2}}{13\rho _{\min }\omega _{d-1}C_{x}}\right) ^{\frac{1}{d+1}}\right] . \end{aligned}$$
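
For convenience, these constants can be evaluated numerically by transcribing the formulas above directly; in the sketch below, the parameter values passed to the function are placeholders.

```python
# Direct transcription of the constants C_x, C_y, C_r from Appendix D.
# The parameter values used at the bottom (R, L, rho_min, rho_max, gamma)
# are placeholders, not values from the paper.
import math

def unit_ball_volume(d):
    """Volume of the unit ball in R^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def constants(d, R, L, rho_min, rho_max, gamma):
    w_d, w_dm1 = unit_ball_volume(d), unit_ball_volume(d - 1)
    C_x = 2 * w_dm1 + L * R * w_d / rho_min
    C_y = w_dm1 / (2 * (d + 1))
    C_r = (1 / R) * max(
        (3 * gamma * rho_max * d**2 * w_d * R**2 / (C_x**2 * rho_min**2)) ** (1 / (d + 2)),
        (4 * gamma * C_y * d**2 * 2 ** ((d - 1) / 2) / (13 * rho_min * w_dm1 * C_x)) ** (1 / (d + 1)),
    )
    return C_x, C_y, C_r

print(constants(d=2, R=1.0, L=1.0, rho_min=0.5, rho_max=2.0, gamma=3.0))
```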

About this article

Cite this article

Calder, J., Park, S. & Slepčev, D. Boundary Estimation from Point Clouds: Algorithms, Guarantees and Applications. J Sci Comput 92, 56 (2022). https://doi.org/10.1007/s10915-022-01894-9
