
Kernel-Based Methods for Solving Time-Dependent Advection-Diffusion Equations on Manifolds


Abstract

In this paper, we extend a class of kernel methods, the diffusion maps (DM) and ghost point diffusion maps (GPDM), to solve time-dependent advection-diffusion PDEs on unknown smooth manifolds, with and without boundaries. The core idea is to directly approximate the spatial components of the differential operator on the manifold with a local integral operator and to combine it with a standard implicit time-difference scheme. When the manifold has a boundary, a simplified version of the GPDM approach is used to overcome the bias of the integral approximation near the boundary. The Monte Carlo discretization of the integral operator over the point cloud data gives rise to a mesh-free formulation that is natural for randomly distributed points, even when the manifold is embedded in a high-dimensional ambient space. Here, we establish the convergence of the proposed solver in appropriate topologies, depending on the distribution of the point cloud data and the boundary type. We provide numerical results that validate the convergence results on various examples involving simple geometries and an unknown manifold.


Data Availability

Data sharing is not applicable to this article as no datasets were generated during this study.

References

  1. Ahlberg, J.H., Nilson, E.N.: Convergence properties of the spline fit. J. Soc. Ind. Appl. Math. 11(1), 95–104 (1963)

  2. Berry, T., Giannakis, D.: Spectral exterior calculus. Commun. Pure Appl. Math. 73(4), 689–770 (2020)

  3. Berry, T., Harlim, J.: Variable bandwidth diffusion kernels. Appl. Comput. Harmon. Anal. 40, 68–96 (2016)

  4. Berry, T., Harlim, J.: Iterated diffusion maps for feature identification. Appl. Comput. Harmon. Anal. 45(1), 84–119 (2018)

  5. Berry, T., Harlim, J.: Iterated diffusion maps for feature identification. Appl. Comput. Harmon. Anal. 45(1), 84–119 (2018)

  6. Berry, T., Sauer, T.: Local kernels and the geometric structure of data. Appl. Comput. Harmon. Anal. 40(3), 439–469 (2016)

  7. Bertalmío, M., Cheng, L.T., Osher, S., Sapiro, G.: Variational problems and partial differential equations on implicit surfaces. J. Comput. Phys. 174(2), 759–780 (2001)

  8. Bonito, A., Cascón, J.M., Mekchay, K., Morin, P., Nochetto, R.H.: High-order AFEM for the Laplace–Beltrami operator: convergence rates. Found. Comput. Math. 16(6), 1473–1539 (2016)

  9. Camacho, F., Demlow, A.: L2 and pointwise a posteriori error estimates for FEM for elliptic PDEs on surfaces. IMA J. Numer. Anal. 35(3), 1199–1227 (2015)

  10. Coifman, R.R., Lafon, S.: Diffusion maps. Appl. Comput. Harmon. Anal. 21(1), 5–30 (2006)

  11. Coifman, R.R., Shkolnisky, Y., Sigworth, F.J., Singer, A.: Graph Laplacian tomography from unknown random projections. IEEE Trans. Image Process. 17(10), 1891–1899 (2008)

  12. Crane, K.: Keenan's 3D model repository. http://www.cs.cmu.edu/~kmcrane/Projects/ModelRepository

  13. Dziuk, G., Elliott, C.M.: Finite element methods for surface PDEs. Acta Numer. 22, 289–396 (2013)

  14. Elliott, C.M., Stinner, B.: Modeling and computation of two phase geometric biomembranes using surface finite elements. J. Comput. Phys. 229(18), 6585–6612 (2010)

  15. Fuselier, E.J., Wright, G.B.: A high-order kernel method for diffusion and reaction-diffusion equations on surfaces. J. Sci. Comput. 56(3), 535–565 (2013)

  16. Gilani, F., Harlim, J.: Approximating solutions of linear elliptic PDEs on a smooth manifold using local kernel. J. Comput. Phys. 395, 563–582 (2019)

  17. Gross, B.J., Trask, N., Kuberry, P., Atzberger, P.J.: Meshfree methods on manifolds for hydrodynamic flows on curved surfaces: a generalized moving least-squares (GMLS) approach. J. Comput. Phys. 409, 109340 (2020)

  18. Harlim, J.: Data-Driven Computational Methods: Parameter and Operator Estimations. Cambridge University Press, Cambridge (2018). https://doi.org/10.1017/9781108562461

  19. Jiang, S.W., Harlim, J.: Ghost point diffusion maps for solving elliptic PDEs on manifolds with classical boundary conditions. Commun. Pure Appl. Math. (in press). arXiv:2006.04002

  20. Krylov, N.: Lectures on Elliptic and Parabolic Equations in Hölder Spaces, vol. 12. American Mathematical Society (1996)

  21. Lehto, E., Shankar, V., Wright, G.B.: A radial basis function (RBF) compact finite difference (FD) scheme for reaction-diffusion equations on surfaces. SIAM J. Sci. Comput. 39(5), A2129–A2151 (2017)

  22. LeVeque, R.J.: Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-Dependent Problems, vol. 98. SIAM (2007)

  23. Liang, J., Zhao, H.: Solving partial differential equations on point clouds. SIAM J. Sci. Comput. 35(3), A1461–A1486 (2013)

  24. Lieberman, G.M.: Second Order Parabolic Differential Equations. World Scientific (1996)

  25. Macdonald, C.B., Ruuth, S.J.: The implicit closest point method for the numerical solution of partial differential equations on surfaces. SIAM J. Sci. Comput. 31(6), 4330–4350 (2010)

  26. Mémoli, F., Sapiro, G., Thompson, P.: Implicit brain imaging. NeuroImage 23, S179–S188 (2004)

  27. Morton, K.W., Mayers, D.F.: Numerical Solution of Partial Differential Equations: An Introduction. Cambridge University Press (2005)

  28. Piret, C.: The orthogonal gradients method: a radial basis functions method for solving partial differential equations on arbitrary surfaces. J. Comput. Phys. 231(14), 4662–4675 (2012)

  29. Rauter, M., Tuković, Ž.: A finite area scheme for shallow granular flows on three-dimensional surfaces. Comput. Fluids 166, 184–199 (2018)

  30. Ruuth, S.J., Merriman, B.: A simple embedding method for solving partial differential equations on surfaces. J. Comput. Phys. 227(3), 1943–1961 (2008)

  31. Shankar, V., Wright, G.B., Kirby, R.M., Fogelson, A.L.: A radial basis function (RBF)-finite difference (FD) method for diffusion and reaction-diffusion equations on surfaces. J. Sci. Comput. 63(3), 745–768 (2015)

  32. Shi, Z.: Enforce the Dirichlet boundary condition by volume constraint in point integral method. Commun. Math. Sci. 15(6), 1743–1769 (2017)

  33. Singer, A., Wu, H.-T.: Orientability and diffusion maps. Appl. Comput. Harmon. Anal. 31(1), 44–58 (2011)

  34. Suchde, P., Kuhnert, J.: A meshfree generalized finite difference method for surface PDEs. Comput. Math. Appl. 78(8), 2789–2805 (2019)

  35. Thomas, J.W.: Numerical Partial Differential Equations: Finite Difference Methods. Springer-Verlag, New York (1995)

  36. Varah, J.M.: A lower bound for the smallest singular value of a matrix. Linear Algebra Appl. 11(1), 3–5 (1975)

  37. Vaughn, R., Berry, T., Antil, H.: Diffusion maps for embedded manifolds with boundary with applications to PDEs. arXiv preprint arXiv:1912.01391 (2019)

  38. Virga, E.G.: Variational Theories for Liquid Crystals. CRC Press (2018)

  39. Von Luxburg, U., Belkin, M., Bousquet, O.: Consistency of spectral clustering. Ann. Stat. 36(2), 555–586 (2008)

  40. Walker, S.W.: FELICITY: A MATLAB/C++ toolbox for developing finite element methods and simulation modeling. SIAM J. Sci. Comput. 40(2), C234–C257 (2018)


Acknowledgements

The authors thank the editor and the referees for their constructive comments and suggestions, which improved the paper. The authors also thank Faheem Gilani for providing an initial sample code for the FELICITY FEM toolbox [40].

Funding

The research of JH was partially supported by the NSF grants DMS-1854299, DMS-2207328, and DMS-2229435, and by the ONR grant N00014-22-1-2193. This research was supported in part by a Seed Grant award from the Institute for Computational and Data Sciences at the Pennsylvania State University. The research of SJ was supported by the NSFC Grant No. 12101408.

Author information


Corresponding author

Correspondence to Shixiao W. Jiang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

A Kernel Method for Approximating Normal Vectors at the Boundary

Assume that randomly sampled point cloud data, \(\{x_{i}\}_{i=1}^{N}\), lie on a d-dimensional manifold \(M\subseteq {\mathbb {R}}^{m}\). Among these N data points, B boundary points, \(\{x_{b}\}_{b=1}^{B}\), lie on the \((d-1)\)-dimensional boundary \(\partial M\). Our goal here is to approximate the normal vectors at these boundary points using the kernel-based weighted linear regression method introduced in Corollary 3.2 of [5]; a code sketch of the full procedure follows Algorithm A1 below.

Algorithm A1 Kernel method for approximating normal vectors:

  1. For a boundary point \(x\in \{x_{b}\}_{b=1}^{B}\subseteq \partial M\), define \({\textbf{X}}\) to be the \(m\times k\) matrix whose columns are \({\textbf{X}}_{j}=D(x)^{-1/2}\exp (-\Vert x-x_{j}\Vert ^{2}/4\epsilon )(x_{j}-x)\), where \(D(x)=\sum _{j=1}^{k}\exp \left( -\Vert x-x_{j}\Vert ^{2}/2\epsilon \right) \). Here, \(x_{j}\) (\(j=1,\ldots ,k\)) are the \(k>d\) nearest neighbors of the boundary point x chosen from \(\{x_{i}\}_{i=1}^{N}\subseteq M\). The bandwidth \(\epsilon \) is specified using these k neighbors by the automatically tuned method in (12).

  2. Compute the left-singular vectors of \({\textbf{X}}\) using the singular value decomposition (SVD) to obtain \(\varvec{{\tilde{t}}}_{1},\varvec{{\tilde{t}}}_{2},\ldots ,\varvec{{\tilde{t}}}_{d}\in {\mathbb {R}}^{m}\) that span the tangent space of M at the boundary point. The d largest singular values of \({\textbf{X}}\) are of order-\(\sqrt{\epsilon }\), and their associated left-singular vectors \(\varvec{{\tilde{t}}}_{1},\ldots ,\varvec{{\tilde{t}}}_{d}\) are parallel to the tangent space of M. Thus, the error of the leading singular vectors \(\varvec{{\tilde{t}}}_{1},\ldots ,\varvec{{\tilde{t}}}_{d}\) in approximating the tangent vectors of M at the boundary point x is also of order-\(\sqrt{\epsilon }\) (see Appendix A of [5] for a detailed discussion). The remaining \(\min \{m,k\}-d\) singular values are of order-\(\epsilon \), with left-singular vectors orthogonal to the tangent space of M.

  3. Repeat Steps 1 and 2, but with the k nearest neighbors of x chosen from \(\{x_{b}\}_{b=1}^{B}\subseteq \partial M\) for each \(x\in \partial M\), to obtain the \((d-1)\) tangent vectors, \(\varvec{{\tilde{s}}}_{1},\ldots ,\varvec{{\tilde{s}}}_{d-1}\), associated with the \((d-1)\) largest singular values. These \((d-1)\) tangent vectors approximately span the \((d-1)\)-dimensional boundary \(\partial M\). The bandwidth \(\epsilon _{0}\) is likewise specified using these k nearest neighbors from \(\partial M\).

  4. Calculate the normal direction \(\varvec{{\tilde{\nu }}}\) by subtracting from the tangent vector \(\varvec{{\tilde{t}}}_{p}\), for some \(p\in \{1,\ldots ,d\}\), its orthogonal projection onto \(Span\{\varvec{{\tilde{s}}}_{1},\ldots ,\varvec{{\tilde{s}}}_{d-1}\}\), using the Gram–Schmidt process or a QR decomposition,

    $$\begin{aligned} \varvec{{\tilde{\nu }}}=\frac{\varvec{{\tilde{t}}}_{p}-\sum _{i=1}^{d-1} \left\langle \varvec{{\tilde{t}}}_{p},\varvec{{\tilde{s}}} _{i}\right\rangle \varvec{{\tilde{s}}}_{i}}{\Vert \varvec{{\tilde{t}}} _{p}-\sum _{i=1}^{d-1}\left\langle \varvec{{\tilde{t}}}_{p},\varvec{{\tilde{s}}}_{i}\right\rangle \varvec{{\tilde{s}}}_{i}\Vert }, \end{aligned}$$

    where \(\left\langle {\varvec{a}},{\varvec{b}}\right\rangle \) denotes the inner product between vectors \({\varvec{a}},{\varvec{b}}\in {\mathbb {R}}^{m}\), and the tangent vectors \(\{\varvec{{\tilde{t}}}_{1},\varvec{{\tilde{t}}}_{2},\ldots ,\varvec{{\tilde{t}}}_{d}\}\) and \(\{\varvec{{\tilde{s}}}_{1},\ldots ,\varvec{{\tilde{s}}}_{d-1}\}\) are unit vectors generated by the SVD. The error estimate for the normal direction \(\varvec{{\tilde{\nu }}}\) is then \(O(\sqrt{\epsilon },\sqrt{\epsilon _{0}})\), that is, \(\Vert \varvec{\nu }-\varvec{{\tilde{\nu }}}\Vert =O(\sqrt{\epsilon },\sqrt{\epsilon _{0}})\). In essence, \(\varvec{{\tilde{\nu }}}\) is a vector in the tangent space of M, \(Span\{\varvec{{\tilde{t}}}_{1},\varvec{{\tilde{t}}}_{2},\ldots ,\varvec{{\tilde{t}}}_{d}\}\), that is also perpendicular to the boundary subspace \(Span\{\varvec{{\tilde{s}}}_{1},\ldots ,\varvec{{\tilde{s}}}_{d-1}\}\).

  5. Determine the sign of \(\varvec{{\tilde{\nu }}}\) from the orientation of the manifold M by comparing \(\varvec{{\tilde{\nu }}}\) with the mean of the vectors connecting x to its k nearest neighbors.
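For concreteness, the following is a minimal NumPy sketch of Algorithm A1 for a single boundary point. It is our illustration rather than the authors' code: the neighbor sets are assumed precomputed, the bandwidths \(\epsilon \) and \(\epsilon _{0}\) are passed in rather than auto-tuned as in (12), and Step 5 is implemented by orienting \(\varvec{{\tilde{\nu }}}\) away from the mean direction of the neighboring points.

```python
import numpy as np

def kernel_weighted_basis(x, nbrs, eps, dim):
    """Steps 1-2: kernel-weighted local SVD. Columns of X are
    D(x)^{-1/2} * exp(-|x - x_j|^2 / (4*eps)) * (x_j - x); the `dim`
    leading left-singular vectors (singular values of order sqrt(eps))
    approximately span the target tangent space."""
    diffs = nbrs - x                                # (k, m)
    sq = np.sum(diffs**2, axis=1)
    D = np.sum(np.exp(-sq / (2.0 * eps)))           # normalization D(x)
    w = np.exp(-sq / (4.0 * eps)) / np.sqrt(D)      # per-column weights
    X = (w[:, None] * diffs).T                      # (m, k)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :dim]                               # orthonormal (m, dim)

def boundary_normal(x, manifold_nbrs, boundary_nbrs, eps, eps0, d):
    """Steps 1-5 of Algorithm A1 for one boundary point x in R^m."""
    T = kernel_weighted_basis(x, manifold_nbrs, eps, d)        # tangent of M
    S = kernel_weighted_basis(x, boundary_nbrs, eps0, d - 1)   # tangent of dM
    # Step 4: pick the tangent vector t_p least aligned with Span{s_i},
    # so the Gram-Schmidt residual below is well separated from zero.
    p = np.argmin(np.linalg.norm(S.T @ T, axis=0))
    t = T[:, p]
    nu = t - S @ (S.T @ t)          # subtract projection onto Span{s_i}
    nu /= np.linalg.norm(nu)
    # Step 5: fix the sign by comparing with the mean of the vectors
    # connecting x to its nearest neighbors (which point into M).
    if np.dot(nu, np.mean(manifold_nbrs - x, axis=0)) > 0:
        nu = -nu
    return nu
```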

B Proof of Lemma 3.6

First, we prove that \(\Vert ({\textbf{I}}-\Delta t{\textbf{N}})^{-1}\Vert _{\infty }\le 1\). To obtain this result, we need the following two properties of the matrix \({\textbf{N}}\):

  1) The diagonal entries are negative, \(N_{ii}<0\), and the off-diagonal entries are non-negative, \(N_{ij}\ge 0\) for \(j\ne i\);

  2) Every row sum of \({\textbf{N}}\) is zero, that is, \(\sum _{j=1}^{N-B}N_{ij}=0\) for \(i=1,\ldots ,N-B\).

For the convenience of discussion, we define the \((N-B)\times {\bar{N}}\) sub-matrix of \({\textbf{L}}^{h}\) in (25) formed by its \(N-B\) rows corresponding to interior points,

$$\begin{aligned} ({\textbf{L}}^{h})_{(N-B)\times {\bar{N}}}:=\left( {\textbf{L}}^{I,I},{\textbf{L}}^{I,B},{\textbf{L}}^{I,G}\right) \in ({\mathbb {R}}^{(N-B)\times (N-B)},{\mathbb {R}}^{(N-B)\times B},{\mathbb {R}}^{(N-B)\times BK})={\mathbb {R}}^{(N-B)\times {\bar{N}}}, \end{aligned}$$
(58)

where the columns of \({\textbf{L}}^{I,I}\), \({\textbf{L}}^{I,B}\), and \({\textbf{L}}^{I,G}\) correspond to interior points, boundary points, and exterior ghost points, respectively. We also separate the matrix \({\textbf{G}}\in {\mathbb {R}}^{BK\times N}\) in (27) into two parts,

$$\begin{aligned} {\textbf{G}}:=({\textbf{G}}^{G,I},{\textbf{G}}^{G,B})\in ({\mathbb {R}}^{BK\times (N-B)},{\mathbb {R}}^{BK\times B}), \end{aligned}$$

where each column of \({\textbf{G}}^{G,I}\) and \({\textbf{G}}^{G,B}\) corresponds to an interior point and a boundary point, respectively. From the extrapolation formula (22), we have

$$\begin{aligned} U_{b,l}=\left( l+1\right) u(x_{b})-lu({\tilde{x}}_{b,0}). \end{aligned}$$

Then the submatrices \({\textbf{G}}^{G,I}\) and \({\textbf{G}}^{G,B}\) can be expressed as

$$\begin{aligned} {\textbf{G}}^{G,I}=\left( \begin{array}{ccc} &{} \vdots &{} \\ \cdots &{} G_{\left\{ \left( b,l\right) ,(b,0)\right\} }=-l &{} \cdots \\ &{} \vdots &{} \end{array} \right) _{BK\times (N-B)}, \qquad {\textbf{G}}^{G,B}=\left( \begin{array}{ccc} &{} \vdots &{} \\ \cdots &{} G_{\left\{ \left( b,l\right) ,b\right\} }=l+1 &{} \cdots \\ &{} \vdots &{} \end{array} \right) _{BK\times B}, \end{aligned}$$
(59)

where \(G_{\left\{ \left( b,l\right) ,(b,0)\right\} }=-l\) is the only nonzero entry in the row of \({\textbf{G}}^{G,I}\) corresponding to the ghost point \({\tilde{x}}_{b,l}\), located in the column of the interior point \({\tilde{x}}_{b,0}\), and \(G_{\left\{ \left( b,l\right) ,b\right\} }=l+1\) is the only nonzero entry in the row of \({\textbf{G}}^{G,B}\) corresponding to \({\tilde{x}}_{b,l}\), located in the column of the boundary point \(x_{b}\). With these definitions, we can use (27) to rewrite the matrix \({\textbf{N}}\),

$$\begin{aligned} {\textbf{N}}=\varvec{{\tilde{L}}}^{I,I}+\varvec{{\tilde{L}}}^{I,B}{\textbf{E}}^{B,I}=({\textbf{L}}^{I,I}+{\textbf{L}}^{I,G}{\textbf{G}}^{G,I})+({\textbf{L}}^{I,B}+{\textbf{L}}^{I,G}{\textbf{G}}^{G,B}){\textbf{E}}^{B,I}={\textbf{L}}^{I,I}+{\textbf{L}}^{I,B}{\textbf{E}}^{B,I}+{\textbf{L}}^{I,G}\varvec{{\tilde{G}}}^{G,I}, \end{aligned}$$
(60)

where we have defined \(\varvec{{\tilde{G}}}^{G,I}:= {\textbf{G}}^{G,I}+{\textbf{G}}^{G,B}{\textbf{E}}^{B,I}\). Using the definitions of \({{\textbf{G}}^{G,I}}\) and \({{\textbf{G}}^{G,B}}\) in (59) and \({{\textbf{E}}^{B,I}}\) in (45), one can calculate \({\varvec{{\tilde{G}}}^{G,I}}\) to obtain

$$\begin{aligned} \varvec{{\tilde{G}}}^{G,I}=\left( \begin{array}{ccc} &{} \vdots &{} \\ \cdots &{} G_{\left\{ \left( b,l\right) ,(b,0)\right\} }=1 &{} \cdots \\ &{} \vdots &{} \end{array} \right) _{BK\times (N-B)}. \end{aligned}$$

For \(\varvec{{\tilde{G}}}^{G,I}\), the only nonzero entry in each row is \(G_{\left\{ \left( b,l\right) ,(b,0)\right\} }=1\) for \(b=1,\ldots ,B\) and \(l=1,\ldots ,K\). Thus, all entries of \({\textbf{L}}^{I,B}{\textbf{E}}^{B,I}+{\textbf{L}}^{I,G}\varvec{{\tilde{G}}}^{G,I}\) in (60) are non-negative and, moreover, all of its diagonal entries are zero. Also notice that the diagonal entries of \({\textbf{L}}^{I,I}\) are negative and all of its off-diagonal entries are non-negative. So far, we have proved the first property: for all \(i=1,\ldots ,N-B\), \({\textbf{N}}\) has a negative diagonal entry \(N_{ii}<0\) and non-negative off-diagonal entries \(N_{ij}\ge 0\) for \(j\ne i\).

We now verify the second property of \({\textbf{N}}\): every row sum is zero. Using the definition in (60) and defining the all-ones vector \({\textbf{1}}_{s}\) of length s, one can show that

$$\begin{aligned} {\textbf{N}}{\textbf{1}}_{N-B}=({\textbf{L}}^{I,I}+{\textbf{L}}^{I,B}{\textbf{E}}^{B,I}+{\textbf{L}}^{I,G}\varvec{{\tilde{G}}}^{G,I}){\textbf{1}}_{N-B}={\textbf{L}}^{I,I}{\textbf{1}}_{N-B}+{\textbf{L}}^{I,B}{\textbf{1}}_{B}+{\textbf{L}}^{I,G}{\textbf{1}}_{BK}={\textbf{0}}, \end{aligned}$$
(61)

where we have used the fact that every row sum of \({\textbf{L}}^{h}\) in (58) is zero. Based on these two properties of \({\textbf{N}}\), one can immediately see that \({\textbf{I}}-\Delta t{\textbf{N}}\) is strictly diagonally dominant (SDD) for any \(\Delta t\). Using the Ahlberg–Nilson–Varah bound for SDD matrices [1, 36], one obtains

$$\begin{aligned} \Vert ({\textbf{I}}-\Delta t{\textbf{N}})^{-1}\Vert _{\infty }\le \frac{1}{ \min _{i}(|1-\Delta tN_{ii}|-\Delta t\sum _{j\ne i}|N_{ij}|)}=\frac{1}{ \min _{i}(1-\Delta t\sum _{j}N_{ij})}=1. \end{aligned}$$

This completes the first part of Lemma 3.6.
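These two structural properties are all that the bound requires. As a sanity check (our illustration, not the GPDM matrix itself), the following NumPy sketch generates a random matrix \({\textbf{N}}\) satisfying properties 1) and 2) and verifies \(\Vert ({\textbf{I}}-\Delta t{\textbf{N}})^{-1}\Vert _{\infty }\le 1\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 200, 0.5  # the bound holds for any time step dt > 0

# Property 1): non-negative off-diagonal entries;
# Property 2): diagonal entries chosen so that every row sums to zero.
N = rng.random((n, n))
np.fill_diagonal(N, 0.0)
np.fill_diagonal(N, -N.sum(axis=1))

A = np.eye(n) - dt * N
# Row-wise SDD margin |A_ii| - sum_{j != i} |A_ij|; it equals 1 here.
margin = np.min(2.0 * np.abs(A.diagonal()) - np.abs(A).sum(axis=1))
inv_inf = np.linalg.norm(np.linalg.inv(A), ord=np.inf)
print(margin, inv_inf)            # 1.0 and a value <= 1
assert inv_inf <= 1.0 + 1e-10     # Ahlberg-Nilson-Varah bound
```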

Next, we prove that \(\Vert ({\textbf{I}}-\Delta t{\textbf{N}})^{-1}\Vert _{2}\le 1+C\Delta t\). To obtain this result, we need to show that \( {\textbf{I}}-\Delta t{\textbf{N}}^{\top }\) is SDD for sufficiently large N and sufficiently small \(\Delta t\). We can compute

$$\begin{aligned} |1-\Delta tN_{ii}|-\Delta t\sum _{j\ne i}|N_{ji}| &= 1-\Delta tN_{ii}-\Delta t\sum _{j\ne i}N_{ji}=1-\Delta t\sum _{j}N_{ji} \\ &= 1-\Delta t\sum _{j}N_{ij}+\Delta t\Big (\sum _{j}N_{ij}-\sum _{j}N_{ji}\Big ) \\ &= 1+\Delta t\Big [({\textbf{L}}^{I,I}+{\textbf{L}}^{I,B}{\textbf{E}}^{B,I}+{\textbf{L}}^{I,G}\varvec{{\tilde{G}}}^{G,I}){\textbf{1}}_{N-B}-({\textbf{L}}^{I,I}+{\textbf{L}}^{I,B}{\textbf{E}}^{B,I}+{\textbf{L}}^{I,G}\varvec{{\tilde{G}}}^{G,I})^{\top }{\textbf{1}}_{N-B}\Big ]_{i} \\ &= 1+\Delta t\frac{\epsilon ^{-d/2-1}}{m_{0}(N+BK)}\Big [\underbrace{\big ({\textbf{K}}^{I,I}{\textbf{1}}-({\textbf{K}}^{I,I})^{\top }{\textbf{1}}\big )}_{(I)}+\underbrace{\big ({\textbf{K}}^{I,B}{\textbf{E}}^{B,I}{\textbf{1}}-({\textbf{K}}^{I,B}{\textbf{E}}^{B,I})^{\top }{\textbf{1}}\big )}_{(II)}+\underbrace{\big ({\textbf{K}}^{I,G}\varvec{{\tilde{G}}}^{G,I}{\textbf{1}}-({\textbf{K}}^{I,G}\varvec{{\tilde{G}}}^{G,I})^{\top }{\textbf{1}}\big )}_{(III)}\Big ]_{i}, \end{aligned}$$
(62)

where \({\textbf{K}}^{I,I},{\textbf{K}}^{I,B},{\textbf{K}}^{I,G}\) are the kernel matrices of \({\textbf{L}}^{I,I},{\textbf{L}}^{I,B},{\textbf{L}}^{I,G}\) in (58), respectively, related to each other through (9). The second line follows from the two properties 1) and 2) of \({\textbf{N}}\), the third line follows from the calculation in (60), and the fourth line follows from the construction of the DM matrix in (9).

Bounding the term (I) in (62). Let \(H_{j}(x_{i}):= \epsilon ^{-d/2}K_{\epsilon }(x_{j},x_{i})\) such that,

$$\begin{aligned} {\mathbb {E}}[H_{j}]=\epsilon ^{-d/2}\int _{M}K_{\epsilon }(x_{j},y)dV_{y}=G_{\epsilon }1(x_{j})=m_0+\epsilon \omega (x_{j})+ O(\epsilon ^{2}). \end{aligned}$$

It is easy to show that,

$$\begin{aligned} \frac{1}{N}\sum _{i=1}^{N-B}H_{j}(x_{i})-\frac{1}{N-1}\sum _{i\ne j}H_{j}(x_{i})=O((N-B)^{-1}\epsilon ^{-d/2},(N-B)^{-2}). \end{aligned}$$

Requiring \(\Big |\frac{1}{N-B-1}\sum _{i\ne j}H_{j}(x_{i})-{\mathbb {E}}[H_{j}]\Big |=O(\epsilon ^{2})\), we obtain,

$$\begin{aligned} \Big |\frac{1}{N-B}\sum _{i=1}^{N-B}H_{j}(x_{i})-{\mathbb {E}}[H_{j}]\Big |= O(\epsilon ^{2},(N-B)^{-1}\epsilon ^{-d/2},(N-B)^{-2}). \end{aligned}$$

Balancing the first and second error bounds, the bandwidth that matches the bias is \(\epsilon = O((N-B)^{-\frac{2}{4+d}})\).
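Explicitly, balancing the bias \(O(\epsilon ^{2})\) against the leading sampling error \(O((N-B)^{-1}\epsilon ^{-d/2})\) gives

$$\begin{aligned} \epsilon ^{2}\sim (N-B)^{-1}\epsilon ^{-d/2}\;\Longrightarrow \;\epsilon ^{2+d/2}\sim (N-B)^{-1}\;\Longrightarrow \;\epsilon =O\big ((N-B)^{-\frac{2}{4+d}}\big ). \end{aligned}$$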

Let \(Y_j(x_i) := H_j(x_i) - {\mathbb {E}}[H_j]\) and compute the variance,

$$\begin{aligned} \text{ Var }[Y_j] = {\mathbb {E}}[H_j^2] - {\mathbb {E}}[H_j]^2 = {\hat{m}}_0 \epsilon ^{-d/2} - m_0^2 + O(\epsilon ^{1-d/2}) = {\hat{m}}_0 \epsilon ^{-d/2} + O(1), \end{aligned}$$

where \({\hat{m}}_0 = \int _{{\mathbb {R}}^d} \exp (-\frac{ |z-\sqrt{\epsilon } B(x) |^2}{2})^2 dz\).

Using the Chernoff bound, we obtain

$$\begin{aligned} P\Big (\Big |\frac{1}{N-B-1}\sum _{i\ne j}H_{j}(x_{i})-{\mathbb {E}}[H_{j}]\Big |>c\epsilon ^{2}\Big ) &= P\Big (\Big |\sum _{i\ne j}Y_{j}(x_{i})\Big |>c\epsilon ^{2}(N-B-1)\Big ) \\ &\le \exp \Big (-\frac{c^{2}\epsilon ^{4}(N-B-1)}{4{\hat{m}}_0 \epsilon ^{-d/2}}\Big ), \end{aligned}$$

where \(c,{\hat{m}}_0=O(1)\). To have an order-one exponent (this is equivalent to balancing the square of the bias against the variance), we obtain \(\epsilon =O(N^{-\frac{2}{8+d}})\), which decays more slowly than the bias-balancing rate.
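In detail, requiring the exponent to be of order one amounts to

$$\begin{aligned} \frac{c^{2}\epsilon ^{4}(N-B-1)}{4{\hat{m}}_0\,\epsilon ^{-d/2}}=O(1)\;\Longrightarrow \;\epsilon ^{4+d/2}(N-B-1)=O(1)\;\Longrightarrow \;\epsilon =O\big (N^{-\frac{2}{8+d}}\big ). \end{aligned}$$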

So far, we have shown that

$$\begin{aligned} \frac{1}{N-B}\sum _{i=1}^{N-B}H_{j}(x_{i}) &= {\mathbb {E}}[H_{j}]+O(\epsilon ^{2}) \\ &= m_0+\epsilon \omega (x_{j})+O(\epsilon ^{2})=m_0+\epsilon \omega (x_{j})+O\big ((N-B)^{-\frac{4}{8+d}}\big ) \end{aligned}$$

as \(N-B\rightarrow \infty \) with high probability. Repeating the same argument with \(H_{j}^{*}(x_{i}):=\epsilon ^{-d/2}K_{\epsilon }(x_{j},x_{i})^{\top }\), one can also conclude that,

$$\begin{aligned} \frac{1}{N-B}\sum _{i=1}^{N-B}H_{j}^{*}(x_{i})=m_0+\epsilon \omega (x_{j})-\epsilon m_0\, \text{ div } b(x_{j})+O\big ((N-B)^{-\frac{4}{8+d}}\big ). \end{aligned}$$

Based on these two expansions, we have,

$$\begin{aligned} \frac{\epsilon ^{-d/2-1}}{2m_{0}(N-B)}\big (({\textbf{K}}^{I,I})^{\top }{\textbf{1}}-{\textbf{K}}^{I,I}{\textbf{1}}\big )_{j} &= \frac{\epsilon ^{-1}}{2m_0(N-B)}\sum _{i=1}^{N-B}\Big (H_{j}^{*}(x_{i})-H_{j}(x_{i})\Big ) \\ &= -\frac{1}{2}\text{ div } b(x_{j})+O\big ((N-B)^{-\frac{2}{8+d}}\big ). \end{aligned}$$

Thus, the first term can be bounded by

$$\begin{aligned} \frac{\epsilon ^{-d/2-1}}{m_{0}(N+BK)}\left| \big ({\textbf{K}}^{I,I}{\textbf{1}}-({\textbf{K}}^{I,I})^{\top }{\textbf{1}}\big )_{i}\right| =\frac{N-B}{N+BK}\left| \text{ div } b(x_{i})\right| +O(N^{-\frac{2}{8+d}})=O(1), \end{aligned}$$

for sufficiently large N.

Bounding the term (II) in (62). For the second term, \({\textbf{K}}^{I,B}\in {\mathbb {R}}^{(N-B)\times B}\), and \({\textbf{E}}^{B,I}\in {\mathbb {R}}^{B\times (N-B)}\) is the matrix whose bth row equals one in the column corresponding to \({\tilde{x}}_{b,0}\) and zero everywhere else. Then, we have

$$\begin{aligned} ({\textbf{K}}^{I,B}{\textbf{E}}^{B,I}{\textbf{1}})_{i}=({\textbf{K}}^{I,B}{\textbf{1}})_{i}\le B=\left\{ \begin{array}{cc} O(1), &{} d=1 \\ O(\sqrt{N}), &{} d=2 \end{array} \right. . \end{aligned}$$

and

$$\begin{aligned} \big (({\textbf{K}}^{I,B}{\textbf{E}}^{B,I})^{\top }{\textbf{1}}\big )_{i}=\big (({\textbf{E}}^{B,I})^{\top }({\textbf{K}}^{I,B})^{\top }{\textbf{1}}\big )_{i}\le R\big (({\textbf{E}}^{B,I})^{\top }{\textbf{1}}\big )_{i}\le R=\left\{ \begin{array}{cc} O(\sqrt{N}), &{} d=1 \\ O(\sqrt{N}), &{} d=2 \end{array} \right. , \end{aligned}$$

where we have used the assumption that each boundary point has at most \(R=O(\sqrt{N})\) neighboring interior points for both \(d=1\) and \(d=2\). Thus, for the second term,

$$\begin{aligned} \frac{\epsilon ^{-d/2-1}}{m_{0}(N+BK)}\left| \big ({\textbf{K}}^{I,B}{\textbf{E}}^{B,I}{\textbf{1}}-({\textbf{K}}^{I,B}{\textbf{E}}^{B,I})^{\top }{\textbf{1}}\big )_{i}\right| \le \frac{\epsilon ^{-d/2-1}\left( B+R\right) }{m_{0}(N+BK)}=\left\{ \begin{array}{cc} O(N^{-1/14}), &{} d=1 \\ O(1), &{} d=2 \end{array} \right. , \end{aligned}$$

where we have used,

$$\begin{aligned} \epsilon \sim N^{-\frac{2}{d+6}}= {\left\{ \begin{array}{ll} N^{-2/7}, &{} d=1 \\ N^{-1/4}, &{} d=2 \end{array}\right. }, \end{aligned}$$

obtained by balancing the two error terms in (11).
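These rates can be verified by a direct exponent count: since \(\epsilon ^{-d/2-1}=\epsilon ^{-3/2}\sim N^{3/7}\) for \(d=1\) and \(\epsilon ^{-d/2-1}=\epsilon ^{-2}\sim N^{1/2}\) for \(d=2\), while \(B+R=O(\sqrt{N})\) in both cases,

$$\begin{aligned} \frac{\epsilon ^{-d/2-1}(B+R)}{m_{0}(N+BK)}\sim \left\{ \begin{array}{cc} N^{3/7}\cdot N^{-1/2}=N^{-1/14}, &{} d=1 \\ N^{1/2}\cdot N^{-1/2}=O(1), &{} d=2 \end{array} \right. . \end{aligned}$$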

Bounding the term (III) in (62). For the third term, \({\textbf{K}}^{I,G}\in {\mathbb {R}}^{(N-B)\times BK}\), and \(\varvec{{\tilde{G}}}^{G,I}\in {\mathbb {R}}^{BK\times (N-B)}\) is the matrix whose row corresponding to \({\tilde{x}}_{b,l}\) equals one in the column corresponding to \({\tilde{x}}_{b,0}\) and zero everywhere else. Following the same argument as above, we can show that,

$$\begin{aligned} ({\textbf{K}}^{I,G}\varvec{{\tilde{G}}}^{G,I}{\textbf{1}})_{i} &= ({\textbf{K}}^{I,G}{\textbf{1}})_{i}\le BK, \\ \big (({\textbf{K}}^{I,G}\varvec{{\tilde{G}}}^{G,I})^{\top }{\textbf{1}}\big )_{i} &= \big ((\varvec{{\tilde{G}}}^{G,I})^{\top }({\textbf{K}}^{I,G})^{\top }{\textbf{1}}\big )_{i}\le R\big ((\varvec{{\tilde{G}}}^{G,I})^{\top }{\textbf{1}}\big )_{i}\le RK. \end{aligned}$$

The third term can be bounded by,

$$\begin{aligned} \frac{\epsilon ^{-d/2-1}}{m_{0}(N+BK)}\left| \big ({\textbf{K}}^{I,G}\varvec{{\tilde{G}}}^{G,I}{\textbf{1}}-({\textbf{K}}^{I,G}\varvec{{\tilde{G}}}^{G,I})^{\top }{\textbf{1}}\big )_{i}\right| \le \frac{\epsilon ^{-d/2-1}\left( B+R\right) K}{m_{0}(N+BK)}=\left\{ \begin{array}{cc} O(N^{-1/14}), &{} d=1 \\ O(1), &{} d=2 \end{array} \right. . \end{aligned}$$

To summarize, there exists a constant \(C_{0}\) such that

$$\begin{aligned} \left| \frac{\epsilon ^{-d/2-1}}{m_{0}(N+BK)}\Big [({\textbf{K}}^{I,I}+{\textbf{K}}^{I,B}{\textbf{E}}^{B,I}+{\textbf{K}}^{I,G}\varvec{{\tilde{G}}}^{G,I}){\textbf{1}}-({\textbf{K}}^{I,I}+{\textbf{K}}^{I,B}{\textbf{E}}^{B,I}+{\textbf{K}}^{I,G}\varvec{{\tilde{G}}}^{G,I})^{\top }{\textbf{1}}\Big ]_{i}\right| \le C_{0}, \end{aligned}$$
(63)

for sufficiently large N. From (62), we have

$$\begin{aligned} |1-\Delta tN_{ii}|-\Delta t\sum _{j\ne i}|N_{ji}|\ge 1-C_{0}\Delta t>0, \end{aligned}$$

for sufficiently small \(\Delta t\), which means that \({\textbf{I}}-\Delta t{\textbf{N}}^{\top }\) is an SDD matrix. Using the Ahlberg–Nilson–Varah bound, we have

$$\begin{aligned} \Vert ({\textbf{I}}-\Delta t{\textbf{N}})^{-1}\Vert _{1}=\Vert ({\textbf{I}} -\Delta t{\textbf{N}}^{\top })^{-1}\Vert _{\infty }\le \frac{1}{1-C_{0}\Delta t}. \end{aligned}$$

Therefore, there exists a constant C such that the spectral norm can be bounded by

$$\begin{aligned} \Vert ({\textbf{I}}-\Delta t{\textbf{N}})^{-1}\Vert _{2}\le \left( \Vert ({\textbf{I}}-\Delta t{\textbf{N}})^{-1}\Vert _{\infty }\Vert ({\textbf{I}}-\Delta t{\textbf{N}})^{-1}\Vert _{1}\right) ^{1/2}\le \Big (\frac{1}{1-C_{0}\Delta t}\Big )^{1/2}\le 1+C\Delta t. \end{aligned}$$
(64)
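The last display combines the standard interpolation inequality \(\Vert {\textbf{A}}\Vert _{2}\le (\Vert {\textbf{A}}\Vert _{\infty }\Vert {\textbf{A}}\Vert _{1})^{1/2}\) with \((1-C_{0}\Delta t)^{-1/2}\le 1+C\Delta t\) for sufficiently small \(\Delta t\). A quick NumPy check of the interpolation inequality on random matrices (an illustration only, not the GPDM matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    A = rng.standard_normal((40, 40))
    two = np.linalg.norm(A, ord=2)        # spectral norm
    one = np.linalg.norm(A, ord=1)        # maximum column sum
    inf = np.linalg.norm(A, ord=np.inf)   # maximum row sum
    assert two <= np.sqrt(one * inf) + 1e-9
print("||A||_2 <= (||A||_1 ||A||_inf)^(1/2) held for all samples")
```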

C Proof of Lemma 3.7

For the stability of the Dirichlet problem (42), one can follow steps similar to those in the Neumann case above. Following the calculation in (42), we have

$$\begin{aligned} {\textbf{H}}=\varvec{{\tilde{L}}}^{I,I}={\textbf{L}}^{I,I}+{\textbf{L}}^{I,G}{\textbf{G}}^{G,I}. \end{aligned}$$
(65)

Now, we show that \({\textbf{I}}-\Delta t{\textbf{H}}\) is SDD for sufficiently large N and sufficiently small \(\Delta t\). Denote the entries of \({\textbf{H}}\) and \({\textbf{L}}^{I,I}\) by \({\textbf{H}}=(H_{ij})_{i,j=1}^{N-B}\) and \({\textbf{L}}^{I,I}=(L_{ij}^{I,I})_{i,j=1}^{N-B}\), where \(H_{ii}=L_{ii}^{I,I}<0\) and \(L_{ij}^{I,I}>0\) for \(j\ne i\) by the structure of the matrices in (65). We can compute

$$\begin{aligned} |1-\Delta tH_{ii}|-\Delta t\sum _{j\ne i}|H_{ij}| &= |1-\Delta tL_{ii}^{I,I}|-\Delta t\sum _{j\ne i}|L_{ij}^{I,I}+({\textbf{L}}^{I,G}{\textbf{G}}^{G,I})_{ij}| \\ &\ge |1-\Delta tL_{ii}^{I,I}|-\Delta t\sum _{j\ne i}\big (|L_{ij}^{I,I}|+|({\textbf{L}}^{I,G}{\textbf{G}}^{G,I})_{ij}|\big ) \\ &= 1-\Delta tL_{ii}^{I,I}-\Delta t\sum _{j\ne i}L_{ij}^{I,I}-\Delta t\sum _{j\ne i}|({\textbf{L}}^{I,G}{\textbf{G}}^{G,I})_{ij}| \\ &= 1-\Delta t\sum _{j=1}^{N-B}L_{ij}^{I,I}-\Delta t\sum _{j\ne i}|({\textbf{L}}^{I,G}{\textbf{G}}^{G,I})_{ij}| \\ &\ge 1-\Delta t\sum _{j\ne i}|({\textbf{L}}^{I,G}{\textbf{G}}^{G,I})_{ij}|=1-\Delta t\frac{\epsilon ^{-d/2-1}}{m_{0}(N+BK)}\sum _{j\ne i}|({\textbf{K}}^{I,G}{\textbf{G}}^{G,I})_{ij}|, \end{aligned}$$
(66)

where in the last line the equality follows from the definition of \({\textbf{L}}\) in (9), and the last inequality follows from the zero row sum property of \({\textbf{L}}\) in (61), that is, \(\sum _{j=1}^{N-B}L_{ij}^{I,I}=-\sum _{j=N-B+1}^{{\bar{N}}}L_{ij}^{h}<0\). By noticing that all entries of \({\textbf{K}}^{I,G}\) are nonnegative and all entries of \({\textbf{G}}^{G,I}\) are nonpositive as in (59), we can show that

$$\begin{aligned} \sum _{j\ne i}|({\textbf{K}}^{I,G}{\textbf{G}}^{G,I})_{ij}|=-({\textbf{K}}^{I,G}{\textbf{G}}^{G,I}{\textbf{1}})_{i}\le K({\textbf{K}}^{I,G}{\textbf{1}})_{i}\le BK^{2}. \end{aligned}$$

Following the same argument as in (63), there exists a constant \(C_{0}\) such that

$$\begin{aligned} \frac{\epsilon ^{-d/2-1}}{m_{0}(N+BK)}\sum _{j\ne i}|({\textbf{K}}^{I,G}{\textbf{G}}^{G,I})_{ij}|\le C_{0}. \end{aligned}$$

Thus, from (66), we obtain that \({\textbf{I}}-\Delta t{\textbf{H}}\) is SDD for sufficiently large N and sufficiently small \(\Delta t\),

$$\begin{aligned} |1-\Delta tH_{ii}|-\Delta t\sum _{j\ne i}|H_{ij}|\ge 1-C_{0}\Delta t>0. \end{aligned}$$

Using the Ahlberg–Nilson–Varah bound, we have

$$\begin{aligned} \Vert ({\textbf{I}}-\Delta t{\textbf{H}})^{-1}\Vert _{\infty }\le \frac{1}{ \min _{i}(|1-\Delta tH_{ii}|-\Delta t\sum _{j\ne i}|H_{ij}|)}\le \frac{1}{ 1-C_{0}\Delta t}\le 1+C_{1}\Delta t. \end{aligned}$$

Following almost the same steps as in the above Neumann case, we can show that \(\Vert ({\textbf{I}}-\Delta t{\textbf{H}})^{-1}\Vert _{2}\le 1+C_{2}\Delta t\).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yan, Q., Jiang, S.W. & Harlim, J. Kernel-Based Methods for Solving Time-Dependent Advection-Diffusion Equations on Manifolds. J Sci Comput 94, 5 (2023). https://doi.org/10.1007/s10915-022-02045-w

