
A Splitting Algorithm for Image Segmentation on Manifolds Represented by the Grid Based Particle Method

Published in: Journal of Scientific Computing

Abstract

We propose a numerical approach to solving variational problems on manifolds represented by the grid based particle method (GBPM) recently developed in Leung et al. (J. Comput. Phys. 230(7):2540–2561, 2011), Leung and Zhao (J. Comput. Phys. 228:7706–7728, 2009a, J. Comput. Phys. 228:2993–3024, 2009b, Commun. Comput. Phys. 8:758–796, 2010). In particular, we develop a splitting algorithm for image segmentation on manifolds represented by unconnected sampling particles. To obtain a fast minimization algorithm, we propose a new splitting method that generalizes the augmented Lagrangian method. To implement the resulting method efficiently, we incorporate the local polynomial approximations of the manifold used in the GBPM. The resulting method is flexible enough to handle segmentation on various manifolds, including closed surfaces, open surfaces, and even surfaces that are not orientable.


References

  1. Bae, E., Yuan, J., Tai, X.: Global minimization for continuous multiphase partitioning problems using a dual approach. Int. J. Comput. Vis. 92(1), 112–129 (2011)


  2. Bae, E., Yuan, J., Tai, X.: Simultaneous convex optimization of regions and region parameters in image segmentation models. UCLA CAM Report 11-83 (2011)

  3. Bresson, X., Esedoglu, S., Vandergheynst, P., Thiran, J., Osher, S.: Fast global minimization of the active contour/snake model. J. Math. Imaging Vis. 28(2), 151–167 (2007)


  4. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011)


  5. Chan, T., Golub, G., Mulet, P.: A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. Comput. 20(6), 1964–1977 (1999)


  6. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001). doi:10.1109/83.902291


  7. Cheng, L., Karagozian, A., Chan, T.: The Level Set Method Applied to Geometrically Based Motion, Materials Science, and Image Processing. Ph.D. thesis, University of California, Los Angeles (2000)

  8. Cremers, D., Pock, T., Kolev, K., Chambolle, A.: Convex relaxation techniques for segmentation, stereo and multiview reconstruction. In: Advances in Markov Random Fields for Vision and Image Processing. MIT Press (2011)

  9. Delaunoy, A., Fundana, K., Prados, E., Heyden, A.: Convex multi-region segmentation on manifolds. In: 2009 IEEE 12th International Conference on Computer Vision, pp. 662–669 (2009)

  10. Do Carmo, M.P.: Differential Geometry of Curves and Surfaces. Prentice Hall, Englewood Cliffs (1976)


  11. Engquist, B., Tornberg, A., Tsai, R.: Discretization of Dirac delta functions in level set methods. J. Comput. Phys. 207, 28–51 (2005)

  12. Goldstein, T., Bresson, X., Osher, S.: Geometric applications of the split Bregman method: segmentation and surface reconstruction. J. Sci. Comput. 45, 272–293 (2009)

  13. Goldstein, T., Osher, S.: The split Bregman method for L1 regularized problems. SIAM J. Imaging Sci. 2, 323–343 (2009)

  14. Ito, K., Kunisch, K.: Augmented Lagrangian methods for nonsmooth, convex optimization in Hilbert spaces. Nonlinear Anal. 41(5–6), 591–616 (2000). doi:10.1016/S0362-546X(98)00299-5

  15. Kimmel, R.: Intrinsic scale space for images on surfaces: the geodesic curvature flow. Gr. Models Image Process. 59(5), 365–372 (1997)


  16. Krueger, M., Delmas, P., Gimel'farb, G.: Active contour based segmentation of 3D surfaces. In: European Conference on Computer Vision, pp. 350–363 (2008)

  17. Lai, R., Liang, J., Zhao, H.: A local mesh method for solving PDEs on point clouds. UCLA CAM Report 12-60 (2012)

  18. Leung, S., Lowengrub, J., Zhao, H.: A grid based particle method for solving partial differential equations on evolving surfaces and modeling high order geometrical motion. J. Comput. Phys. 230(7), 2540–2561 (2011). doi:10.1016/j.jcp.2010.12.029 http://www.sciencedirect.com/science/article/pii/S0021999110007035

  19. Leung, S., Zhao, H.: A grid based particle method for evolution of open curves and surfaces. J. Comput. Phys. 228, 7706–7728 (2009)


  20. Leung, S., Zhao, H.: A grid based particle method for moving interface problems. J. Comput. Phys. 228, 2993–3024 (2009)


  21. Leung, S., Zhao, H.: Gaussian beam summation for diffraction in inhomogeneous media based on the grid based particle method. Commun. Comput. Phys. 8, 758–796 (2010)


  22. Liang, J., Lai, R., Wong, T., Zhao, H.: Geometric understanding of point clouds using Laplace-Beltrami operator. In: IEEE Conference on Computer Vision and Pattern Recognition (2012)

  23. Liang, J., Zhao, H.: Solving partial differential equations on point clouds. UCLA CAM Report 12-25 (2012)

  24. Lie, J., Lysaker, M., Tai, X.C.: A binary level set model and some applications to Mumford-Shah image segmentation. IEEE Trans. Image Process. 15(5), 1171–1181 (2006)

  25. Liu, J., Ku, Y., Leung, S.: Expectation-maximization algorithm with total variation regularization for vector-valued image segmentation. J. Vis. Commun. Image Represent. 23, 1234–1244 (2012)


  26. Macdonald, C., Ruuth, S.: The implicit closest point method for the numerical solution of partial differential equations on surfaces. SIAM J. Sci. Comput. 31(6), 4330–4350 (2009). doi:10.1137/080740003 http://people.maths.ox.ac.uk/~macdonald/icpm.pdf

  27. Meyer, M., Desbrun, M., Schroder, P., Barr, A.: Discrete differential-geometry operator for triangulated 2-manifolds. In: Hege, H.-C., Polthier, K. (eds.) Visualization and Mathematics III. Springer, Berlin (2002)


  28. Mumford, D., Shah, J.: Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 42, 577–685 (1989)


  29. Osher, S., Sethian, J.: Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 79, 12–49 (1988)

  30. Peskin, C.: Numerical analysis of blood flow in the heart. J. Comput. Phys. 25, 220–252 (1977)


  31. Rockafellar, R.: A dual approach to solving nonlinear programming problems by unconstrained optimization. Math. Program. 5, 354–373 (1973)


  32. Setzer, S.: Operator splittings, Bregman methods and frame shrinkage in image processing. Int. J. Comput. Vis. 92(3), 265–280 (2010)

  33. Smereka, P.: Spiral crystal growth. Phys. D 128, 282–301 (2000)


  34. Tian, L., Macdonald, C., Ruuth, S.: Segmentation on surfaces with the closest point method. In: Proc. ICIP09, 16th IEEE International Conference on Image Processing, pp. 3009–3012. Cairo (2009). doi:10.1109/ICIP.2009.5414447 http://people.maths.ox.ac.uk/~macdonald/TianMacdonaldRuuth.pdf

  35. Tornberg, A., Engquist, B.: Numerical approximations of singular source terms in differential equations. J. Comput. Phys. 200, 462–488 (2004)


  36. Wan, M., Wang, Y., Bae, E., Tai, X., Wang, D.: Reconstructing open surfaces via graph-cuts. IEEE Trans. Vis. Comput. Gr. (2012). doi:10.1109/TVCG.2012.119

  37. Wang, Y., Yang, J., Yin, W., Zhang, Y.: A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 1(3), 248–272 (2008)


  38. Wu, C., Tai, X.: Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J. Imaging Sci. 3(3), 300–339 (2010)

  39. Wu, C., Tai, X.: A level set formulation of geodesic curvature flow on simplicial surfaces. IEEE Trans. Vis. Comput. Gr. 16(4), 647–662 (2010)


  40. Wu, C., Zhang, J., Duan, Y., Tai, X.: Augmented Lagrangian method for total variation based image restoration and segmentation over triangulated surfaces. J. Sci. Comput. 50(1), 145–166 (2012)


Acknowledgments

Leung would like to thank Prof. Ronald LM Lui for providing a conformal map of the Stanford bunny data to a sphere. The work of Leung was supported in part by the Hong Kong RGC under Grant GRF602210 and the HKUST grant RPC11SC06. The work of Liu was supported by National Natural Science Foundation of China (No. 11201032).

Author information


Corresponding author

Correspondence to Shingyu Leung.

Appendices

Appendix 1. Proof of Theorem 1

We first prove the following lemma.

Lemma 1

Suppose \(\mathcal J _1(\mathbf{v })\) is continuous and convex on a Hilbert space \(\mathbb V \), and

$$\begin{aligned} \mathcal J =\mathcal J _1(\mathbf{v })+<\mathbf{p },\mathbf{g }(\mathbf{v }-\mathbf{b })>+\frac{\eta }{2}<\mathbf{v }-\mathbf{b },\mathbf{g }(\mathbf{v }-\mathbf{b })> \end{aligned}$$

where \(\eta >0\) and \(\mathbf{g }\) is a symmetric positive definite linear operator with bounded inverse. Then the sequence \(\{\mathbf{v }^n\}\) produced by the iteration scheme

$$\begin{aligned} \mathbf{v }^{n+1}=&\underset{\mathbf{v }}{\arg \min } ~~\mathcal J (\mathbf{v },\mathbf{p }^n),\end{aligned}$$
(19)
$$\begin{aligned} \mathbf{p }^{n+1}=&\mathbf{p }^{n}+\tau \mathbf{g }(\mathbf{v }^{n+1}-\mathbf{b }), \end{aligned}$$
(20)

converges, i.e. \(\mathbf{v }^n\rightarrow \mathbf{v }^{*}\), whenever \(0<\tau <\frac{2\eta }{\Lambda _{\max }}\), where \((\mathbf{v }^*,\mathbf{p }^*)\) is the saddle point of \(\mathcal J (\mathbf{v },\mathbf{p })\) and \(\Lambda _{\max }\) is the largest eigenvalue of \(\mathbf{g }\).

Proof

Since \((\mathbf{v }^*,\mathbf{p }^*)\) is a saddle point of \(\mathcal J \), we have \(\mathbf{v }^*=\mathbf{b }\) from \(\frac{\partial \mathcal J }{\partial \mathbf{p }}|_{(\mathbf{p }^*,\mathbf{v }^*)}=0\). Let \(\partial \mathcal J _1(\mathbf{v })\) be the subdifferential of \(\mathcal J _1\) at \(\mathbf{v }\), i.e. \(\partial \mathcal J _1(\mathbf{v })=\{\bar{\mathbf{v }}\in \bar{\mathbb{V }}:\mathcal J _1(\mathbf q )-\mathcal J _1(\mathbf v )\geqslant <\bar{\mathbf{v }},\mathbf q -\mathbf v >, \forall \mathbf q \in \mathbb V \}\), where \(\bar{\mathbb{V }}\) is the dual space of \(\mathbb V \). By the first-order optimality conditions of (19), we have

$$\begin{aligned} \begin{array}{rl} \mathbf d ^{n+1}&:=-\mathbf g \mathbf p ^{n}-\eta \mathbf g (\mathbf v ^{n+1}-\mathbf b )\in \partial \mathcal J _1(\mathbf v ^{n+1}),\\ \mathbf d ^{*}&:=-\mathbf g \mathbf p ^{*}-\eta \mathbf g (\mathbf v ^{*}-\mathbf b )\in \partial \mathcal J _1(\mathbf v ^{*}),\\ \end{array} \end{aligned}$$

thus

$$\begin{aligned} \mathbf d ^{n+1}-\mathbf d ^{*}=-\mathbf g (\mathbf p ^{n}-\mathbf p ^{*})-\eta \mathbf g (\mathbf v ^{n+1}-\mathbf v ^{*}). \end{aligned}$$

Taking the inner product of both sides of the above equation with \(\mathbf v ^{n+1}-\mathbf v ^*\), we obtain

$$\begin{aligned} <\mathbf p ^{n}-\mathbf p ^{*},\mathbf g (\mathbf v ^{n+1}-\mathbf v ^*)>=-<\mathbf d ^{n+1}-\mathbf d ^{*},\mathbf v ^{n+1}-\mathbf v ^*>-\eta <\mathbf v ^{n+1}-\mathbf v ^{*},\mathbf g (\mathbf v ^{n+1}-\mathbf v ^{*})>. \end{aligned}$$
(21)

By the iteration Eq. (20) and the fact that \(\mathbf v ^*-\mathbf b =0\), we have

$$\begin{aligned} \mathbf p ^{n+1}-\mathbf p ^{*} =\mathbf p ^{n}-\mathbf p ^{*}+\tau \mathbf g (\mathbf v ^{n+1}-\mathbf v ^{*}). \end{aligned}$$

Taking the norm of both sides of the above equation, we get

$$\begin{aligned} ||\mathbf p ^{n+1}-\mathbf p ^{*}||^2 =||\mathbf p ^{n}-\mathbf p ^{*}||^2+\tau ^2||\mathbf g (\mathbf v ^{n+1}-\mathbf v ^{*})||^2+2\tau <\mathbf p ^{n}-\mathbf p ^{*},\mathbf g (\mathbf v ^{n+1}-\mathbf v ^{*})>. \end{aligned}$$
(22)

Substituting (21) into (22), we have

$$\begin{aligned}&||\mathbf p ^{n+1}-\mathbf p ^{*}||^2 -||\mathbf p ^{n}-\mathbf p ^{*}||^2\\&\quad =-2\tau <\mathbf d ^{n+1}-\mathbf d ^{*},\mathbf v ^{n+1}-\mathbf v ^*>-<\mathbf v ^{n+1}-\mathbf v ^{*},\left(-\tau ^2\mathbf g ^2+2\tau \eta \mathbf g \right)(\mathbf v ^{n+1}-\mathbf v ^{*})>. \end{aligned}$$
(23)

Since \(\mathcal J _1\) is convex and \(\mathbf d ^{n+1}\in \partial \mathcal J _1(\mathbf v ^{n+1})\), \(\mathbf d ^{*}\in \partial \mathcal J _1(\mathbf v ^{*})\), the monotonicity of the subdifferential gives

$$\begin{aligned} <\mathbf d ^{n+1}-\mathbf d ^{*},\mathbf v ^{n+1}-\mathbf v ^*>\geqslant 0. \end{aligned}$$
(24)

With the condition \(0<\tau <\frac{2\eta }{\Lambda _{\max }}\), we conclude that the operator \(-\tau ^2\mathbf g ^2+2\tau \eta \mathbf g \) is positive definite.

Now, using both (23) and (24), we have \(||\mathbf p ^{n+1}-\mathbf p ^{*}||^2 -||\mathbf p ^{n}-\mathbf p ^{*}||^2<0\), which implies that the sequence \(||\mathbf p ^{n}-\mathbf p ^{*}||^2\) is monotonically decreasing and bounded below by \(0\), so it must converge.

Finally, since \(||\mathbf p ^{n}-\mathbf p ^{*}||^2\) converges, the left-hand side of (23) tends to zero; because the operator \(-\tau ^2\mathbf g ^2+2\tau \eta \mathbf g \) is positive definite, (23) and (24) then force \(\mathbf v ^{n}\rightarrow \mathbf v ^{*}\). \(\square \)

Now we can use this lemma to prove Theorem 1. It is easy to check that \(\lambda \int _{\mathcal{M }}\sqrt{\mathbf{v}^\mathrm{T }\mathbf{gv}} \,\mathrm d M\) is convex when \(\mathbf g \) is positive definite. For any fixed \(u\) and \(\mathbf c \), setting \(\mathbf b =\nabla _s u\) and applying the lemma, one can show that the sequence \(\{\mathbf{v}^n\}\) produced by Eqs. (8) and (10) converges to the saddle point \((\cdot ,\mathbf v ^*,\mathbf{p}^*,\cdot )\) of \(L(\cdot ,\mathbf v ,\mathbf p ,\cdot )\), i.e. \(\mathbf v ^n\rightarrow \mathbf v ^*\). Since \((\cdot ,\mathbf v ^*,\mathbf p ^*,\cdot )\) is a saddle point, we have \(\mathbf v ^{*}=\nabla _s u\), which completes the proof.
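To make the lemma concrete, the iteration (19)–(20) can be sketched on a toy one-dimensional problem with \(\mathcal J _1(v)=|v|\) and \(\mathbf g =1\) (so \(\Lambda _{\max }=1\)); the minimization step (19) then has a closed-form soft-threshold solution. This is an illustrative sketch only, with our own function names, not the paper's implementation:

```python
# Toy 1D instance of the lemma: J1(v) = |v|, g = 1 (so Lambda_max = 1).
# The v-update (19) reduces to scalar soft-thresholding of q = b - p/eta.

def soft_threshold(q, t):
    """argmin_v |v| + (1/(2t)) (v - q)^2, i.e. sign(q) * max(|q| - t, 0)."""
    return (1.0 if q > 0 else -1.0) * max(abs(q) - t, 0.0)

def augmented_lagrangian(b, eta=1.0, tau=1.0, iters=50):
    """Run the iteration (19)-(20); tau must satisfy 0 < tau < 2*eta here."""
    v, p = 0.0, 0.0
    for _ in range(iters):
        v = soft_threshold(b - p / eta, 1.0 / eta)   # step (19)
        p = p + tau * (v - b)                        # step (20)
    return v, p

v, p = augmented_lagrangian(b=0.7)
print(v, p)  # v ~ b = 0.7 and p ~ -sign(b) = -1.0
```

With \(\eta =\tau =1\) the step size satisfies \(0<\tau <2\eta /\Lambda _{\max }\), and the iterates settle at the saddle point \(v^*=b\), \(p^*=-\mathrm{sign}(b)\).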

Appendix 2. Derivation of (11)

The directional derivative

$$\begin{aligned} \frac{\mathrm{d }\tilde{L}(u+\tau w)}{\mathrm{d } \tau }\Big |_{\tau =0}&=\frac{\mathrm{d }}{\mathrm{d } \tau }\left\{ \int _{\mathcal{M }}(f-c_1)^2(u+\tau w) \,\mathrm d M+ \int _{\mathcal{M }}(f-c_2)^2[1-(u+\tau w)] \,\mathrm d M\right.\\&\quad \left.+\frac{\eta }{2}\int _{\mathcal{M }}\left(\mathbf v -\nabla _{\mathbf{s }}(u+\tau w)+\frac{1}{\eta }\mathbf{p }\right)^{\mathrm{T }} \mathbf{g }\left(\mathbf{v }-\nabla _{\mathbf{s }}(u+\tau w)+\frac{1}{\eta }\mathbf{p }\right)\,\mathrm d M\right\} \Big |_{\tau =0}\\ &=\frac{\eta }{2}\frac{\mathrm{d }}{\mathrm{d }\tau }\left\{ \int \sqrt{EG-F^2}\left(\mathbf{v }-\nabla _\mathbf{s }(u+\tau w)+\frac{1}{\eta }\mathbf{p }\right)^{\mathrm{T }}\mathbf{g }\left(\mathbf{v }-\nabla _\mathbf{s }(u+\tau w)+\frac{1}{\eta }\mathbf{p }\right)\,\mathrm d \mathbf{s }\right\} \Big |_{\tau =0}\\&\quad + \int _{\mathcal{M }}\left[(f-c_1)^2-(f-c_2)^2\right]w\,\mathrm d M\\ &=-\eta \int \sqrt{EG-F^2}\Big <\mathbf{g }\mathbf{v }-\mathbf{g }\nabla _{\mathbf{s }}u+\frac{1}{\eta }\mathbf{g }\mathbf{p },\nabla _\mathbf{s }w\Big >\,\mathrm d \mathbf s + \int _{\mathcal{M }}\left[(f-c_1)^2-(f-c_2)^2\right]w\,\mathrm d M\\ &=\eta \int \nabla _\mathbf{s }\cdot \left(\sqrt{EG-F^2}\Big (\mathbf g \mathbf v -\mathbf g \nabla _\mathbf{s }u+\frac{1}{\eta }\mathbf g \mathbf p \Big )\right)w\,\mathrm d \mathbf s + \int _{\mathcal{M }}\left[(f-c_1)^2-(f-c_2)^2\right]w\,\mathrm d M\\ &=\eta \int _{\mathcal{M }}\frac{1}{\sqrt{EG-F^2}}\nabla _\mathbf{s }\cdot \left(\sqrt{EG-F^2}\Big (\mathbf g \mathbf v -\mathbf g \nabla _\mathbf{s }u+\frac{1}{\eta }\mathbf g \mathbf p \Big )\right)w\,\mathrm d M \\&\quad +\int _{\mathcal{M }}\left[(f-c_1)^2-(f-c_2)^2\right]w\,\mathrm d M. \end{aligned}$$

By the definition of the first variation, \(\frac{\mathrm{d }\tilde{L}(u+\tau w)}{\mathrm{d } \tau }\big |_{\tau =0}=<\frac{\delta \tilde{L}}{\delta u},w>\), one gets

$$\begin{aligned} \frac{\delta \tilde{L}}{\delta u}&= -\frac{\eta }{\sqrt{EG-F^2}}\nabla _\mathbf{s }\cdot \left(\sqrt{EG-F^2}\mathbf{g }\nabla _{\mathbf{s }}u\right) +\frac{\eta }{\sqrt{EG-F^2}}\nabla _\mathbf{s }\cdot \\&\left(\sqrt{EG-F^2} \mathbf{g }\left(\mathbf{v }+\frac{1}{\eta }\mathbf{p }\right)\right) +(f-c_1)^2-(f-c_2)^2 \, , \end{aligned}$$

which leads to Eq. (11).
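For intuition, consider the special case of a flat patch parameterized so that \(E=G=1\), \(F=0\) and \(\mathbf{g }\) is the identity (a special case we add here for illustration, not stated in the original derivation). The functional derivative above then reduces to the familiar Euclidean form

$$\begin{aligned} \frac{\delta \tilde{L}}{\delta u}=-\eta \Delta u+\eta \nabla \cdot \left(\mathbf{v }+\frac{1}{\eta }\mathbf{p }\right)+(f-c_1)^2-(f-c_2)^2, \end{aligned}$$

so on flat domains Eq. (11) recovers the standard split formulation of a Chan-Vese-type model.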

Appendix 3. Derivation of (12)

Denote \(\mathbf{q }^n=\nabla _\mathbf{s }u^{n+1}-\frac{\mathbf{p }^n}{\eta }\). Then we have

$$\begin{aligned} \frac{\delta \tilde{L}}{\delta \mathbf{v }}=\frac{\lambda }{\sqrt{\mathbf{v }^ \mathrm{T }\mathbf{g }\mathbf{v }}}\mathbf{g }\mathbf{v }+\eta \mathbf{g }(\mathbf{v }-\mathbf{q }^n). \end{aligned}$$

Solving \(\frac{\delta \tilde{L}}{\delta \mathbf{v }}=0\) for \(\mathbf{v }\), we get

$$\begin{aligned} \left(\frac{\lambda }{\sqrt{(\mathbf v ^{n+1})^ \mathrm T \mathbf g \mathbf v ^{n+1}}}+\eta \right)\mathbf v ^{n+1}=\eta \mathbf q ^n. \end{aligned}$$
(25)

Taking the \(||\cdot ||_\mathbf{g }\) norm of both sides of (25), we obtain

$$\begin{aligned} ||\mathbf v ^{n+1}||_\mathbf{g }=||\mathbf q ^{n}||_\mathbf{g }-\frac{\lambda }{\eta } \end{aligned}$$

if \(||\mathbf q ^{n}||_\mathbf{g }\geqslant \frac{\lambda }{\eta }\). Substituting this back into (25), we get

$$\begin{aligned} \mathbf{v }^{n+1}=\frac{\mathbf{q }^n}{||\mathbf{q }^n||_\mathbf{g }}\left(||\mathbf{q }^n||_\mathbf{g }-\frac{\lambda }{\eta }\right). \end{aligned}$$
(26)

On the other hand, if \(||\mathbf q ^n||_\mathbf{g }<\frac{\lambda }{\eta }\), then

$$\begin{aligned} \tilde{L}(\mathbf{v },\cdot )&= \lambda \int _{\mathcal{M }}\sqrt{\mathbf{v }^{\mathrm{T }}{\mathbf{g }}\mathbf{v }}\,\mathrm d M +\frac{\eta }{2}\int _{\mathcal{M }}(\mathbf{v }-\mathbf{q }^n)^{\mathrm{T }}{\mathbf{g }}(\mathbf{v }-{\mathbf{q }}^n)\,\mathrm d M\\&= \lambda \int _{\mathcal{M }} \sqrt{\mathbf{v }^{\mathrm{T }}{\mathbf{g }}{\mathbf{v }}}\,\mathrm d M +\frac{\eta }{2}\int _{\mathcal{M }}\left(|| \mathbf{v } ||_{\mathbf{g }}^2+||\mathbf{q }^n||_{\mathbf{g }}^2\right)\,\mathrm d M -\eta \int _{\mathcal{M }}\mathbf{v }^{\mathrm{T }}\mathbf{g }{\mathbf{q }}^n\,\mathrm d M\\&\geqslant \lambda \int _{\mathcal{M }} \sqrt{\mathbf{v }^{\mathrm{T }}{\mathbf{g }}{\mathbf{v }}}\,\mathrm d M +\frac{\eta }{2}\int _{\mathcal{M }}\left(||\mathbf v ||_{\mathbf{g }}^2+||\mathbf q ^n||_{\mathbf{g }}^2\right)\,\mathrm d M -\eta \int _{\mathcal{M }}||\mathbf v ||_{\mathbf{g }}\cdot ||\mathbf q ^n||_{\mathbf{g }}\,\mathrm d M\\&> \frac{\eta }{2}\int _{\mathcal{M }}\left(||\mathbf v ||_{\mathbf{g }}^2+||\mathbf{q }^n||_{\mathbf{g }}^2\right)\,\mathrm d M \quad \text{for } \mathbf v \ne \mathbf 0 . \end{aligned}$$

This means that \(\mathbf v ^{n+1}=\mathbf 0 \) must be the minimizer of \(\tilde{L}\) with respect to \(\mathbf v \) in this case. Combining the two cases, we obtain the \(\mathbf g \)-shrinkage operator (12).
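The \(\mathbf g \)-shrinkage operator (12) derived above admits a direct pointwise implementation. The following is a minimal sketch (the function name g_shrink and the plain-list representation are ours), for a two-dimensional \(\mathbf q \) and a \(2\times 2\) symmetric positive definite metric \(\mathbf g \):

```python
import math

def g_shrink(q, g, lam, eta):
    """g-shrinkage: v = q/||q||_g * (||q||_g - lam/eta) if ||q||_g >= lam/eta,
    and v = 0 otherwise.  q: length-2 list, g: 2x2 SPD metric (list of lists)."""
    gq = [g[0][0]*q[0] + g[0][1]*q[1], g[1][0]*q[0] + g[1][1]*q[1]]
    norm_g = math.sqrt(q[0]*gq[0] + q[1]*gq[1])      # ||q||_g = sqrt(q^T g q)
    if norm_g < lam / eta:
        return [0.0, 0.0]
    scale = 1.0 - lam / (eta * norm_g)
    return [scale * q[0], scale * q[1]]

# With the Euclidean metric g = I this reduces to ordinary vector shrinkage:
v = g_shrink([3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]], lam=5.0, eta=2.0)
print(v)  # ||q|| = 5 >= lam/eta = 2.5, scale = 1 - 2.5/5 = 0.5 -> [1.5, 2.0]
```

For a non-Euclidean \(\mathbf g \) only the norm changes; the thresholding structure is identical.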

Appendix 4. Minimizer of (14)

Let \(\mathbf H =\sum _{j=1}^m \left(\mathbf x ^j-\mathbf x ^i\right)\left(\mathbf x ^j-\mathbf x ^i\right)^\mathrm{T }=\mathbf U \,\mathrm{diag}(\lambda _1,\lambda _2,\lambda _3)\,\mathbf U ^\mathrm{T }\), where \(\lambda _1\geqslant \lambda _2\geqslant \lambda _3\) and \(\mathbf U =\begin{pmatrix} U_1&U_2&U_3 \end{pmatrix}\) is an orthogonal matrix. Then

$$\begin{aligned} E=\sum _{j=1}^m\left(<\mathbf x ^j,\tilde{\mathbf{n }}>-<\mathbf x ^i,\tilde{\mathbf{n }}>\right)^2=\tilde{\mathbf{n }}^\mathrm{T }\mathbf H \tilde{\mathbf{n }} =\left(\mathbf U ^\mathrm{T }\tilde{\mathbf{n }}\right)^\mathrm{T } \mathrm{diag}(\lambda _1,\lambda _2,\lambda _3) \left(\mathbf U ^\mathrm{T }\tilde{\mathbf{n }}\right). \end{aligned}$$

Since \(\left(\mathbf U ^\mathrm{T }\tilde{\mathbf{n }}\right)^\mathrm{T } \left(\mathbf U ^\mathrm{T }\tilde{\mathbf{n }}\right)=1\), we have \(E\geqslant \lambda _3\). If \(\tilde{\mathbf{n }}=U_3\), then \(E=U_3^\mathrm T \mathbf H U_3=\lambda _3\). We therefore conclude that \(\mathbf n =U_3\) is a minimizer of \(E\) subject to \(|\mathbf n |=1\), which completes the proof.
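The minimizer above suggests a simple numerical recipe: form \(\mathbf H \) from the neighbouring particles and take the eigenvector belonging to its smallest eigenvalue as the normal. A small sketch on synthetic data (neighbours sampled from the plane \(z=0\), so the true normal is the \(z\)-axis; variable names are ours):

```python
import numpy as np

# Estimate the normal at x^i as the eigenvector of
# H = sum_j (x^j - x^i)(x^j - x^i)^T for its smallest eigenvalue.
rng = np.random.default_rng(0)
xi = np.zeros(3)
# 20 neighbours lying exactly in the plane z = 0
nbrs = np.column_stack([rng.standard_normal((20, 2)), np.zeros(20)])

d = nbrs - xi
H = d.T @ d                           # 3x3 symmetric positive semidefinite
eigvals, eigvecs = np.linalg.eigh(H)  # eigenvalues in ascending order
n = eigvecs[:, 0]                     # eigenvector of the smallest eigenvalue
print(np.abs(n))  # close to (0, 0, 1): the normal is recovered up to sign
```

Since the normal is only determined up to sign, any downstream use should fix an orientation convention, e.g. by consistency with neighbouring particles.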

Appendix 5. Explicit formula for the gradient descent update \(\Delta u^i\)

In this Appendix, we give the explicit formula for the gradient descent update of \(u^i\) as a function of the coefficients of the second order polynomial approximation in the GBPM representation. For convenience, let us denote \(h_j=v_j^n+\frac{1}{\eta }p_j^n\) and write the coefficients of the approximating second-degree polynomials at \(\varvec{x}^i\) for the functions \(h_1\) and \(h_2\) as \(\gamma _{\tau _1\tau _2}^i\) and \(\delta _{\tau _1\tau _2}^i\) \((0\leqslant \tau _1+\tau _2\leqslant 2,\ \tau _1,\tau _2\in \mathbb N )\), respectively.

The PDE (11) in the explicit form is given by

$$\begin{aligned}&-\frac{\eta }{EG-F^2}\left( S_1 + S_2 S_3 + S_4 S_5 \right) +\frac{\eta }{EG-F^2}\left[ S_6 +S_7 +S_8 + \frac{1}{EG-F^2} S_9 \right]\\&\quad +(f-c_1)^2-(f-c_2)^2 = 0 \, , \end{aligned}$$

where

$$\begin{aligned} S_1&= G\frac{\partial ^2 u}{\partial s_1^2}-2F\frac{\partial ^2 u}{\partial s_1 \partial s_2}+E\frac{\partial ^2 u}{\partial s_2^2} \, , \\ S_2&= <\frac{\partial ^2 \varvec{{x}}}{\partial s_1 \partial s_2}, \frac{\partial \varvec{{x}}}{\partial s_2}>-<\frac{\partial ^2 \varvec{{x}}}{\partial s_2^2},\frac{\partial \varvec{{x}}}{\partial s_1}>-\frac{1}{EG-F^2}(T_1G-T_2F) \, , \\ S_3&= \frac{\partial u}{\partial s_1} \, , \\ S_4&= <\frac{\partial ^2 \varvec{{x}}}{\partial s_1 \partial s_2}, \frac{\partial \varvec{{x}}}{\partial s_1}>-<\frac{\partial ^2 \varvec{{x}}}{\partial s_1^2},\frac{\partial \varvec{{x}}}{\partial s_2}>-\frac{1}{EG-F^2}(T_2E-T_1F) \, , \\ S_5&= \frac{\partial u}{\partial s_2} \, , \\ S_6&= h_1\left(<\frac{\partial ^2 \varvec{x}}{\partial s_1\partial s_2},\frac{\partial \varvec{x}}{\partial s_2}>-<\frac{\partial ^2 \varvec{x}}{\partial s_2^2},\frac{\partial \varvec{x}}{\partial s_1}>\right) \, , \\ S_7&= h_2\left(<\frac{\partial ^2 \varvec{x}}{\partial s_1\partial s_2},\frac{\partial \varvec{x}}{\partial s_1}>-<\frac{\partial ^2 \varvec{x}}{\partial s_1^2},\frac{\partial \varvec{x}}{\partial s_2}>\right) \, , \\ S_8&= G\frac{\partial h_1}{\partial s_1}-F\left(\frac{\partial h_2}{\partial s_1}+\frac{\partial h_1}{\partial s_2}\right) +E\frac{\partial h_2}{\partial s_2} \, , \\ S_9&= \left(Gh_1-Fh_2\right)T_1+\left(-Fh_1+Eh_2\right)T_2 \end{aligned}$$

and

$$\begin{aligned} T_1&= <\frac{\partial ^2 \varvec{x}}{\partial s_1^2},\frac{\partial \varvec{x}}{\partial s_1}>G+<\frac{\partial ^2 \varvec{x}}{\partial s_1\partial s_2},\frac{\partial \varvec{x}}{\partial s_2}>E-\left(<\frac{\partial ^2 \varvec{x}}{\partial s_1^2},\frac{\partial \varvec{x}}{\partial s_2}>+<\frac{\partial ^2 \varvec{x}}{\partial s_1\partial s_2},\frac{\partial \varvec{x}}{\partial s_1}>\right)F \, ,\\ T_2&= <\frac{\partial ^2 \varvec{x}}{\partial s_1\partial s_2},\frac{\partial \varvec{x}}{\partial s_1}>G+<\frac{\partial ^2 \varvec{x}}{\partial s_2^2},\frac{\partial \varvec{x}}{\partial s_2}>E-\left(<\frac{\partial ^2 \varvec{x}}{\partial s_1\partial s_2},\frac{\partial \varvec{x}}{\partial s_2}>+<\frac{\partial ^2 \varvec{x}}{\partial s_2^2},\frac{\partial \varvec{x}}{\partial s_1}>\right)F \, .\\ \end{aligned}$$

Now, replacing all local geometry by the local polynomial least square approximation, we have

$$\begin{aligned} \Delta u^i&= \frac{\eta }{E^iG^i-(F^{i})^2} \left( S_1^i + S_2^i S_3^i + S_4^i S_5^i\right) -\frac{\eta }{E^iG^i-(F^i)^2}\left[ S_6^i + S_7^i + S_8^i + \frac{1}{E^iG^i-(F^i)^2} S_9^i \right]\\&\quad -(f-c_1)^2+(f-c_2)^2\, , \end{aligned}$$

where

$$\begin{aligned} S_1^i&= 2\beta _{20}^iG^i-2\beta _{11}^iF^i +2\beta _{02}^iE^i \, , \\ S_2^i&= \alpha _{11}^i\left(\alpha _{01}^i+\alpha _{11}^i\bar{x}_1^i+2\alpha _{02}^i\bar{x}_2^i\right) -2\alpha _{02}^i\left(\alpha _{10}^i+\alpha _{11}^i\bar{x}_2^i+2\alpha _{20}^i\bar{x}_1^i\right)\\&\quad - \frac{1}{E^iG^i-(F^i)^2}\left(T_1^iG^i-T_2^iF^i\right) \, , \\ S_3^i&= \beta _{10}^i+\beta _{11}^i\bar{x}_2^i+2\beta _{20}^i\bar{x}_1^i \, , \\ S_4^i&= \alpha _{11}^i \left(\alpha _{10}^i+\alpha _{11}^i\bar{x}_2^i+2\alpha _{20}^i\bar{x}_1^i\right) -2\alpha _{20}^i\left(\alpha _{01}^i+\alpha _{11}^i\bar{x}_1^i+2\alpha _{02}^i\bar{x}_2^i\right)\\&\quad - \frac{1}{E^iG^i-(F^i)^2}\left(T_2^iE^i-T_1^iF^i\right) \quad , \\ S_5^i&= \beta _{01}^i+\beta _{11}^i\bar{x}_1^i+2\beta _{02}^i\bar{x}_2^i \quad , \\ S_6^i&= h_1^i\left[\alpha _{11}^i\left(\alpha _{01}^i+\alpha _{11}^i\bar{x}_1^i+2\alpha _{02}^i\bar{x}_2^i\right)-2 \alpha _{02}^i\left(\alpha _{10}^i+\alpha _{11}^i\bar{x}_2^i+2\alpha _{20}^i\bar{x}_1^i\right) \right] \quad , \\ S_7^i&= h_2^i\left[\alpha _{11}^i\left(\alpha _{10}^i+\alpha _{11}^i\bar{x}_2^i+2\alpha _{20}^i\bar{x}_1^i\right)-2 \alpha _{20}^i\left(\alpha _{01}^i+\alpha _{11}^i\bar{x}_1^i+2\alpha _{02}^i\bar{x}_2^i\right) \right] \quad , \\ S_8^i&= G^i\left(\gamma _{10}^i+\gamma _{11}^i\bar{x}_2^i+2\gamma _{20}^i\bar{x}_1^i\right) +E^i\left(\delta _{01}^i+\delta _{11}^i\bar{x}_1^i+2\delta _{02}^i\bar{x}_2^i\right) \\&-F^i\left(\gamma _{01}^i+\gamma _{11}^i\bar{x}_1^i+2\gamma _{02}^i\bar{x}_2^i+\delta _{10}^i+\delta _{11}^i\bar{x}_2^i+2\delta _{20}^i\bar{x}_1^i\right) \quad , \\ S_9^i&= \left(G^ih_1^i-F^ih_2^i\right)T_1^i+\left(-F^ih_1^i+E^ih_2^i\right)T_2^i \end{aligned}$$

and

$$\begin{aligned} T_1^i&= 2\alpha _{20}^i\left(\alpha _{10}^i+\alpha _{11}^i\bar{x}_2^i+2\alpha _{20}^i\bar{x}_1^i\right)G^i+ \alpha _{11}^i\left(\alpha _{01}^i+\alpha _{11}^i\bar{x}_1^i+2\alpha _{02}^i\bar{x}_2^i\right)E^i \\&\quad -\left[2\alpha _{20}^i\left(\alpha _{01}^i+\alpha _{11}^i\bar{x}_1^i+2\alpha _{02}^i\bar{x}_2^i\right)+ \alpha _{11}^i\left(\alpha _{10}^i+\alpha _{11}^i\bar{x}_2^i+2\alpha _{20}^i\bar{x}_1^i\right)\right]F^i \, ,\\ T_2^i&= \alpha _{11}^i\left(\alpha _{10}^i+\alpha _{11}^i\bar{x}_2^i+2\alpha _{20}^i\bar{x}_1^i\right)G^i+ 2\alpha _{02}^i\left(\alpha _{01}^i+\alpha _{11}^i\bar{x}_1^i+2\alpha _{02}^i\bar{x}_2^i\right)E^i \\&\quad -\left[2\alpha _{02}^i\left(\alpha _{10}^i+\alpha _{11}^i\bar{x}_2^i+2\alpha _{20}^i\bar{x}_1^i\right)+ \alpha _{11}^i\left(\alpha _{01}^i+\alpha _{11}^i\bar{x}_1^i+2\alpha _{02}^i\bar{x}_2^i\right)\right]F^i \, ,\\ h_1^i&= \sum _{0\leqslant \tau _1+\tau _2\leqslant 2}\gamma _{\tau _1\tau _2}^i\left(\bar{x}_1^i\right)^{\tau _1}\left(\bar{x}_2^i\right)^{\tau _2} \, ,\\ h_2^i&= \sum _{0\leqslant \tau _1+\tau _2\leqslant 2}\delta _{\tau _1\tau _2}^i\left(\bar{x}_1^i\right)^{\tau _1}\left(\bar{x}_2^i\right)^{\tau _2}. \end{aligned}$$
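The building blocks above are all evaluations of a local second-degree polynomial and its first partial derivatives at \((\bar{x}_1^i,\bar{x}_2^i)\); for instance, \(S_3^i=\beta _{10}^i+\beta _{11}^i\bar{x}_2^i+2\beta _{20}^i\bar{x}_1^i\) is \(\partial u/\partial s_1\) evaluated from the \(\beta \) coefficients. A minimal sketch of these evaluations (coefficient dictionary keyed by \((\tau _1,\tau _2)\); function names are ours):

```python
def quad_eval(c, x1, x2):
    """Evaluate p = sum c[(t1,t2)] * x1^t1 * x2^t2 over 0 <= t1+t2 <= 2."""
    return sum(v * x1**t1 * x2**t2 for (t1, t2), v in c.items())

def quad_ds1(c, x1, x2):
    """dp/ds1 = c10 + c11*x2 + 2*c20*x1 (the pattern of S_3^i above)."""
    return c[(1, 0)] + c[(1, 1)] * x2 + 2 * c[(2, 0)] * x1

def quad_ds2(c, x1, x2):
    """dp/ds2 = c01 + c11*x1 + 2*c02*x2 (the pattern of S_5^i above)."""
    return c[(0, 1)] + c[(1, 1)] * x1 + 2 * c[(0, 2)] * x2

# coefficients of p(x1, x2) = 1 + 2*x1 + 3*x2 + 4*x1^2 + 5*x1*x2 + 6*x2^2
c = {(0, 0): 1, (1, 0): 2, (0, 1): 3, (2, 0): 4, (1, 1): 5, (0, 2): 6}
print(quad_eval(c, 1.0, 1.0))  # 1+2+3+4+5+6 = 21
print(quad_ds1(c, 1.0, 1.0))   # 2 + 5 + 2*4 = 15
print(quad_ds2(c, 1.0, 1.0))   # 3 + 5 + 2*6 = 20
```

The same helpers apply with the \(\alpha \), \(\gamma \), and \(\delta \) coefficients to assemble the terms \(S_1^i,\dots ,S_9^i\).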


About this article

Cite this article

Liu, J., Leung, S. A Splitting Algorithm for Image Segmentation on Manifolds Represented by the Grid Based Particle Method. J Sci Comput 56, 243–266 (2013). https://doi.org/10.1007/s10915-012-9675-7
