
Ordered Line Integral Methods for Computing the Quasi-Potential


Abstract

The quasi-potential is a key function in large deviation theory. It characterizes the difficulty of escape from the neighborhood of an attractor of a non-gradient stochastic dynamical system due to the influence of small white noise. It also gives an estimate of the invariant probability distribution in the neighborhood of the attractor up to the exponential order. We present a new family of methods for computing the quasi-potential on a regular mesh, named the ordered line integral methods (OLIMs). In comparison with the first proposed quasi-potential finder based on the ordered upwind method (OUM) (Cameron in Phys D Nonlinear Phenom 241:1532–1550, 2012), the new methods are 1.5–4 times faster, can produce errors two to three orders of magnitude smaller, and may exhibit faster convergence. Like the OUM, OLIMs employ the dynamical programming principle. Unlike it, they (1) have an optimized strategy for the use of computationally expensive triangle updates, leading to a notable speed-up, and (2) directly solve local minimization problems using quadrature rules instead of solving the corresponding Hamilton–Jacobi-type equation by a first-order finite difference upwind scheme. The OLIM with the right-hand quadrature rule is equivalent to the OUM. The use of higher-order quadrature rules in the local minimization problems dramatically boosts the accuracy of OLIMs. We offer a detailed discussion of the origin of numerical errors in OLIMs and propose rules of thumb for the choice of the important parameter, the update factor, in the OUM and OLIMs. Our results are supported by extensive numerical tests on two challenging 2D examples.


Notes

  1. Actually, in our codes, the maximal update length for the one-point update is Kh, while it is \(Kh + \sqrt{h_1^2+h_2^2}\) for the triangle update.

  2. There is an error in Eq. (89) in [1]. It should be \(U=\tfrac{1}{2}(r^2-1)^2\).

References

  1. Cameron, M.K.: Finding the quasipotential for nongradient SDEs. Phys. D Nonlinear Phenom. 241, 1532–1550 (2012)


  2. Chen, Z., Freidlin, M.: Smoluchowski–Kramers approximation and exit problems. Stoch. Dyn. 5(4), 569–585 (2005)


  3. Conte, S.D., de Boor, C.: Elementary Numerical Analysis: An Algorithmic Approach, 3rd edn. McGraw-Hill Book Company, New York (1980)

  4. Crandall, M.G., Lions, P.L.: Viscosity solutions of Hamilton–Jacobi–Bellman equations. Trans. Am. Math. Soc. 277, 1–43 (1983)


  5. Freidlin, M.I., Wentzell, A.D.: Random Perturbations of Dynamical Systems, 3rd edn. Springer, Berlin (2012)


  6. Grafke, T., Grauer, R., Schaefer, T.: The instanton method and its numerical implementation in fluid mechanics. J. Phys. A Math. Theor. 48(33), 333001 (2015)


  7. Heymann, M., Vanden-Eijnden, E.: Pathways of maximum likelihood for rare events in non-equilibrium systems, application to nucleation in the presence of shear. Phys. Rev. Lett. 100(14), 140601 (2008)

  8. Heymann, M., Vanden-Eijnden, E.: The geometric minimum action method: a least action principle on the space of curves. Commun. Pure Appl. Math. 61(8), 1052–1117 (2008)


  9. Hurewicz, W.: Lectures on Ordinary Differential Equations. Dover Publications, New York (1990). (Originally, this book was published by the M.I.T. Press, Cambridge, Mass, in 1958)


  10. https://www.math.umd.edu/~mariakc/software-and-datasets.html

  11. https://cran.r-project.org/web/packages/QPot/index.html

  12. Ishii, H.: A simple direct proof of uniqueness for solutions of the Hamilton–Jacobi equations of eikonal type. Proc. Am. Math. Soc. 100(2), 247–251 (1987)


  13. Lv, C., Li, X., Li, F., Li, T.: Constructing the energy landscape for genetic switching system driven by intrinsic noise. PLoS ONE 9(2), e88167 (2014)

  14. Maier, R.S., Stein, D.L.: A scaling theory of bifurcations in the symmetric weak-noise escape problem. J. Stat. Phys. 83(3–4), 291–357 (1996)

  15. Nolting, B.C., Abbott, K.C.: Balls, cups, and quasi-potentials: quantifying stability in stochastic systems. Ecology 97(4), 850–864 (2016)

  16. Nolting, B., Moore, C., Stieha, C., Cameron, M., Abbott, K.: QPot: an R package for stochastic differential equation quasi-potential analysis. R J. 8(2), 19–38 (2016)


  17. Sethian, J.A., Vladimirsky, A.: Ordered upwind methods for static Hamilton–Jacobi equations. Proc. Natl. Acad. Sci. USA 98(20), 11069–11074 (2001)


  18. Sethian, J.A., Vladimirsky, A.: Ordered upwind methods for static Hamilton–Jacobi equations: theory and algorithms. SIAM J. Numer. Anal. 41(1), 325–363 (2003)


  19. Stewart, G.W.: Afternotes on Numerical Analysis. SIAM, Philadelphia (1996)

  20. Wilkinson, J.: Two algorithms based on successive linear interpolation. Computer Science, Stanford University, Technical Report CS-60 (1967)

  21. Zhou, X., Ren, W., E, W.: Adaptive minimum action method for the study of rare events. J. Chem. Phys. 128, 104111 (2008)

  22. Zhou, X., E, W.: Study of noise-induced transitions in the Lorenz system using the minimum action method. Commun. Math. Sci. 8(2), 341–355 (2010)


Acknowledgements

We thank Professor A. Vladimirsky for a valuable discussion. This work was supported in part by the NSF Grant DMS1554907.

Author information

Corresponding author

Correspondence to Maria Cameron.

Appendices

Appendix A: The Freidlin–Wentzell Action Versus the Geometric Action

The Freidlin–Wentzell action functional for SDE (1) is defined on the set of absolutely continuous paths \(\phi (t)\) by [5]

$$\begin{aligned} S_T(\phi ) = \frac{1}{2}\int _0^T\Vert \dot{\phi } - \mathbf {b}(\phi )\Vert ^2dt. \end{aligned}$$
(A-1)

The original definition of the quasi-potential [5] with respect to a compact set A (an attractor of \(\dot{\mathbf {x}}=\mathbf {b}(\mathbf {x})\)) at a point \(\mathbf {x}\) is

$$\begin{aligned} U_A(\mathbf {x}) = \inf _{T,\phi }\left\{ S_T(\phi )~|~\phi (0)\in A,~\phi (T)=\mathbf {x},~\phi ~\text {is absolutely continuous}\right\} . \end{aligned}$$
(A-2)

The minimization with respect to the travel-time T can be performed analytically [5, 7, 8], resulting in the geometric action \(S(\psi )\). Let \(\phi (t)\) be a fixed absolutely continuous path. Expanding \(\Vert \cdot \Vert ^2\) in Eq. (A-1) and using the inequality \(y^2 + z^2 \ge 2yz\), valid for all nonnegative real numbers y and z, we get:

$$\begin{aligned} S_T(\phi )&= \frac{1}{2}\int _0^T\Vert \dot{\phi } - \mathbf {b}(\phi )\Vert ^2dt = \frac{1}{2}\int _0^T\left( \Vert \dot{\phi }\Vert ^2 - 2\dot{\phi }\cdot \mathbf {b}(\phi ) + \Vert \mathbf {b}(\phi )\Vert ^2\right) dt \nonumber \\&\ge \frac{1}{2}\int _0^T\left( 2\Vert \dot{\phi }\Vert \Vert \mathbf {b}(\phi )\Vert - 2\dot{\phi }\cdot \mathbf {b}(\phi ) \right) dt \nonumber \\&= \int _0^T\left( \Vert \dot{\phi }\Vert \Vert \mathbf {b}(\phi )\Vert - \dot{\phi }\cdot \mathbf {b}(\phi ) \right) dt. \end{aligned}$$
(A-3)

The inequality in Eq. (A-3) becomes an equality if and only if \(\Vert \dot{\phi }\Vert = \Vert \mathbf {b}(\phi )\Vert \). Let \(\chi \) be the path obtained from \(\phi \) by a reparametrization such that \(\Vert \dot{\chi }\Vert = \Vert \mathbf {b}(\chi )\Vert \). Then

$$\begin{aligned} S_T(\phi )\ge S_{T_{\chi }}(\chi ) = \int _0^{T_{\chi }}\left( \Vert \dot{\chi }\Vert \Vert \mathbf {b}(\chi )\Vert - \dot{\chi }\cdot \mathbf {b}(\chi ) \right) dt. \end{aligned}$$
(A-4)

Note that \(T_{\chi }\) can be infinite. The integral in the right-hand side of Eq. (A-4) is invariant with respect to the parametrization of the path \(\chi \). Hence, we can pick the most convenient parametrization, for example, the arclength parametrization, and denote the reparametrized path by \(\psi \). Then

$$\begin{aligned} S_{T_{\chi }}(\chi ) = \int _0^L\left( \Vert \psi _s(s)\Vert \Vert \mathbf {b}(\psi (s))\Vert - \psi _s(s)\cdot \mathbf {b}(\psi (s)) \right) ds =:S(\psi ), \end{aligned}$$
(A-5)

where L is the length of the paths \(\chi \) and \(\psi \) (corresponding to the same curve). For the computation of the quasi-potential, it is more convenient to deal with the geometric action \(S(\psi )\) than with the Freidlin–Wentzell action \(S_T(\phi )\).
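
To make Eq. (A-5) concrete, the following minimal Python sketch (illustrative only; the function geometric_action and the test drift are chosen for this example and are not part of the implementation used for the numerical experiments) evaluates the geometric action of a discretized path using the midpoint rule on each segment.

```python
import numpy as np

def geometric_action(path, b):
    """Approximate the geometric action S(psi) of Eq. (A-5) for a path given
    as an (N, 2) array of points, using the midpoint rule on each segment.
    `b` is a callable returning the drift field b(x)."""
    S = 0.0
    for p0, p1 in zip(path[:-1], path[1:]):
        dx = p1 - p0                      # segment vector, approximates psi_s ds
        bm = b(0.5 * (p0 + p1))           # drift evaluated at the segment midpoint
        S += np.linalg.norm(bm) * np.linalg.norm(dx) - np.dot(bm, dx)
    return S

# Test on the gradient drift b = -grad V with V = (x^2 + y^2)/2.  The straight
# path from the origin to (1, 1) is the MAP here, so the computed action should
# approach 2*(V(1,1) - V(0,0)) = 2, the quasi-potential value for gradient systems.
b = lambda x: -x
t = np.linspace(0.0, 1.0, 201)
path = np.column_stack((t, t))
print(geometric_action(path, b))          # ~2.0
```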

Appendix B: The Triangle Updates for the OLIMs

OLIM-R

OLIM-R performs the triangle update by solving the following minimization problem

$$\begin{aligned} u&= \min _{s\in [0,1]}\left[ su_0 + (1-s)u_1 + \Vert \mathbf {b}\Vert \Vert \mathbf {x}- \mathbf {x}_s\Vert - \mathbf {b}\cdot (\mathbf {x}- \mathbf {x}_s)\right] , \nonumber \\ \text {where}~~\mathbf {b}&\equiv \mathbf {b}(\mathbf {x}),~~\mathbf {x}_s = s\mathbf {x}_0 + (1-s)\mathbf {x}_1,~~u_0 \equiv U(\mathbf {x}_0),~~u_1 \equiv U(\mathbf {x}_1) . \end{aligned}$$
(B-1)

Taking the derivative of the function to be minimized

$$\begin{aligned} f(s): = s u_0 + (1-s) u_1 + \Vert \mathbf {b}\Vert \Vert \mathbf {x}- s\mathbf {x}_0 - (1-s) \mathbf {x}_1\Vert - \mathbf {b}\cdot (\mathbf {x}- s\mathbf {x}_0 - (1-s) \mathbf {x}_1) \end{aligned}$$

with respect to s and setting it to zero, we obtain the following equation for s:

$$\begin{aligned} u_0-u_1 + \Vert \mathbf {b}(\mathbf {x})\Vert \frac{ (\mathbf {x}-\mathbf {x}_s)\cdot (\mathbf {x}_1-\mathbf {x}_0) }{ \Vert \mathbf {x}- \mathbf {x}_s\Vert } - \mathbf {b}(\mathbf {x}) \cdot (\mathbf {x}_1- \mathbf {x}_0 )=0. \end{aligned}$$
(B-2)

Regrouping terms and squaring, we obtain the following quadratic equation for s:

$$\begin{aligned}&As^2 + 2Bs + C = 0,~~\mathrm{where} \end{aligned}$$
(B-3)
$$\begin{aligned}&A = \Vert \mathbf {x}_1- \mathbf {x}_0 \Vert ^2 \left( [\mathbf {b}(\mathbf {x}) \cdot (\mathbf {x}_1- \mathbf {x}_0 ) - (u_0-u_1)]^2 - \Vert \mathbf {b}(\mathbf {x})\Vert ^2 \Vert \mathbf {x}_1- \mathbf {x}_0 \Vert ^2\right) , \end{aligned}$$
(B-4)
$$\begin{aligned}&B = \left( [\mathbf {b}(\mathbf {x}) \cdot (\mathbf {x}_1- \mathbf {x}_0 ) - (u_0-u_1)]^2 - \Vert \mathbf {b}(\mathbf {x})\Vert ^2 \Vert \mathbf {x}_1- \mathbf {x}_0 \Vert ^2\right) \left[ (\mathbf {x}-\mathbf {x}_1)\cdot (\mathbf {x}_1-\mathbf {x}_0)\right] , \end{aligned}$$
(B-5)
$$\begin{aligned}&C= [\mathbf {b}(\mathbf {x}) \cdot (\mathbf {x}_1- \mathbf {x}_0 ) - (u_0-u_1)]^2\Vert \mathbf {x}-\mathbf {x}_1\Vert ^2 - \Vert \mathbf {b}(\mathbf {x})\Vert ^2 \left( (\mathbf {x}-\mathbf {x}_1)\cdot (\mathbf {x}_1-\mathbf {x}_0)\right) ^2. \end{aligned}$$
(B-6)

We solve Eq. (B-3), select its root \(s^{*}\), if any, in the interval [0, 1], and verify that it is also a root of Eq. (B-2). In the case of success, the triangle update returns

$$\begin{aligned} \mathsf{Q}_{{\varDelta }}(\mathbf {x}_1,\mathbf {x}_0,\mathbf {x})= & {} s^{*} u_0 + (1-s^{*}) u_1 + \Vert \mathbf {b}\Vert \Vert \mathbf {x}- s^{*}\mathbf {x}_0 - (1-s^{*}) \mathbf {x}_1\Vert \nonumber \\&-\, \mathbf {b}\cdot (\mathbf {x}- s^{*}\mathbf {x}_0 - (1-s^{*}) \mathbf {x}_1). \end{aligned}$$

Otherwise, it returns \(\mathsf{Q}_{{\varDelta }}(\mathbf {x}_1,\mathbf {x}_0,\mathbf {x}) =+\infty \).
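
A possible organization of this procedure is sketched below in Python (an illustrative transcription of Eqs. (B-1)–(B-6), not the implementation used for the numerical experiments; the residual tolerance is an arbitrary choice).

```python
import numpy as np

def olim_r_triangle_update(x, x0, x1, u0, u1, b):
    """Sketch of the OLIM-R triangle update: solve the quadratic Eq. (B-3),
    keep a root s* in [0, 1] that also satisfies Eq. (B-2), and return the
    right-hand-rule value of Eq. (B-1); return +inf on failure."""
    bx = b(x)
    e = x1 - x0                                     # x1 - x0
    w = x - x1                                      # x - x1
    D = np.dot(bx, e) - (u0 - u1)                   # b.(x1-x0) - (u0-u1)
    bb, ee = np.dot(bx, bx), np.dot(e, e)
    A = ee * (D**2 - bb * ee)                       # Eq. (B-4)
    B = (D**2 - bb * ee) * np.dot(w, e)             # Eq. (B-5)
    C = D**2 * np.dot(w, w) - bb * np.dot(w, e)**2  # Eq. (B-6)

    def residual(s):                                # left-hand side of Eq. (B-2)
        r = x - (s * x0 + (1 - s) * x1)
        return u0 - u1 + np.linalg.norm(bx) * np.dot(r, e) / np.linalg.norm(r) - np.dot(bx, e)

    roots = np.roots([A, 2 * B, C]) if A != 0 else ([-C / (2 * B)] if B != 0 else [])
    for s in roots:
        # the tolerance 1e-10 is an arbitrary choice for this sketch
        if np.isreal(s) and 0.0 <= np.real(s) <= 1.0 and abs(residual(np.real(s))) < 1e-10:
            s = float(np.real(s))
            r = x - (s * x0 + (1 - s) * x1)
            return s * u0 + (1 - s) * u1 + np.linalg.norm(bx) * np.linalg.norm(r) - np.dot(bx, r)
    return np.inf
```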

OLIM-MID

OLIM-MID performs the triangle update by solving the following minimization problem

$$\begin{aligned} u&= \min _{s\in [0,1]}\left[ su_0 + (1-s)u_1 + \Vert \mathbf {b}_{ms}\Vert \Vert \mathbf {x}- \mathbf {x}_s\Vert - \mathbf {b}_{ms}\cdot (\mathbf {x}- \mathbf {x}_s)\right] ,~~\text {where} \nonumber \\ \mathbf {x}_s&= s\mathbf {x}_0 + (1-s)\mathbf {x}_1,~~\mathbf {b}\equiv \mathbf {b}(\mathbf {x})\nonumber \\ \mathbf {b}_{ms}&= s\mathbf {b}_{m0} + (1-s)\mathbf {b}_{m1},~~ { \mathbf {b}_{m0}}\equiv \mathbf {b}\left( \frac{\mathbf {x}_0+\mathbf {x}}{2}\right) ,~~\mathbf {b}_{m1}\equiv \mathbf {b}\left( \frac{\mathbf {x}_1+\mathbf {x}}{2}\right) . \end{aligned}$$
(B-7)

Taking the derivative of

$$\begin{aligned} f(s)\,&{: =}&\, su_0 + (1-s)u_1 + \Vert \mathbf {b}_{ms}\Vert \Vert \mathbf {x}- \mathbf {x}_s\Vert \\&-\,\mathbf {b}_{ms}\cdot (\mathbf {x}- \mathbf {x}_s) \end{aligned}$$

with respect to s and setting it to zero, we obtain the following equation for s:

$$\begin{aligned}&u_0-u_1 + \Vert \mathbf {b}_{ms}\Vert \frac{ (\mathbf {x}-\mathbf {x}_s)\cdot (\mathbf {x}_1-\mathbf {x}_0) }{ \Vert \mathbf {x}- \mathbf {x}_s\Vert } + \Vert \mathbf {x}- \mathbf {x}_s\Vert \frac{\mathbf {b}_{ms}\cdot (\mathbf {b}_{m0}-\mathbf {b}_{m1}) }{\Vert \mathbf {b}_{ms}\Vert } \nonumber \\&\quad - \,\mathbf {b}_{ms} \cdot (\mathbf {x}_1- \mathbf {x}_0 ) - (\mathbf {x}-\mathbf {x}_s)\cdot (\mathbf {b}_{m0}-\mathbf {b}_{m1}) =0. \end{aligned}$$
(B-8)

The hybrid nonlinear solver [19, 20] is used for finding a root \(s^{*}\) of Eq. (B-8) in the interval [0, 1]. In the case of success, the triangle update returns

$$\begin{aligned} \mathsf{Q}_{{\varDelta }}(\mathbf {x}_1,\mathbf {x}_0,\mathbf {x})= & {} s^{*} u_0 + (1-s^{*}) u_1 + \Vert \mathbf {b}_{ms^{*}}\Vert \Vert \mathbf {x}- s^{*}\mathbf {x}_0 - (1-s^{*}) \mathbf {x}_1\Vert \\&-\, \mathbf {b}_{ms^{*}} \cdot (\mathbf {x}- s^{*}\mathbf {x}_0 - (1-s^{*}) \mathbf {x}_1). \end{aligned}$$

Otherwise, it returns \(\mathsf{Q}_{{\varDelta }}(\mathbf {x}_1,\mathbf {x}_0,\mathbf {x}) =+\infty \).
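
An illustrative Python sketch of the OLIM-MID update is given below. For simplicity it minimizes f(s) of Eq. (B-7) directly with a bounded one-dimensional minimizer rather than finding a root of Eq. (B-8) with the hybrid nonlinear solver [19, 20]; for an interior minimizer the two approaches return the same value. This is a simplification for exposition, not the implementation used for the numerical experiments.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def olim_mid_triangle_update(x, x0, x1, u0, u1, b):
    """Simplified sketch of the OLIM-MID triangle update, Eq. (B-7): minimize
    f(s) on [0, 1] with a bounded 1D minimizer instead of root-finding on
    Eq. (B-8) with the hybrid nonlinear solver [19, 20]."""
    bm0 = b(0.5 * (x0 + x))                  # drift at the midpoint of [x0, x]
    bm1 = b(0.5 * (x1 + x))                  # drift at the midpoint of [x1, x]

    def f(s):
        xs = s * x0 + (1 - s) * x1
        bms = s * bm0 + (1 - s) * bm1        # linearly interpolated midpoint drift
        r = x - xs
        return (s * u0 + (1 - s) * u1
                + np.linalg.norm(bms) * np.linalg.norm(r) - np.dot(bms, r))

    res = minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
    # An interior minimizer corresponds to a root s* of Eq. (B-8).  Endpoint
    # minimizers are left to the one-point update, so we signal failure there;
    # the 1e-6 margin is an arbitrary choice for this sketch.
    return res.fun if 1e-6 < res.x < 1.0 - 1e-6 else np.inf
```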

OLIM-TR

OLIM-TR performs the triangle update by solving the following minimization problem

$$\begin{aligned} u&= \min _{s\in [0,1]}\left[ su_0 + (1-s)u_1 +\frac{1}{2}\left\{ (\Vert \mathbf {b}_s\Vert +\Vert \mathbf {b}\Vert )\Vert \mathbf {x}- \mathbf {x}_s\Vert - (\mathbf {b}_s +\mathbf {b})\cdot (\mathbf {x}- \mathbf {x}_s)\right\} \right] , \nonumber \\&\text {where}\nonumber \\ \mathbf {x}_s&= s\mathbf {x}_0 + (1-s)\mathbf {x}_1\nonumber \\ \mathbf {b}_s&= s\mathbf {b}_0+ (1-s)\mathbf {b}_1,\quad \mathbf {b}_0\equiv \mathbf {b}(\mathbf {x}_0),~~\mathbf {b}_1\equiv \mathbf {b}(\mathbf {x}_1),~~\mathbf {b}\equiv \mathbf {b}(\mathbf {x}). \end{aligned}$$
(B-9)

Taking the derivative of

$$\begin{aligned} f(s)\,{: =}\, su_0 + (1-s)u_1 +\frac{1}{2}\left\{ (\Vert \mathbf {b}_{s}\Vert +\Vert \mathbf {b}\Vert )\Vert \mathbf {x}- \mathbf {x}_s\Vert -(\mathbf {b}_{s} +\mathbf {b})\cdot (\mathbf {x}- \mathbf {x}_s)\right\} \end{aligned}$$

with respect to s and setting it to zero we obtain the following equation for s:

$$\begin{aligned}&u_0-u_1 + \frac{1}{2}\{ (\Vert \mathbf {b}_s\Vert +\Vert \mathbf {b}\Vert ) \frac{ (\mathbf {x}-\mathbf {x}_s)\cdot (\mathbf {x}_1-\mathbf {x}_0) }{ \Vert \mathbf {x}- \mathbf {x}_s\Vert } + \Vert \mathbf {x}- \mathbf {x}_s\Vert \frac{\mathbf {b}_s\cdot (\mathbf {b}_0-\mathbf {b}_1) }{\Vert \mathbf {b}_s\Vert } \nonumber \\&- \,(\mathbf {b}_s +\mathbf {b}) \cdot (\mathbf {x}_1- \mathbf {x}_0 ) - (\mathbf {x}-\mathbf {x}_s)\cdot (\mathbf {b}_0-\mathbf {b}_1) \} =0. \end{aligned}$$
(B-10)

The hybrid nonlinear solver [19, 20] is used for finding a root \(s^{*}\) of Eq. (B-10) in the interval [0, 1]. In the case of success, the triangle update returns

$$\begin{aligned} \mathsf{Q}_{{\varDelta }}(\mathbf {x}_1,\mathbf {x}_0,\mathbf {x})= & {} s^{*} u_0 + (1-s^{*}) u_1 + \frac{1}{2}\left\{ (\Vert \mathbf {b}_{s^{*}} \Vert +\Vert \mathbf {b}\Vert )\Vert \mathbf {x}- \mathbf {x}_{s^{*}}\Vert \right. \\&\left. -\, (\mathbf {b}_{s^{*}} +\mathbf {b})\cdot (\mathbf {x}- \mathbf {x}_{s^{*}})\right\} . \end{aligned}$$

Otherwise, it returns \(\mathsf{Q}_{{\varDelta }}(\mathbf {x}_1,\mathbf {x}_0,\mathbf {x}) =+\infty \).

OLIM-SIM

OLIM-SIM performs the triangle update by solving the following minimization problem

$$\begin{aligned} u&= \min _{s\in [0,1]} [su_0 + (1-s)u_1 +\frac{1}{6} \{ (\Vert \mathbf {b}_s\Vert +4\Vert \mathbf {b}_{ms}\Vert +\Vert \mathbf {b}\Vert )\Vert \mathbf {x}- \mathbf {x}_s\Vert \nonumber \\&- (\mathbf {b}_s +4\mathbf {b}_{ms}+\mathbf {b})\cdot (\mathbf {x}- \mathbf {x}_s) \} ], \nonumber \\&\text {where}\nonumber \\ \mathbf {x}_s&= s\mathbf {x}_0 + (1-s)\mathbf {x}_1,~~\mathbf {b}\equiv \mathbf {b}(\mathbf {x}) \nonumber \\ \mathbf {b}_s&= s\mathbf {b}_0+ (1-s)\mathbf {b}_1,\quad \mathbf {b}_0\equiv \mathbf {b}(\mathbf {x}_0),~~\mathbf {b}_1\equiv \mathbf {b}(\mathbf {x}_1).\nonumber \\ \mathbf {b}_{ms}&= s\mathbf {b}_{m0} + (1-s)\mathbf {b}_{m1},~~ {\mathbf {b}_{m0}}\equiv \mathbf {b}\left( \frac{\mathbf {x}_0+\mathbf {x}}{2}\right) ,~~\mathbf {b}_{m1}\equiv \mathbf {b}\left( \frac{\mathbf {x}_1+\mathbf {x}}{2}\right) . \end{aligned}$$
(B-11)

Taking the derivative of

$$\begin{aligned}&f(s): = su_0 + (1-s)u_1 +\frac{1}{6} \{ (\Vert \mathbf {b}_s\Vert +4\Vert \mathbf {b}_{ms}\Vert +\Vert \mathbf {b}\Vert )\Vert \mathbf {x}- \mathbf {x}_s\Vert \\&\quad - (\mathbf {b}_s +4\mathbf {b}_{ms}+\mathbf {b})\cdot (\mathbf {x}- \mathbf {x}_s) \} \end{aligned}$$

with respect to s and setting it to zero, we obtain the following equation for s:

$$\begin{aligned}&u_0-u_1 + \frac{1}{6}\{ (\Vert \mathbf {b}_s\Vert +4\Vert \mathbf {b}_{ms}\Vert +\Vert \mathbf {b}\Vert ) \frac{ (\mathbf {x}-\mathbf {x}_s)\cdot (\mathbf {x}_1-\mathbf {x}_0) }{ \Vert \mathbf {x}- \mathbf {x}_s\Vert } + \nonumber \\&\quad \Vert \mathbf {x}- \mathbf {x}_s\Vert \left[ 4 \frac{\mathbf {b}_{ms}\cdot (\mathbf {b}_{m0}-\mathbf {b}_{m1}) }{\Vert \mathbf {b}_{ms}\Vert } + \frac{\mathbf {b}_{s}\cdot (\mathbf {b}_{0}-\mathbf {b}_{1}) }{\Vert \mathbf {b}_{s}\Vert }\right] \nonumber \\&\quad - (\mathbf {b}_s +4\mathbf {b}_{ms}+\mathbf {b}) \cdot (\mathbf {x}_1- \mathbf {x}_0 ) - (\mathbf {x}-\mathbf {x}_s)\cdot (4(\mathbf {b}_{m0}-\mathbf {b}_{m1}) + (\mathbf {b}_0-\mathbf {b}_1)) \} =0.\qquad \end{aligned}$$
(B-12)

The hybrid nonlinear solver [19, 20] is used for finding a root \(s^{*}\) of Eq. (B-12) in the interval [0, 1]. In the case of success, the triangle update returns

$$\begin{aligned}&\mathsf{Q}_{{\varDelta }}(\mathbf {x}_1,\mathbf {x}_0,\mathbf {x}) = s^{*} u_0 + (1-s^{*}) u_1 +\frac{1}{6} \{ (\Vert \mathbf {b}_{s^{*}}\Vert +4\Vert \mathbf {b}_{ms^{*}}\Vert +\Vert \mathbf {b}\Vert )\Vert \mathbf {x}- \mathbf {x}_{s^{*}}\Vert \nonumber \\&- (\mathbf {b}_{s^{*}} +4\mathbf {b}_{ms^{*}}+\mathbf {b})\cdot (\mathbf {x}- \mathbf {x}_{s^{*}}) \}. \end{aligned}$$

Otherwise, it returns \(\mathsf{Q}_{{\varDelta }}(\mathbf {x}_1,\mathbf {x}_0,\mathbf {x}) =+\infty \).
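
OLIM-TR and OLIM-SIM differ from OLIM-MID only in the quadrature rule applied to the integrand of the geometric action. The illustrative helper below returns the rule-dependent part of f(s) for the three rules of Eqs. (B-7), (B-9) and (B-11); combined with the one-dimensional minimization from the previous sketch, it recovers the corresponding triangle updates (again, a sketch for exposition rather than the actual implementation).

```python
import numpy as np

def quadrature_integrand(rule, x, x0, x1, b):
    """Return g(s), the quadrature-rule part of f(s) for the OLIM triangle
    updates: midpoint (B-7), trapezoidal (B-9), or Simpson (B-11).  Minimizing
    s*u0 + (1-s)*u1 + g(s) over [0, 1], as in the previous sketch, recovers
    OLIM-MID, OLIM-TR and OLIM-SIM, respectively."""
    bx = b(x)
    b0, b1 = b(x0), b(x1)
    bm0, bm1 = b(0.5 * (x0 + x)), b(0.5 * (x1 + x))

    def g(s):
        xs = s * x0 + (1 - s) * x1
        r = x - xs
        bs = s * b0 + (1 - s) * b1           # drift interpolated along [x1, x0]
        bms = s * bm0 + (1 - s) * bm1        # drift interpolated at the midpoints
        if rule == "mid":                    # midpoint rule
            terms = [(1.0, bms)]
        elif rule == "tr":                   # trapezoidal rule
            terms = [(0.5, bs), (0.5, bx)]
        else:                                # "sim": Simpson's rule
            terms = [(1.0 / 6, bs), (4.0 / 6, bms), (1.0 / 6, bx)]
        return sum(w * (np.linalg.norm(v) * np.linalg.norm(r) - np.dot(v, r))
                   for w, v in terms)
    return g
```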

Appendix C: Proof of Theorem 1

Proof

Without loss of generality, we assume that \(\mathbf {x}_1\) is the origin.

Fig. 13 An illustration for Sect. 2.3 and Appendix C: a geometrical interpretation of the solution of the finite difference Eq. (17) and of the minimization problem (19)

Step 1. Show that u is a solution of Eq. (17) if and only if \(u - u_1= \Vert \mathbf {x}\Vert (U_{\xi }\cos (\alpha ) + U_{\eta }\sin (\alpha ))\) where (see Fig. 13) \(\alpha \) (\(0<\alpha <\pi \)) is the angle between the vectors \(\mathbf {x}_0\) and \(\mathbf {x}\), \(U_{\xi } = \Vert \mathbf {x}_0\Vert ^{-1}(u_0-u_1)\), and \(U_{\eta }\) is a solution of

$$\begin{aligned} U_{\xi }^2 + U_{\eta }^2 +2(b_{\xi }U_{\xi } + b_{\eta }U_{\eta }) = 0,\quad \mathbf {b}= \left[ \begin{array}{c}b_{\xi }\\ b_{\eta }\end{array}\right] \equiv \left[ \begin{array}{r}\Vert \mathbf {b}\Vert \cos (\beta ) \\ -\Vert \mathbf {b}\Vert \sin (\beta )\end{array}\right] , \end{aligned}$$
(C-1)

which is Eq. (3) written in the \((\xi ,\eta )\)-coordinates at the point \(\mathbf {x}\).

First observe that both Eqs. (17) and (19) are invariant with respect to translations. Therefore, we shift \(\mathbf {x}_1\) to the origin as shown in Fig. 13 without changing their solutions.

Second, Eq. (17) is invariant with respect to orthogonal transformations. Indeed, the multiplication of \(\mathbf {x}\) and \(\mathbf {x}_0\) by an orthogonal matrix O converts Eq. (16) to

$$\begin{aligned} \left[ \begin{array}{c}u - u_0\\ u-u_1\end{array}\right] = \left[ \begin{array}{c}(\mathbf {x}- \mathbf {x}_0)^T\\ (\mathbf {x}-\mathbf {x}_1)^T\end{array}\right] O^{T}\nabla U = PO^T\nabla U. \end{aligned}$$
(C-2)

Hence the matrix P in Eq. (17) changes to \(PO^T\) and \(\mathbf {b}\) becomes \(O\mathbf {b}\) leading to the equation

$$\begin{aligned} \left[ u - u_0, u-u_1\right] P^{-T}O^TOP^{-1}\left[ \begin{array}{c}u - u_0\\ u-u_1\end{array}\right] + 2\mathbf {b}^TO^TOP^{-1}\left[ \begin{array}{c}u - u_0\\ u-u_1\end{array}\right] = 0, \end{aligned}$$
(C-3)

which is equivalent to Eq. (17). Hence, we apply an orthogonal transformation to map the original coordinate system onto the \((\xi ,\eta )\) system in which \(\mathbf {x}_0 \) lies on the positive \(\xi \)-semiaxis and the \(\eta \)-coordinate of \(\mathbf {x}\) is positive:

$$\begin{aligned} \mathbf {x}_0 = \left[ \begin{array}{c}\Vert \mathbf {x}_0\Vert \\ 0\end{array}\right] ,\quad \mathbf {x}= \left[ \begin{array}{c}\Vert \mathbf {x}\Vert \cos (\alpha )\\ \Vert \mathbf {x}\Vert \sin (\alpha )\end{array}\right] , \end{aligned}$$

where \(\alpha \) (\(0<\alpha <\pi \)) is the angle between vectors \(\mathbf {x}_0\) and \(\mathbf {x}\) as shown in Fig. 13.

Finally, if u is a solution of Eq. (17) then

$$\begin{aligned} \nabla u&= \left[ \begin{array}{cc} \Vert \mathbf {x}\Vert \cos (\alpha ) -\Vert \mathbf {x}_0\Vert ~~~ &{}~~~ \Vert \mathbf {x}\Vert \sin (\alpha ) \\ \Vert \mathbf {x}\Vert \cos (\alpha )~~ &{}~~ \Vert \mathbf {x}\Vert \sin (\alpha ) \end{array}\right] ^{-1} \left[ \begin{array}{c}u - u_0\\ u-u_1\end{array}\right] \\&= \frac{1}{\Vert \mathbf {x}\Vert \Vert \mathbf {x}_0\Vert \sin (\alpha )} \left[ \begin{array}{cc} - \Vert \mathbf {x}\Vert \sin (\alpha ) ~~~&{}~~~ \Vert \mathbf {x}\Vert \sin (\alpha ) \\ \Vert \mathbf {x}\Vert \cos (\alpha ) ~~~&{}~~~ -\Vert \mathbf {x}\Vert \cos (\alpha ) +\Vert \mathbf {x}_0\Vert \end{array}\right] \left[ \begin{array}{c}u - u_0\\ u-u_1\end{array}\right] \\&= \left[ \begin{array}{c} \frac{u_0-u_1}{ \Vert \mathbf {x}_0\Vert } \\ \frac{(u_1 - u_0)\cos (\alpha )}{\Vert \mathbf {x}_0\Vert \sin (\alpha )} + \frac{u-u_1}{\Vert \mathbf {x}\Vert \sin (\alpha )} \end{array}\right] \equiv \left[ \begin{array}{c}U_{\xi }\\ U_{\eta }\end{array}\right] . \end{aligned}$$

Hence, if u is the solution of Eq. (17), then \(U_{\xi }\) is exactly \((u_0-u_1)/\Vert \mathbf {x}_0\Vert \), which is independent of u. Therefore, Eq. (17) can be rewritten as Eq. (C-1) for \(U_{\eta }\).

Step 2. Find geometric conditions guaranteeing the existence of solution(s) of Eq. (C-1) satisfying the consistency check and determine the selection rule if it has two solutions.

Equation (3) implies that \(\nabla U\) is orthogonal to \(2\mathbf {b}+ \nabla U\). Therefore, the locus of the vectors \(\nabla U\) satisfying Eq. (3) is the circle [1] shown in Fig. 13. This circle passes through the origin and has its center at the end of the vector \(-\mathbf {b}\) originating from the origin. Since \(\Vert \nabla U\Vert ^2 = U_{\xi }^2 + U_{\eta }^2\), Eq. (3) has a solution if and only if the line normal to the \(\xi \)-axis and passing through the point \((U_{\xi },0)\) (the red dashed line in Fig. 13) intersects the circle. The MAP is collinear to the vector \(\mathbf {b}+\nabla U\) [1]. The consistency condition requires that the MAP passing through the point \(\mathbf {x}\) crosses the interval \([\mathbf {x}_1,\mathbf {x}_0]\). This means that the angle between the vector \(\mathbf {b}+\nabla U\) and the positive \(\xi \)-semiaxis must be not less than the angle \(\alpha \) between the vector \(\mathbf {x}-\mathbf {x}_1\equiv \mathbf {x}\) and the positive \(\xi \)-semiaxis, and not greater than the angle between the vector \(\mathbf {x}-\mathbf {x}_0\) and the positive \(\xi \)-semiaxis. Drawing rays parallel to \(\mathbf {x}\) and \(\mathbf {x}-\mathbf {x}_0\) from the center of the circle and then dropping normals from their intersections with the circle to the \(\xi \)-axis, as shown in Fig. 13, we obtain the interval on the \(\xi \)-axis to which \(U_{\xi }\) must belong in order for the solution \(U_{\eta }\) of Eq. (C-1) to satisfy the consistency condition. This interval is bounded by the endpoints of the thin brown and green-blue dashed lines in Fig. 13. Note that the consistency condition can be satisfied only by the larger root of Eq. (C-1), i.e., we should select the root

$$\begin{aligned} U_{\eta }&= -b_{\eta } +\sqrt{b_{\eta }^2 - 2b_{\xi }U_{\xi }-U_{\xi }^2}\nonumber \\&\equiv \Vert \mathbf {b}\Vert \sin (\beta )+\sqrt{\Vert \mathbf {b}\Vert ^2 \sin ^2(\beta )- 2\Vert \mathbf {b}\Vert \cos (\beta )U_{\xi }-U_{\xi }^2}. \end{aligned}$$
(C-4)

Step 3. Find the solution of the minimization problem (19) and show that, if the minimizer \(s^{*}\in (0,1)\), then the minimum value coincides with \(u = u_1 + \Vert \mathbf {x}\Vert (U_{\xi }\cos (\alpha ) + U_{\eta }\sin (\alpha ))\), where \(U_{\xi } = (u_0-u_1)/\Vert \mathbf {x}_0\Vert \) and \(U_{\eta }\) is given by Eq. (C-4).

Consider the function to be minimized in Eq. (19) rewritten for \(\mathbf {x}_1\) shifted to the origin:

$$\begin{aligned} f(s)&:= u_1 + s(u_0-u_1) +\Vert \mathbf {b}\Vert \Vert \mathbf {x}- s\mathbf {x}_0\Vert - \mathbf {b}\cdot (\mathbf {x}-s\mathbf {x}_0) \nonumber \\&\equiv u_1 + U_{\xi }s\Vert \mathbf {x}_0\Vert + \Vert \mathbf {b}\Vert \Vert \mathbf {x}- s\mathbf {x}_0\Vert (1 - \cos (\gamma )), \end{aligned}$$
(C-5)

where \(\gamma \) is the angle between the vectors \(\mathbf {b}\) and \(\mathbf {x}-s\mathbf {x}_0\). The point \(s\mathbf {x}_0\), and hence the value of s, is uniquely determined by the angle \(\gamma \) (Fig. 13):

$$\begin{aligned} \Vert \mathbf {x}-s\mathbf {x}_0\Vert = \frac{\Vert \mathbf {x}\Vert \sin (\alpha )}{\sin (\gamma -\beta )},\quad s\Vert \mathbf {x}_0\Vert = \Vert \mathbf {x}\Vert \left( \cos (\alpha ) - \sin (\alpha )\cot (\gamma - \beta )\right) . \end{aligned}$$
(C-6)

Moreover, since \(\cot (\gamma -\beta )\) is a monotone function on the interval \(0<\gamma - \beta < \pi \), there is a one-to-one correspondence between \(-\infty< s<\infty \) and \(\beta< \gamma < \beta + \pi \). Therefore, we can write \(f(s) =: F(\gamma (s))\), \(\beta< \gamma < \beta + \pi \), where

$$\begin{aligned} F(\gamma )&= u_1 + U_{\xi } \Vert \mathbf {x}\Vert \left( \cos (\alpha ) - \sin (\alpha )\cot (\gamma - \beta )\right) + \frac{\Vert \mathbf {b}\Vert \Vert \mathbf {x}\Vert \sin (\alpha )}{\sin (\gamma -\beta )}(1 - \cos (\gamma )) \nonumber \\&= u_1 + \Vert \mathbf {x}\Vert \left( U_{\xi }\cos (\alpha ) +\left[ \frac{\Vert \mathbf {b}\Vert (1-\cos (\gamma ))}{\sin (\gamma -\beta )} -\frac{U_{\xi }\cos (\gamma -\beta )}{\sin (\gamma -\beta )}\right] \sin (\alpha )\right) . \end{aligned}$$
(C-7)

If \((s^{*},f(s^{*}))\) is a minimum of f(s), then there is a unique minimum \((\gamma ^{*},F(\gamma ^{*})=f(s^{*}))\) of \(F(\gamma )\).

Let us minimize \(F(\gamma )\). Its derivative is given by:

$$\begin{aligned} \frac{dF}{d\gamma }&= \Vert \mathbf {x}\Vert \sin (\alpha )\left\{ \frac{\Vert \mathbf {b}\Vert \sin (\gamma ) +U_{\xi }\sin (\gamma - \beta )}{\sin (\gamma - \beta )}\right. \nonumber \\&\quad \left. - \frac{\left[ \Vert \mathbf {b}\Vert (1-\cos (\gamma )) - U_{\xi }\cos (\gamma -\beta )\right] \cos (\gamma - \beta )}{\sin ^2(\gamma - \beta )}\right\} . \end{aligned}$$

Setting it to zero, cancelling the positive factor \(\Vert \mathbf {x}\Vert \sin (\alpha )/\sin ^2(\gamma - \beta )\), regrouping the terms, and applying trigonometric formulas, we obtain the following equation for \(\gamma \):

$$\begin{aligned} U_{\xi } + \Vert \mathbf {b}\Vert \cos (\beta ) -\Vert \mathbf {b}\Vert \cos (\gamma - \beta ) = 0. \end{aligned}$$
(C-8)

Hence, the optimal angle \(\gamma \) satisfies:

$$\begin{aligned} \cos (\gamma - \beta ) = \frac{U_{\xi } +\Vert \mathbf {b}\Vert \cos (\beta )}{\Vert \mathbf {b}\Vert } = \frac{U_{\xi } +b_{\xi }}{\Vert \mathbf {b}\Vert }. \end{aligned}$$
(C-9)

Let us denote by \(\gamma ^{*}\) the solution of Eq. (C-9) lying in the interval \((\beta ,\beta + \pi )\). To check whether \(\gamma ^{*}\) is a maximizer or a minimizer, we evaluate the second derivative of \(F(\gamma )\) at \(\gamma ^{*}\) and find:

$$\begin{aligned} \frac{d^2F(\gamma ^{*})}{d\gamma ^2} = \Vert \mathbf {x}\Vert \sin (\alpha )\frac{\Vert \mathbf {b}\Vert }{\sin (\gamma -\beta )} >0, \end{aligned}$$
(C-10)

as the angle \(\gamma - \beta \in (0,\pi )\) by construction. Hence the optimal \(\gamma \) is the minimizer of F. Next, we recall Eq. (3): \(\Vert \nabla U\Vert ^2 + 2\mathbf {b}\cdot \nabla U=0\). Adding \(\Vert \mathbf {b}\Vert ^2\) to both sides, we obtain \(\Vert \nabla U + \mathbf {b}\Vert ^2 = \Vert \mathbf {b}\Vert ^2\). Then Eq. (C-9) and the equality \(\Vert \nabla U + \mathbf {b}\Vert =\Vert \mathbf {b}\Vert \) imply

$$\begin{aligned} \sin (\gamma - \beta ) = \frac{U_{\eta } +b_{\eta }}{\Vert \mathbf {b}\Vert }= \frac{U_{\eta } -\Vert \mathbf {b}\Vert \sin (\beta )}{\Vert \mathbf {b}\Vert } . \end{aligned}$$
(C-11)

Therefore,

$$\begin{aligned} U_{\eta } = -b_{\eta }+ \Vert \mathbf {b}\Vert \sin (\gamma - \beta ). \end{aligned}$$
(C-12)

On the other hand, from Eq. (C-9) we obtain:

$$\begin{aligned} \sin (\gamma - \beta )= \frac{\sqrt{\Vert \mathbf {b}\Vert ^2\sin ^2(\beta ) - 2\Vert \mathbf {b}\Vert U_{\xi }\cos (\beta ) -U_{\xi }^2}}{\Vert \mathbf {b}\Vert }. \end{aligned}$$
(C-13)

Plugging Eq. (C-13) into Eq. (C-12) we get

$$\begin{aligned} U_{\eta } = \Vert \mathbf {b}\Vert \sin (\beta )+ \sqrt{\Vert \mathbf {b}\Vert ^2\sin ^2(\beta ) - 2\Vert \mathbf {b}\Vert U_{\xi }\cos (\beta ) -U_{\xi }^2}, \end{aligned}$$
(C-14)

which coincides with Eq. (C-4).

Finally, the solution of the minimization problem (19)

$$\begin{aligned} u = { \min _{s\in [0,1]}f(s) } \end{aligned}$$

is achieved either at \(s^{*}\) if \(0\le s^{*}\le 1\), or at one of the endpoints \(s=0\) or \(s=1\). Hence, if \(0< s^{*} < 1\), then the solution of the minimization problem (19) coincides with that of the finite difference scheme (17), and the latter meets the consistency conditions. Conversely, the solution of the finite difference scheme (17) satisfying the consistency conditions coincides with that of the minimization problem (19), and the corresponding minimizer \(s^{*}\in [0,1]\). \(\square \)
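
The statement of Theorem 1 can be checked numerically. In the illustrative Python sketch below, the data are manufactured (all values are arbitrary choices) so that the MAP through \(\mathbf {x}\) crosses the segment \([\mathbf {x}_1,\mathbf {x}_0]\) at \(s=0.5\); the value obtained from the root (C-4) of the finite difference scheme and the value obtained by minimizing f(s) in Eq. (C-5) then agree, and the minimizer is \(s^{*}=0.5\).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Manufactured configuration: x1 at the origin and x0 on the positive xi-axis,
# as in the proof; the numbers below are chosen for illustration only.
x1 = np.array([0.0, 0.0])
x0 = np.array([1.0, 0.0])
x  = np.array([0.6, 0.8])
b  = np.array([1.0, -0.5])                       # b = (b_xi, b_eta) at the point x

# Make the MAP through x cross the segment at s = 0.5: set grad U = ||b|| d - b,
# where d is the unit vector from x_{s=0.5} to x.  Then Eq. (3) and the
# consistency condition hold by construction.
d = (x - 0.5 * (x0 + x1)) / np.linalg.norm(x - 0.5 * (x0 + x1))
gradU = np.linalg.norm(b) * d - b
u1, u0 = 1.0, 1.0 + gradU[0] * np.linalg.norm(x0)

# Value from the finite difference scheme via Eqs. (C-1) and (C-4).
U_xi = (u0 - u1) / np.linalg.norm(x0)
U_eta = -b[1] + np.sqrt(b[1]**2 - 2 * b[0] * U_xi - U_xi**2)
alpha = np.arctan2(x[1], x[0])                   # angle between x - x1 and the xi-axis
u_fd = u1 + np.linalg.norm(x) * (U_xi * np.cos(alpha) + U_eta * np.sin(alpha))

# Value from the minimization problem (19) with the right-hand rule, Eq. (C-5).
f = lambda s: (u1 + s * (u0 - u1)
               + np.linalg.norm(b) * np.linalg.norm(x - s * x0)
               - np.dot(b, x - s * x0))
res = minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
print(u_fd, res.fun, res.x)                      # the two values agree, s* = 0.5
```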


Cite this article

Dahiya, D., Cameron, M. Ordered Line Integral Methods for Computing the Quasi-Potential. J Sci Comput 75, 1351–1384 (2018). https://doi.org/10.1007/s10915-017-0590-9
