Abstract
The Feynman-Kac formula expresses solutions of elliptic partial differential equations as expectations over continuous-time Markov processes. This connection enables numerical schemes that estimate the solution from samples of these processes, which in some cases have advantages over traditional numerical methods. However, naïve numerical implementations suffer from statistical bias and poor sampling efficiency. We present methods to discretize the stochastic process appearing in the Feynman-Kac formula that reduce the bias of the numerical scheme. We also propose using temporal difference learning to assemble information from random samples more efficiently than the traditional Monte Carlo method.
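As a concrete illustration of the temporal difference idea, the following minimal sketch (our illustration, not the implementation referenced below) applies tabular TD(0) to the toy Dirichlet problem u″ = 0 on (0, 1) with u(0) = 0 and u(1) = 1, whose exact solution is u(x) = x. By the Feynman-Kac formula, u(x) = E[g(B_T)] for Brownian motion started at x and stopped at the exit time T of the interval. The grid size, time step, learning rate, and episode count are all illustrative choices.

```python
import numpy as np

# Minimal sketch of TD(0) for the Feynman-Kac representation of the toy
# problem u'' = 0 on (0, 1), u(0) = 0, u(1) = 1 (exact solution u(x) = x).
# All numerical choices below are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n_grid = 51
xs = np.linspace(0.0, 1.0, n_grid)
u_hat = np.full(n_grid, 0.5)        # initial guess for the solution
u_hat[0], u_hat[-1] = 0.0, 1.0      # Dirichlet data g(0) = 0, g(1) = 1
dt = 2e-3                           # time step of the discretized walk
eta = 0.05                          # TD learning rate

def nearest(x):
    """Index of the grid point closest to x."""
    return int(round(x * (n_grid - 1)))

for episode in range(5000):
    x = rng.uniform(0.05, 0.95)     # random interior starting point
    while 0.0 < x < 1.0:
        x_new = min(max(x + np.sqrt(dt) * rng.standard_normal(), 0.0), 1.0)
        i = nearest(x)
        if 0 < i < n_grid - 1:
            # TD(0): move the estimate at x toward the bootstrapped
            # target at x_new; at the boundary the target is exact.
            u_hat[i] += eta * (u_hat[nearest(x_new)] - u_hat[i])
        x = x_new

print("sup error:", np.max(np.abs(u_hat - xs)))
```

Because each transition moves the estimate toward a bootstrapped target, every step of every sampled path contributes information, whereas a plain Monte Carlo estimator uses only the terminal boundary value of each path.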
Materials Availability
The datasets generated during and/or analysed during the current study, as well as Python 3 implementations of Algorithms 1, 2, and 3, are available from the corresponding author on reasonable request.
References
Battles Z, Trefethen L (2004) An extension of MATLAB to continuous functions and operators. SIAM J Sci Comput 25(5):1743–1770. https://doi.org/10.1137/S1064827503430126
Booth TE (1981) Exact Monte Carlo solution of elliptic partial differential equations. J Comput Phys 39(2):396–404. https://doi.org/10.1016/0021-9991(81)90159-5
Booth TE (1982) Regional Monte Carlo solution of elliptic partial differential equations. J Comput Phys 47(2):281–290. https://doi.org/10.1016/0021-9991(82)90079-1
Broadie M, Glasserman P, Kou S (1997) A continuity correction for discrete barrier options. Math Financ 7(4):325–349. https://doi.org/10.1111/1467-9965.00035
Buchmann F, Petersen W (2003) Solving Dirichlet problems numerically using the Feynman-Kac representation. BIT Numer Math 43:519–540. https://doi.org/10.1023/B:BITN.0000007060.39437.76
DeLaurentis J, Romero L (1990) A Monte Carlo method for Poisson’s equation. J Comput Phys 90(1):123–140. https://doi.org/10.1016/0021-9991(90)90199-B
Dieker A, Lagos G (2017) On the Euler discretization error of Brownian motion about random times. arXiv:1708.04356
Driscoll T, Hale N, Trefethen L (2014) Chebfun Guide. Pafnuty Publications
Firth N (2005) High dimensional American options. PhD thesis, University of Oxford
Gobet E, Menozzi S (2010) Stopped diffusion processes: Boundary corrections and overshoot. Stochastic Process Appl 120(2):130–162. https://doi.org/10.1016/j.spa.2009.09.014
Han J, Jentzen A, Weinan E (2018) Solving high-dimensional partial differential equations using deep learning. Proc Natl Acad Sci USA 115(34):8505–8510. https://doi.org/10.1073/pnas.1718942115
Han J, Lu J, Zhou M (2020a) Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach. J Comput Phys 423:109792. https://doi.org/10.1016/j.jcp.2020.109792
Han J, Nica M, Stinchcombe A (2020b) A derivative-free method for solving elliptic partial differential equations with deep neural networks. J Comput Phys 419:109672. https://doi.org/10.1016/j.jcp.2020.109672
Hieber P (2013) First-exit times and their applications in default risk management. PhD thesis, Technical University of Munich
Hwang CO, Mascagni M, Given J (2003) A Feynman-Kac path-integral implementation for Poisson’s equation using an h-conditioned Green’s function. Math Comput Simul 62(3-6):347–355. https://doi.org/10.1016/s0378-4754(02)00224-0
Janson S, Tysk J (2006) Feynman-Kac formulas for Black-Scholes-type operators. Bull London Math Soc 38(2):269–282. https://doi.org/10.1112/S0024609306018194
Karumuri S, Tripathy R, Bilionis I, Panchal J (2020) Simulator-free solution of high-dimensional stochastic elliptic partial differential equations using deep neural networks. J Comput Phys 404:109120. https://doi.org/10.1016/j.jcp.2019.109120
Lagaris I, Likas A, Fotiadis D (1998) Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans Neural Netw 9(5):987–1000. https://doi.org/10.1109/72.712178
Lejay A, Maire S (2007) Computing the principal eigenvalue of the Laplace operator by a stochastic method. Math Comput Simul 73(6):351–363
Mörters P, Peres Y (2012) Brownian motion. Cambridge University Press, Cambridge
Nabian M, Meidani H (2019) A deep learning solution approach for high-dimensional random differential equations. Probabilistic Eng Mech 57:14–25. https://doi.org/10.1016/j.probengmech.2019.05.001
Pauli S, Gantner R, Arbenz P, Adelmann A (2015) Multilevel Monte Carlo for the Feynman-Kac formula for the Laplace equation. BIT Numer Math 55(4):1125–1143. https://doi.org/10.1007/s10543-014-0543-8
Pitman J (1999) The distribution of local times of a Brownian bridge. In: Séminaire de Probabilités XXXIII. Lecture Notes in Mathematics, vol 1709. Springer, Berlin, pp 388–394. https://doi.org/10.1007/bfb0096528
Primožič T (2011) Estimating expected first passage times using multilevel Monte Carlo algorithm. Master’s thesis, University of Oxford
Raissi M (2018a) Deep hidden physics models: deep learning of nonlinear partial differential equations. arXiv:1801.06637
Raissi M (2018b) Forward-backward stochastic neural networks: deep learning of high-dimensional partial differential equations. arXiv:1804.07010
Raissi M, Karniadakis G (2018) Hidden physics models: machine learning of nonlinear partial differential equations. J Comput Phys 357:125–141. https://doi.org/10.1016/j.jcp.2017.11.039
Raissi M, Yazdani A, Karniadakis G (2018) Hidden fluid mechanics: A Navier-Stokes informed deep learning framework for assimilating flow visualization data. arXiv:1808.04327
Raissi M, Perdikaris P, Karniadakis G (2019) Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J Comput Phys 378:686–707. https://doi.org/10.1016/j.jcp.2018.10.045
Sirignano J, Spiliopoulos K (2018) DGM: A deep learning algorithm for solving partial differential equations. J Comput Phys 375:1339–1364. https://doi.org/10.1016/j.jcp.2018.08.029
Sutton R (1988) Learning to predict by the methods of temporal differences. Mach Learn 3(1):9–44. https://doi.org/10.1007/BF00115009
Sutton R, Barto A (2018) Reinforcement learning: An introduction. MIT Press, Cambridge
Trefethen L (2013) Approximation theory and approximation practice, vol 128. SIAM
Weinan E, Yu B (2018) The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems. Commun Math Stat 6:1–12. https://doi.org/10.1007/s40304-018-0127-z
Weinan E, Han J, Jentzen A (2017) Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Commun Math Stat 5:349–380. https://doi.org/10.1007/s40304-017-0117-6
Zhou Y, Cai W (2016) Numerical solution of the Robin problem of Laplace equations with a Feynman-Kac formula and reflecting Brownian motions. J Sci Comput 69(1):107–121. https://doi.org/10.1007/s10915-016-0184-y
Zhou Y, Cai W (2019) A path integral Monte Carlo method based on Feynman-Kac formula for electrical impedance tomography. arXiv:1907.13147
Zhu Y, Zabaras N, Koutsourelakis PS, Perdikaris P (2019) Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. J Comput Phys 394:56–81. https://doi.org/10.1016/j.jcp.2019.05.024
Acknowledgements
We gratefully acknowledge that this research was supported by the Fields Institute for Research in Mathematical Sciences. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the Institute. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC): RGPIN-2019-06946 for ARS and PDF-502287-2017 for MN. We thank the anonymous reviewer for their helpful comments.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix: Expected Exit Time of a Brownian Bridge: Proof of Eq. 7
We establish the following inequality for the exit time of a Brownian bridge: Let 0 < a < x. Consider a Brownian bridge with initial position B(0) = 0 and final position B(Δτ) = x. Let \(T_{a} = \inf \{t>0: B(t) > a\}\) be the first time the Brownian bridge crosses a barrier at a. Then \(\mathbb {E}[T_{a}]\) obeys the inequality
$$\frac{a x {\varDelta }\tau}{x^{2} + {\varDelta }\tau} \;\leq\; \mathbb{E}[T_{a}] \;\leq\; \frac{a {\varDelta }\tau}{x}. \qquad (13)$$
In our setting, Eq. 7 follows immediately from this fact by taking the barrier \(a = |\rho _{\partial {\varOmega }}(\vec B_{\text {old}})|\) and the final position \(x = {\varDelta }\rho = \rho _{\partial {\varOmega }}(\vec B_{\text {new}}) - \rho _{\partial {\varOmega }}(\vec B_{\text {old}})\).
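To make this substitution concrete, a boundary crossing detected between successive walker positions could be handled as in the sketch below. The function name and the sign convention (\(\rho _{\partial {\varOmega }}\) negative inside Ω, positive outside) are illustrative assumptions, not the paper's interface. Note that the upper bound aΔτ/x is exactly the crossing time of the straight line from \(\rho _{\partial {\varOmega }}(\vec B_{\text {old}})\) to \(\rho _{\partial {\varOmega }}(\vec B_{\text {new}})\), so the inequality says the bridge crosses, on average, somewhat earlier than linear interpolation suggests.

```python
def expected_crossing_time_bounds(rho_old, rho_new, dtau):
    """Bounds from Eq. 13 on the expected first-crossing time of the
    boundary by the Brownian bridge linking two successive walker
    positions.  Hypothetical helper, not the paper's API.  Assumes the
    signed distance is negative inside the domain, so rho_old < 0 <
    rho_new means the walker stepped across the boundary this step."""
    a = abs(rho_old)          # barrier height: distance to the boundary
    x = rho_new - rho_old     # net displacement of the bridge (x > a > 0)
    lower = a * x * dtau / (x**2 + dtau)
    upper = a * dtau / x      # the linear-interpolation crossing time
    return lower, upper

# Example: walker at signed distance -0.3 steps to +0.5 with dtau = 0.01.
print(expected_crossing_time_bounds(-0.3, 0.5, 0.01))
# -> (0.00369..., 0.00375): the bridge crossed well before the step ended.
```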
To prove Eq. 13, we use the probability density of T_{a} from Eq. 5 to find that \(\mathbb {E}{\left [T_{a}\right ]}\) is given by
$$\mathbb{E}[T_{a}] = {\int_{0}^{{\varDelta }\tau}} t \cdot \frac{a}{t}\, \rho(t,a)\, dt = a {\int_{0}^{{\varDelta }\tau}} \rho(t,a)\, dt = a\, \mathbb{E}[L_{a}],$$
where \(\rho \left (t,a \right )\) denotes the probability density of the Brownian bridge to be at B(t) = a at time t, and \(L_{a}\) is the local time at a of this Brownian bridge; the second equality is the occupation density formula. The probability density for this local time has an explicit formula from Equation (3) in Pitman (1999), namely
$$\mathbb{P}\left(L_{a} \in d\ell\right) = \frac{|a| + |x-a| + \ell}{{\varDelta }\tau}\, \exp\left(-\frac{\left(|a| + |x-a| + \ell\right)^{2} - x^{2}}{2{\varDelta }\tau}\right) d\ell, \qquad \ell > 0.$$
For 0 < a < x, we have |a| + |x − a| = x, which yields
$$\mathbb{P}\left(L_{a} \in d\ell\right) = \frac{x + \ell}{{\varDelta }\tau}\, \exp\left(-\frac{(x + \ell)^{2} - x^{2}}{2{\varDelta }\tau}\right) d\ell.$$
Finally, we can compute by the change of variable \(u = (x + \ell)/\sqrt{{\varDelta }\tau}\) that
$$\mathbb{E}[L_{a}] = {\int_{0}^{\infty}} \ell\, \frac{x + \ell}{{\varDelta }\tau}\, \exp\left(-\frac{(x + \ell)^{2} - x^{2}}{2{\varDelta }\tau}\right) d\ell = \sqrt{{\varDelta }\tau}\, e^{x^{2}/(2{\varDelta }\tau)} {\int_{x/\sqrt{{\varDelta }\tau}}^{\infty}} e^{-u^{2}/2}\, du.$$
The Mills ratio inequality from Lemma 12.9 in Mörters and Peres (2012), which holds for all c > 0, gives
$$\frac{c}{c^{2}+1}\, e^{-c^{2}/2} \;\leq\; {\int_{c}^{\infty}} e^{-u^{2}/2}\, du \;\leq\; \frac{1}{c}\, e^{-c^{2}/2}.$$
Setting \(c = x/\sqrt{{\varDelta }\tau}\) and using \(\mathbb{E}[T_{a}] = a\, \mathbb{E}[L_{a}]\), these bounds become \(a x {\varDelta }\tau/(x^{2} + {\varDelta }\tau) \leq \mathbb{E}[T_{a}] \leq a {\varDelta }\tau/x\), which is the desired result of Eq. 13.
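As a sanity check, these bounds can be verified by direct simulation. The following sketch (our illustration, with arbitrary parameter choices, not code from the paper) discretizes the Brownian bridge on a fine grid and estimates \(\mathbb{E}[T_{a}]\) empirically; since crossings are only observed at grid times, the empirical mean is biased slightly upward.

```python
import numpy as np

# Monte Carlo check of the bounds  a*x*dt/(x**2 + dt) <= E[T_a] <= a*dt/x
# for a Brownian bridge from 0 to x on [0, dt] and a barrier 0 < a < x.
rng = np.random.default_rng(1)
dt, a, x = 1.0, 0.4, 1.2
n_steps, n_paths = 2000, 2000
t = np.linspace(0.0, dt, n_steps + 1)

# Brownian paths W, then the bridge transform B(s) = W(s) - (s/dt)*(W(dt) - x).
dW = np.sqrt(dt / n_steps) * rng.standard_normal((n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
B = W - (t / dt) * (W[:, -1:] - x)

# First grid time at which each bridge exceeds the barrier; since each
# bridge ends at x > a, every path crosses before time dt.
T_a = t[np.argmax(B > a, axis=1)]

print("lower bound:", a * x * dt / (x**2 + dt))   # 0.1967...
print("estimate   :", T_a.mean())                 # approx 0.24
print("upper bound:", a * dt / x)                 # 0.3333...
```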
About this article
Cite this article
Martin, C., Zhang, H., Costacurta, J. et al. Solving Elliptic Equations with Brownian Motion: Bias Reduction and Temporal Difference Learning. Methodol Comput Appl Probab 24, 1603–1626 (2022). https://doi.org/10.1007/s11009-021-09871-9