Monotone methods in counterparty risk models with nonlinear Black–Scholes-type equations

A nonlinear Black–Scholes-type equation is studied within counterparty risk models. The classical hypothesis of uniform Lipschitz continuity of the nonlinear reaction function allows for an equivalent transformation of the semilinear Black–Scholes equation into a standard parabolic problem with a monotone nonlinear reaction function and an inhomogeneous linear diffusion equation. This setting allows us to construct a scheme of monotone (increasing or decreasing) iterations that converge monotonically to the true solution. Since a numerical solution of this problem typically spends most of its computational effort on approximating the solution of the inhomogeneous linear diffusion equation, we also discuss this question and suggest several solution methods, including ones based on Monte Carlo simulation and on finite differences/elements.

The parabolic partial differential equation (1.1) (PDE, for short) corresponds to the case when the nonlinearity F( · ; S, t) : M ↦ F(M; S, t) : R → R on the right-hand side of eq. (1.1) is taken with the mark-to-market value M = V(S, t). This case corresponds to a derivative contract V on an asset (stock) S ∈ (0, ∞) between a seller B and a counterparty C that may both default. The asset price S is not affected by a default of either B or C, and is assumed to follow the Markov process with the (time-dependent) generator (the Black-Scholes operator) A_t defined by (1.4), acting on functions V : (0, ∞) × (0, T) → R : (S, t) ↦ V(S, t).
As usual, we take the volatility, σ, to be a positive constant, σ ∈ (0, ∞). The value of γ_S(t) reflects the rate of dividend income, and the value of q_S(t) is the net share position financing cost, which depends on the risk-free rate r(t) and the repo rate of S(t). "Typical" values for the terminal condition (1.2) are h(S) ≡ V_T(S), where V_T(S) = (S − K)^+ = (e^X − K)^+ for X = log S ∈ R (in case of the European call option) and V_T(S) = (S − K)^− = (K − S)^+ = (K − e^X)^+ (for the European put option).
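For concreteness, the two terminal payoffs above are easy to evaluate numerically; the following sketch (with function names of our own choosing) also checks the elementary identity (S − K)^+ − (K − S)^+ = S − K:

```python
import numpy as np

def call_payoff(S, K):
    """European call terminal value V_T(S) = (S - K)^+ ."""
    return np.maximum(S - K, 0.0)

def put_payoff(S, K):
    """European put terminal value V_T(S) = (K - S)^+ ."""
    return np.maximum(K - S, 0.0)

# Identity of payoffs: (S - K)^+ - (K - S)^+ = S - K.
S = np.array([50.0, 100.0, 150.0])
K = 100.0
print(call_payoff(S, K) - put_payoff(S, K))  # -> [-50.   0.  50.]
```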
A frequently used alternative to our choice M = V(S, t) of the mark-to-market value M in the nonlinearity F(M; S, t) on the right-hand side of eq. (1.1) is M = V̂(S, t), where V̂ denotes the same derivative between two parties that cannot default; see, e.g., F. Baustian, M. Fencl, J. Pospíšil, and V. Švígler [6, Sect. 2] for numerical treatment. This risk-free value, V̂, satisfies the classical (linear) Black-Scholes PDE (partial differential equation) with the prescribed terminal value V̂(S, T) = h(S) for S ∈ (0, ∞) at maturity time t = T. Inserting this known value F(V̂(S, t); S, t) into eq. (1.1) in place of F(V(S, t); S, t), we thus obtain an inhomogeneous linear equation for another (new) value of V(S, t). We refer to the works by C. Burgard and M. Kjaer [11], [12, Section 3], and [13] for details concerning modelling, and to I. Arregui, B. Salvador, and C. Vázquez [3] for numerical results. We warn the reader that Refs. [3] and [11, 12, 13] use the convention V = V^+ + V^− with V^+ := max{V, 0} and V^− := min{V, 0} (≤ 0) for V ∈ R; nevertheless, we will stick with our notation V = V^+ − V^− with V^− := max{−V, 0} (≥ 0). We will not worry about this alternative any more and focus entirely on the nonlinear equation (1.1). Making use of eq. (1.3), we arrive at the following equivalent form of eq. (1.1), frequently used, cf. [12, Section 2, Eq. (1)]: (1.5) for (S, t) ∈ (0, ∞) × (0, T).
This backward parabolic equation is supplemented by the terminal condition (1.2).
Models with nonlinearities are neither popular nor very frequent in Mathematical Finance. In the present article we treat a class of semilinear parabolic equations of type (1.5) with the standard linear diffusion operator ∂/∂t + A_t and a nonlinear reaction function that is more general than the one on the right-hand side of (1.5) (only uniformly Lipschitz-continuous). As far as we know, this class was introduced in the work by C. Burgard and M. Kjaer [12, Section 3] and [13]. Another class of nonlinear models is based on a nonlinear Black-Scholes PDE with the quasilinear diffusion operator ∂V/∂t + (1/2) σ̃² S² ∂²V/∂S² + . . . , where the volatility σ̃ ≡ σ̃(∂²V/∂S²) depends on the second partial derivative, and with a "typical" linear reaction function (sometimes including also transaction costs). This class can be traced back to G. Barles and H. M. Soner [5, Eq. (1.2), p. 372], with some additional analytic studies (on explicit solutions) performed in L. A. Bordag and Y. Chmakova [8]. Some additional references to related numerical studies and simulations will be added in Sections 4 and 5.
This article is organized as follows. We begin with a functional-analytic reformulation of the B-S equation (1.5) in the next section (Section 2). The terminal value problem (1.5), (1.2) will be transformed into an initial value Cauchy problem of parabolic type. This Cauchy problem is an initial value problem for the nonlinear (semilinear) B-S equation with a uniformly Lipschitz-continuous (nonlinear) reaction function as well. In Section 3 we construct a monotone iteration scheme of supersolutions of this B-S equation that converge as a monotone decreasing (i.e., nonincreasing) sequence to the solution from above; see our main result, Theorem 3.4. A closely related ramification of this monotone iteration scheme provides an increasing sequence of subsolutions of the B-S equation that converge to the solution from below; see Remark 3.5.
Numerical methods play an important role in Mathematical Finance. In Section 4 we discuss applications of the two most common methods in Mathematical Finance, finite differences/elements and Monte Carlo, and compare their respective advantages and drawbacks. Finally, in Section 5 we derive an explicit formula for the solution of the inhomogeneous linear parabolic initial value problem for the B-S equation that serves for computing the monotone iteration scheme of Section 3. This formula is obtained by variation of constants (with integrals over R^1 and [0, T]), which makes it interesting for Monte Carlo computations. On the other hand, the solution of the inhomogeneous linear parabolic problem can also be computed by finite differences/elements.

Functional-analytic reformulation of the B-S equation
We wish to treat the terminal value problem (1.5) (or, equivalently, (1.1)) above, with the terminal condition (1.2), by standard analytic and numerical methods for semilinear parabolic initial value problems. To this end, we rewrite problem (1.5), (1.2) as a general initial value problem for the unknown function v : R^1 × (0, T) → R, where A(τ) denotes the transformed Black-Scholes operator and F( · ; x, τ) : R → R the transformed nonlinearity. Here, τ = T − t stands for the time to maturity and x = log S is the logarithmic asset (stock) price; we take (x, τ) ∈ R^1 × (0, T). In the sequel we will never use the real time t = T − τ ∈ (0, T) any more, so we prefer to use the letter t in place of τ to denote the time to maturity, as is usual in parabolic problems. According to this new notation, in eq. (2.3) we replace the time-dependent coefficients σ(T − τ), q_S(T − τ), and γ_S(T − τ) by σ(t), q_S(t), and γ_S(t), respectively, and thus forget about the original terminal value problem (1.5), (1.2). Next, in order to make the initial value problem (2.1), (2.2) compatible with the monotone methods described in the article by David H. Sattinger [36], we rewrite this problem as follows: According to [12], s_F ≡ r_F − r, λ_B ≡ r_B − r, and λ_C ≡ r_C − r are some nonnegative constants, and R_B, R_C ∈ [0, 1] are the recovery rates on the derivative positions of parties B and C, respectively. As a consequence, the function G( · ; x, t) : v ↦ G(v; x, t) : R → R, defined in eq. (2.8), is monotone increasing (i.e., nondecreasing) on R. Notice that both functions, v ↦ −v^− and v ↦ v^+, are nondecreasing on R. In addition, the left-hand side of eq. (2.6) clearly satisfies the weak maximum principle.
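The change of variables behind this reformulation is the standard logarithmic substitution; writing it out explicitly (a routine computation, not displayed above), one has

```latex
v(x,\tau) := V(e^{x},\, T-\tau), \qquad x = \log S, \quad \tau = T - t,
```
```latex
\frac{\partial v}{\partial \tau} = -\,\frac{\partial V}{\partial t}\,, \qquad
\frac{\partial v}{\partial x} = S\,\frac{\partial V}{\partial S}\,, \qquad
\frac{\partial^{2} v}{\partial x^{2}}
  = S^{2}\,\frac{\partial^{2} V}{\partial S^{2}} + S\,\frac{\partial V}{\partial S}\,,
```

so the S-degenerate coefficients S ∂/∂S and S² ∂²/∂S² of the Black-Scholes operator become constant-coefficient derivatives in x, and the terminal condition at t = T turns into an initial condition at τ = 0.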
We now specify our hypotheses on the general nonlinearity G : R × R^1 × (0, T) → R on the right-hand side of eq. (2.6) treated in our present work. We assume that G satisfies the following hypotheses:

(G4) There is a constant C_0 ∈ R_+ such that, at almost every time t ∈ (0, T), the function G( · ; x, t) satisfies the exponential growth restriction (2.10) for almost all x ∈ R^1.
(G5) There are constants C_1 ∈ R_+ and ϑ_G ∈ (0, 1) such that the stated Hölder estimate holds for every v ∈ R and almost every (x, t) ∈ R^1 × (0, T). From now on, let us consider the following generalization of the initial value problem (2.6), (2.2): eq. (2.12), with the initial condition v(x, 0) = v_0(x) := h(e^x) for x ∈ R^1 in eq. (2.2), where the constant r + L_F in eq. (2.6) has been replaced by the new constant r_G ∈ R_+, owing to our monotonicity hypothesis (G3). Concerning hypotheses on the time-dependent coefficients that appear in the Black-Scholes operator A(t) defined in eq. (2.5), we assume the following Hölder continuity, where C_σ ∈ R_+ and ϑ_σ ∈ (0, 1) are constants independent of time t ∈ [0, T].

Clearly, from Hypothesis (BS1) we derive σ(t) ≥ σ_0 = min_{t∈[0,T]} σ(t) > 0 for all t ∈ [0, T]. This fact, combined with (BS2), guarantees the uniform ellipticity of the Black-Scholes operator A(t).

Remark 2.2 (Risk-free interest rate.) One may also suggest replacing the multiplicative constant r_G ∈ R on the left-hand side of eq. (2.12) by the time-dependent risk-free interest rate r : [0, T] → R satisfying a Hölder continuity condition analogous to those in eqs. (2.14) and (2.15). However, this change would not make eq. (2.12) more general, as it could be reduced to the present form (2.12) with the term r_G v as follows: First, define r_G ∈ R by r_G = max_{t∈[0,T]} r(t); then replace the function G(v; x, t) on the right-hand side of eq. (2.12) by the sum G_r(v; x, t) := G(v; x, t) + (r_G − r(t)) v. The new function G_r : R × R^1 × (0, T) → R satisfies all Hypotheses (G1) - (G5) imposed on the function G. We conclude that the interest rate difference, r_G − r(t), can be included in the reaction function G. We thus keep eq. (2.12) in the present form with r_G ∈ R being a given constant.
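In particular, the monotonicity hypothesis (G3) is preserved under this substitution: with G_r(v; x, t) = G(v; x, t) + (r_G − r(t)) v and r_G = max_{t∈[0,T]} r(t) as in Remark 2.2, any v_1 ≤ v_2 satisfy

```latex
G_{r}(v_{2};x,t) - G_{r}(v_{1};x,t)
  = \underbrace{G(v_{2};x,t) - G(v_{1};x,t)}_{\ge\, 0 \ \text{by (G3)}}
  + \underbrace{\bigl(r_{G} - r(t)\bigr)}_{\ge\, 0}\,(v_{2}-v_{1})
  \;\ge\; 0 ,
```

so G_r( · ; x, t) is again nondecreasing on R.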
⊓⊔ Our last hypothesis in problem (2.12), (2.2) restricts the growth of the initial condition v(x, 0) = v_0(x) := h(e^x) for x ∈ R^1 in eq. (2.2) as follows: Hypothesis (v_0) The function v_0 : R → R is Lebesgue-measurable and there is a constant C_h ∈ R_+ such that, for almost all x ∈ R^1, ineq. (2.16) holds. As we have already indicated in our hypothesis (G4) on the exponential growth restriction of G, we are going to look for (strong, weak, or mild) solutions v : R^1 × (0, T) → R to the initial value problem (2.12), (2.2) satisfying an analogous exponential growth restriction of type v( · , t) ∈ H_C at every time t ∈ (0, T), where H_C = L²(R; w) denotes the complex Hilbert space of all complex-valued Lebesgue-measurable functions f : R → C with the finite norm ‖f‖_H := ( ∫_R |f(x)|² w(x) dx )^{1/2}, where w(x) := e^{−µ|x|} is a weight function with some constant µ ∈ (2, ∞). This norm is induced by the inner product ⟨f, g⟩_H := ∫_R f(x) g(x)‾ w(x) dx. As usual, the symbol z‾ denotes the complex conjugate of a complex number z ∈ C, where C = R + iR is the complex plane. We consider the complex Hilbert space H_C only for a better understanding of our applications using holomorphic semigroups in H_C generated by the (unbounded) Black-Scholes operator A(t) : H_C → H_C in eq. (2.12) above. Our solutions v(x, t) to the initial value problem (2.12), (2.2) will always be real-valued, i.e., v( · , t) ∈ H at every time t ∈ (0, T), where H denotes the closed real vector subspace of all real-valued functions f : R → R from H_C. The domain of the differential operator A(t), denoted by D(A(t)), is a complex vector subspace of H_C which is independent of time t ∈ [0, T], i.e., D(A(t)) ≡ D_C ⊂ H_C for every t ∈ [0, T]. The vector space D_C consists of all functions f ∈ H_C whose weak (distributional) derivatives f′ = df/dx and f″ = d²f/dx² belong to H_C as well. We set D = D_C ∩ H to denote the closed real vector subspace of all real-valued functions f : R → R from D_C. The vector space D_C becomes a Banach space under the graph-type norm, which is equivalent to the stronger norm by a simple interpolation inequality. We refer to the monograph by K.-J. Engel and R. Nagel [14] for details concerning holomorphic semigroups and their (infinitesimal) generators, especially to [14, Chapt. II, Sect. 4a, pp. 96-109].
We denote by H¹_C the complex interpolation space between D_C and H_C, which consists of all functions f ∈ H_C such that both f, f′ ∈ H_C. H¹_C is a vector space which becomes a Banach space under the norm ‖f‖_{H¹} := ( ‖f‖²_H + ‖f′‖²_H )^{1/2}. Hence, we have the continuous imbeddings D_C ⊂ H¹_C ⊂ H_C. The space H¹_C is the domain of the sesquilinear form Q(t) associated with the operator A(t); this is immediate, thanks to D_C being a dense vector subspace of H¹_C. A few simple applications of the Cauchy-Schwarz inequality show that the (non-symmetric) sesquilinear form Q(t) on H¹_C is coercive. Indeed, with the help of Hypothesis (BS1) we have σ(t) ≥ σ_0 = min_{t∈[0,T]} σ(t) > 0 for all t ∈ [0, T]. Consequently, if the constant λ_0 ∈ (0, ∞) below is chosen sufficiently large, then we get the following more precise quantification of coercivity at every time t ∈ [0, T], thanks to Hypotheses (BS1) and (BS2). We note that the constant λ_0 depends neither on time t ∈ [0, T] nor on the number λ ≥ λ_0.

This is the case if the inhomogeneity
Recall that µ (µ > 2) is the constant in the weight function w(x) := e^{−µ|x|} in the Hilbert space H_C = L²(R; w). We remark that for the inhomogeneous linear equation (2.19), the nonlinearity G on the right-hand side of eq. (2.12) becomes G(v; x, t) ≡ f(x, t); hence, we have ϑ_G = ϑ_f in Hypothesis (G5). Recall also that, by Remark 2.1, we have replaced the Hölder exponents ϑ_G, ϑ_σ, ϑ_q, and ϑ_γ by their minimum ϑ_0; hence, we may include the value of ϑ_f in that minimum as well.
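The role of the restriction µ > 2 (and, later, of κ < µ/2 in Example 3.2) can be illustrated numerically: a function growing like e^{κ|x|} still has a finite L²(R; w)-norm when 2κ < µ. The following sketch (our own illustration, with a truncated grid standing in for R) checks this against the exact integral:

```python
import numpy as np

mu = 3.0       # weight exponent mu > 2, as required in the text
kappa = 1.0    # exponential growth rate with 1 <= kappa < mu/2

# Truncated grid for the weighted space L^2(R; w), w(x) = exp(-mu |x|).
x = np.linspace(-60.0, 60.0, 240001)
dx = x[1] - x[0]
w = np.exp(-mu * np.abs(x))

def weighted_norm_sq(f_vals):
    """Riemann-sum approximation of  int_R |f(x)|^2 w(x) dx."""
    return np.sum(np.abs(f_vals) ** 2 * w) * dx

# f(x) = exp(kappa |x|) grows exponentially, yet |f|^2 w = exp(-(mu - 2 kappa)|x|)
# is integrable since 2 kappa < mu; the exact integral is 2 / (mu - 2 kappa) = 2.
f = np.exp(kappa * np.abs(x))
print(abs(weighted_norm_sq(f) - 2.0) < 1e-2)  # -> True
```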

Monotone methods for the nonlinear B-S equation
We make use of the inhomogeneous linear problem (2.19), (2.2) in order to describe an iterative scheme for approximating the unique weak solution v : [0, T ] → H, v(0) = v 0 ∈ H, to the semilinear problem (2.12), (2.2).

Preliminary comparison results for parabolic problems
First, the so-called weak maximum principle for a classical solution v of the inhomogeneous linear problem (2.19), (2.2) is established, e.g., in A. Friedman [17, Chapt. 2, Sect. 4, Theorem 9, p. 43]. A standard approximation procedure of a weak solution v by a sequence of classical solutions yields the corresponding weak maximum principle also for the weak solution v. More precisely, if the inequalities v(x, 0) = v_0(x) ≥ 0 and f(x, t) ≥ 0 are valid for almost all (x, t) ∈ R^1 × (0, T), then also v(x, t) ≥ 0 holds for almost all (x, t) ∈ R^1 × (0, T).
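The weak maximum principle quoted above has a well-known discrete counterpart, which is what finite-difference implementations of comparison results rely on. As a sketch, for the model equation v_t = (σ²/2) v_xx + f with constant σ (our simplification of the full operator A(t)): an explicit Euler step is a combination of neighboring values with nonnegative weights whenever the CFL condition holds, so nonnegative data and a nonnegative source yield a nonnegative update:

```python
import numpy as np

def explicit_heat_step(v, f, sigma, dx, dt):
    """One explicit Euler step for v_t = (sigma^2/2) v_xx + f.

    Under the CFL condition sigma^2 * dt / dx^2 <= 1, the update combines
    v[i-1], v[i], v[i+1] with nonnegative weights summing to 1, so
    v >= 0 and f >= 0 imply the new values are >= 0 (a discrete
    maximum/comparison principle)."""
    lam = 0.5 * sigma ** 2 * dt / dx ** 2
    assert 2 * lam <= 1.0, "CFL condition violated"
    v_new = v.copy()
    v_new[1:-1] = (1 - 2 * lam) * v[1:-1] + lam * (v[2:] + v[:-2]) + dt * f[1:-1]
    return v_new

rng = np.random.default_rng(0)
v = rng.random(101)          # nonnegative initial data
f = rng.random(101)          # nonnegative source
v1 = explicit_heat_step(v, f, sigma=0.4, dx=0.1, dt=0.05)
print(v1.min() >= 0.0)  # -> True
```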
Observe that we have left the initial value v_0 ∈ H out of this lemma, since we usually use it with either v(0) = v_0 or v̄(0) = v̄_0 as the initial condition attached to the differential equation.

Proof of Lemma 3.1. We subtract equation (3.5) from (3.6), thus obtaining an analogous equation for the difference w = v̄ − v with the right-hand side equal to g(x, t) = f̄(x, t) − f(x, t) ≥ 0 for a.e. (x, t) ∈ R^1 × (0, T). The desired result, w( · , t) ≥ 0 a.e. in R, then follows from the weak maximum principle. ⊓⊔

In eq. (3.7) below, κ ∈ R is a constant satisfying 1 ≤ κ < µ/2, and K, λ ∈ R with K ≥ 1 and λ ≥ 0 are further constants (large enough) to be determined below. The left-hand side of eq. (2.12) with v = V becomes the expression in (3.8). As usual, ‖σ‖_{L∞(0,T)} stands for the supremum norm of a continuous function σ : [0, T] → R.
On the other hand, the right-hand side of eq. (2.12) is estimated in (3.9), where we have taken advantage of inequalities (2.9) and (2.10) in Hypotheses (G2) and (G4), respectively. Subtracting eq. (3.9) from (3.8), we arrive at the desired differential inequality. Recalling our Hypothesis (v_0) with ineq. (2.16) on the growth of the initial condition, we take the constant K ∈ R such that K ≥ max{1, C_h}, which also guarantees V(x, 0) ≥ v_0(x) for a.e. x ∈ R^1. It follows that the function V : R^1 × [0, T] → R defined in eq. (3.7) is a supersolution of problem (2.12), (2.2).
Analogous arguments show that the function −V : R^1 × [0, T] → R is a subsolution of problem (2.12), (2.2). Notice that in this case, ineq. (3.9) has to be replaced by its counterpart with the inequality reversed.
(c) For each j = 1, 2, 3, . . . , m, the function u_j : R^1 × [0, T] → R is as constructed above.

In our last step (induction on the index m ≥ 1) we construct the (m + 1)-st iterate, u_{m+1} : R^1 × [0, T] → R, as the (weak) solution to the following problem; cf. problem (3.15), (3.16). By arguments analogous to those used in the construction of u_1 from u_0 above, we conclude that u_{m+1} exists and satisfies the corresponding inequalities. Here, we have used our monotonicity hypothesis (G3), together with u_m ≤ u_{m−1} a.e. in R^1 × (0, T). This concludes the construction of the desired iterates. As an obvious consequence of our construction, in addition to u_0, each iterate u_j, j = 1, 2, 3, . . . , is a (weak) supersolution of problem (2.12), (2.2).
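The structure of the iteration just described, namely solving a linear problem with the monotone nonlinearity G frozen at the previous iterate, can be sketched in a drastically simplified setting: we drop the spatial operator and keep only v′ + r_G v = G(v) (an ODE caricature, entirely our own; G = arctan stands in for a Lipschitzian, nondecreasing reaction). Starting from a constant supersolution u_0 ≡ K with r_G K ≥ G(K) and K ≥ v_0, each step solves only a linear problem and the sequence decreases pointwise:

```python
import numpy as np

def monotone_iterates(G, v0, r_G, K, T=1.0, n_steps=200, n_iter=8):
    """Monotone decreasing iteration for v' + r_G v = G(v), v(0) = v0.

    Each iterate solves the *linear* problem
        u_{m+1}' + r_G u_{m+1} = G(u_m),   u_{m+1}(0) = v0,
    by backward Euler, starting from the constant supersolution u_0 = K
    (which requires r_G * K >= G(K) and K >= v0)."""
    dt = T / n_steps
    iterates = [np.full(n_steps + 1, K)]      # u_0: supersolution
    for _ in range(n_iter):
        u_prev = iterates[-1]
        u = np.empty(n_steps + 1)
        u[0] = v0
        for n in range(n_steps):
            # implicit (backward Euler) step of the linear equation
            u[n + 1] = (u[n] + dt * G(u_prev[n + 1])) / (1.0 + r_G * dt)
        iterates.append(u)
    return iterates

G = np.arctan                 # Lipschitzian, nondecreasing "reaction"
its = monotone_iterates(G, v0=0.5, r_G=1.0, K=2.0)
decreasing = all((its[m] >= its[m + 1] - 1e-12).all() for m in range(len(its) - 1))
print(decreasing)  # -> True
```

The same skeleton applies verbatim once the scalar backward-Euler update is replaced by a linear parabolic solve in the variable x.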
Two additional important results in Sattinger's work [36], Theorems 3.3 and 3.4 on p. 986, can be applied to our monotone iteration scheme (3.14) - (3.23) as well. We remark that the monotone iteration methods in [36] are formulated for nonlinear elliptic and parabolic boundary value problems in a bounded spatial domain Ω ⊂ R^N. Nevertheless, they are applicable also to our terminal value problem (1.1), (1.2) (as evidenced by our proof of Theorem 3.4 above), even though we have no boundary conditions for the stock price S ∈ (0, ∞); more precisely, none for the logarithmic stock price x = log S ∈ R that we use throughout this article; cf. [6, §3.3, p. 50]. In analogy with this scheme, (3.14) - (3.23), for problem (2.12), (2.2), which has resulted in the monotone (pointwise) decreasing sequence of iterations in R^1 × (0, T) in Theorem 3.4 above, we are able to construct another monotone iteration scheme for problem (2.12), (2.2) that is monotone (pointwise) increasing, as follows:

Remark 3.5 (Increasing monotone iterations.) We set w_0 = −u_0 and recall that w_0 is a subsolution of problem (2.12), (2.2), by Example 3.2 with κ = 1 and some constants K ≥ 1 and λ ≥ 0 large enough, as specified in that example. We define the first iteration w_1 : R^1 × [0, T] → R by replacing the functions u_0 and u_1 in eqs. (3.15) and (3.16) by w_0 and w_1, respectively. Thus, the inequalities in (3.17) have to be reversed. In order to construct the j-th iteration w_j : R^1 × [0, T] → R from w_{j−1} : R^1 × [0, T] → R, j = 1, 2, 3, . . . , we replace the pair (u_{j−1}, u_j) by (w_{j−1}, w_j) in (3.18) - (3.23), thus arriving at the desired monotone (pointwise) increasing sequence of iterations. The remaining part of Theorem 3.4 remains valid for w_m in place of u_m, m = 1, 2, 3, . . . , both with and without hypothesis (G1'), including the monotone convergence w_m → v in the Lebesgue space L²([0, T] → H) to the function v described in Theorem 3.4 by formula (3.25), and the convergence w_m → v in the norm of the Hölder space H^{2+θ′, 1+(θ′/2)}(Q′), respectively. ⊓⊔

Numerical approximation methods for semilinear parabolic systems based on a standard finite-difference discretization that produces a monotone iteration scheme can be found in the monographs by A. W. Leung [28, 29]. Accelerated monotone convergence of this iteration scheme is treated in [28, Chapt. VI, §6.3, pp. 292-300] in the context of a two-point boundary value problem, followed immediately by L²-convergence of finite-difference solutions in two-dimensional domains in §6.4, pp. 300-323, which can be viewed as a numerical implementation of our L²-type norm convergence result in eq. (3.25). A full, large parabolic system is treated numerically in [28] as well.

An important question in this "monotone" approximation method (cf. David H. Sattinger [36]) is the speed of convergence in eq. (3.25) above. A suitable norm for estimating this convergence in the Banach space C([0, T] → H) ⊂ L²((0, T) → H) is a "weighted" supremum norm, where ω > 0 is a sufficiently large positive real number that guarantees the corresponding estimate with (2.9). The final error estimate then follows by letting m → ∞ above.
Numerical methods: finite differences/elements compared to Monte Carlo

The Monte Carlo method enjoys broad popularity, especially among specialists in probability interested in numerical simulations. Since its introduction into the toolbox of computational finance in 1977 (cf. P. Boyle, M. Broadie, and P. Glasserman [10]), Monte Carlo simulation has continued to gain acceptance as a computational method for pricing more and more sophisticated options and as a risk management tool as well. The Monte Carlo method is characterized by its very high flexibility and its ability to process multidimensional problems. One of its main strengths is that it does not require a spatial discretization and is therefore insensitive to the space dimension. Consequently, the more dimensions the problem to be solved has, the more competitive the Monte Carlo method is when compared to methods based on space discretization. However, the rate of convergence of the method, apparently dependent on the space dimension of the problem (often referred to as "the curse of dimensionality" in case of a "high" space dimension, better known as the degeneracy properties of likelihood ratios (LRs)), is known to be quite low; cf. P. W. Glynn and R. Y. Rubinstein [18]. An important identified limitation of the Monte Carlo method is its inability to treat discretizations of nonlinear integral equations directly. However, iteration methods that require the solution of a linear problem at each iteration step (such as ours in Section 3) have been studied intensively in the past; see, e.g., M. Yu. Plotnikov [35]. The Monte Carlo method is applied there to solve linear problems in order to construct a sequence of iterates that are expected to converge to the solution of the given nonlinear integral equation. This is the strategy we have followed also in the work reported here (in Section 3).
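As a minimal illustration of the method's character (standard textbook material, not specific to the model above): a European call under geometric Brownian motion needs only sampling of the terminal asset price, with no spatial grid, and its statistical error decays like O(N^{−1/2}) in the number N of paths:

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def mc_call(S0, K, r, sigma, T, n_paths, seed=0):
    """Monte Carlo price: one terminal GBM sample per path, no grid needed."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * Z)
    return exp(-r * T) * np.maximum(ST - K, 0.0).mean()

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
exact = bs_call(S0, K, r, sigma, T)
approx = mc_call(S0, K, r, sigma, T, n_paths=400_000)
print(f"exact={exact:.4f}, MC={approx:.4f}")  # MC error ~ O(N^{-1/2})
```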
Some recent works open perspectives to overcome the difficulties with the discretization of nonlinear integral equations. Significant progress has been made on the construction of methods based on a branching diffusion process. As the nonlinearities to be treated in numerical discretizations of PDEs in finance are mostly Lipschitz-continuous (Lipschitzian, for short), often piecewise linear and continuous, the most efficient Monte Carlo algorithms focus on treating precisely such Lipschitzian nonlinearities. Among the most interesting earlier pioneering works, we would like to mention the article by P. Henry-Labordère [20]. Here, the author approximates a simple piecewise linear nonlinearity by a polynomial in order to be able to take advantage of a probability-based computational technique, the branching diffusion process. This technique was first introduced by H. P. McKean [33] and A. V. Skorokhod [37] to provide a probabilistic representation for the solution of the Kolmogorov-Petrovskii-Piskunov PDE (KPP equation, for short) and, more generally, of semilinear PDEs whose nonlinearity is a power series with nonnegative coefficients from the interval [0, 1]. Since the KPP equation in [33, 37] has only a quadratic or cubic nonlinearity, numerical approximation using a branching diffusion process is quite efficient. However, for applications to the counterparty risk valuation model in finance treated in [20], the nonlinearity that one is interested in is not a polynomial, and replacing a piecewise linear nonlinearity by a polynomial introduces a significant error into the algorithm in [20]. It is therefore interesting to investigate methods that can treat directly monotone nonlinearities that are not necessarily polynomial. As to the speed and precision of the PDE and Monte Carlo methods, a brief comparison is given in the work by G. Loeper and O. Pironneau [32], together with a mixed PDE/Monte Carlo method that provides better results for the Heston stochastic volatility model. An entirely different approach to S. L. Heston's model [23], based on an "orthogonal series expansion", is employed in B. Alziary and P. Takáč [2, Sect. 11 (Appendix), §11.1-§11.2, pp. 48-50], in P. Takáč [38], and in the numerical simulations by F. Baustian, K. Filipová, and J. Pospíšil [7].
These branching diffusion processes provide a generalization of the Feynman-Kac theory to nonlinear PDEs; they lead to a representation of the solutions in the form of an expectation and thus allow the use of Monte Carlo methods for simulations. Significant progress in the application of branching diffusion processes was made in P. Henry-Labordère, X. Tan, and N. Touzi [22].
Therefore, more recent studies prefer to work directly with an arbitrary Lipschitzian nonlinearity when applying a branching diffusion process; cf. B. Bouchard, X. Tan, and X. Warin [9] and P. Henry-Labordère, N. Oudjane, X. Tan, N. Touzi, and X. Warin [21]. More general (non-Lipschitzian) nonlinearities are approximated by (optimal) Lipschitzian nonlinearities on a compact set, if necessary. In this situation, a standard Picard iteration scheme (from the proof of the Picard-Lindelöf theorem; see, e.g., Ph. Hartman [19, Chapt. II, §1, pp. 8-10]) can be used to approximate the desired solution v ∈ L²([0, T] → H) to problem (2.12), (2.2), i.e., the limit function v in our Theorem 3.4. The approximation error and the rate of convergence are also estimated by classical arguments.
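In its simplest integral-equation form, the Picard scheme just mentioned reads v_{m+1}(t) = v_0 + ∫_0^t G(v_m(s)) ds; for a Lipschitzian G the iterates converge on [0, T]. A toy sketch (entirely our own, with G(v) = −v, so that the limit is e^{−t}):

```python
import numpy as np

def picard(G, v0, T=1.0, n=1000, n_iter=12):
    """Picard iterates v_{m+1}(t) = v0 + int_0^t G(v_m(s)) ds on a grid."""
    t = np.linspace(0.0, T, n + 1)
    dt = T / n
    history = [np.full(n + 1, v0)]           # v_0: constant initial guess
    for _ in range(n_iter):
        g = G(history[-1])
        # left-endpoint quadrature of the running integral
        integral = np.concatenate(([0.0], np.cumsum(g[:-1]) * dt))
        history.append(v0 + integral)
    return t, history

# v' = -v, v(0) = 1  =>  v(t) = exp(-t);  G(v) = -v is Lipschitzian with L = 1.
t, hist = picard(lambda v: -v, v0=1.0)
err = np.abs(hist[-1] - np.exp(-t)).max()
print(err < 1e-2)  # -> True
```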
Before switching entirely to a discussion of typically "analytical" numerical discretizations, we would like to mention some important studies where the two methods, i.e., Monte Carlo and finite difference/element algorithms, are either combined ([32]) or compared against each other in numerical simulations ([6, Sect. 4] and [32]). A suitable combination of these two methods, performed on the "right" problem, can lead to significant improvements of the "hybrid" algorithm by G. Loeper and O. Pironneau [32] in both precision and speed. On the other hand, the work by F. Baustian, M. Fencl, J. Pospíšil, and V. Švígler [6] treats the same subject as our present work, with an inhomogeneous linear parabolic Cauchy problem [6, Eq. (6), p. 48] closely related to ours in eqs. (2.19), (2.2). An interesting comparison of several kinds of numerical methods for the linear initial value problem (2.19), (2.2) can be found in that work [6, Sect. 4, pp. 52-54]. In fact, even a quantitative comparison of Monte Carlo and finite difference methods with the exact formula (for the "diffusion (or heat) equation") is provided there [6, Figs. 2-4, p. 53]. In [6, Sect. 3], Monte Carlo and finite difference/element algorithms are applied separately and then compared against each other in the numerical simulations of [6, Sect. 4, pp. 52-54]. In general, the comparison can go "both ways"; one has to choose among Monte Carlo, finite difference/element, and "hybrid" algorithms according to the particular problem that is to be discretized for numerical treatment.
In the previous section (Section 3) we presented a method suitable for direct applications of finite difference/element algorithms to inhomogeneous linear parabolic initial value problems of type (2.19), (2.2) with the inhomogeneity (on the right-hand side in (2.19)) given by the term f(x, t) = G(v(x, t); x, t) from eq. (2.12). Our functional-analytic approach with an arbitrary Lipschitzian nonlinearity suggests an analogous Picard iteration scheme combined with a finite difference (or similar) discretization method for the inhomogeneous linear parabolic initial value problem (2.19), (2.2). Solving this problem is especially easy if the functions σ : [0, T] → (0, ∞) and q_S, γ_S : [0, T] → R in Hypotheses (BS1) and (BS2), respectively, are all constants independent of time t ∈ [0, T], i.e., σ ∈ (0, ∞) and q_S, γ_S ∈ R. Our choice of the constant r_G ∈ R_+ guarantees that Hypothesis (G3) is valid: for almost every pair (x, t) ∈ R^1 × (0, T), the function G( · ; x, t) : v ↦ G(v; x, t) : R → R is monotone increasing. Under these hypotheses, (BS1) and (BS2), supplemented by Hypothesis (f1) in place of Hypotheses (G2) and (G5) used for the special inhomogeneity f(x, t) = G(v(x, t); x, t) on the right-hand side in eq. (2.12), in the next section (Section 5) we give an explicit formula for the solution v(x, t) to the inhomogeneous linear parabolic initial value problem (2.19), (2.2). This formula is obtained in terms of a linear integral operator whose kernel is expressed through a "modification" of the standard "diffusion (or heat) kernel"; cf. Theorem 5.1 below.
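A minimal finite-difference solver of the kind alluded to here, for the constant-coefficient model problem v_t = (σ²/2) v_xx + f(x, t) on a truncated interval with homogeneous Dirichlet conditions, might look as follows. This is our own simplification: the drift and zero-order terms of A(t) are omitted, and the manufactured source f is chosen so that the exact solution is e^{−t} sin x, which lets us check the error:

```python
import numpy as np

def solve_inhomog_heat(sigma, f, v0, L=np.pi, T=1.0, nx=100, nt=200):
    """Backward-Euler / centered-difference solver for
        v_t = (sigma^2/2) v_xx + f(x, t)   on (0, L) x (0, T),
    with homogeneous Dirichlet boundary conditions (a model setting)."""
    x = np.linspace(0.0, L, nx + 1)
    dx, dt = L / nx, T / nt
    lam = 0.5 * sigma ** 2 * dt / dx ** 2
    # tridiagonal system (I - dt*A) v_new = v_old + dt*f  at interior nodes
    n = nx - 1
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1.0 + 2.0 * lam)
    idx = np.arange(n - 1)
    A[idx, idx + 1] = -lam
    A[idx + 1, idx] = -lam
    v = v0(x)
    for k in range(1, nt + 1):
        rhs = v[1:-1] + dt * f(x[1:-1], k * dt)
        v[1:-1] = np.linalg.solve(A, rhs)
        v[0] = v[-1] = 0.0
    return x, v

sigma = 1.0
# manufactured source so that v(x, t) = exp(-t) sin(x) solves the PDE
f = lambda x, t: (-1.0 + 0.5 * sigma ** 2) * np.exp(-t) * np.sin(x)
x, v = solve_inhomog_heat(sigma, f, v0=np.sin)
exact = np.exp(-1.0) * np.sin(x)
print(np.abs(v - exact).max() < 1e-2)  # -> True
```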

Numerical methods: the semilinear parabolic problem
In order to calculate (and compute numerically) the monotone iterations treated in Theorem 3.4, we need to solve the inhomogeneous linear parabolic initial value problem obtained from (2.12), (2.2) under the assumption that the inhomogeneity f(x, t) = G(v(x, t); x, t) on the right-hand side in eq. (2.12) is known. In other words, we wish to supplement our main result, Theorem 3.4, by a result on solving the initial value problem (2.19), (2.2). We will obtain a unique weak (or mild) solution v : [0, T] → H under Hypothesis (f1), that is to say, under the assumption that f : [0, T] → H, t ↦ f( · , t), is a continuous function. Furthermore, our special choice of the inhomogeneity, f(x, t) = G(v(x, t); x, t), satisfies even the stronger hypothesis (2.20) on the Hölder continuity of the function f : [0, T] → H with the Hölder exponent ϑ_f ∈ (0, 1); see also ineq. (2.21). As a consequence, our mild solution v : [0, T] → H, which is continuous by definition, becomes a unique classical solution. Thus, from now on let us assume that f : [0, T] → H is a Hölder-continuous function with the Hölder exponent ϑ_f ∈ (0, 1); cf. ineq. (2.20). In our theorem below, Theorem 5.1, we provide an explicit formula for the evolutionary family T. Each operator T(t, s) : H → H (0 ≤ s < t ≤ T) turns out to be a (bounded) integral operator with the kernel K(x, y; t, s) > 0 described explicitly in terms of the standard "diffusion (or heat) kernel" G(x; t); in particular, we have ∫_{−∞}^{+∞} G(x; t) dx = 1 for every t ∈ (0, ∞).
We refer to F. John [24, Chapt. 7, §1(a), pp. 206-213] for greater detail about the "diffusion (or heat) equation". Next, we make use of the abbreviations in (5.5), defined for every t ∈ [0, T]. By our Hypothesis (BS1), we have σ_0 = min_{t∈[0,T]} σ(t) > 0, which entails the lower bound needed below.

Theorem 5.1 (Homogeneous linear problem.) Under Hypotheses (BS1) and (BS2) stated in Section 2, the evolutionary family T on the Hilbert space H takes the form (5.6) for all x ∈ R^1, 0 ≤ s < t ≤ T, and v_s ∈ H, with the kernel K(x, y; t, s) > 0 defined by K(x, y; t, s) ≡ K(x − y; t, s) in (5.7).

In fact, our existence and uniqueness result in this theorem corresponds to that in A. Pazy [34, Chapt. 5, §5.6, Theorem 6.8, p. 164]. However, since we wish to calculate the kernel K(x, y; t, s) in formula (5.7) explicitly, we cannot use the abstract proof of [34, Chapt. 5, Theorem 6.8] directly. Thus, we prefer to calculate the evolutionary family T directly, taking advantage of well-known results [24, Chapt. 7, §1(a), pp. 206-213] for the "diffusion equation".
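With the common normalization G(x; t) = (4πt)^{−1/2} e^{−x²/(4t)} for the heat equation v_t = v_xx (an assumption on our part; the kernel in eq. (5.4) may carry σ-dependent scaling), both the unit-mass property ∫_R G(x; t) dx = 1 and the semigroup (Chapman-Kolmogorov) identity G( · ; t) ∗ G( · ; s) = G( · ; t + s) are easy to verify numerically:

```python
import numpy as np

def heat_kernel(x, t):
    """Standard diffusion kernel for v_t = v_xx (one normalization choice):
    G(x; t) = (4 pi t)^(-1/2) exp(-x^2 / (4 t))."""
    return np.exp(-x ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

x = np.linspace(-40.0, 40.0, 4001)
dx = x[1] - x[0]

# Normalization: the integral of G(.; t) over R equals 1 for every t > 0.
mass = heat_kernel(x, 0.7).sum() * dx
print(abs(mass - 1.0) < 1e-8)  # -> True

# Semigroup property: G(.; 0.3) convolved with G(.; 0.4) equals G(.; 0.7).
conv = np.convolve(heat_kernel(x, 0.3), heat_kernel(x, 0.4), mode="same") * dx
print(np.abs(conv - heat_kernel(x, 0.7)).max() < 1e-6)  # -> True
```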
Before giving the proof of Theorem 5.1, let us state an important corollary concerning the solvability of the inhomogeneous abstract initial value problem (2.23), (2.24).
for v ∈ D C and every time t ∈ (0, T ) , is a sum of two commuting differential operators (meaning that their resolvents commute), respectively, A 1 (t), A 2 (t) : In order to construct the sum with the initial condition v(0) = v 0 ∈ H in eq.(2.24), takes the "C 0 -semigroup" form where (5.9) is the evolutionary family T holds for all v 0 ∈ D C and for every t ∈ (0, T ] , and for all v 0 ∈ D C and for every t ∈ [0, T ] . (5.13) Finally, to construct the full linear operator that appears on the left-hand side of the inhomogeneous abstract differential equation (2.23), let us combine formula (5.9) with eqs.(5.11) and (5.13) in order to define the evolutionary family T = {T(t, s) : H → H : 0 ≤ s ≤ t ≤ T } by the following composition of bounded linear operators Applying eqs.(5.11) In our proof of Theorem 3.4 above, we have taken advantage of Hypotheses (G2) and (G1') in order to conclude that the limit function v ∈ L 2 ([0, T ] → H) obtained in formula (3.25) is a strong (classical) solution to problem (2.12), (2.2).Indeed, these two Hypotheses, (G2) and (G1'), guarantee that all iterates (functions) u j : [0, T ] → H; j = 0, 1, 2, . . ., in Theorem 3.4 are uniformly Hölder-continuous functions with some Hölder exponent ϑ v ∈ (0, 1) and their monotone sequence uniformly bounded in the Hölder space C ϑv ([0, T ] → H).Hence, we have v ∈ C ϑv ([0, T ] → H), as well.We conclude that our function f (x, t) = G(v(x, t); x, t) from eq. (2.12) obeys Hypothesis (f1).Consequently, from now on let us assume that f : [0, T ] → H is a Hölder-continuous function with the Hölder exponent ϑ f ∈ (0, 1); cf.ineq.(2.20) and also ineq.(2.21) related to the the linear initial value problem (2.19), (2.2).Then, given any initial value v s ∈ H, it follows again from [34, Chapt.5, §5.7], Theorem 7.1 on p. 
168, that the function v : [s, T] → H defined above by the variation-of-constants formula in (5.16) is even a classical solution of the inhomogeneous abstract differential equation (2.23) on the time interval [s, T] with the initial value v(s) = v_s ∈ H. From formulas (5.9), (5.12), and (5.16) we deduce that each mapping T(t, s) : H → H (0 ≤ s < t ≤ T) is an integral operator with the kernel K(x, y; t, s) > 0; see eq. (5.6).
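To illustrate how a variation-of-constants formula of the type (5.16) can be evaluated numerically, the following self-contained sketch solves the model inhomogeneous diffusion problem v_t = v_xx + f on the line, with a manufactured inhomogeneity chosen so that the exact solution is known in closed form; the semigroup is applied spectrally via the FFT and the time integral uses the trapezoidal rule. All discretization parameters are illustrative choices, not prescriptions from the text.

```python
import numpy as np

L, N = 40.0, 1024
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular Fourier variable

def G(z, t):
    # standard diffusion kernel G(z; t) = (4*pi*t)^{-1/2} exp(-z^2 / (4t)), cf. (5.4)
    return np.exp(-z**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def semigroup(v, dt):
    # apply the diffusion semigroup exp(dt * d^2/dx^2) as a Fourier multiplier
    return np.real(np.fft.ifft(np.exp(-xi**2 * dt) * np.fft.fft(v)))

def f(t):
    # manufactured source: v(x, t) = exp(-t) * G(x; 1 + t) solves v_t = v_xx + f
    return -np.exp(-t) * G(x, 1.0 + t)

# v(T) = T(T, 0) v_0 + \int_0^T T(T, tau) f(tau) dtau, trapezoidal rule in tau
T_end, M = 1.0, 200
taus = np.linspace(0.0, T_end, M + 1)
weights = np.full(M + 1, T_end / M)
weights[0] = weights[-1] = 0.5 * T_end / M

v = semigroup(G(x, 1.0), T_end)            # homogeneous part T(T, 0) v_0
for tau, w in zip(taus, weights):
    v += w * semigroup(f(tau), T_end - tau)  # inhomogeneous part

err = np.max(np.abs(v - np.exp(-T_end) * G(x, 1.0 + T_end)))
```

Because the semigroup is diagonal in Fourier space, each quadrature node costs only one FFT pair; this is the main computational burden the paper attributes to the linear diffusion solve.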
Next, we derive an explicit formula for the kernel K(x, y; t, s) with the help of the standard "diffusion kernel" G(x; t) defined in eq. (5.4). First, the standard "diffusion semigroup" G(t) acts on H by the convolution (G(t) v_0)(x) = ∫_{R^1} G(x − y; t) v_0(y) dy, defined for all x ∈ R^1, 0 < t < ∞, and v_0 ∈ H.
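The key identity behind composing such convolution operators is the Chapman–Kolmogorov (semigroup) property of the diffusion kernel, ∫ G(x − y; t_1) G(y − z; t_2) dy = G(x − z; t_1 + t_2). A short numerical check, with illustrative grid parameters and sample points:

```python
import numpy as np

def G(z, t):
    # standard diffusion kernel G(z; t) = (4*pi*t)^{-1/2} exp(-z^2 / (4t)), cf. (5.4)
    return np.exp(-z**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

# Chapman-Kolmogorov: int G(x - y; t1) G(y - z; t2) dy = G(x - z; t1 + t2)
y = np.linspace(-20.0, 20.0, 8001)
dy = y[1] - y[0]
t1, t2, xv, zv = 0.7, 1.3, 0.5, -0.25   # arbitrary test values

lhs = np.sum(G(xv - y, t1) * G(y - zv, t2)) * dy  # quadrature of the convolution
rhs = G(xv - zv, t1 + t2)
```

This identity is exactly what makes the composed operators form an evolutionary family: applying the kernel over [s, r] and then over [r, t] equals applying it once over [s, t].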
Our proof of Theorem 5.1 is finished.
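The kernel representation just established can be cross-checked by solving the homogeneous diffusion problem in Fourier space, where the equation v_t = v_xx becomes the multiplier equation v̂(ξ, t) = exp(−ξ² t) v̂(ξ, 0). The sketch below tests this on a Gaussian initial datum whose evolution G(·; 1) ↦ G(·; 1 + t) is known in closed form; grid sizes are illustrative.

```python
import numpy as np

L, N = 40.0, 2048
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

def G(z, t):
    # standard diffusion kernel, cf. eq. (5.4)
    return np.exp(-z**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def solve_fourier(v0, t):
    # v_t = v_xx: multiply the Fourier transform by exp(-xi^2 t), then invert
    return np.real(np.fft.ifft(np.exp(-xi**2 * t) * np.fft.fft(v0)))

v = solve_fourier(G(x, 1.0), 2.0)  # should equal G(x; 3) up to spectral error
err = np.max(np.abs(v - G(x, 3.0)))
```

For rapidly decaying smooth data the FFT approach is spectrally accurate, which is why the Fourier route is attractive both analytically and numerically.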
Remark 5.3 (An alternative proof of Theorem 5.1). In order to derive formula (5.7) for the kernel K(x, y; t, s) > 0 of the integral operator T(t, s) : H → H in eq. (5.6), one can replace the calculations in our proof of Theorem 5.1 above by first applying the Fourier transformation to the homogeneous parabolic problem (5.2), (5.3), thus calculating the Fourier transform of the kernel K(x, y; t, s) ≡ K(x − y; t, s) with respect to the difference x − y ∈ R^1, followed by a standard application of the inverse Fourier transformation to obtain the desired kernel K(x, y; t, s) for all x, y ∈ R^1 and 0 ≤ s < t < ∞. This procedure is analogous to the derivation of the "diffusion kernel" G(x; t) (defined here in eq. (5.4)) by Fourier transformation in F. John [24].

Our explicit variation-of-constants formula (5.1), verified in Corollary 5.2 (to Theorem 5.1), provides a precise analytic way of calculating the exact solution to problem (2.23), (2.24), which is the abstract form of the Cauchy problem (2.19), (2.2). In practical, real-world applications to problems in Mathematical Finance, this solution needs to be approximated by suitable numerical methods, typically with reasonable speed and within reasonable precision. Whereas the nonlinear effects, modelled by the nonlinear "reaction" function G( • ; x, t) : v ↦ G(v; x, t) : R → R defined in eq. (2.8), are computed in a standard way through the inhomogeneity f(x, t) = G(v(x, t); x, t) from eq.
(2.12), with the (nonlinear) substitution operator G( • ; x, t) : R → R, the numerical computation of the (unique) solution v : R^1 × (0, T) → R to the linear initial value Cauchy problem (2.19), (2.2) typically consumes most of the computational effort. (i) The first two works, [3, 6], treat exactly the problem of "Option pricing under some Value Adjustment" (xVA) in Mathematical Finance, for instance under "Credit Value Adjustment" (CVA). (ii) The third one, [35], treats linear integral equations of type (5.16) which involve the evolutionary family of bounded linear integral operators T(t, s) : H → H (0 ≤ s < t ≤ T) on the Hilbert space H. In fact, a numerical method for solving the full, nonlinear integral equation (5.1) for the unknown function v ∈ C([0, T] → H), substituted into the (subsequently unknown) nonlinearity f(τ) ≡ f( • , τ) : R^1 → R : x ↦ f(x, τ) = G(v(x, τ); x, τ) with f(τ) ∈ H for every τ ∈ [0, T], is provided in [35]. This work is based on an iteration method for a nonlinear integral equation similar to ours. (iii) Last but not least, a "hybrid" algorithm mixing Monte Carlo with finite difference/element methods, seeking to optimize both precision and speed, is presented in G. Loeper and O. Pironneau [32].
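The monotone iteration underlying a fixed-point equation like (5.1) can be illustrated on a drastically simplified, spatially homogeneous toy problem v' = G(v) with G(v) = −v² and v(0) = 1, whose exact solution is 1/(1 + t). Adding a Lipschitz shift λ ≥ sup |G'| = 2 on the relevant range [0, 1] makes the frozen-source map v ↦ G(v) + λv monotone increasing, so the iterates started from the supersolution v ≡ 1 decrease monotonically to the true solution. This is only a sketch of the mechanism, not the PDE schemes of [3, 6, 35, 32]; the backward-Euler linear solver and all parameters are illustrative choices.

```python
import numpy as np

lam = 2.0                 # Lipschitz shift: |G'(v)| = 2|v| <= 2 on [0, 1]
T, n = 1.0, 1000
dt = T / n
t = np.linspace(0.0, T, n + 1)

def G_react(v):
    # nonlinear reaction G(v) = -v^2 (toy stand-in for the paper's G(.; x, t))
    return -v**2

def linear_solve(source):
    # backward-Euler solve of the linear problem v' + lam*v = source(t), v(0) = 1
    v = np.empty(n + 1)
    v[0] = 1.0
    for k in range(n):
        v[k + 1] = (v[k] + dt * source[k + 1]) / (1.0 + lam * dt)
    return v

iterates = [np.ones(n + 1)]  # supersolution v^0 == 1 (indeed 0 >= G(1) = -1)
for _ in range(80):
    vj = iterates[-1]
    iterates.append(linear_solve(G_react(vj) + lam * vj))

v_final = iterates[-1]
exact = 1.0 / (1.0 + t)
# monotone decrease of the iterates, up to floating-point rounding
monotone = all(np.all(a >= b - 1e-12) for a, b in zip(iterates, iterates[1:]))
err = np.max(np.abs(v_final - exact))
```

Each sweep costs one linear solve with a frozen source, which is exactly the structure exploited in the paper: the expensive step is the linear (diffusion) solve, repeated monotonically until convergence.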

K(x − y; t, s) = (4π[Σ_S(t) − Σ_S(s)])^{−1/2} exp( −(x − y)² / (4[Σ_S(t) − Σ_S(s)]) )   (5.7)

for all x, y ∈ R^1 and 0 ≤ s < t ≤ T. Here, the continuous function v : [s, T] → H defined in formula (5.6) for t ∈ (s, T], with v(s) = v_s ∈ H, is the unique mild solution of the homogeneous abstract initial value problem (5.2), (5.3) on the interval [s, T]. In fact, v : [s, T] → H is also a classical solution to this problem.
The unique weak solution v : [0, T] → H to the linear initial value problem (2.19), (2.2) described above happens to be a classical solution which, among other properties, is continuous as a function of time. Indeed, according to the existence, uniqueness, and regularity results for problem (2.19), (2.2) in A. Pazy [34, Chapt. 5, §5.7], Theorem 7.1 on p. 168, if (2.20) holds, then the unique weak solution v : [0, T] → H is in fact classical. If v is merely a weak solution to the inhomogeneous linear problem (2.19), (2.2), then the notions of super- and subsolution to this problem are again defined by means of an approximation procedure with a sequence of classical solutions.