The Master Equation for Large Population Equilibriums

  • Conference paper
  • In: Stochastic Analysis and Applications 2014

Part of the book series: Springer Proceedings in Mathematics & Statistics (PROMS, volume 100)

Abstract

We use a simple \(N\)-player stochastic game with idiosyncratic and common noises to introduce the concept of Master Equation originally proposed by Lions in his lectures at the Collège de France. Controlling the limit \(N\rightarrow \infty \) of the explicit solution of the \(N\)-player game, we highlight the stochastic nature of the limit distributions of the states of the players due to the fact that the random environment does not average out in the limit, and we recast the Mean Field Game (MFG) paradigm in a set of coupled Stochastic Partial Differential Equations (SPDEs). The first one is a forward stochastic Kolmogorov equation giving the evolution of the conditional distributions of the states of the players given the common noise. The second is a form of stochastic Hamilton-Jacobi-Bellman (HJB) equation providing the solution of the optimization problem when the flow of conditional distributions is given. Being highly coupled, the system reads as an infinite dimensional Forward Backward Stochastic Differential Equation (FBSDE). Uniqueness of a solution and its Markov property lead to the representation of the solution of the backward equation (i.e. the value function of the stochastic HJB equation) as a deterministic function of the solution of the forward Kolmogorov equation, a function which is usually called the decoupling field of the FBSDE. The (infinite dimensional) PDE satisfied by this decoupling field is identified with the master equation. We also show that this equation can be derived for other large population equilibria, like those given by the optimal control of McKean-Vlasov stochastic differential equations. The paper is written more in the style of a review than a technical paper, and we spend more time motivating and explaining the probabilistic interpretation of the Master Equation than identifying the most general set of assumptions under which our claims are true.

Paper presented at the conference “Stochastic Analysis”, University of Oxford, September 23, 2013.

René Carmona was partially supported by NSF grant DMS-0806591.


Notes

  1. We refer to the Lasry-Lions monotonicity conditions in Ref. [2] for a typical set of assumptions under which uniqueness holds. See also Ref. [5] for a discussion of uniqueness in the presence of a common noise.

References

  1. A. Bensoussan, J. Frehse, P. Yam, The master equation in mean-field theory. Technical report. http://arxiv.org/abs/1404.4150

  2. P. Cardaliaguet, Notes on mean field games. Notes from P.L. Lions’ lectures at the Collège de France. https://www.ceremade.dauphine.fr/cardalia/MFG100629.pdf (2012)

  3. R. Carmona, F. Delarue, Forward-backward stochastic differential equations and controlled McKean-Vlasov dynamics. Ann. Probab. (to appear)

  4. R. Carmona, F. Delarue, Probabilistic analysis of mean field games. SIAM J. Control Optim. 51, 2705–2734 (2013)

  5. R. Carmona, F. Delarue, D. Lacker, Mean field games with a common noise. Technical report. http://arxiv.org/abs/1407.6181

  6. R. Carmona, F. Delarue, A. Lachapelle, Control of McKean-Vlasov versus mean field games. Math. Financ. Econ. 7, 131–166 (2013)

  7. R. Carmona, J.P. Fouque, A. Sun, Mean field games and systemic risk. Commun. Math. Sci. (to appear)

  8. J.F. Chassagneux, D. Crisan, F. Delarue, McKean-Vlasov FBSDEs and related master equation. Work in progress

  9. W. Fleming, M. Soner, Controlled Markov Processes and Viscosity Solutions (Springer, New York, 2010)

  10. D.A. Gomes, J. Saude, Mean field games models—a brief survey. Technical report (2013)

  11. O. Guéant, J.M. Lasry, P.L. Lions, Mean field games and applications, in Paris-Princeton Lectures on Mathematical Finance 2010. Lecture Notes in Mathematics, ed. by R. Carmona et al. (Springer, Berlin, 2010)

  12. M. Huang, P.E. Caines, R.P. Malhamé, Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst. 6, 221–252 (2006)

  13. J.M. Lasry, P.L. Lions, Jeux à champ moyen I. Le cas stationnaire. Comptes Rendus de l’Académie des Sciences de Paris, Ser. A 343(9), 619–625 (2006)

  14. J.M. Lasry, P.L. Lions, Jeux à champ moyen II. Horizon fini et contrôle optimal. Comptes Rendus de l’Académie des Sciences de Paris, Ser. A 343(10), 679–684 (2006)

  15. J.M. Lasry, P.L. Lions, Mean field games. Jpn. J. Math. 2(1), 229–260 (2007)

  16. P.L. Lions, Théorie des jeux à champs moyen et applications. Technical report, 2007–2008

  17. J. Ma, H. Yin, J. Zhang, On non-Markovian forward-backward SDEs and backward stochastic PDEs. Stoch. Process. Appl. 122, 3980–4004 (2012)

  18. D. Nualart, The Malliavin Calculus and Related Topics. Probability and its Applications (Springer, New York, 1995)

  19. S. Peng, Stochastic Hamilton-Jacobi-Bellman equations. SIAM J. Control Optim. 30, 284–304 (1992)

  20. A.S. Sznitman, Topics in propagation of chaos, in École d’Été de Probabilités de Saint-Flour XIX–1989, ed. by D.L. Burkholder et al. Lecture Notes in Mathematics, vol. 1464 (Springer, Heidelberg, 1991), pp. 165–251

Author information

Correspondence to François Delarue.

Appendix: A Generalized Form of Itô’s Formula

Our derivation of the master equation requires a form of Itô’s formula in a space of probability measures. This appendix is devoted to the proof of such a formula.

A.1 Notion of Differentiability

In Sect. 4, we alluded to a specific notion of differentiability for functions of probability measures. The choice of this notion is dictated by the facts that (1) the probability measures we are dealing with appear as laws of random variables, and (2) in trying to differentiate functions of measures, the infinitesimal variations which we consider are naturally expressed as infinitesimal variations in the linear space of those random variables. The relevance of this notion of differentiability was argued by P.L. Lions in his lectures at the Collège de France [16]. The notes [2] offer a readable account, and [3] provides several properties involving empirical measures. It is based on the lifting of functions \(\mathcal {P}_2(\mathbb {R}^d)\ni \mu \mapsto H(\mu )\) into functions \(\tilde{H}\) defined on the Hilbert space \(L^2(\tilde{\Omega };\mathbb {R}^d)\) over some probability space \((\tilde{\Omega },\tilde{\mathcal {F}},\tilde{\mathbb {P}})\), with \(\tilde{\Omega }\) a Polish space and \(\tilde{\mathbb {P}}\) an atomless measure, by setting \(\tilde{H}(\tilde{X})=H({\mathcal L}(\tilde{X}))\) for \(\tilde{X} \in L^2(\tilde{\Omega };\mathbb {R}^d)\).

Then, a function \(H\) is said to be differentiable at \(\mu _0\in \mathcal {P}_2(\mathbb {R}^d)\) if there exists a random variable \(\tilde{X}_0\) with law \(\mu _0\), that is satisfying \({\mathcal L}(\tilde{X}_0)=\mu _0\), such that the lifted function \(\tilde{H}\) is Fréchet differentiable at \(\tilde{X}_0\). Whenever this is the case, the Fréchet derivative of \(\tilde{H}\) at \(\tilde{X}_0\) can be viewed as an element of \(L^2(\tilde{\Omega };\mathbb {R}^d)\) by identifying \(L^2(\tilde{\Omega };\mathbb {R}^d)\) and its dual. It turns out that its distribution depends only upon the law \(\mu _0\) and not upon the particular random variable \(\tilde{X}_0\) having distribution \(\mu _0\); see Sect. 6 in Ref. [2] for details. This Fréchet derivative \([D\tilde{H}](\tilde{X}_0)\) is called the representation of the derivative of \(H\) at \(\mu _0\) along the variable \(\tilde{X}_{0}\). It is shown in Ref. [2] that, as a random variable, it is of the form \(\tilde{h}(\tilde{X}_0)\) for some deterministic measurable function \(\tilde{h} : \mathbb {R}^d \rightarrow \mathbb {R}^d\), which is uniquely defined \(\mu _0\)-almost everywhere on \(\mathbb {R}^d\). The equivalence class of \(\tilde{h}\) in \(L^2(\mathbb {R}^d,\mu _{0})\) being uniquely defined, we may denote it by \(\partial _{\mu } H(\mu _{0})\) (or \(\partial H(\mu _{0})\) when no confusion is possible). It is then natural to call \(\partial _\mu H(\mu _0)\) the derivative of \(H\) at \(\mu _{0}\) and to identify it with a function \(\partial _{\mu } H(\mu _{0})( \, \cdot \, ) : \mathbb {R}^d \ni v \mapsto \partial _{\mu } H(\mu _{0})(v) \in \mathbb {R}^d\).
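
As a simple sanity check of this definition (our own illustration, using a standard example rather than anything specific to this paper), consider the linear functional \(H(\mu )=\int _{\mathbb {R}^d}\varphi \,d\mu \) for a smooth function \(\varphi \) with bounded derivatives. Its lift and derivative read

$$ \tilde{H}(\tilde{X})=\tilde{\mathbb {E}}\bigl [\varphi (\tilde{X})\bigr ], \qquad D\tilde{H}(\tilde{X})\cdot \tilde{Y} = \tilde{\mathbb {E}}\bigl [\nabla \varphi (\tilde{X})\cdot \tilde{Y}\bigr ], \qquad \partial _{\mu }H(\mu )(v)=\nabla \varphi (v), $$

so that the derivative is indeed represented by a deterministic function evaluated along \(\tilde{X}\), here independent of \(\mu \).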

This procedure makes it possible to express \([D \tilde{H}](\tilde{X}_0)\) as a function of any random variable \(\tilde{X}_{0}\) with distribution \(\mu _0\), irrespective of where this random variable is defined.

Remark 6.1

Since it is customary to identify a Hilbert space with its dual, we will identify \(L^2(\tilde{\Omega })\) with its dual, and in so doing, any derivative \(D\tilde{H} (\tilde{X})\) will be viewed as an element of \(L^2(\tilde{\Omega })\). In this way, the derivative in the direction \(\tilde{Y}\) will be given by the inner product \([D\tilde{H} (\tilde{X})]\cdot \tilde{Y}\). Accordingly, the second Fréchet derivative \(D^2\tilde{H}(\tilde{X})\), which is a linear operator from \(L^2(\tilde{\Omega })\) into itself because of this identification, will be viewed as a bilinear form on \(L^2(\tilde{\Omega })\). In particular, we shall use the notation \(D^2\tilde{H} (\tilde{X})[\tilde{Y}, \tilde{Z}]\) for \(\big ([D^2\tilde{H} (\tilde{X})] (\tilde{Y})\big )\cdot \tilde{Z}\).
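
Continuing the example above (still our own standard illustration): for \(H(\mu )=\int \varphi \,d\mu \), the second Fréchet derivative of the lift is the bilinear form

$$ D^2\tilde{H}(\tilde{X})[\tilde{Y},\tilde{Z}] = \tilde{\mathbb {E}}\bigl [\tilde{Y}^{\top } \nabla ^2\varphi (\tilde{X})\, \tilde{Z}\bigr ], $$

which is generally nonzero even though \(\partial _\mu H(\mu )(v)=\nabla \varphi (v)\) does not depend upon \(\mu \): the second order structure of the lift mixes differentiation in the direction of the variable \(v\) and differentiation in the direction of the measure, a point made precise in Remark 6.4 below.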

Remark 6.2

The following result (see [3] for a proof) gives, though under stronger regularity assumptions on the Fréchet derivatives, a convenient way to handle this notion of differentiation with respect to probability distributions. If the function \(\tilde{H}\) is Fréchet differentiable and its Fréchet derivative is uniformly Lipschitz (i.e. there exists a constant \(c>0\) such that \(\Vert D\tilde{H}(\tilde{X}) - D\tilde{H}(\tilde{X}')\Vert \le c \Vert \tilde{X} -\tilde{X}'\Vert \) for all \(\tilde{X}, \tilde{X}'\) in \(L^2(\tilde{\Omega })\)), then there exists a function \(\partial _\mu H\)

$$ \mathcal {P}_2(\mathbb {R}^d)\times \mathbb {R}^d \ni (\mu , v)\mapsto \partial _\mu H (\mu )(v) $$

such that \(|\partial _\mu H (\mu )(v)-\partial _\mu H (\mu )(v')|\le c|v-v'|\) for all \(v,v'\in \mathbb {R}^d\) and \(\mu \in \mathcal {P}_2(\mathbb {R}^d)\), and such that, for every \(\mu \in \mathcal {P}_2(\mathbb {R}^d)\), \(\partial _\mu H(\mu )(\tilde{X})=D\tilde{H}(\tilde{X})\) almost surely whenever \(\mu ={\mathcal L}(\tilde{X})\).
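
For an example in which the derivative genuinely depends upon the measure (again a standard illustration of ours), take \(H(\mu )=\int \int \psi (v,w)\, d\mu (v)\, d\mu (w)\) for a smooth \(\psi \) with bounded derivatives. A direct computation on the lift gives

$$ \partial _\mu H(\mu )(v) = \int _{\mathbb {R}^d} \bigl [ \partial _v \psi (v,w) + \partial _w \psi (w,v) \bigr ] d\mu (w), $$

which satisfies the Lipschitz property in \(v\), uniformly in \(\mu \in \mathcal {P}_2(\mathbb {R}^d)\), as soon as the second order derivatives of \(\psi \) are bounded.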

A.2 Itô’s Formula Along a Flow of Conditional Measures

In the derivation of the master equation, the value function is expanded along a flow of conditional measures. As already explained in Sect. 4.3, this requires a suitable construction of the lifting.

Throughout this section, we assume that \((\Omega ,{\mathcal F},\mathbb {P})\) is of the form \((\Omega ^{0} \times \Omega ^1,{\mathcal F}^0 \otimes {\mathcal F}^1,\mathbb {P}^0 \otimes \mathbb {P}^1)\), with \((\Omega ^0,{\mathcal F}^0,\mathbb {P}^0)\) supporting the common noise \(W^0\), and \((\Omega ^1,{\mathcal F}^1,\mathbb {P}^1)\) the idiosyncratic noise \(W\). So an element \(\omega \in \Omega \) can be written as \(\omega =(\omega ^0,\omega ^1) \in \Omega ^0 \times \Omega ^1\), and functionals \(H(\mu (\omega ^0))\) of a random probability measure \(\mu (\omega ^0) \in {\mathcal P}_{2}(\mathbb {R}^d)\), with \(\omega ^0 \in \Omega ^0\), can be lifted into \(\tilde{H}(\tilde{X}(\omega ^0,\cdot ))=H({\mathcal L}(\tilde{X}(\omega ^0,\cdot )))\), where \(\tilde{X}(\omega ^0,\cdot )\) is an element of \(L^2(\tilde{\Omega }^1,\tilde{\mathcal F}^1, \tilde{\mathbb {P}}^1;\mathbb {R}^d)\) with \(\mu (\omega ^0)\) as distribution, \((\tilde{\Omega }^1,\tilde{\mathcal F}^1, \tilde{\mathbb {P}}^1)\) being Polish and atomless. Put differently, the random variable \(\tilde{X}\) is defined on \((\tilde{\Omega } = \Omega ^0 \times \tilde{\Omega }^1, \tilde{\mathcal F}={\mathcal F}^0 \otimes \tilde{{\mathcal {F}}}^1,\tilde{\mathbb {P}}= \mathbb {P}^0 \otimes \tilde{\mathbb {P}}^1)\).
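
The product-space construction can be mimicked numerically: freezing one common-noise realization \(\omega ^0\) and averaging only over idiosyncratic scenarios produces a Monte Carlo proxy for \(\mu (\omega ^0)\), and hence for \(\tilde{H}(\tilde{X}(\omega ^0,\cdot ))\). The following sketch is our own illustration; the toy dynamics \(d\chi = dW^0 + dW\) and the test function defining \(H\) are assumptions, not taken from the paper.

import numpy as np

# Toy model (our assumption): chi_T = W^0_T + W_T, so that
# L(chi_T | W^0) = N(W^0_T, T) and, for H(mu) = int v^2 dmu,
# H(L(chi_T | W^0)) = (W^0_T)^2 + T.
rng = np.random.default_rng(seed=1)
T, K, N = 1.0, 200, 20_000                    # horizon, time steps, idiosyncratic samples
dt = T / K

W0_T = rng.normal(0.0, np.sqrt(dt), size=K).sum()  # ONE common-noise path, frozen (omega^0)
W_T = rng.normal(0.0, np.sqrt(T), size=N)          # N i.i.d. idiosyncratic terminal values
chi_T = W0_T + W_T                                 # chi_T(omega^0, .) as an L^2(Omega^1) element

H_mc = np.mean(chi_T ** 2)        # Monte Carlo proxy for H(L(chi_T | W^0))(omega^0)
H_exact = W0_T ** 2 + T           # exact conditional value for this omega^0
print(H_mc, H_exact)              # close for large N, but random across omega^0 draws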

The objective is then to expand \((\tilde{H}(\tilde{\chi }_{t}(\omega ^0,\cdot )))_{0 \le t \le T}\), where \((\tilde{\chi }_{t})_{0 \le t \le T}\) is the copy, constructed in this way, of an Itô process on \((\Omega ,{\mathcal F},\mathbb {P})\) of the form:

$$\begin{aligned} \chi _{t} = \chi _{0} + \int \limits _{0}^t \beta _{s} ds + \int \limits _{0}^t \int \limits _{\Xi } \varsigma _{s,\xi }^0 W^0(d\xi ,ds) + \int \limits _{0}^t \varsigma _{s} dW_{s}, \end{aligned}$$

for \(t \in [0,T]\), assuming that the processes \((\beta _{t})_{0 \le t \le T}\), \((\varsigma _{t})_{0 \le t \le T}\) and \((\varsigma _{t,\xi }^0)_{0 \le t \le T,\xi \in \Xi }\) are progressively measurable with respect to the filtration generated by \(W\) and \(W^0\) and square integrable, in the sense that

$$\begin{aligned} {\mathbb E} \int \limits _{0}^T \biggl ( \vert \beta _{t} \vert ^2 + \vert \varsigma _{t} \vert ^2 + \int \limits _{\Xi } \vert \varsigma _{t,\xi }^0 \vert ^2 d \nu (\xi ) \biggr ) dt < + \infty . \end{aligned}$$
(74)
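
For orientation (a special case consistent with the notation, the parameter space \(\Xi \) possibly being much richer in general): when \(\Xi =\{1,\dots ,m\}\) and \(\nu \) is the counting measure, \(W^0(d\xi ,ds)\) is nothing but an \(m\)-dimensional Wiener process \((W^{0,1},\dots ,W^{0,m})\) and

$$ \int \limits _{0}^t \int \limits _{\Xi } \varsigma _{s,\xi }^0 \,W^0(d\xi ,ds) = \sum _{j=1}^m \int \limits _{0}^t \varsigma _{s,j}^0 \,dW^{0,j}_{s}, $$

so that (74) is then the usual square integrability condition.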

Denoting by \((\tilde{W}_{t})_{0 \le t \le T}\), \((\tilde{\beta }_{t})_{0 \le t \le T}\), \((\tilde{\varsigma }_{t})_{0 \le t \le T}\) and \((\tilde{\varsigma }_{t,\xi }^0)_{0 \le t \le T,\xi \in \Xi }\) the copies of \((W_{t})_{0 \le t \le T}\), \((\beta _{t})_{0 \le t \le T}\), \((\varsigma _{t})_{0 \le t \le T}\) and \((\varsigma _{t,\xi }^0)_{0 \le t \le T,\xi \in \Xi }\), we then have

$$\begin{aligned} \tilde{\chi }_{t} = \tilde{\chi }_{0} + \int \limits _{0}^t \tilde{\beta }_{s} ds + \int \limits _{0}^t \int \limits _{\Xi } \tilde{\varsigma }_{s,\xi }^0 W^0(d\xi ,ds) + \int \limits _{0}^t \tilde{\varsigma }_{s} d\tilde{W}_{s}, \end{aligned}$$

for \(t \in [0,T]\). In this framework, we emphasize that it makes sense to look at \(\tilde{H}(\tilde{\chi }_{t}(\omega ^0,\cdot ))\), for \(t \in [0,T]\), since

$$\begin{aligned} {\mathbb E}^0 \tilde{{\mathbb E}}^1 \bigl [ \sup _{0 \le t \le T} \vert \tilde{\chi }_{t} \vert ^2 \bigr ] = {\mathbb E}^0 {\mathbb E}^1 \bigl [ \sup _{0 \le t \le T} \vert \chi _{t} \vert ^2 \bigr ] <+ \infty , \end{aligned}$$

where \({\mathbb E}^0\), \({\mathbb E}^1\) and \(\tilde{\mathbb {E}}^1\) are the expectations associated to \(\mathbb {P}^0\), \(\mathbb {P}^1\) and \(\tilde{\mathbb {P}}^1\) respectively.

In order to simplify notation, we let \(\check{\chi }_{t}(\omega ^0)=\tilde{\chi }_{t}(\omega ^0,\cdot )\) for \(t \in [0,T]\), so that \((\check{\chi }_{t})_{0 \le t \le T}\) is \(L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1,\tilde{\mathbb {P}}^1;\mathbb {R}^d)\)-valued, \(\mathbb {P}^0\) almost surely. Similarly, we let \(\check{\beta }_{t}(\omega ^0)=\tilde{\beta }_{t}(\omega ^0,\cdot )\), \(\check{\varsigma }_{t}(\omega ^0)=\tilde{\varsigma }_{t}(\omega ^0,\cdot )\) and \(\check{\varsigma }_{t,\xi }^0(\omega ^0)=\tilde{\varsigma }_{t,\xi }^0(\omega ^0,\cdot )\), for \(t \in [0,T]\) and \(\xi \in \Xi \). We then claim:

Proposition 6.3

On top of the assumptions and notation introduced above, assume that \(\tilde{H}\) is twice continuously Fréchet differentiable. Then, \(\mathbb {P}^0\) almost surely, for all \(t \in [0,T]\),

$$\begin{aligned} \tilde{H}\bigl (\check{\chi }_{t}\bigr )&= \tilde{H}\bigl (\check{\chi }_{0} \bigr ) + \int \limits _{0}^t D \tilde{H} \bigl (\check{\chi }_{s}\bigr )\cdot \check{\beta }_{s} ds + \int \limits _{0}^t \int \limits _{\Xi } D \tilde{H}\bigl (\check{\chi }_{s}\bigr ) \cdot \check{\varsigma }_{s,\xi }^0 \;W^0(d\xi ,ds) \nonumber \\&\quad + \frac{1}{2} \int \limits _{0}^t \biggl ( D^2 \tilde{H}\bigl (\check{\chi }_{s}\bigr ) \bigl [ \check{\varsigma }_{s} \tilde{G},\check{\varsigma }_{s} \tilde{G} \bigr ] + \int \limits _{\Xi } D^2 \tilde{H}\bigl (\check{\chi }_{s}\bigr ) \bigl [ \check{\varsigma }_{s,\xi }^0,\check{\varsigma }_{s,\xi }^0 \bigr ] d\nu (\xi ) \biggr ) ds, \end{aligned}$$
(75)

where \(\tilde{G}\) is an \({\mathcal N}(0,1)\)-distributed random variable on \((\tilde{\Omega }^1,\tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1)\), independent of \((\tilde{W}_{t})_{t \ge 0}\).
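
Before proceeding, it may help to test (75) on the simplest possible example (our own sanity check, under toy assumptions not made in the text: \(d=1\), \(\chi _0=0\), \(\beta \equiv 0\), \(\varsigma \equiv 1\), \(\varsigma ^0 \equiv 1\) with \(\nu (\Xi )=1\)). Then \(\chi _t = W^0_t + W_t\) and, for \(H(\mu )=\int v^2 d\mu \), one has \(D\tilde{H}(\tilde{X})\cdot \tilde{Y} = 2\tilde{{\mathbb E}}^1[\tilde{X} \tilde{Y}]\) and \(D^2\tilde{H}(\tilde{X})[\tilde{Y},\tilde{Z}] = 2\tilde{{\mathbb E}}^1[\tilde{Y} \tilde{Z}]\), so that (75) yields

$$ \tilde{H}\bigl (\check{\chi }_t\bigr ) = \int \limits _0^t 2\,\tilde{{\mathbb E}}^1[\tilde{\chi }_s]\, dW^0_s + \frac{1}{2} \int \limits _0^t \bigl ( 2\,\tilde{{\mathbb E}}^1[\tilde{G}^2] + 2 \bigr ) ds = \int \limits _0^t 2 W^0_s\, dW^0_s + 2t = (W^0_t)^2 + t, $$

in agreement with the direct computation \(\tilde{{\mathbb E}}^1[(\tilde{\chi }_t)^2] = (W^0_t)^2 + t\); note how the Itô correction \(2t\) splits evenly between the idiosyncratic part (through \(\tilde{G}\)) and the common noise part.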

Remark 6.4

Following Remark 6.2, one can specialize Itô’s formula to a situation where the derivatives admit smoother pointwise representations. See Ref. [8] for a more detailed account. Indeed, if one assumes that

  1. the function \(H\) is \(C^1\) in the sense given above and its first derivative is Lipschitz;

  2. for each fixed \(v \in \mathbb {R}^d\), the function \(\mu \mapsto \partial _\mu H(\mu )(v)\) is differentiable with Lipschitz derivative, and consequently, there exists a function

    $$ (\mu ,v',v)\mapsto \partial ^2_{\mu }H(\mu )(v)(v') \in \mathbb {R}^{d \times d} $$

    which is Lipschitz in \(v'\) uniformly with respect to \(v\) and \(\mu \) and such that \(\partial ^2_{\mu }H(\mu )(v)(\tilde{X})\) gives the Fréchet derivative of \(\mu \mapsto \partial _\mu H(\mu )(v)\) for every \(v \in \mathbb {R}^d\) as long as \(\mathcal {L}(\tilde{X}) = \mu \);

  3. for each fixed \(\mu \in \mathcal {P}_2(\mathbb {R}^d)\), the function \(v \mapsto \partial _\mu H(\mu )(v)\) is differentiable with Lipschitz derivative, and consequently, there exists a bounded function \((v,\mu )\mapsto \partial _v\partial _\mu H(\mu )(v) \in \mathbb {R}^{d \times d}\) giving the value of its derivative;

  4. the functions \( (\mu ,v',v)\mapsto \partial ^2_{\mu }H(\mu )(v)(v') \) and \( (\mu ,v)\mapsto \partial _{v} \partial _{\mu }H(\mu )(v) \) are continuous (the space \({\mathcal P}_{2}(\mathbb {R}^d)\) being endowed with the \(2\)-Wasserstein distance).

Then, the second order term appearing in Itô’s formula can be expressed as the sum of two explicit operators whose interpretations are more natural. Indeed, the second Fréchet derivative \(D^2\tilde{H}(\tilde{X})\) can be written as the linear operator \(\tilde{Y}\mapsto A\tilde{Y}\) on \(L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1,\tilde{\mathbb {P}}^1;\mathbb {R}^d)\) defined by

$$\begin{aligned}{}[A\tilde{Y}](\tilde{\omega }^1)&=\int \limits _{\tilde{\Omega }^{1,\prime }} \partial _{\mu }^2 H\bigl ({\mathcal L}(\tilde{X})\bigr ) \bigl ( \tilde{X}(\tilde{\omega }^1) \bigr ) \bigl (\tilde{X}'(\omega ')\bigr )\tilde{Y}'(\omega ')\,d \tilde{\mathbb {P}}^{1,\prime }(\omega ')\\&\quad +\;\partial _v\partial _\mu H\bigl ({\mathcal L}({\tilde{X}}) \bigr ) \bigl ( \tilde{X}(\tilde{\omega }^1)\bigr )\tilde{Y}(\tilde{\omega }^1), \end{aligned}$$

where \((\tilde{\Omega }^{1,\prime },\tilde{\mathcal {F}}^{1,\prime },\tilde{\mathbb {P}}^{1,\prime })\) is another Polish and atomless probability space endowed with a copy \((\tilde{X}',\tilde{Y}')\) of \((\tilde{X},\tilde{Y})\).

In particular, when \(\tilde{Y}\) is replaced by \(\tilde{Y}\tilde{G}\), with \(\tilde{G} \sim {\mathcal N}(0,1)\) independent of \((\tilde{X},\tilde{Y})\), the integral over \(\tilde{\Omega }^{1,\prime }\) on the right-hand side vanishes, since \(\tilde{G}'\) is centered and independent of \((\tilde{X}',\tilde{Y}')\). We then obtain

$$\begin{aligned}&D^2 \tilde{H}(\tilde{X}) \bigl [ \tilde{Y} ,\tilde{Y} \bigr ] = \tilde{{\mathbb E}}^1 \tilde{{\mathbb E}}^{1,\prime } \bigl \{ \mathrm{trace} \bigl [ \partial ^2_{\mu } H \bigl ( {\mathcal L} (\tilde{X}) \bigr ) ( \tilde{X}) (\tilde{X}') \tilde{Y} \bigl (\tilde{Y}' \bigr )^{\top } \bigr ] \bigr \} \\&\qquad \qquad \qquad \qquad + \tilde{{\mathbb E}}^1 \bigl \{ \mathrm{trace} \bigl [ \partial _{v} \partial _{\mu } H \bigl ( {\mathcal L} (\tilde{X}) \bigr ) ( \tilde{X}) \tilde{Y} \tilde{Y}^{\top } \bigr ] \bigr \}, \\&D^2 \tilde{H}(\tilde{X}) \bigl [ \tilde{Y} \tilde{G},\tilde{Y} \tilde{G} \bigr ] = \tilde{{\mathbb E}}^1 \bigl \{ \mathrm{trace} \bigl [ \partial _{v} \partial _{\mu } H \bigl ( {\mathcal L} (\tilde{X}) \bigr ) ( \tilde{X}) \tilde{Y} \tilde{Y}^{\top } \bigr ] \bigr \}. \end{aligned}$$
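
To see the mechanism at work on a simple case (our own toy example, with \(d=1\)), take \(H(\mu )=(\int v\, d\mu )^2\), for which \(\partial _\mu H(\mu )(v)=2\int w\, d\mu (w)\) is constant in \(v\), \(\partial ^2_{\mu }H(\mu )(v)(v')=2\) and \(\partial _v\partial _\mu H(\mu )(v)=0\). The two formulas above then give

$$ D^2\tilde{H}(\tilde{X})\bigl [\tilde{Y},\tilde{Y}\bigr ] = 2\bigl (\tilde{{\mathbb E}}^1[\tilde{Y}]\bigr )^2, \qquad D^2\tilde{H}(\tilde{X})\bigl [\tilde{Y}\tilde{G},\tilde{Y}\tilde{G}\bigr ] = 0, $$

consistent with the observation that multiplying by the independent centered variable \(\tilde{G}\) kills the nonlocal part of the second order derivative.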

The derivation of the master equation actually requires a more general result than Proposition 6.3. Indeed, one needs to expand \((\tilde{H}(X_{t},\check{\chi }_{t}))_{0 \le t \le T}\) for a function \(\tilde{H}\) of \((x,\tilde{X}) \in \mathbb {R}^d \times L^2(\tilde{\Omega }^1, \tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1;\mathbb {R}^d)\). As before, \((\check{\chi }_{t})_{0 \le t \le T}\) is understood as \((\tilde{\chi }_{t}(\omega ^0,\cdot ))_{0 \le t \le T}\). The process \((X_{t})_{0 \le t \le T}\) is assumed to be another Itô process, defined on the original space \((\Omega ,{\mathcal F},\mathbb {P}) = (\Omega ^0 \times \Omega ^1,{\mathcal F}^0 \otimes {\mathcal F}^1,\mathbb {P}^0 \otimes \mathbb {P}^1)\), with dynamics of the form

$$\begin{aligned} X_{t} = X_{0} + \int \limits _{0}^t b_{s} ds + \int \limits _{0}^t \int \limits _{\Xi } \sigma _{s,\xi }^0 W^0(d\xi ,ds) + \int \limits _{0}^t \sigma _{s} dW_{s}, \end{aligned}$$

for \(t \in [0,T]\), the processes \((b_{t})_{0 \le t \le T}\), \((\sigma _{t})_{0 \le t \le T}\) and \((\sigma _{t,\xi }^0)_{0 \le t \le T,\xi \in \Xi }\) being progressively-measurable with respect to the filtration generated by \(W\) and \(W^0\), and square integrable as in (74). Under these conditions, the result of Proposition 6.3 can be extended to:

Proposition 6.5

On top of the above assumptions and notation, assume that \(\tilde{H}\) is twice continuously Fréchet differentiable on \(\mathbb {R}^d \times L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1,\tilde{\mathbb {P}}^1;\mathbb {R}^d)\). Then, \(\mathbb {P}\) almost surely, for all \(t \in [0,T]\),

$$\begin{aligned}&\tilde{H} \bigl (X_{t},\check{\chi }_{t}\bigr ) = \tilde{H}\bigl (X_{0},\check{\chi }_{0}\bigr ) \\&\quad + \int \limits _{0}^t \Bigl ( \langle \partial _{x} \tilde{H}\bigl (X_{s},\check{\chi }_{s}\bigr ), b_{s}\rangle + D_{\mu } \tilde{H} \bigl (X_{s},\check{\chi }_{s}\bigr ) \cdot \check{\beta }_{s} \Bigr ) ds + \int \limits _{0}^t \big [\partial _{x} \tilde{H}\bigl (X_{s},\check{\chi }_{s}\bigr )\big ]^\dagger \sigma _{s} dW_{s} \\&\quad + \int \limits _{0}^t \int \limits _{\Xi } \Bigl ( \big [\partial _{x} \tilde{H}\bigl (X_{s},\check{\chi }_{s}\bigr )\big ]^\dagger \sigma ^0_{s,\xi } + D_{\mu } \tilde{H}\bigl (X_{s},\check{\chi }_{s}\bigr )\cdot \check{\varsigma }_{s,\xi }^0 \Bigr )\; W^0(d\xi ,ds) \\&\quad + \frac{1}{2} \int \limits _{0}^t \int \limits _{\Xi } \Bigl ( \mathrm{trace} \bigl [ \partial ^2_{x} \tilde{H}\bigl (X_{s},\check{\chi }_{s}\bigr ) \sigma ^0_{s,\xi }( \sigma ^0_{s,\xi })^{\dagger } \bigr ] + D^2_{\mu } \tilde{H}\bigl (X_{s},\check{\chi }_{s}\bigr ) \bigl [ \check{\varsigma }_{s,\xi }^0,\check{\varsigma }_{s,\xi }^0 \bigr ] \Bigr ) d\nu (\xi ) ds \\&\quad + \frac{1}{2} \int \limits _{0}^t \bigg (\mathrm{trace} \bigl [ \partial ^2_{x} \tilde{H}\bigl (X_{s},\check{\chi }_{s}\bigr ) \sigma _{s}( \sigma _{s})^{\dagger } \bigr ] + D^2_{\mu } \tilde{H}\bigl (X_{s},\check{\chi }_{s}\bigr ) \bigl [ \check{\varsigma }_{s} \tilde{G},\check{\varsigma }_{s} \tilde{G} \bigr ] \bigg )ds \\&\quad + \int \limits _{0}^t \int \limits _{\Xi }\bigl \langle \partial _{x} D_{\mu } \tilde{H}\bigl (X_{s},\check{\chi }_{s}\bigr ) \cdot \check{\varsigma }_{s,\xi }^0\, ,\, \sigma _{s,\xi }^0 \bigr \rangle d\nu (\xi ) ds. \end{aligned}$$

Above, \(\tilde{G}\) is an \({\mathcal N}(0,1)\)-distributed random variable on \((\tilde{\Omega }^1,\tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1)\), independent of \((\tilde{W}_{t})_{t \ge 0}\), and the partial derivatives in the infinite dimensional component are denoted with the index ‘\(\mu \)’. In that framework, the term \(\langle \partial _{x} D_{\mu } \tilde{H}(X_{s},\check{\chi }_{s}) \cdot \check{\varsigma }_{s,\xi }^0 ,\sigma _{s,\xi }^0 \rangle \) reads

$$ \sum _{i=1}^d \{ \partial _{x_{i}} D_{\mu } \tilde{H}(X_{s},\check{\chi }_{s}) \cdot \check{\varsigma }_{s,\xi }^0 \} \bigl ( \sigma _{s,\xi }^0 \bigr )_{i}.$$
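
As an illustration of this last term (our own hypothetical example, with smooth real-valued \(\varphi \) and \(\psi \) having bounded derivatives), consider the product functional \(\tilde{H}(x,\tilde{X})=\varphi (x)\, \tilde{{\mathbb E}}^1[\psi (\tilde{X})]\), for which \(\partial _\mu H(x,\mu )(v) = \varphi (x) \nabla \psi (v)\). The cross term then reduces to

$$ \bigl \langle \partial _{x} D_{\mu } \tilde{H}\bigl (X_{s},\check{\chi }_{s}\bigr ) \cdot \check{\varsigma }_{s,\xi }^0\, ,\, \sigma _{s,\xi }^0 \bigr \rangle = \bigl ( \nabla \varphi (X_{s}) \cdot \sigma ^0_{s,\xi } \bigr )\, \tilde{{\mathbb E}}^1 \bigl [ \nabla \psi (\tilde{\chi }_{s}) \cdot \tilde{\varsigma }^0_{s,\xi } \bigr ], $$

recording the covariation, created by the common noise, between the state variable and the measure argument; it has no counterpart in the classical finite dimensional Itô formula.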

A.3 Proof of Itô’s Formula

We only provide the proof of Proposition 6.3 as the proof of Proposition 6.5 is similar.

By a standard continuity argument, it is sufficient to prove that Eq. (75) holds \(\mathbb {P}^0\)-almost surely for any fixed \(t \in [0,T]\). In particular, we can choose \(t=T\). Moreover, by a standard approximation argument, it is sufficient to consider the case of simple processes \((\beta _{t})_{0 \le t \le T}\), \((\varsigma _{t})_{0 \le t \le T}\) and \((\varsigma _{t,\xi }^0)_{0 \le t \le T,\xi \in \Xi }\) of the form

$$\begin{aligned} \beta _{t} = \sum _{i=0}^{M-1} \beta _{i} {\mathbf 1}_{[\tau _{i},\tau _{i+1})}(t), \quad \varsigma _{t} = \sum _{i=0}^{M-1} \varsigma _{i} {\mathbf 1}_{[\tau _{i},\tau _{i+1})}(t), \quad \varsigma _{t,\xi }^0 = \sum _{i=0}^{M-1} \sum _{j=1}^N \varsigma ^0_{i,j} {\mathbf 1}_{[\tau _{i},\tau _{i+1})}(t) {\mathbf 1}_{A_{j}}(\xi ), \end{aligned}$$

where \(M,N \ge 1\), \(0=\tau _{0}<\tau _{1} < \dots < \tau _{M}=T\), \((A_{j})_{1 \le j \le N}\) are pairwise disjoint Borel subsets of \(\Xi \) and \((\beta _{i},\varsigma _{i},\varsigma ^0_{i,j})_{1 \le j \le N}\) are bounded \({\mathcal F}_{\tau _{i}}\)-measurable random variables.

The strategy is taken from Ref. [8] and consists in splitting \(\tilde{H}(\check{\chi }_{T}) - \tilde{H}(\check{\chi }_{0})\) into

$$\begin{aligned} \tilde{H}(\check{\chi }_{T}) - \tilde{H}(\check{\chi }_{0}) = \sum _{k=0}^{K-1} \bigl ( \tilde{H}(\check{\chi }_{t_{k+1}}) - \tilde{H}(\check{\chi }_{t_{k}}) \bigr ), \end{aligned}$$

where \(0=t_{0}< \dots < t_{K}=T\) is a subdivision of \([0,T]\) of step \(h\) such that, for any \(k \in \{0,\dots ,K-1\}\), there exists some \(i \in \{0,\dots ,M-1\}\) such that \([t_{k},t_{k+1}) \subset [\tau _{i},\tau _{i+1})\). We then start by approximating a general increment \(\tilde{H}(\check{\chi }_{t_{k+1}}) - \tilde{H}(\check{\chi }_{t_{k}})\), omitting the dependence upon \(\omega ^0\) in the notation. By Taylor’s formula, we can find some \(\delta \in [0,1]\) such that

$$\begin{aligned}&\tilde{H}(\check{\chi }_{t_{k+1}}) - \tilde{H}(\check{\chi }_{t_{k}}) \nonumber \\&= D \tilde{H}(\check{\chi }_{t_{k}}) \cdot (\check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}}) \nonumber \\&+\frac{1}{2} D^2 \tilde{H}\bigl (\check{\chi }_{t_{k}} + \delta (\check{\chi }_{t_{k+1}}- \check{\chi }_{t_{k}}) \bigr ) \bigl ( \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}}, \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}} \bigr ) \nonumber \\&= D \tilde{H}(\check{\chi }_{t_{k}}) \cdot (\check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}}) + \frac{1}{2} D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl ( \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}}, \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}} \bigr ) \nonumber \\& + \frac{1}{2} \bigl [ D^2 \tilde{H}\bigl (\check{\chi }_{t_{k}} + \delta (\check{\chi }_{t_{k+1}}- \check{\chi }_{t_{k}}) \bigr ) - D^2 \tilde{H}\bigl (\check{\chi }_{t_{k}} \bigr ) \bigr ] \bigl ( \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}}, \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}} \bigr ). \end{aligned}$$
(76)

By Kolmogorov’s continuity theorem, we know that, \(\mathbb {P}^0\) almost surely, the mapping \([0,T] \ni t \mapsto \tilde{\chi }_{t} \in L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1;\mathbb {R}^d)\) is continuous. Therefore, \(\mathbb {P}^0\) almost surely, the mapping \( (s,t,\delta ) \mapsto D^2 \tilde{H}(\check{\chi }_{t} + \delta (\check{\chi }_{s}- \check{\chi }_{t}))\) is continuous from \([0,T]^2 \times [0,1]\) to the space of bounded operators from \(L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1;\mathbb {R}^d)\) into itself, which proves that, \(\mathbb {P}^0\) almost surely,

$$\begin{aligned} \lim _{h \searrow 0} \sup _{s,t \in [0,T], \vert t-s \vert \le h} \sup _{\delta \in [0,1]} \vert \!\vert \!\vert D^2 \tilde{H}\bigl (\check{\chi }_{t} + \delta (\check{\chi }_{s}- \check{\chi }_{t}) \bigr ) - D^2 \tilde{H}\bigl (\check{\chi }_{t} \bigr ) \vert \!\vert \!\vert _{2,\tilde{\Omega }^1}=0, \end{aligned}$$

\(\vert \!\vert \!\vert \cdot \vert \!\vert \!\vert _{2,\tilde{\Omega }^1}\) denoting the operator norm on the space of bounded operators on \(L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1;\mathbb {R}^d)\). Now,

$$\begin{aligned}&\biggl \vert \sum _{k=0}^{K-1} \bigl [ D^2 \tilde{H}\bigl (\check{\chi }_{t_{k}} + \delta (\check{\chi }_{t_{k+1}}- \check{\chi }_{t_{k}}) \bigr ) - D^2 \tilde{H}\bigl (\check{\chi }_{t_{k}} \bigr ) \bigr ] \bigl ( \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}}, \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}} \bigr ) \biggr \vert \\&\quad \le \sup _{s,t \in [0,T], \vert t-s \vert \le h} \sup _{\delta \in [0,1]} \vert \!\vert \!\vert D^2 \tilde{H}\bigl (\check{\chi }_{t} + \delta (\check{\chi }_{s}- \check{\chi }_{t}) \bigr )\\ {}&\qquad - D^2 \tilde{H}\bigl (\check{\chi }_{t} \bigr ) \vert \!\vert \!\vert _{2,\tilde{\Omega }^1} \sum _{k=0}^{K-1} \Vert \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}} \Vert _{L^2(\tilde{\Omega })}^2. \end{aligned}$$

Since

$$\begin{aligned} \mathbb {E}^0 \biggl [ \sum _{k=0}^{K-1} \Vert \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}} \Vert _{L^2(\tilde{\Omega })}^2 \biggr ] \le C \sum _{k=0}^{K-1} \bigl (t_{k+1}- t_{k}\bigr ) \le CT, \end{aligned}$$

we deduce that

$$\begin{aligned} \biggl \vert \sum _{k=0}^{K-1} \bigl [ D^2 \tilde{H}\bigl (\check{\chi }_{t_{k}} + \delta (\check{\chi }_{t_{k+1}}- \check{\chi }_{t_{k}}) \bigr ) - D^2 \tilde{H}\bigl (\check{\chi }_{t_{k}} \bigr ) \bigr ] \cdot \bigl ( \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}}, \check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}} \bigr ) \biggr \vert \rightarrow 0 \end{aligned}$$
(77)

in \(\mathbb {P}^0\) probability as \(h\) tends to \(0\). We now compute the various terms appearing in (76). We write

$$\begin{aligned}&D \tilde{H}(\check{\chi }_{t_{k}}) \cdot (\check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}}) = D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \int \limits _{t_{k}}^{t_{k+1}} \tilde{\beta }_{s}( \omega ^0,\cdot ) ds \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad + D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \biggl [ \biggl ( \int \limits _{t_{k}}^{t_{k+1}} \int \limits _{\Xi } \tilde{\varsigma }_{s,\xi }^0 W^0(d\xi ,ds) \biggr )(\omega ^0,\cdot ) \biggr ] \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad + D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \biggl [ \biggl ( \int \limits _{t_{k}}^{t_{k+1}} \tilde{\varsigma }_{s} d\tilde{W}_{s} \biggr ) (\omega ^0,\cdot ) \biggr ]. \end{aligned}$$

Assume that, for some \(0 \le i \le M-1\), \(\tau _{i} \le t_{k} < t_{k+1} \le \tau _{i+1}\). Then,

$$\begin{aligned} D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \int \limits _{t_{k}}^{t_{k+1}} \tilde{\beta }_{s}( \omega ^0,\cdot ) ds = \bigl (t_{k+1}-t_{k} \bigr ) D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \tilde{\beta }_{t_{k}}(\omega ^0,\cdot ). \end{aligned}$$
(78)

Note that the right-hand side is well-defined as \(\beta _{t_{k}}\) is bounded. Similarly, we notice that

$$\begin{aligned} D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \biggl [ \biggl ( \int \limits _{t_{k}}^{t_{k+1}} \tilde{\varsigma }_{s} d \tilde{W}_{s} \biggr )(\omega ^0,\cdot ) \biggr ] = D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \bigl [ \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl (\tilde{W}_{t_{k+1}} - \tilde{W}_{t_{k}} \bigr ) \bigr ]. \end{aligned}$$

Now, using the specific form of \(D \tilde{H}\), the derivative \(D \tilde{H}(\check{\chi }_{t_{k}}(\omega ^0))=\bigl (\tilde{\omega }^1 \mapsto \partial _{\mu }H({\mathcal L}( \check{\chi }_{t_{k}}(\omega ^0)))(\tilde{\chi }_{t_{k}}(\omega ^0,\tilde{\omega }^1))\bigr )\) is an \(\tilde{{\mathcal F}}_{t_{k}}\)-measurable random variable, and as such, it is orthogonal in \(L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1,\tilde{\mathbb {P}}^1;\mathbb {R}^d)\) to \(\tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) (\tilde{W}_{t_{k+1}} -\tilde{W}_{t_{k}})\), which shows that

$$\begin{aligned} D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \biggl [ \biggl ( \int \limits _{t_{k}}^{t_{k+1}} \tilde{\varsigma }_{s} d\tilde{W}_{s} \biggr )(\omega ^0,\cdot ) \biggr ] = 0. \end{aligned}$$
(79)
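
Filling in the independence argument (a detail left implicit above): by the representation of \(D \tilde{H}\), the left-hand side of (79) reads

$$ \tilde{{\mathbb E}}^1 \Bigl [ \partial _{\mu }H\bigl ({\mathcal L}(\check{\chi }_{t_{k}})\bigr ) \bigl (\tilde{\chi }_{t_{k}}(\omega ^0,\cdot )\bigr ) \cdot \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl (\tilde{W}_{t_{k+1}} - \tilde{W}_{t_{k}}\bigr ) \Bigr ] = 0, $$

the increment \(\tilde{W}_{t_{k+1}} - \tilde{W}_{t_{k}}\) being centered and independent of the \(\tilde{{\mathcal F}}_{t_{k}}\)-measurable integrand in front of it.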

Finally,

$$\begin{aligned}&D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \biggl [ \biggl ( \int \limits _{t_{k}}^{t_{k+1}} \int \limits _{\Xi } \tilde{\varsigma }^0_{s,\xi } W^0(d\xi ,ds) \biggr )(\omega ^0,\cdot ) \biggr ] \\&\quad = D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \biggl [ \sum _{j=1}^N \tilde{\varsigma }^0_{i,j}(\omega ^0,\cdot ) W^0\bigl (A_{j} \times [t_{k},t_{k+1}) \bigr )(\omega ^0) \biggr ]. \end{aligned}$$

Now, \(W^0\bigl (A_{j} \times [t_{k},t_{k+1}) \bigr )(\omega ^0)\) behaves as a constant in the linear form above. Therefore,

$$\begin{aligned}&D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \biggl [ \biggl ( \int \limits _{t_{k}}^{t_{k+1}} \int \limits _{\Xi } \tilde{\varsigma }^0_{s,\xi } W^0(d\xi ,ds) \biggr )(\omega ^0,\cdot ) \biggr ] \nonumber \\&\quad = \sum _{j=1}^N D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \tilde{\varsigma }^0_{i,j}(\omega ^0,\cdot ) W^0\bigl (A_{j} \times [t_{k},t_{k+1}) \bigr )(\omega ^0) \nonumber \\&\quad = \biggl [\int \limits _{t_{k}}^{t_{k+1}} \int \limits _{\Xi } \bigl \{ D \tilde{H}(\check{\chi }_{t_{k}}) \cdot \tilde{\varsigma }^0_{s,\xi }(\omega ^0,\cdot ) \bigr \} W^0(d\xi ,ds) \biggr ](\omega ^0). \end{aligned}$$
(80)

Therefore, in analogy with (77), we deduce from (78), (79) and (80) that

$$\begin{aligned} \sum _{k=0}^{K-1} D \tilde{H}(\check{\chi }_{t_{k}}) \cdot (\check{\chi }_{t_{k+1}} - \check{\chi }_{t_{k}}) \rightarrow \int \limits _{0}^T D \tilde{H}(\check{\chi }_{s}) \cdot \check{\beta }_{s} ds + \int \limits _{0}^T \int \limits _{\Xi } \bigl \{ D \tilde{H}(\check{\chi }_{s}) \cdot \check{\varsigma }^0_{s,\xi } \bigr \} W^0(d\xi ,ds), \end{aligned}$$

in \(\mathbb {P}^0\) probability as \(h\) tends to \(0\).

We now reproduce this analysis for the second order derivatives. We need to compute:

$$\begin{aligned} \Gamma _k&:= D^2 \tilde{H}(\check{\chi }_{t_{k}}) \Bigl [\tilde{\beta }_{t_{k}}(\omega ^0,\cdot ) \bigl (t_{k+1}-t_{k}\bigr ) + \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl ( \tilde{W}_{t_{k+1}}- \tilde{W}_{t_{k}}\bigr ) \\&\qquad \qquad \qquad \qquad + \sum _{j=1}^N \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr )(\omega ^0), \\&\qquad \qquad \tilde{\beta }_{t_{k}}(\omega ^0,\cdot ) \bigl (t_{k+1}-t_{k}\bigr ) + \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl ( \tilde{W}_{t_{k+1}}-\tilde{W}_{t_{k}}\bigr )\\&\qquad \qquad \qquad \qquad + \sum _{j=1}^N \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr )(\omega ^0) \Bigr ]. \end{aligned}$$

Clearly, the drift has little influence on the value of \(\Gamma _{k}\). More precisely, to investigate the limit (in \(\mathbb {P}^0\) probability) of \(\sum _{k=0}^{K-1} \Gamma _{k}\), we can focus on the ‘reduced’ version of \(\Gamma _{k}\):

$$\begin{aligned} \Gamma _{k}&:= D^2 \tilde{H}(\check{\chi }_{t_{k}}) \Bigl [ \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl ( \tilde{W}_{t_{k+1}}- \tilde{W}_{t_{k}}\bigr ) + \sum _{j=1}^N \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr )(\omega ^0), \\&\quad \qquad \qquad \qquad \quad \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl ( \tilde{W}_{t_{k+1}}- \tilde{W}_{t_{k}}\bigr ) + \sum _{j=1}^N \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr )(\omega ^0) \Bigr ]. \end{aligned}$$

We first notice that

$$\begin{aligned} D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [\tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl ( \tilde{W}_{t_{k+1}} - \tilde{W}_{t_{k}} \bigr ) , \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr )(\omega ^0) \bigr ]= 0 \end{aligned}$$

(and the same for the symmetric term), the reason being that

$$\begin{aligned}&D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [\tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl ( \tilde{W}_{t_{k+1}} - \tilde{W}_{t_{k}} \bigr ) , \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr )(\omega ^0) \bigr ] \\&= \lim _{\epsilon \rightarrow 0} \epsilon ^{-1}\bigl [ D \tilde{H}\bigl ( \check{\chi }_{t_{k}} + \epsilon \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr )(\omega ^0) \bigr ) \\&\quad \qquad \qquad \quad - D \tilde{H}( \check{\chi }_{t_{k}}) \bigr ] \cdot \bigl [\tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl ( \tilde{W}_{t_{k+1}}- \tilde{W}_{t_{k}} \bigr ) \bigr ], \end{aligned}$$

which is zero by the independence argument used in (79). Following the proof of (80),

$$\begin{aligned}&D^2 \tilde{H}(\check{\chi }_{t_{k}}) \Bigl [\sum _{j=1}^N \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr )(\omega ^0) ,\\&\qquad \qquad \qquad \sum _{j=1}^N \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr )(\omega ^0) \Bigr ] \\&\quad = \sum _{j,j'=1}^N D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [ \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot ),\tilde{\varsigma }_{i,j'}^0(\omega ^0,\cdot ) \bigr ]\\&\qquad \times W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr )(\omega ^0) W^0\bigl ([t_{k},t_{k+1}) \times A_{j'}\bigr )(\omega ^0). \end{aligned}$$

The second line reads as the bracket of a discrete stochastic integral. Letting \(\check{\varsigma }_{i,j}^0(\omega ^0) = \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot )\), it is quite standard to check that

$$\begin{aligned}&\sum _{k=0}^{K-1} \sum _{j,j'=1}^N D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [ \check{\varsigma }_{i,j}^0,\check{\varsigma }_{i,j'}^0 \bigr ] W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j'}\bigr ) \\&\quad - \sum _{k=0}^{K-1} \sum _{j=1}^N D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [ \check{\varsigma }_{i,j}^0,\check{\varsigma }_{i,j}^0 \bigr ] \bigl (t_{k+1} - t_{k}\bigr ) \nu (A_{j}) \rightarrow 0 \end{aligned}$$

in \(\mathbb {P}^0\) probability as \(h\) tends to \(0\). Indeed, the sets \(A_{j}\) being pairwise disjoint, the variables \(W^0([t_{k},t_{k+1}) \times A_{j})\) are independent centered Gaussians with variances \((t_{k+1}-t_{k}) \nu (A_{j})\), so that the difference above is the fluctuation of the bracket of a discrete stochastic integral around its mean, which vanishes by a standard quadratic variation argument. Noticing that

$$\begin{aligned}&\sum _{k=0}^{K-1} \sum _{j=1}^N D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [ \check{\varsigma }_{i,j}^0,\check{\varsigma }_{i,j}^0 \bigr ] \bigl (t_{k+1} - t_{k}\bigr ) \nu (A_{j})\\ {}&\quad = \sum _{k=0}^{K-1} \int \limits _{t_{k}}^{t_{k+1}} \int \limits _{\Xi } D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [ \check{\varsigma }_{s,\xi }^0,\check{\varsigma }_{s,\xi }^0 \bigr ] d\nu (\xi ) ds, \end{aligned}$$

we deduce that

$$\begin{aligned}&\sum _{k=0}^{K-1} \sum _{j,j'=1}^N D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [ \check{\varsigma }_{i,j}^0,\check{\varsigma }_{i,j'}^0 \bigr ] W^0\bigl ([t_{k},t_{k+1}) \times A_{j}\bigr ) W^0\bigl ([t_{k},t_{k+1}) \times A_{j'}\bigr ) \\&\quad - \int \limits _{0}^T \int \limits _{\Xi } D^2 \tilde{H}(\check{\chi }_{s}) \bigl [ \check{\varsigma }_{s,\xi }^0,\check{\varsigma }_{s,\xi }^0 \bigr ] d\nu (\xi ) ds \rightarrow 0 \end{aligned}$$

in \(\mathbb {P}^0\) probability as \(h\) tends to \(0\). It remains to compute

$$\begin{aligned} D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [\tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl ( \tilde{W}_{t_{k+1}} - \tilde{W}_{t_{k}} \bigr ) , \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \bigl ( \tilde{W}_{t_{k+1}} - \tilde{W}_{t_{k}} \bigr ) \bigr ]. \end{aligned}$$

Recall that this is the limit

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \frac{1}{\varepsilon ^2}&\bigl [ \tilde{H}\bigl (\tilde{\chi }_{t_{k}}(\omega ^0,\cdot ) + \varepsilon \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) (\tilde{W}_{t_{k+1}}- \tilde{W}_{t_{k}}) \bigr ) \\&\quad + \tilde{H}\bigl (\tilde{\chi }_{t_{k}}(\omega ^0,\cdot ) - \varepsilon \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) (\tilde{W}_{t_{k+1}}- \tilde{W}_{t_{k}}) \bigr ) - 2 \tilde{H}\bigl (\tilde{\chi }_{t_{k}}(\omega ^0,\cdot ) \bigr ) \bigr ], \end{aligned}$$

which is the same as

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \frac{1}{\varepsilon ^2}&\bigl [ \tilde{H}\bigl (\tilde{\chi }_{t_{k}}(\omega ^0,\cdot ) + \varepsilon \sqrt{t_{k+1}-t_{k}}\, \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \tilde{G} \bigr ) \\&\quad + \tilde{H}\bigl (\tilde{\chi }_{t_{k}}(\omega ^0,\cdot ) - \varepsilon \sqrt{t_{k+1}-t_{k}}\, \tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) \tilde{G} \bigr ) - 2 \tilde{H}\bigl (\tilde{\chi }_{t_{k}}(\omega ^0,\cdot ) \bigr ) \bigr ], \end{aligned}$$

where \(\tilde{G}\) is independent of \((\tilde{W}_{t})_{0 \le t \le T}\) and \({\mathcal N}(0,1)\) distributed: since \(\tilde{H}\) only depends upon laws, the two limits coincide because the pairs \((\tilde{\chi }_{t_{k}},\tilde{\varsigma }_{t_{k}}(\tilde{W}_{t_{k+1}}-\tilde{W}_{t_{k}}))\) and \((\tilde{\chi }_{t_{k}},\sqrt{t_{k+1}-t_{k}}\,\tilde{\varsigma }_{t_{k}} \tilde{G})\) have the same distribution under \(\tilde{\mathbb {P}}^1\). Therefore,

$$\begin{aligned} D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [ \check{\varsigma }_{t_{k}} \bigl ( \tilde{W}_{t_{k+1}} - \tilde{W}_{t_{k}} \bigr ) ,\check{\varsigma }_{t_{k}} \bigl ( \tilde{W}_{t_{k+1}}- \tilde{W}_{t_{k}} \bigr ) \bigr ] = \bigl ( t_{k+1}-t_{k} \bigr ) D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [ \check{\varsigma }_{t_{k}} \tilde{G},\check{\varsigma }_{t_{k}} \tilde{G} \bigr ], \end{aligned}$$

which is enough to prove that

$$\begin{aligned}&\sum _{k=0}^{K-1} D^2 \tilde{H}(\check{\chi }_{t_{k}}) \bigl [\check{\varsigma }_{t_{k}} \bigl ( \tilde{W}_{t_{k+1}} - \tilde{W}_{t_{k}} \bigr ) , \check{\varsigma }_{t_{k}} \bigl ( \tilde{W}_{t_{k+1}} - \tilde{W}_{t_{k}} \bigr ) \bigr ] \rightarrow \int \limits _{0}^T D^2 \tilde{H}(\check{\chi }_{s}) \bigl [ \check{\varsigma }_{s} \tilde{G},\check{\varsigma }_{s} \tilde{G} \bigr ] ds \end{aligned}$$

in \(\mathbb {P}^0\) probability as \(h\) tends to \(0\). Plugging the three limits obtained above into (76), and using (77) to handle the Taylor remainder, we recover (75) with \(t=T\), which completes the proof. \(\square \)

Copyright information

© 2014 Springer International Publishing Switzerland

Cite this paper

Carmona, R., Delarue, F. (2014). The Master Equation for Large Population Equilibriums. In: Crisan, D., Hambly, B., Zariphopoulou, T. (eds) Stochastic Analysis and Applications 2014. Springer Proceedings in Mathematics & Statistics, vol 100. Springer, Cham. https://doi.org/10.1007/978-3-319-11292-3_4