Abstract
We use a simple \(N\)-player stochastic game with idiosyncratic and common noises to introduce the concept of Master Equation originally proposed by Lions in his lectures at the Collège de France. Controlling the limit \(N\rightarrow \infty \) of the explicit solution of the \(N\)-player game, we highlight the stochastic nature of the limit distributions of the states of the players due to the fact that the random environment does not average out in the limit, and we recast the Mean Field Game (MFG) paradigm in a set of coupled Stochastic Partial Differential Equations (SPDEs). The first one is a forward stochastic Kolmogorov equation giving the evolution of the conditional distributions of the states of the players given the common noise. The second is a form of stochastic Hamilton Jacobi Bellman (HJB) equation providing the solution of the optimization problem when the flow of conditional distributions is given. Being highly coupled, the system reads as an infinite dimensional Forward Backward Stochastic Differential Equation (FBSDE). Uniqueness of a solution and its Markov property lead to the representation of the solution of the backward equation (i.e. the value function of the stochastic HJB equation) as a deterministic function of the solution of the forward Kolmogorov equation, a function which is usually called the decoupling field of the FBSDE. The (infinite dimensional) PDE satisfied by this decoupling field is identified with the master equation. We also show that this equation can be derived for other large population equilibriums like those given by the optimal control of McKean-Vlasov stochastic differential equations. The paper is written more in the style of a review than a technical paper, and we spend more time motivating and explaining the probabilistic interpretation of the Master Equation than identifying the most general set of assumptions under which our claims are true.
Paper presented at the conference “Stochastic Analysis”, University of Oxford, September 23, 2013.
René Carmona was partially supported by NSF grant DMS-0806591.
References
A. Bensoussan, J. Frehse, P. Yam, The master equation in mean-field theory. Technical report. http://arxiv.org/abs/1404.4150
P. Cardaliaguet, Notes on mean field games. Notes from P.L. Lions’ lectures at the Collège de France https://www.ceremade.dauphine.fr/cardalia/MFG100629.pdf (2012)
R. Carmona, F. Delarue, Forward-backward stochastic differential equations and controlled McKean-Vlasov dynamics. Ann. Probab., to appear
R. Carmona, F. Delarue, Probabilistic analysis of mean field games. SIAM J. Control Optim. 51, 2705–2734 (2013)
R. Carmona, F. Delarue, D. Lacker, Mean field games with a common noise. Technical report. http://arxiv.org/abs/1407.6181
R. Carmona, F. Delarue, A. Lachapelle, Control of McKean-Vlasov versus mean field games. Math. Financ. Econ. 7, 131–166 (2013)
R. Carmona, J.P. Fouque, A. Sun, Mean field games and systemic risk. Commun. Math. Sci., to appear
J.F. Chassagneux, D. Crisan, F. Delarue, McKean-Vlasov FBSDEs and related master equation. Work in progress
W. Fleming, M. Soner, Controlled Markov Processes and Viscosity Solutions (Springer, New York, 2010)
D.A. Gomes, J. Saude, Mean field games models—a brief survey. Technical report (2013)
O. Guéant, J.M. Lasry, P.L. Lions, Mean field games and applications, in Paris-Princeton Lectures in Mathematical Finance IV, Lecture Notes in Mathematics, ed. by R. Carmona, et al. (Springer, Berlin, 2010)
M. Huang, P.E. Caines, R.P. Malhamé, Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst. 6, 221–252 (2006)
J.M. Lasry, P.L. Lions, Jeux à champ moyen I. Le cas stationnaire. Comptes Rendus de l’Académie des Sciences de Paris, ser. A 343(9), 619–625 (2006)
J.M. Lasry, P.L. Lions, Jeux à champ moyen II. Horizon fini et contrôle optimal. Comptes Rendus de l’Académie des Sciences de Paris, ser. A 343(10), 679–684 (2006)
J.M. Lasry, P.L. Lions, Mean field games. Jpn. J. Math. 2(1), 229–260 (2007)
P.L. Lions, Théorie des jeux à champs moyen et applications. Technical report, 2007–2008
J. Ma, H. Yin, J. Zhang, On non-Markovian forward-backward SDEs and backward stochastic PDEs. Stoch. Process. Appl. 122, 3980–4004 (2012)
D. Nualart, The Malliavin Calculus and Related Topics, Probability and its Applications (Springer, New York, 1995)
S. Peng, Stochastic Hamilton Jacobi Bellman equations. SIAM J. Control Optim. 30, 284–304 (1992)
A.S. Sznitman, Topics in propagation of chaos, in D.L. Burkholder et al., Ecole de Probabilités de Saint Flour, XIX-1989. Lecture Notes in Mathematics, vol. 1464 (Springer, Heidelberg, 1989), pp. 165–251
Appendix: A Generalized Form of Itô’s Formula
Our derivation of the master equation requires a form of Itô's formula in a space of probability measures. This subsection is devoted to the proof of such a formula.
A.1 Notion of Differentiability
In Sect. 4, we alluded to a specific notion of differentiability for functions of probability measures. The choice of this notion is dictated by the fact that (1) the probability measures we are dealing with appear as laws of random variables; (2) in trying to differentiate functions of measures, the infinitesimal variations which we consider are naturally expressed as infinitesimal variations in the linear space of those random variables. The relevance of this notion of differentiability was argued by P.L. Lions in his lectures at the Collège de France [16]. The notes [2] offer a readable account, and [3] provides several properties involving empirical measures. It is based on the lifting of functions \(\mathcal {P}_2(\mathbb {R}^d)\ni \mu \mapsto H(\mu )\) into functions \(\tilde{H}\) defined on the Hilbert space \(L^2(\tilde{\Omega };\mathbb {R}^d)\) over some probability space \((\tilde{\Omega },\tilde{\mathcal {F}},\tilde{\mathbb {P}})\) by setting \(\tilde{H}(\tilde{X})=H({\mathcal L}(\tilde{X}))\), for \(\tilde{X} \in L^2(\tilde{\Omega };\mathbb {R}^d)\), \(\tilde{\Omega }\) being a Polish space and \(\tilde{\mathbb {P}}\) an atomless measure.
Then, a function \(H\) is said to be differentiable at \(\mu _0\in \mathcal {P}_2(\mathbb {R}^d)\) if there exists a random variable \(\tilde{X}_0\) with law \(\mu _0\), in other words satisfying \({\mathcal L}(\tilde{X}_0)=\mu _0\), such that the lifted function \(\tilde{H}\) is Fréchet differentiable at \(\tilde{X}_0\). Whenever this is the case, the Fréchet derivative of \(\tilde{H}\) at \(\tilde{X}_0\) can be viewed as an element of \(L^2(\tilde{\Omega };\mathbb {R}^d)\) by identifying \(L^2(\tilde{\Omega };\mathbb {R}^d)\) and its dual. It turns out that its distribution depends only upon the law \(\mu _0\) and not upon the particular random variable \(\tilde{X}_0\) having distribution \(\mu _0\). See Sect. 6 in Ref. [2] for details. This Fréchet derivative \([D\tilde{H}](\tilde{X}_0)\) is called the representation of the derivative of \(H\) at \(\mu _0\) along the variable \(\tilde{X}_{0}\). It is shown in Ref. [2] that, as a random variable, it is of the form \(\tilde{h}(\tilde{X}_0)\) for some deterministic measurable function \(\tilde{h} : \mathbb {R}^d \rightarrow \mathbb {R}^d\), which is uniquely defined \(\mu _0\)-almost everywhere on \(\mathbb {R}^d\). The equivalence class of \(\tilde{h}\) in \(L^2(\mathbb {R}^d,\mu _{0})\) being uniquely defined, it can be denoted by \(\partial _{\mu } H(\mu _{0})\) (or \(\partial H(\mu _{0})\) when no confusion is possible). It is then natural to call \(\partial _\mu H(\mu _0)\) the derivative of \(H\) at \(\mu _{0}\) and to identify it with a function \(\partial _{\mu } H(\mu _{0})( \, \cdot \, ) : \mathbb {R}^d \ni v \mapsto \partial _{\mu } H(\mu _{0})(v) \in \mathbb {R}^d\).
This procedure makes it possible to express \([D \tilde{H}](\tilde{X}_0)\) as a function of any random variable \(\tilde{X}_{0}\) with distribution \(\mu _0\), irrespective of where this random variable is defined.
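As a purely illustrative sketch (ours, not the paper's), this notion of derivative can be checked numerically on the simple functional \(H(\mu )=\int v^2\,\mu (dv)\), whose lift is \(\tilde{H}(\tilde{X})=\mathbb {E}[\tilde{X}^2]\) and whose derivative should be \(\partial _{\mu }H(\mu )(v)=2v\), i.e. \(D\tilde{H}(\tilde{X})=2\tilde{X}\) as an element of \(L^2\). The empirical-measure representation and all numerical choices below are our own:

```python
import numpy as np

# Illustrative sketch (ours, not the paper's): for H(mu) = \int v^2 mu(dv),
# the lifted map is H~(X) = E[X^2], and Lions' derivative should be
# d_mu H(mu)(v) = 2v, i.e. D H~(X) = 2X as an element of L^2.
rng = np.random.default_rng(0)
N = 1_000
x = rng.normal(size=N)              # particles representing X ~ mu

def lifted_H(p):
    return np.mean(p ** 2)          # empirical version of E[X^2]

# Finite-difference gradient of H~ with respect to each particle.
eps = 1e-5
grad = np.zeros(N)
for i in range(N):
    e = np.zeros(N)
    e[i] = eps
    grad[i] = (lifted_H(x + e) - lifted_H(x)) / eps

# The Riesz representative w.r.t. the empirical inner product
# <y, z> = mean(y * z) rescales the Euclidean gradient by N.
d_mu_H = N * grad

assert np.allclose(d_mu_H, 2 * x, atol=1e-3)
```

The rescaling by \(N\) in the last step reflects the identification of \(L^2\) with its dual under the normalized (empirical) inner product rather than the Euclidean one.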
Remark 6.1
Since it is customary to identify a Hilbert space to its dual, we will identify \(L^2(\tilde{\Omega })\) with its dual, and in so doing, any derivative \(D\tilde{H} (\tilde{X})\) will be viewed as an element of \(L^2(\tilde{\Omega })\). In this way, the derivative in the direction \(\tilde{Y}\) will be given by the inner product \([D\tilde{H} (\tilde{X})]\cdot \tilde{Y}\). Accordingly, the second Frechet derivative \(D^2\tilde{H}(\tilde{X})\) which should be a linear operator from \(L^2(\tilde{\Omega })\) into itself because of the identification with its dual, will be viewed as a bilinear form on \(L^2(\tilde{\Omega })\). In particular, we shall use the notation \(D^2\tilde{H} (\tilde{X})[\tilde{Y}, \tilde{Z}]\) for \(\big ([D^2\tilde{H} (\tilde{X})] (\tilde{Y})\big )\cdot \tilde{Z}\).
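The identification of \(D^2\tilde{H}(\tilde{X})\) with a bilinear form can likewise be illustrated numerically. The example below is ours (not from the paper): for \(H(\mu )=(\int v\,\mu (dv))^2\), the lift is \(\tilde{H}(\tilde{X})=(\mathbb {E}[\tilde{X}])^2\), and a direct computation gives \(D^2\tilde{H}(\tilde{X})[\tilde{Y},\tilde{Z}]=2\,\mathbb {E}[\tilde{Y}]\,\mathbb {E}[\tilde{Z}]\), which a mixed second-order difference recovers exactly since the lift is quadratic:

```python
import numpy as np

# Illustrative sketch (ours): for H(mu) = (\int v mu(dv))^2 the lift is
# H~(X) = (E[X])^2, whose second Fréchet derivative, viewed as a bilinear
# form, is D^2 H~(X)[Y, Z] = 2 E[Y] E[Z].  We check this with a mixed
# second-order finite difference, exact here because H~ is quadratic.
rng = np.random.default_rng(1)
N = 100_000
X, Y, Z = rng.normal(size=(3, N))

def H_tilde(p):
    return np.mean(p) ** 2

h = 1e-3
mixed = (H_tilde(X + h * Y + h * Z) - H_tilde(X + h * Y)
         - H_tilde(X + h * Z) + H_tilde(X)) / h ** 2

assert abs(mixed - 2 * np.mean(Y) * np.mean(Z)) < 1e-6
```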
Remark 6.2
The following result (see [3] for a proof) gives, though under stronger regularity assumptions on the Fréchet derivatives, a convenient way to handle this notion of differentiation with respect to probability distributions. If the function \(\tilde{H}\) is Fréchet differentiable and if its Fréchet derivative is uniformly Lipschitz (i.e. there exists a constant \(c>0\) such that \(\Vert D\tilde{H}(\tilde{X}) - D\tilde{H}(\tilde{X}')\Vert \le c \Vert \tilde{X} -\tilde{X}'\Vert \) for all \(\tilde{X}, \tilde{X}'\) in \(L^2(\tilde{\Omega })\)), then there exists a function \(\partial _\mu H\) such that \(|\partial _\mu H (\mu )(v)-\partial _\mu H (\mu )(v')|\le c|v-v'|\) for all \(v,v'\in \mathbb {R}^d\) and \(\mu \in \mathcal {P}_2(\mathbb {R}^d)\), and, for every \(\mu \in \mathcal {P}_2(\mathbb {R}^d)\), \(\partial _\mu H(\mu )(\tilde{X})=D\tilde{H}(\tilde{X})\) almost surely whenever \(\mu ={\mathcal L}(\tilde{X})\).
A.2 Itô's Formula Along a Flow of Conditional Measures
In the derivation of the master equation, the value function is expanded along a flow of conditional measures. As already explained in Sect. 4.3, this requires a suitable construction of the lifting.
Throughout this section, we assume that \((\Omega ,{\mathcal F},\mathbb {P})\) is of the form \((\Omega ^{0} \times \Omega ^1,{\mathcal F}^0 \otimes {\mathcal F}^1,\mathbb {P}^0 \otimes \mathbb {P}^1)\), \((\Omega ^0,{\mathcal F}^0,\mathbb {P}^0)\) supporting the common noise \(W^0\), and \((\Omega ^1,{\mathcal F}^1,\mathbb {P}^1)\) the idiosyncratic noise \(W\). So an element \(\omega \in \Omega \) can be written as \(\omega =(\omega ^0,\omega ^1) \in \Omega ^0 \times \Omega ^1\), and functionals \(H(\mu (\omega ^0))\) of a random probability measure \(\mu (\omega ^0) \in {\mathcal P}_{2}(\mathbb {R}^d)\) with \(\omega ^0 \in \Omega ^0\), can be lifted into \(\tilde{H}(\tilde{X}(\omega ^0,\cdot ))=H({\mathcal L}(\tilde{X}(\omega ^0,\cdot )))\), where \(\tilde{X}(\omega ^0,\cdot )\) is an element of \(L^2(\tilde{\Omega }^1,\tilde{\mathcal F}^1, \tilde{\mathbb {P}}^1;\mathbb {R}^d)\) with distribution \(\mu (\omega ^0)\), \((\tilde{\Omega }^1,\tilde{\mathcal F}^1, \tilde{\mathbb {P}}^1)\) being Polish and atomless. Put differently, the random variable \(\tilde{X}\) is defined on \((\tilde{\Omega } = \Omega ^0 \times \tilde{\Omega }^1, \tilde{\mathcal F}={\mathcal F}^0 \otimes \tilde{{\mathcal {F}}}^1,\tilde{\mathbb {P}}= \mathbb {P}^0 \otimes \tilde{\mathbb {P}}^1)\).
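This product-space construction can be sketched numerically: freezing \(\omega ^0\) and sampling \(\omega ^1\) realizes the conditional law given the common noise. The toy variable below is ours, not the paper's:

```python
import numpy as np

# Illustrative sketch (ours): on the product space Omega^0 x Omega^1,
# fixing omega^0 and sampling omega^1 realizes the conditional law.
# Toy variable: X(omega^0, omega^1) = W0(omega^0) + W1(omega^1), so that
# L(X(omega^0, .)) is a unit-variance Gaussian centered at the frozen W0.
rng = np.random.default_rng(4)
w0 = rng.normal()                    # one frozen common-noise sample (omega^0)
n1 = 200_000
w1 = rng.normal(size=n1)             # idiosyncratic samples (omega^1)

x_given_w0 = w0 + w1                 # X(omega^0, .), an element of L^2(Omega^1)

# Conditional mean and variance given omega^0:
assert abs(np.mean(x_given_w0) - w0) < 0.02
assert abs(np.var(x_given_w0) - 1.0) < 0.02
```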
The objective is then to expand \((\tilde{H}(\tilde{\chi }_{t}(\omega ^0,\cdot )))_{0 \le t \le T}\), where \((\tilde{\chi }_{t})_{0 \le t \le T}\) is the copy, constructed in this way, of an Itô process on \((\Omega ,{\mathcal F},\mathbb {P})\) of the form:
$$ \chi _{t} = \chi _{0} + \int _{0}^t \beta _{s} \, ds + \int _{0}^t \varsigma _{s} \, dW_{s} + \int _{0}^t \int _{\Xi } \varsigma _{s,\xi }^0 \, W^0(d\xi ,ds), $$
for \(t \in [0,T]\), assuming that the processes \((\beta _{t})_{0 \le t \le T}\), \((\varsigma _{t})_{0 \le t \le T}\) and \((\varsigma _{t,\xi }^0)_{0 \le t \le T,\xi \in \Xi }\) are progressively measurable with respect to the filtration generated by \(W\) and \(W^0\) and square integrable, in the sense that
Denoting by \((\tilde{W}_{t})_{0 \le t \le T}\), \((\tilde{\beta }_{t})_{0 \le t \le T}\), \((\tilde{\varsigma }_{t})_{0 \le t \le T}\) and \((\tilde{\varsigma }_{t,\xi }^0)_{0 \le t \le T,\xi \in \Xi }\) the copies of \((W_{t})_{0 \le t \le T}\), \((\beta _{t})_{0 \le t \le T}\), \((\varsigma _{t})_{0 \le t \le T}\) and \((\varsigma _{t,\xi }^0)_{0 \le t \le T,\xi \in \Xi }\), we then have
for \(t \in [0,T]\). In this framework, we emphasize that it makes sense to look at \(\tilde{H}(\tilde{\chi }_{t}(\omega ^0,\cdot ))\), for \(t \in [0,T]\), since
where \({\mathbb E}^0\), \({\mathbb E}^1\) and \(\tilde{\mathbb {E}}^1\) are the expectations associated to \(\mathbb {P}^0\), \(\mathbb {P}^1\) and \(\tilde{\mathbb {P}}^1\) respectively.
In order to simplify notations, we let \(\check{\chi }_{t}(\omega ^0)=\tilde{\chi }_{t}(\omega ^0,\cdot )\) for \(t \in [0,T]\), so that \((\check{\chi }_{t})_{0 \le t \le T}\) is \(L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1,\tilde{\mathbb {P}}^1;\mathbb {R}^d)\)-valued, \(\mathbb {P}^0\) almost surely. Similarly, we let \(\check{\beta }_{t}(\omega ^0)=\tilde{\beta }_{t}(\omega ^0,\cdot )\), \(\check{\varsigma }_{t}(\omega ^0)=\tilde{\varsigma }_{t}(\omega ^0,\cdot )\) and \(\check{\varsigma }_{t,\xi }^0(\omega ^0)=\tilde{\varsigma }_{t,\xi }^0(\omega ^0,\cdot )\), for \(t \in [0,T]\) and \(\xi \in \Xi \). We then claim
Proposition 6.3
On top of the assumptions and notation introduced above, assume that \(\tilde{H}\) is twice continuously Fréchet differentiable. Then, we have \(\mathbb {P}^0\) almost surely, for all \(t \in [0,T]\),
where \(\tilde{G}\) is an \({\mathcal N}(0,1)\)-distributed random variable on \((\tilde{\Omega }^1,\tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1)\), independent of \((\tilde{W}_{t})_{t \ge 0}\).
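The formula can be sanity-checked numerically in a toy model. Everything below is ours, not the paper's: we take scalar dynamics \(d\chi _{t} = \sigma \, dW_{t} + \sigma ^0 \, dW^0_{t}\) with a single scalar common noise (no \(\xi \)-dependence) and the test function \(H(\mu )=\int v^2\,\mu (dv)\), for which \(\partial _{\mu }H(\mu )(v)=2v\), \(\partial _{v}\partial _{\mu }H=2\) and \(\partial ^2_{\mu }H=0\), so that Itô's formula along the conditional measures reduces to \(dH(\mu _{t}) = (\sigma ^2+(\sigma ^0)^2)\,dt + \tilde{\mathbb {E}}[2\tilde{\chi }_{t}]\,\sigma ^0\,dW^0_{t}\):

```python
import numpy as np

# Numerical sanity check (our toy model, not the paper's):
#   d chi_t = sigma dW_t + sigma0 dW0_t,   H(mu) = \int v^2 mu(dv),
# for which d_mu H(mu)(v) = 2v, d_v d_mu H = 2, d^2_mu H = 0, so
#   dH(mu_t) = (sigma^2 + sigma0^2) dt + E[2 chi_t] sigma0 dW0_t.
rng = np.random.default_rng(2)
n_part, K, T = 100_000, 400, 1.0
dt = T / K
sigma, sigma0 = 0.7, 0.4

chi = rng.normal(size=n_part)       # particles carrying the idiosyncratic noise
H = lambda p: np.mean(p ** 2)       # empirical version of H(mu_t)

H0 = H(chi)
ito_sum = 0.0
for _ in range(K):
    dW0 = np.sqrt(dt) * rng.normal()            # common-noise increment
    dW = np.sqrt(dt) * rng.normal(size=n_part)  # idiosyncratic increments
    ito_sum += (sigma**2 + sigma0**2) * dt + 2 * np.mean(chi) * sigma0 * dW0
    chi = chi + sigma * dW + sigma0 * dW0
lhs = H(chi) - H0

assert abs(lhs - ito_sum) < 0.1
```

The left-hand side (the realized increment of \(H\) along the conditional empirical measures) matches the accumulated Itô terms up to discretization and Monte Carlo error.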
Remark 6.4
Following Remark 6.2, one can specialize Itô’s formula to a situation with smoother derivatives. See Ref. [8] for a more detailed account. Indeed, if one assumes that
1. the function \(H\) is \(C^1\) in the sense given above and its first derivative is Lipschitz;

2. for each fixed \(v \in \mathbb {R}^d\), the function \(\mu \mapsto \partial _\mu H(\mu )(v)\) is differentiable with Lipschitz derivative, and consequently, there exists a function
$$ (\mu ,v',v)\mapsto \partial ^2_{\mu }H(\mu )(v)(v') \in \mathbb {R}^{d \times d} $$
which is Lipschitz in \(v'\) uniformly with respect to \(v\) and \(\mu \), and such that \(\partial ^2_{\mu }H(\mu )(v)(\tilde{X})\) gives the Fréchet derivative of \(\mu \mapsto \partial _\mu H(\mu )(v)\) for every \(v \in \mathbb {R}^d\) as long as \(\mathcal {L}(\tilde{X}) = \mu \);

3. for each fixed \(\mu \in \mathcal {P}_2(\mathbb {R}^d)\), the function \(v \mapsto \partial _\mu H(\mu )(v)\) is differentiable with Lipschitz derivative, and consequently, there exists a bounded function \((v,\mu )\mapsto \partial _v\partial _\mu H(\mu )(v) \in \mathbb {R}^{d \times d}\) giving the value of its derivative;

4. the functions \( (\mu ,v',v)\mapsto \partial ^2_{\mu }H(\mu )(v)(v') \) and \( (\mu ,v)\mapsto \partial _{v} \partial _{\mu }H(\mu )(v) \) are continuous (the space \({\mathcal P}_{2}(\mathbb {R}^d)\) being endowed with the \(2\)-Wasserstein distance).
Then, the second order term appearing in Itô’s formula can be expressed as the sum of two explicit operators whose interpretations are more natural. Indeed, the second Fréchet derivative \(D^2\tilde{H}(\tilde{X})\) can be written as the linear operator \(\tilde{Y}\mapsto A\tilde{Y}\) on \(L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1,\mathbb {P}^1;\mathbb {R}^d)\) defined by
where \((\tilde{\Omega }^{1,\prime },\tilde{\mathcal {F}}^{1,\prime },\tilde{\mathbb {P}}^{1,\prime })\) is another Polish and atomless probability space endowed with a copy \((\tilde{X}',\tilde{Y}')\) of \((\tilde{X},\tilde{Y})\).
In particular, when \(\tilde{Y}\) is replaced by \(\tilde{Y}\times \tilde{G}\), with \(\tilde{G} \sim {\mathcal N}(0,1)\) and independent of \((\tilde{X},\tilde{Y})\), the integral over \(\tilde{\Omega }^{1,\prime }\) in the right-hand side vanishes. We then obtain
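This substitution can also be illustrated numerically. The example is ours (not from the paper): for \(H(\mu )=\int v^2\,\mu (dv)\), one has \(D^2\tilde{H}(\tilde{X})[\tilde{A},\tilde{B}]=2\,\mathbb {E}[\tilde{A}\tilde{B}]\), and replacing \(\tilde{Y}\) by \(\tilde{Y}\tilde{G}\) with \(\tilde{G}\sim {\mathcal N}(0,1)\) independent turns the second derivative into \(2\,\mathbb {E}[\tilde{Y}^2\tilde{G}^2]\approx 2\,\mathbb {E}[\tilde{Y}^2]=\mathbb {E}[\partial _{v}\partial _{\mu }H(\mu )(\tilde{X})\,\tilde{Y}^2]\), with the mean-zero factor \(\tilde{G}\) killing the remaining (off-diagonal) contribution:

```python
import numpy as np

# Illustrative check (ours): replacing Y~ by Y~ * G~ with G~ ~ N(0,1)
# independent of (X~, Y~) isolates the d_v d_mu H term.  For
# H(mu) = \int v^2 mu(dv), H~(X) = E[X^2] and D^2 H~(X)[A, B] = 2 E[A B],
# so D^2 H~(X)[Y G, Y G] = 2 E[Y^2 G^2] ~ 2 E[Y^2], with d_v d_mu H = 2.
rng = np.random.default_rng(3)
N = 100_000
X, Y, G = rng.normal(size=(3, N))   # G plays the role of G~, independent of (X, Y)

def H_tilde(p):
    return np.mean(p ** 2)

h = 1e-3
d = Y * G
second = (H_tilde(X + h * d) - 2 * H_tilde(X) + H_tilde(X - h * d)) / h ** 2

# d_v d_mu H = 2 here, so the expected value is E[2 * Y^2].
assert abs(second - 2 * np.mean(Y ** 2)) < 0.1
```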
The derivation of the master equation actually requires a more general result than Proposition 6.3. Indeed one needs to expand \((\tilde{H}(X_{t},\check{\chi }_{t}))_{0 \le t \le T}\) for a function \(\tilde{H}\) of \((x,\tilde{X}) \in \mathbb {R}^d \times L^2(\tilde{\Omega }^1, \tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1;\mathbb {R}^d)\). As before, \((\check{\chi }_{t})_{0 \le t \le T}\) is understood as \((\tilde{\chi }_{t}(\omega ^0,\cdot ))_{0 \le t \le T}\). The process \((X_{t})_{0 \le t \le T}\) is assumed to be another Itô process, defined on the original space \((\Omega ,{\mathcal F},\mathbb {P}) = (\Omega ^0 \times \Omega ^1,{\mathcal F}^0 \otimes {\mathcal F}^1,\mathbb {P}^0 \otimes \mathbb {P}^1)\), with dynamics of the form
for \(t \in [0,T]\), the processes \((b_{t})_{0 \le t \le T}\), \((\sigma _{t})_{0 \le t \le T}\) and \((\sigma _{t,\xi }^0)_{0 \le t \le T,\xi \in \Xi }\) being progressively measurable with respect to the filtration generated by \(W\) and \(W^0\), and square integrable as in (74). Under these conditions, the result of Proposition 6.3 can be extended to:
Proposition 6.5
On top of the above assumptions and notation, assume that \(\tilde{H}\) is twice continuously Fréchet differentiable on \(\mathbb {R}^d \times L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1,\tilde{\mathbb {P}}^1;\mathbb {R}^d)\). Then, we have \(\mathbb {P}\) almost surely, for all \(t \in [0,T]\),
where \(\tilde{G}\) is an \({\mathcal N}(0,1)\)-distributed random variable on \((\tilde{\Omega }^1,\tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1)\), independent of \((\tilde{W}_{t})_{t \ge 0}\). The partial derivatives in the infinite dimensional component are denoted with the index ‘\(\mu \)’. In that framework, the term \(\langle \partial _{x} D_{\mu } \tilde{H}(X_{s},\check{\chi }_{s}) \cdot \check{\varsigma }_{s,\xi }^0 ,\sigma _{s,\xi }^0 \rangle \) reads
A.3 Proof of Itô's Formula
We only provide the proof of Proposition 6.3 as the proof of Proposition 6.5 is similar.
By a standard continuity argument, it is sufficient to prove that Eq. (75) holds for any fixed \(t \in [0,T]\), \(\mathbb {P}^0\)-almost surely. In particular, we can choose \(t=T\). Moreover, by a standard approximation argument, it is sufficient to consider the case of simple processes \((\beta _{t})_{0 \le t \le T}\), \((\varsigma _{t})_{0 \le t \le T}\) and \((\varsigma _{t,\xi }^0)_{0 \le t \le T,\xi }\) of the form
$$ \beta _{t} = \sum _{i=0}^{M-1} \beta ^i \mathbf {1}_{[\tau _{i},\tau _{i+1})}(t), \quad \varsigma _{t} = \sum _{i=0}^{M-1} \varsigma ^i \mathbf {1}_{[\tau _{i},\tau _{i+1})}(t), \quad \varsigma _{t,\xi }^0 = \sum _{i=0}^{M-1} \sum _{j=1}^{N} \varsigma ^0_{i,j} \mathbf {1}_{[\tau _{i},\tau _{i+1})}(t) \mathbf {1}_{A_{j}}(\xi ), $$
where \(M,N \ge 1\), \(0=\tau _{0}<\tau _{1} < \dots < \tau _{M}=T\), \((A_{j})_{1 \le j \le N}\) are pairwise disjoint Borel subsets of \(\Xi \) and \((\beta ^i,\varsigma ^i,\varsigma ^0_{i,j})_{0 \le i \le M-1, 1 \le j \le N}\) are bounded \({\mathcal F}_{\tau _{i}}\)-measurable random variables.
The strategy is taken from Ref. [8] and consists in splitting \(\tilde{H}(\check{\chi }_{T}) - \tilde{H}(\check{\chi }_{0})\) into
where \(0=t_{0}< \dots < t_{K}=T\) is a subdivision of \([0,T]\) of step \(h\) such that, for any \(k \in \{0,\dots ,K-1\}\), there exists some \(i \in \{0,\dots ,M-1\}\) such that \([t_{k},t_{k+1}) \subset [\tau _{i},\tau _{i+1})\). We then start with approximating a general increment \(\tilde{H}(\check{\chi }_{t_{k+1}}) - \tilde{H}(\check{\chi }_{t_{k}})\), omitting to specify the dependence upon \(\omega ^0\). By Taylor’s formula, we know that we can find some \(\delta \in [0,1]\) such that
By the Kolmogorov continuity theorem, we know that, \(\mathbb {P}^0\) almost surely, the mapping \([0,T] \ni t \mapsto \tilde{\chi }_{t} \in L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1;\mathbb {R}^d)\) is continuous. Therefore, \(\mathbb {P}^0\) almost surely, the mapping \( (s,t,\delta ) \mapsto D^2 \tilde{H}(\check{\chi }_{t} + \delta (\check{\chi }_{s}- \check{\chi }_{t}))\) is continuous from \([0,T]^2 \times [0,1]\) to the space of bounded operators from \(L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1;\mathbb {R}^d)\) into itself, which proves that, \(\mathbb {P}^0\) almost surely,
\(\vert \!\vert \!\vert \cdot \vert \!\vert \!\vert _{2,\tilde{\Omega }^1}\) denoting the operator norm on the space of bounded operators on \(L^2(\tilde{\Omega }^1,\tilde{{\mathcal F}}^1, \tilde{\mathbb {P}}^1;\mathbb {R}^d)\). Now,
Since
we deduce that
in \(\mathbb {P}^0\) probability as \(h\) tends to \(0\). We now compute the various terms appearing in (76). We write
Assume that, for some \(0 \le i \le M-1\), \(\tau _{i} \le t_{k} < t_{k+1} \le \tau _{i+1}\). Then,
Note that the right-hand side is well-defined as \(\beta _{t_{k}}\) is bounded. Similarly, we notice that
Now, using the specific form of \(D \tilde{H}\), \(D \tilde{H}(\check{\chi }_{t_{k}}(\omega ^0))=\bigl (\tilde{\omega }^1 \mapsto \partial _{\mu }H({\mathcal L}( \check{\chi }_{t_{k}}(\omega ^0)))(\tilde{\chi }_{t_{k}}(\omega ^0,\tilde{\omega }^1))\bigr )\) is seen to be a \(\tilde{{\mathcal F}}_{t_{k}}\)-measurable random variable, and as such, it is orthogonal to \(\tilde{\varsigma }_{t_{k}}(\omega ^0,\cdot ) (\tilde{W}_{t_{k+1}} -\tilde{W}_{t_{k}})\), which shows that
Finally,
Now, \(W^0\bigl (A_{j} \times [t_{k},t_{k+1}) \bigr )(\omega ^0)\) behaves as a constant in the linear form above. Therefore,
Therefore, in analogy with (77), we deduce from (78), (79) and (80) that
in \(\mathbb {P}^0\) probability as \(h\) tends to \(0\).
We now reproduce this analysis for the second order derivatives. We need to compute:
Clearly, the drift has little influence on the value of \(\Gamma _{k}\). More precisely, when investigating the limit (in \(\mathbb {P}^0\) probability) of \(\sum _{k=0}^{K-1} \Gamma _{k}\), we can focus on the 'reduced' version of \(\Gamma _{k}\):
We first notice that
(and the same for the symmetric term), the reason being that
which is zero by the independence argument used in (79). Following the proof of (80),
The second line reads as the bracket of a discrete stochastic integral. Letting \(\check{\varsigma }_{i,j}^0(\omega ^0) = \tilde{\varsigma }_{i,j}^0(\omega ^0,\cdot )\), it is standard to check that
in \(\mathbb {P}^0\) probability as \(h\) tends to \(0\). Noticing that
we deduce that
in \(\mathbb {P}^0\) probability as \(h\) tends to \(0\). It remains to compute
Recall that this is the limit
which is the same as
where \(\tilde{G}\) is independent of \((\tilde{W}_{t})_{0 \le t \le T}\), and \({\mathcal N}(0,1)\) distributed. Therefore,
which is enough to prove that
in \(\mathbb {P}^0\) probability as \(h\) tends to \(0\).
© 2014 Springer International Publishing Switzerland

Carmona, R., Delarue, F. (2014). The Master Equation for Large Population Equilibriums. In: Crisan, D., Hambly, B., Zariphopoulou, T. (eds) Stochastic Analysis and Applications 2014. Springer Proceedings in Mathematics & Statistics, vol 100. Springer, Cham. https://doi.org/10.1007/978-3-319-11292-3_4

Print ISBN: 978-3-319-11291-6. Online ISBN: 978-3-319-11292-3.