A Maximum Principle for Mean-Field SDEs with Time Change


Abstract

Time change is a powerful technique for generating noise processes and building flexible models. In the framework of time-changed Brownian and Poisson random measures, we study the existence and uniqueness of a solution to a general mean-field stochastic differential equation. We consider a mean-field stochastic control problem for mean-field controlled dynamics and present both a necessary and a sufficient maximum principle. For this, we study existence and uniqueness of solutions to mean-field backward stochastic differential equations in the context of time change. An example of centralised control in an economy with specialised sectors is provided.


References

  1. Andersson, D., Djehiche, B.: A maximum principle for SDEs of mean-field type. Appl. Math. Optim. 63, 311–356 (2011)

  2. Applebaum, D.: Lévy Processes and Stochastic Calculus. Cambridge University Press, Cambridge (2009)

  3. Bensoussan, A., Frehse, J., Yam, P.: Mean Field Games and Mean Field Type Control Theory. Springer, Berlin (2013)

  4. Buckdahn, R., Li, J., Peng, S.: Mean-field backward stochastic differential equations and related partial differential equations. Stoch. Process. Appl. 119, 3133–3154 (2009)

  5. Cairoli, R., Walsh, J.: Stochastic integrals in the plane. Acta Math. 134, 111–183 (1975)

  6. Carmona, R., Delarue, F.: Forward-backward stochastic differential equations and controlled McKean-Vlasov dynamics. Ann. Prob. 43, 2647–2700 (2015)

  7. Carmona, R., Delarue, F., Lachapelle, A.: Control of McKean-Vlasov dynamics versus mean field games. Math. Financ. Econ. 7, 131–166 (2013)

  8. Di Nunno, G., Eide, I.B.: Minimal-variance hedging in large financial markets: random fields approach. Stoch. Anal. Appl. 28, 54–85 (2010)

  9. Di Nunno, G., Sjursen, S.: BSDEs driven by time-changed Lévy noises and optimal control. Stoch. Process. Appl. 124, 1679–1709 (2014)

  10. Di Nunno, G., Sjursen, S.: On Chaos representation and orthogonal polynomials for the doubly stochastic Poisson process. In: Dalang, R.C., Dozzi, M., Russo, F. (eds.) Seminar on Stochastic Analysis, Random Fields and Applications VII, pp. 23–54. Springer, Basel (2013)

  11. Grigelionis, B.: Characterisation of stochastic processes with conditionally independent increments. Lith. Math. J. 15, 562–567 (1975)

  12. Huang, M., Malhamé, R.P., Caines, P.E.: Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst. 6, 221–251 (2006)

  13. Jourdain, B., Méléard, S., Woyczynski, W.: Nonlinear SDEs driven by Lévy processes and related PDEs. Alea 4, 1–29 (2008)

  14. Kallenberg, O.: Foundations of Modern Probability. Springer, Berlin (1997)

  15. Lasry, J.-M., Lions, P.-L.: Mean field games. Jpn. J. Math. 2, 229–260 (2007)

  16. Pham, H., Wei, X.: Bellman equation and viscosity solutions for mean-field stochastic control problem. arXiv:1512.07866v2 (2016)

  17. Pham, H., Wei, X.: Dynamic programming for optimal control of stochastic McKean-Vlasov dynamics. arXiv:1604.04057v2 (2017)

  18. Serfozo, R.F.: Processes with conditional stationary independent increments. J. Appl. Prob. 9, 303–315 (1972)

  19. Willett, D.: A linear generalization of Gronwall’s inequality. Proc. Am. Math. Soc. 16(4), 774–778 (1965)

  20. Willett, D.: Nonlinear vector integral equations as contraction mappings. Arch. Rational Mech. Anal. 15, 79–86 (1964)

Acknowledgements

The financial support from the Norwegian Research Council within the ISP project 239019 “FINance, INsurance, Energy, Weather and STOCHastics” (FINEWSTOCH) and the project 250768/F20 “Challenges in STOchastic CONtrol, INFormation and Applications” (STOCONINF) is gratefully acknowledged.

Author information

Corresponding author

Correspondence to Giulia Di Nunno.

Appendix

Proof that the mapping \(\Psi \) in (4.5) is a contraction. Fix \(\beta >0\). We define the norm \(||\cdot ||_{\beta }\) on \( L^2_{ad}(\mathbb {G}) \times \mathcal {I}\) by

$$\begin{aligned} ||(Y,Z)||_{\beta }:=\left( E\left[ \int \limits _0^Te^{\beta s}(|Y_s|^2+||Z_s||^2_{\lambda _s})ds\right] \right) ^{\frac{1}{2}} \end{aligned}$$

which is equivalent to the canonical one.
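Indeed, the equivalence follows from the elementary bounds \(1\le e^{\beta s}\le e^{\beta T}\) for \(s\in [0,T]\):

$$\begin{aligned} E\left[ \int \limits _0^T(|Y_s|^2+||Z_s||^2_{\lambda _s})ds\right] \le ||(Y,Z)||^2_{\beta }\le e^{\beta T}E\left[ \int \limits _0^T(|Y_s|^2+||Z_s||^2_{\lambda _s})ds\right] , \end{aligned}$$

so the \(||\cdot ||_{\beta }\)-norm and the canonical norm on \(L^2_{ad}(\mathbb {G})\times \mathcal {I}\) bound each other with constants 1 and \(e^{\beta T/2}\).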

Let \((y^{(1)},z^{(1)}),\,(y^{(2)},z^{(2)})\in L^2_{ad}(\mathbb {G}) \times \mathcal {I}\) be two given inputs and define \((Y^{(1)},Z^{(1)}):=\Psi (y^{(1)},z^{(1)})\), \((Y^{(2)},Z^{(2)}):=\Psi (y^{(2)},z^{(2)})\), which are indeed the corresponding solutions of (4.4). Furthermore, define

$$\begin{aligned}&\hat{Y}:=Y^{(1)}-Y^{(2)},\,\hat{y}:=y^{(1)}-y^{(2)},\,\\&\hat{Z}:=Z^{(1)}-Z^{(2)},\,\hat{z}:=z^{(1)}-z^{(2)}. \end{aligned}$$

Then \((\hat{Y},\hat{Z})\) satisfies the BSDE

$$\begin{aligned} {\left\{ \begin{array}{ll} d\hat{Y}_t&{}=E'\left[ h\left( t,\lambda _t,\lambda '_t,Y^{(1)}_t,(y^{(1)}_t)', Z^{(1)}_t,(z^{(1)}_t)'\right) \right. \\ &{}\qquad \left. -h\left( t,\lambda _t,\lambda '_t,Y^{(2)}_t, (y^{(2)}_t)',Z^{(2)}_t,(z^{(2)}_t)'\right) \right] dt\\ &{}\quad +\int _\mathbb {R}\left( Z^{(1)}_t(z)-Z^{(2)}_t(z)\right) \mu (dt,dz),\\ \hat{Y}_T&{}=F-F=0. \end{array}\right. } \end{aligned}$$

Applying Itô’s formula to \(e^{\beta s}|\hat{Y}_s|^2\) yields

$$\begin{aligned} 0&\ge E\left[ e^{\beta T}|\hat{Y}_T|^2-|\hat{Y}_0|^2\right] =E\Bigg [\int \limits _0^T\beta e^{\beta s}|\hat{Y}_{s-}|^2ds+\int \limits _0^T\int \limits _\mathbb {R}2e^{\beta s}\hat{Y}_{s-}\hat{Z}_s(\xi )\mu (ds,d\xi ) \\&\quad +\int \limits _0^T2e^{\beta s}\hat{Y}_{s-}E'\Big [h\Big (s,\lambda _s,\lambda '_s,Y^{(1)}_s,(y^{(1)}_s)',Z^{(1)}_s, (z^{(1)}_s)'\Big )\\&\qquad -h\Big (s,\lambda _s,\lambda '_s,Y^{(2)}_s,(y^{(2)}_s)', Z^{(2)}_s,(z^{(2)}_s)'\Big )\Big ]ds\\&\quad +\int \limits _0^Te^{\beta s}|\hat{Z}_s(0)|^2\lambda ^B_sds\\&\quad +\int \limits _0^T\int \limits _{\mathbb {R}_0} \Big \{ e^{\beta s}\big (|\hat{Y}_{s-}+\hat{Z}_s(\xi )|^2-|\hat{Y}_{s-}|^2\big )-2e^{\beta s}\hat{Y}_{s-}\hat{Z}_s(\xi ) \Big \} \lambda ^H_s\nu (d\xi )ds\Bigg ]. \end{aligned}$$
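To see how the \(||\cdot ||_{\lambda _s}\)-norm of \(\hat{Z}\) arises in the next estimate (assuming, as the notation \(\lambda ^B_s\) suggests, that \(||Z||^2_{\lambda _s}\) combines the Brownian component \(|Z(0)|^2\lambda ^B_s\) with the jump component \(\int _{\mathbb {R}_0}|Z(\xi )|^2\lambda ^H_s\nu (d\xi )\)), note the elementary identity in the last integral:

$$\begin{aligned} |\hat{Y}_{s-}+\hat{Z}_s(\xi )|^2-|\hat{Y}_{s-}|^2-2\hat{Y}_{s-}\hat{Z}_s(\xi )=|\hat{Z}_s(\xi )|^2, \end{aligned}$$

so that the two quadratic terms above add up to \(\int _0^Te^{\beta s}||\hat{Z}_s||^2_{\lambda _s}ds\), which is the term appearing on the left-hand side below.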

Since \(Z^{(1)},\,Z^{(2)}\in \mathcal I\), the process \(M_t:=\int _0^t\int _\mathbb {R}\hat{Z}_s(z)\mu (ds,dz)\) is a martingale, so the \(\mu \)-integral above has zero expectation. Since the filtration \(\mathbb {G}\) is right-continuous (see [9, Lemma 2.4]), Doob’s regularization theorem (see, e.g., [14, Theorem 6.27]) implies that M has a càdlàg version and, since the integral w.r.t. ds is continuous, \(\hat{Y}\) admits a càdlàg version as well. As this càdlàg version has only countably many discontinuities, we may replace \(\hat{Y}_{s-}\) by \(\hat{Y}_{s}\) in the integrals w.r.t. ds. Rearranging terms and using the Lipschitz continuity of h, given by (C3), we obtain

$$\begin{aligned}&E\left[ \int \limits _0^T\beta e^{\beta s}|\hat{Y}_{s}|^2ds+\int \limits _0^Te^{\beta s}||\hat{Z}_{s}||_{\lambda _s}^2ds\right] \\&\quad \le -E\Bigg [\int \limits _0^T2e^{\beta s}\hat{Y}_{s}E'\Bigg [h\Bigg (s,\lambda _s,\lambda '_s,Y^{(1)}_s,(y^{(1)}_s)', Z^{(1)}_s,(z^{(1)}_s)'\Bigg )\\&\qquad -h\Bigg (s,\lambda _s,\lambda '_s,Y^{(2)}_s, (y^{(2)}_s)',Z^{(2)}_s,(z^{(2)}_s)'\Bigg )\Bigg ]ds\Bigg ]\\&\quad \le E\Bigg [\int \limits _0^T2e^{\beta s}|\hat{Y}_{s}|E'\Bigg [\Bigg |h\Bigg (s,\lambda _s,\lambda '_s,Y^{(1)}_s,(y^{(1)}_s)', Z^{(1)}_s,(z^{(1)}_s)'\Bigg )\\&\qquad -h\Bigg (s,\lambda _s,\lambda '_s,Y^{(2)}_s, (y^{(2)}_s)',Z^{(2)}_s,(z^{(2)}_s)'\Bigg )\Bigg |\Bigg ]ds\Bigg ]\\&\quad \le E\Bigg [\int \limits _0^T2Ke^{\beta s}|\hat{Y}_{s}|E'\Bigg [|Y^{(1)}_s-Y^{(2)}_s|+|(y^{(1)}_s)'-(y^{(2)}_s)'|\\&\qquad +||Z^{(1)}_s-Z^{(2)}_s||_{\lambda _s}+||(z^{(1)}_s)'-(z^{(2)}_s)'||_{\lambda '_s}\Bigg ]ds\Bigg ] \end{aligned}$$

By the definition of the operator \(E'\), we have

$$\begin{aligned}&E'[|Y^{(1)}_s-Y^{(2)}_s|]=|Y^{(1)}_s-Y^{(2)}_s|=|\hat{Y}_s|,\\&E'[|(y^{(1)}_s)'-(y^{(2)}_s)'|]=E[|y^{(1)}_s-y^{(2)}_s|]=E[|\hat{y}_s|],\\&E'[||Z^{(1)}_s-Z^{(2)}_s||_{\lambda _s}]=||Z^{(1)}_s-Z^{(2)}_s||_{\lambda _s}=||\hat{Z}_s||_{\lambda _s},\\&E'[||(z^{(1)}_s)'-(z^{(2)}_s)'||_{\lambda '_s}]=E[||z^{(1)}_s-z^{(2)}_s||_{\lambda _s}]=E[||\hat{z}_s||_{\lambda _s}]. \end{aligned}$$

Making use of the fact that \(2ab\le ka^2+\frac{1}{k}b^2\) for all \(a,b\in \mathbb {R}\) and all \(k>0\) (a consequence of \(0\le (\sqrt{k}\,a-b/\sqrt{k})^2\)), and choosing \(k:=16K\), \(a:=|\hat{Y}_s|\), \(b:=|\hat{Y}_s|+E[|\hat{y}_s|]+||\hat{Z}_s||_{\lambda _s}+E[||\hat{z}_s||_{\lambda _s}]\), we get

$$\begin{aligned}&E\left[ \int \limits _0^T\beta e^{\beta s}|\hat{Y}_{s}|^2ds+\int \limits _0^Te^{\beta s}||\hat{Z}_{s}||_{\lambda _s}^2ds\right] \\&\quad \le 16K^2E\left[ \int \limits _0^Te^{\beta s}|\hat{Y}_{s}|^2ds\right] +\frac{1}{16}E\left[ \int \limits _0^Te^{\beta s}(|\hat{Y}_s|+E[|\hat{y}_s|]+||\hat{Z}_s||_{\lambda _s}+E[||\hat{z}_s||_{\lambda _s}])^2ds\right] \\&\quad \le 16K^2E\left[ \int \limits _0^Te^{\beta s}|\hat{Y}_{s}|^2ds\right] +\frac{1}{4}E\left[ \int \limits _0^Te^{\beta s}|\hat{Y}_s|^2ds\right] +\frac{1}{4}E\left[ \int \limits _0^Te^{\beta s}|\hat{y}_s|^2ds\right] \\&\qquad +\frac{1}{4}E\left[ \int \limits _0^Te^{\beta s}||\hat{Z}_s||_{\lambda _s}^2ds\right] +\frac{1}{4}E\left[ \int \limits _0^Te^{\beta s}||\hat{z}_s||_{\lambda _s}^2ds\right] , \end{aligned}$$

where we also used that \((\sum _{i=1}^na_i)^2\le n\sum _{i=1}^na_i^2\) and \(E[X]^2\le E[X^2]\). This yields

$$\begin{aligned}&\left( \beta -16K^2-\frac{1}{4}\right) E\left[ \int \limits _0^Te^{\beta s}|\hat{Y}_{s}|^2ds\right] +\frac{3}{4}E\left[ \int \limits _0^Te^{\beta s}||\hat{Z}_{s}||_{\lambda _s}^2ds\right] \\&\quad \le \frac{1}{4}E\left[ \int \limits _0^Te^{\beta s}(|\hat{y}_s|^2+||\hat{z}_s||_{\lambda _s}^2)ds\right] \end{aligned}$$

Choosing \(\beta =16K^2+1>0\), we finally get

$$\begin{aligned}&||(\hat{Y},\hat{Z})||^2_{\beta }=E\left[ \int \limits _0^Te^{\beta s}(|\hat{Y}_{s}|^2+||\hat{Z}_{s}||_{\lambda _s}^2)ds\right] \\&\quad \le \frac{1}{3}E\left[ \int \limits _0^Te^{\beta s}(|\hat{y}_s|^2+||\hat{z}_s||_{\lambda _s}^2)ds\right] =\frac{1}{3}||(\hat{y},\hat{z})||^2_{\beta }. \end{aligned}$$
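The constant \(\frac{1}{3}\) comes from the coefficient arithmetic in the previous inequality: with \(\beta =16K^2+1\),

$$\begin{aligned} \beta -16K^2-\frac{1}{4}=\frac{3}{4}, \end{aligned}$$

and dividing both sides by \(\frac{3}{4}\) yields the factor \(\frac{1}{4}\big /\frac{3}{4}=\frac{1}{3}\), hence \(||(\hat{Y},\hat{Z})||_{\beta }\le \frac{1}{\sqrt{3}}\,||(\hat{y},\hat{z})||_{\beta }\).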

This shows that \(\Psi \) is a contraction on \(L^2_{ad}(\mathbb {G})\times \mathcal {I}\) with respect to \(||\cdot ||_{\beta }\).

Cite this article

Di Nunno, G., Haferkorn, H. A Maximum Principle for Mean-Field SDEs with Time Change. Appl Math Optim 76, 137–176 (2017). https://doi.org/10.1007/s00245-017-9426-0
