A Mean Field Game Inverse Problem

Abstract

Mean-field games arise in various fields, including economics, engineering, and machine learning. They study strategic decision-making in large populations where individuals interact via specific mean-field quantities. The games' ground metrics and running costs are of essential importance but are often unknown or only partially known. This paper proposes mean-field game inverse-problem models to reconstruct the ground metrics and interaction kernels in the running costs. The observations are the macroscopic motions, specifically the density distribution and the velocity field of the agents, which may be corrupted by noise. Our models are PDE-constrained optimization problems, solvable by first-order primal-dual methods. We apply the Bregman iteration method to improve the parameter reconstruction. We numerically demonstrate that our model is both efficient and robust to noise.
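The first-order primal-dual solver referenced above follows the Chambolle–Pock pattern. As background, a minimal generic sketch of that update is given below; the operator `K`, the toy \(\ell _1\) problem, and all names in it are illustrative assumptions, not the paper's MFG model.

```python
import numpy as np

# Generic Chambolle-Pock (PDHG) sketch for min_x g(x) + f(Kx), shown on a toy
# l1-regularized least-squares problem. This illustrates the update pattern
# only; K, lam, and the data are illustrative assumptions.

rng = np.random.default_rng(0)
K = rng.standard_normal((20, 50))        # generic linear operator
x_true = np.zeros(50); x_true[::10] = 1.0
b = K @ x_true                           # observations
lam = 0.1                                # l1 weight: g(x) = lam * ||x||_1

L = np.linalg.norm(K, 2)                 # operator norm of K
tau = sigma = 0.9 / L                    # step sizes, tau * sigma * L^2 < 1

x = np.zeros(50); x_bar = x.copy(); y = np.zeros(20)
for _ in range(2000):
    # dual ascent step: prox of sigma*f*, with f(y) = 0.5*||y - b||^2
    y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
    # primal descent step: prox of tau*g is soft-thresholding
    x_new = x - tau * (K.T @ y)
    x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
    x_bar = 2.0 * x_new - x              # extrapolation step
    x = x_new
```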


Data Availability Statement

Enquiries about data availability should be directed to the authors.


Funding

The authors have not disclosed any funding.

Author information


Corresponding author

Correspondence to Wuchen Li.

Ethics declarations

Conflict of interest

The authors have not disclosed any competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This paper is supported by AFOSR MURI FA9550-18-1-0502.

Appendices

Appendix A. Table of Notations

Table 4 Table of notations

Appendix B. Proofs of the Theorems in Sect. 3

Proof of Theorem 3

For simplicity, we prove only the case where \({\mathcal {G}}\) is an indicator function, i.e., \(\rho _T=\rho _{term}\); the proof for differentiable \({\mathcal {G}}\) is similar. We start with one direction: suppose \((\rho ,{\mathbf {v}})\) is the solution of (1.1). By the assumption that \(\rho \) is strictly positive, the KKT condition of (1.1) is (2.1), and the minimizer \((\rho ,{\mathbf {v}})\) solves (2.1) with some Lagrange multiplier \(\varphi \). Then, expressing \({\mathbf {m}}\) in terms of \(\rho ,\varphi ,G_M\), the KKT condition is equivalent to (2.2): \((\rho ,\varphi )\) solves (2.2), and \({\mathbf {v}}\) can be represented as \({\mathbf {v}}=G_M^{-1}\nabla \varphi \).

Write \({\mathbf {w}}=\nabla \varphi \). The HJE (2.2b) can then be written as:

$$\begin{aligned} \begin{aligned} \varphi _t=\frac{\delta }{\delta \rho }{\mathcal {F}}(\rho )-\frac{1}{2}{\mathbf {w}}^TG_M^{-1}{\mathbf {w}}. \end{aligned} \end{aligned}$$

Let

$$\begin{aligned} \xi :=\frac{1}{2}{\mathbf {w}}^TG_M^{-1}{\mathbf {w}}-\frac{\delta }{\delta \rho }{\mathcal {F}}(\rho ). \end{aligned}$$

Since \(\varphi \) has second-order mixed derivatives by assumption, taking the spatial gradient of the HJE (so that \((\nabla \varphi )_t={\mathbf {w}}_t=-\nabla \xi \)) lets us replace the HJE with

$$\begin{aligned} {\mathbf {w}}_t+\nabla \xi =0,\quad \frac{\partial w_i}{\partial x_j}=\frac{\partial w_j}{\partial x_i},\quad i\not =j, \end{aligned}$$

where \(w_i,w_j\) are the ith and jth component of \({\mathbf {w}}\). The MFG system is transformed into:

$$\begin{aligned} \left\{ \begin{aligned}&\rho _t+\nabla \cdot (\rho G_M^{-1}{\mathbf {w}})=0\\&{\mathbf {w}}_t+\nabla \left( \frac{1}{2}{\mathbf {w}}^TG_M^{-1}{\mathbf {w}}-\frac{\delta }{\delta \rho }{\mathcal {F}}(\rho )\right) =0\\&\frac{\partial w_i}{\partial {x_j}}=\frac{\partial w_j}{\partial {x_i}}, \quad i\not =j \end{aligned} \right. \end{aligned}$$
(B.1)

Since the domain \({\mathbb {T}}^d\) is multiply connected, compatibility requires the integral of \(w_i\) over the path \(s_i({\hat{x}}^i,\cdot )\) to vanish:

$$\begin{aligned} \int _{s_i({\hat{x}}^i,\cdot )} w_i dS_x=0,\quad {\hat{x}}^i\in {\mathbb {T}}^{d-1},i=1,2,\ldots ,d. \end{aligned}$$
(B.2)

Since

$$\begin{aligned} {\mathbf {v}}=\frac{{\mathbf {m}}}{\rho }=G_M^{-1}\nabla \varphi =G_M^{-1}{\mathbf {w}}, \end{aligned}$$

by substituting \({\mathbf {w}}\) with \(G_M{\mathbf {v}}\), (B.1) and (B.2) can be reformulated into (3.1). Hence the minimizer \((\rho ,{\mathbf {v}})\) of the optimization problem solves (3.1).

Conversely, assume that \((\rho ,{\mathbf {v}})\) is a solution of (3.1). Since \(G_M{\mathbf {v}}\) satisfies (3.1c) and (3.1d), by Helmholtz's theorem \(G_M{\mathbf {v}}\) is the gradient of some function \(\varphi \) on \({\mathbb {T}}^d\), i.e., \(G_M{\mathbf {v}}=\nabla \varphi \). Inserting \(G_M{\mathbf {v}}=\nabla \varphi \) into (3.1a) and (3.1b), we get

$$\begin{aligned} \rho _t+\nabla \cdot (\rho G_{M}^{-1}\nabla \varphi )&=0 \\ (\nabla \varphi )_t+\nabla \left( \frac{1}{2}\nabla \varphi ^TG_{M}^{-1}\nabla \varphi -\frac{\delta }{\delta \rho }{\mathcal {F}}(\rho )\right)&=0 \end{aligned}$$

Further, we have

$$\begin{aligned}&\rho _t+\nabla \cdot (\rho G_{M}^{-1}\nabla \varphi )=0 \end{aligned}$$
(B.4a)
$$\begin{aligned}&\varphi _t+\frac{1}{2}\nabla \varphi ^TG_{M}^{-1}\nabla \varphi -\frac{\delta }{\delta \rho }{\mathcal {F}}(\rho )=C(t) \end{aligned}$$
(B.4b)

Without loss of generality, we may assume \(C(t)=0\); otherwise, we replace \(\varphi \) with \({\tilde{\varphi }}\):

$$\begin{aligned} {\tilde{\varphi }}=\varphi -\int _0^tC(s)\,ds, \end{aligned}$$

where \(\nabla {\tilde{\varphi }}=G_M{\mathbf {v}}\) still holds. Set \({\mathbf {m}}=\rho G_M^{-1}\nabla \varphi \) and insert it into (B.4). By substituting variables among \({\mathbf {m}},\rho ,\varphi \), we recover (2.1). By the strict convexity of the optimization problem (1.1), the pair \((\rho ,{\mathbf {m}})\) satisfying the KKT condition is a minimizer of (1.1). \(\square \)

Proof of Theorem 5

Take \(\Phi ,\pmb {\psi },\pmb {\chi },\Theta \) as the dual variables. In particular,

$$\begin{aligned} \Theta (x,t)=\left( \Theta _1((x_2,x_3),t),\Theta _2((x_1,x_3),t),\Theta _3((x_1,x_2),t)\right) ^T, \end{aligned}$$

and \(\Theta _i\) is the dual variable of the constraint:

$$\begin{aligned} \int _{s_i({\hat{x}}^i,\cdot )} (G_M{\mathbf {v}})_i\,dS_x=0,\qquad i=1,2,3. \end{aligned}$$

The Lagrangian of (4.1) can be written as:

$$\begin{aligned} \begin{aligned}&\mathop {{\text {min}}}\limits _{ \begin{array}{c} \rho ,{\mathbf {v}},g_0 \end{array} }\mathop {{\text {max}}}\limits _{\Phi ,\pmb {\psi },\pmb {\chi },\Theta } {\mathcal {J}}(\rho ,{\mathbf {v}},\rho _0,\rho _T,{\mathbf {v}}_T;{\hat{\rho }},{\hat{{\mathbf {v}}}},{\hat{\rho }}_0,{\hat{\rho }}_T) +\frac{\gamma }{2}\Vert \nabla g_0 \Vert _2^2\\&\quad + \int _0^T\int _{{\mathbb {T}}^3}\Phi \left( \rho _t+\nabla \cdot (\rho {\mathbf {v}}) \right) + \pmb {\chi }\cdot \left( \nabla \times (G_M{\mathbf {v}})\right) \\&\quad + \pmb {\psi }\cdot \left( (G_M {\mathbf {v}})_t+\nabla \left( \frac{1}{2} {\mathbf {v}}^T G_M {\mathbf {v}}-F'(\rho )\right) \right) +\Theta \cdot (G_M{\mathbf {v}}) \,dx\,dt . \end{aligned} \end{aligned}$$

Besides, by integration by parts, we have

$$\begin{aligned} \begin{aligned}&\frac{1}{2}\int _{{\mathbb {T}}^3} \nabla g_0\cdot \nabla g_0\,dx=\frac{1}{2}\int _{{\mathbb {T}}^3}\nabla \cdot (g_0\nabla g_0)-g_0\triangle g_0 dx=-\frac{1}{2}\int _{{\mathbb {T}}^3} g_0\triangle g_0\,dx, \\&\quad \int _0^T\int _{{\mathbb {T}}^3} \Phi \left( \rho _t+\nabla \cdot (\rho {\mathbf {v}}) \right) \,dx\,dt\\&\quad = \int _0^T\int _{{\mathbb {T}}^3} (\rho \Phi )_t-\rho \Phi _t+\nabla \cdot (\Phi \rho {\mathbf {v}}) -\rho \nabla \Phi \cdot {\mathbf {v}}\,dx\,dt\\&\quad =\left. \int _{{\mathbb {T}}^3} \rho \Phi \,dx\right| _{t=0}^T +\int _0^T\int _{{\mathbb {T}}^3}-\rho \Phi _t-\rho \nabla \Phi \cdot {\mathbf {v}}\,dx\,dt, \\&\quad \int _0^T\int _{{\mathbb {T}}^3} \pmb {\psi }\cdot \left( (G_M {\mathbf {v}})_t+\nabla \left( \frac{1}{2} {\mathbf {v}}^T G_M {\mathbf {v}}-F'(\rho )\right) \right) \,dx\,dt\\&\quad =\int _0^T\int _{{\mathbb {T}}^3} (\pmb {\psi }^T G_M {\mathbf {v}})_t-\pmb {\psi }_t^TG_M{\mathbf {v}}+ \nabla \cdot \left( \pmb {\psi }\left( \frac{1}{2} {\mathbf {v}}^T G_M {\mathbf {v}}-F'(\rho )\right) \right) \\&\qquad -\nabla \cdot \pmb {\psi }\left( \frac{1}{2} {\mathbf {v}}^T G_M {\mathbf {v}}-F'(\rho )\right) \,dx\,dt,\\&\quad =\left. \int _{{\mathbb {T}}^3}\pmb {\psi }^T G_M {\mathbf {v}}\,dx \right| _{t=0}^{T} +\int _0^T\int _{{\mathbb {T}}^3} -\pmb {\psi }_t^TG_M{\mathbf {v}}-\nabla \cdot \pmb {\psi }\left( \frac{1}{2} {\mathbf {v}}^T G_M {\mathbf {v}}-F'(\rho )\right) \,dx\,dt \\&\quad \int _0^T\int _{{\mathbb {T}}^3}\pmb {\chi }\cdot (\nabla \times (G_M{\mathbf {v}}))\,dx\,dt\\&\quad = \int _0^T\int _{{\mathbb {T}}^3}\nabla \cdot \left( (G_M{\mathbf {v}})\times \pmb {\chi }\right) +(G_M{\mathbf {v}})\cdot (\nabla \times \pmb {\chi })\,dx\,dt\\&\quad =\int _0^T\int _{{\mathbb {T}}^3}(G_M{\mathbf {v}})\cdot (\nabla \times \pmb {\chi })\,dx\,dt. \end{aligned} \end{aligned}$$

We insert these equalities into the Lagrangian. Taking the first variation with respect to \(\rho ,{\mathbf {v}}\) at the initial and terminal times, i.e., on \({\mathbb {T}}^3\times \{0\}\) and \({\mathbb {T}}^3\times \{T\}\), we deduce

$$\begin{aligned} \frac{\delta }{\delta \rho _0}{\mathcal {J}}\!-\!\Phi (x,0)\!=\!0,\quad \frac{\delta }{\delta \rho _T}{\mathcal {J}}\!+\!\Phi (x,T)=0,\quad \pmb {\psi }(x,0)\!=\!0,\quad G_M\pmb {\psi }(x,T)\!+\!\frac{\delta }{\delta {\mathbf {v}}_T}{\mathcal {J}}\!=\!0. \end{aligned}$$

Taking the first variation with respect to \(\rho ,{\mathbf {v}},g_0\) on \({\mathbb {T}}^3\times [0,T]\), and combining this with the fact that \(\pmb {\psi }\) vanishes at \(t=0,T\), we obtain the first three equations in (4.3). Finally, the variation with respect to the dual variables leads back to the constraints in (3.4). \(\square \)

The proof of Theorem 6 is closely analogous to that of Theorem 5, except for some details in handling the convolution operator. The summation over \(x^*\in {\mathcal {S}}(x)\) comes from the symmetry of the convolution kernel \(K(\cdot ,\cdot )\). We omit the proof here.

Appendix C. Complementary Computational Results

Test 7

We apply our algorithm to solve Model 3 in 1D. The observations are noise-free, and \(G_M=g_0\) is taken. We discretize the problem on a \(50\times 30\) grid. The value of \(G_M\) at a single grid point is assumed known. The scaling parameters \(\alpha ,\alpha _0,\beta \) are taken as in (6.1), and \(\gamma \) varies in \(\{10^{-8},10^{-7},\ldots ,10^{-3}\}\). The settings for the iteration step size and the number of iterations follow Test 3. We test our algorithm on two ground metrics of different shapes; a sketch of the parameter sweep is given below. The results are depicted in Figs. 9 and 10. The error estimate is in Table 5 (Table 6).
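A minimal sketch of this parameter sweep, assuming a hypothetical stand-in `solve_model3_1d` for the primal-dual solver of Model 3; only the grid size, the metric shapes, and the \(\gamma \) range follow the text.

```python
import numpy as np

# Sketch of the Test 7 parameter sweep. The 50 x 30 grid, the metric shapes,
# and the gamma range come from the text; `solve_model3_1d` is a hypothetical
# placeholder (it just blends the true metric toward a constant as gamma
# grows, so the script runs end to end).

def true_metric(x, k):
    # Ground metrics of Figs. 9 and 10: G_M(x) = 1 - 0.6*sin(k*pi*x)^2
    return 1.0 - 0.6 * np.sin(k * np.pi * x) ** 2

def solve_model3_1d(g_true, gamma, nt=30):
    # Placeholder solver: stronger regularization -> flatter reconstruction.
    w = gamma / (gamma + 1e-6)
    return (1.0 - w) * g_true + w * g_true.mean()

nx = 50
x = np.arange(nx) / nx
for k in (1, 2):                                     # the two tested shapes
    g_true = true_metric(x, k)
    for gamma in [10.0 ** p for p in range(-8, -2)]:  # 1e-8, ..., 1e-3
        g0 = solve_model3_1d(g_true, gamma)
        abs_err = np.abs(g0 - g_true).max()
        rel_err = abs_err / np.abs(g_true).max()
        print(f"k={k}  gamma={gamma:.0e}  abs={abs_err:.4f}  rel={rel_err:.4f}")
```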

Fig. 9

The result for \(G_M(x)=1-0.6 \sin (\pi x)^2\). From left to right, \(\gamma =10^{-8},10^{-7},\ldots ,10^{-3}\). The red curve presents the learned ground metric, and the blue curve depicts the true metric

Fig. 10

The result for \(G_M(x)=1-0.6 \sin (2\pi x)^2\). From left to right, \(\gamma =10^{-8},10^{-7},\ldots ,10^{-3}\)

From the numerical results, we conclude that the optimal parameter for the inverse model depends on the shape of the ground metric. When \(g_0\) fluctuates more, a smaller \(\gamma \) should be chosen to preserve the oscillation of \(g_0\). This matches our intuition.

Table 5 The absolute (relative) error in Test 7 for different source data

Test 8

In this test, we use 2-dimensional noisy observations. The experiment is based on the data in Test 1, corrupted by the additive noise defined in (6.2) with level \(\epsilon ^*=0.1\). The scaling parameters are selected as \(\alpha =1/\Vert {\hat{\rho }}\Vert ^2,\alpha _0=0,\beta = 1/{\hat{{\mathbf {v}}}}^T{\hat{{\mathbf {v}}}},\gamma =0.01\). During the iteration, as prior information, \(g_0\) on one row of grids is fixed to the ground truth; a sketch of this setup is given below. All other settings follow Test 1. The result is depicted in Fig. 11.
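A minimal sketch of this setup with synthetic stand-ins for the observations; the noise model (6.2) and the solver itself are not reproduced here, and the grid shape and row index are illustrative assumptions.

```python
import numpy as np

# How the Test 8 scaling parameters can be formed from the observations, and
# how the prior row of g_0 is pinned. All arrays are synthetic stand-ins.

rng = np.random.default_rng(1)
rho_hat = 1.0 + 0.1 * rng.random((50, 50))   # observed density (stand-in)
v_hat = rng.standard_normal((2, 50, 50))     # observed velocity field (stand-in)

alpha = 1.0 / np.sum(rho_hat ** 2)           # alpha   = 1 / ||rho_hat||^2
alpha0 = 0.0                                 # alpha_0 = 0
beta = 1.0 / np.sum(v_hat ** 2)              # beta    = 1 / (v_hat^T v_hat)
gamma = 0.01

g0 = rng.random((50, 50))                    # current metric iterate
g0_true = np.ones((50, 50))                  # stand-in ground truth
known_row = 25                               # hypothetical choice of row
g0[known_row] = g0_true[known_row]           # pin the prior row each iteration
```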

Test 9

In Test 8, the prior information about \(g_0\) on a row of grids is assumed known. In this test, we inspect the sensitivity of the ground-metric reconstruction with respect to this prior information. We perturb the value of \(g_0\) on that row to \(1.05\times \) the ground truth, and this row of grids is kept fixed during the iteration. All other settings are the same as in Test 8. The computational result is shown in Fig. 12.

Table 6 The error in selected Bregman iterations; the last column is intended for comparison with the result obtained by taking the optimal \(\gamma \) in the non-Bregman algorithm
Fig. 11

The recreated metric kernel from noisy data in the 2-dimensional case, with \(\epsilon ^*=0.1\). From left to right, the figures correspond to the recreated metric kernel \(g_0\), the ground truth, and the residue \(|g_0-\bar{g_0}|\). The absolute error is 0.0900, and the relative error is 0.1683

Fig. 12

The sensitivity test for the 2D ground metric reconstruction. The prior information on \(g_0\) is biased by 5%. From left to right, the figures are heat maps of the reconstructed ground metric, the ground truth, and the absolute difference between them. The absolute error is 0.0907, and the relative error is 0.1694

Test 10

In this test, we run Algorithm 2 on the same data as in Test 4, with noise level \(\epsilon ^*=1\). We set \(\gamma =10^{-1}\) and take \(3\times 10^6\) primal-dual sub-iterations in each Bregman iteration; a generic sketch of the Bregman outer loop is given below. For comparison, a non-Bregman result of Algorithm 1 is also given, run for \(3\times 10^6\) iterations. The results of the first 13 Bregman iterations are depicted in Fig. 13.
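A generic sketch of the Bregman outer loop in the style of Osher et al. (2005), on a toy quadratic problem. In the paper, each inner solve is the \(3\times 10^6\) primal-dual sub-iterations of Algorithm 2; the closed-form least-squares step below is only a stand-in so the sketch runs on its own.

```python
import numpy as np

# Generic Bregman outer loop on a toy quadratic problem. J(u) = 0.5*||u||^2
# plays the role of the regularizer and H(u) = 0.5*||Au - b||^2 the data fit;
# A, b, and the inner closed-form solve are illustrative stand-ins.

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
u_true = rng.standard_normal(30)
b = A @ u_true + rng.standard_normal(30)     # noisy data (illustrative)

gamma = 1e-1                                 # regularization weight, as in Test 10
p = np.zeros(30)                             # accumulated subgradient of J
for k in range(13):                          # 13 Bregman iterations, as in Fig. 13
    # inner solve: argmin_u  H(u) + gamma * (J(u) - <p, u>)
    u = np.linalg.solve(A.T @ A + gamma * np.eye(30), A.T @ b + gamma * p)
    # Bregman update: p <- p - (1/gamma) * grad H(u)  (equivalently p = grad J(u))
    p = p - (A.T @ (A @ u - b)) / gamma
    print(k + 1, np.linalg.norm(u - u_true))
```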

Fig. 13

The recreated convolution kernels from noisy data over the first 13 Bregman iterations, with \(\gamma =10^{-1}\). (The last figure shows the recreated convolution kernel from Algorithm 1 with the nearly optimal \(\gamma =10^{-3}\))


About this article


Cite this article

Ding, L., Li, W., Osher, S. et al. A Mean Field Game Inverse Problem. J Sci Comput 92, 7 (2022). https://doi.org/10.1007/s10915-022-01825-8
