Abstract
The singular perturbation of mean field game systems arising from minimization problems with control of acceleration is addressed; that is, we analyze the behavior of solutions as the acceleration cost vanishes. In this setting, the Hamiltonian fails to be strictly convex and coercive w.r.t. the momentum variable, so the classical results for Tonelli Hamiltonian systems cannot be applied. Nevertheless, we show that the limit system is of MFG type in two different cases: we first study the convergence to the classical MFG system and then, by a finer analysis of the Euler–Lagrange flow associated with the control of acceleration, we prove the convergence to a class of MFG systems known as MFG of control.
1 Introduction
The study of the singular perturbation of control systems has a long history, going back to [5,6,7] and the references therein. Such a problem concerns the analysis of systems in which some state variables evolve at a much faster time scale than the others. Generally, the solution of a typical singular perturbation problem leads to the elimination of the fast state variable and, consequently, to a reduction in the dimension of the system; nevertheless, the limit problem retains some information on the fast part.
Besides classical control systems, other types of singular perturbation problems have been studied; we refer, for instance, to homogenization (e.g., [27, 33]) and long time behavior (e.g., [22, 23, 10]). More recently, such analysis was extended to the case of differential games (e.g., [2, 3, 29, 35]) and to that of mean field games (MFG) (e.g., [12, 13, 16, 17, 19, 34]). Building on this recent literature, in this paper we take a step further. Indeed, the goal of this work is twofold: first, we show a connection between the classical MFG system, where the underlying payoff is a calculus of variations problem, and the MFG with control of acceleration; secondly, we show how an MFG of control system can be recovered from an MFG system with control of acceleration. We will extend such analysis to the case of singular perturbation problems associated with sub-Riemannian structures in a future work.
We recall that MFG were introduced in [30,31,32] and [25, 26] in order to describe the behavior of Nash equilibria in problems with infinitely many rational agents (we refer to [15] and references therein for more details). Since these pioneering works, the MFG theory has grown very fast: we refer, for instance, to the survey papers and the monographs [11, 18, 24]. The classical MFG system introduced in [30,31,32] describes systems in which the typical player's payoff is given by a deterministic calculus of variations problem. MFG systems with control of acceleration, first introduced in [1, 14], describe models in which agents control their acceleration and the cost functional to be minimized depends on higher-order derivatives of admissible trajectories. Such problems naturally appear in the study of agent-based models which describe the collective behavior of various animal populations (e.g., [28, 36]) or crowd dynamics (e.g., [20, 21]). The study of the singular perturbation problem carried out in this paper has many applications, for instance to an MFG system with Cucker–Smale type dynamics ([9]) describing the behavior of a flock in which the control is increasingly weaker.
We now describe the problems we address in this paper.
-
(1)
Convergence to the classical MFG system. We study the limit of the solution to the system
$$\begin{aligned} {\left\{ \begin{array}{ll} -\partial _{t} u^{{{\,\mathrm{\varepsilon }\,}}} +\frac{1}{2{{\,\mathrm{\varepsilon }\,}}}|D_{v}u^{{{\,\mathrm{\varepsilon }\,}}}|^{2} - \langle D_{x}u^{{{\,\mathrm{\varepsilon }\,}}}, v \rangle -L_{0}(x, v, m^{{{\,\mathrm{\varepsilon }\,}}}_{t})= 0, &{} (t,x,v) \in [0,T] \times \mathbb {R}^{2d}\\ \partial _{t}\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t} - \langle D_{x}\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t},v \rangle - \frac{1}{{{\,\mathrm{\varepsilon }\,}}}{{\,\textrm{div}\,}}_{v}\left( \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}D_{v}u^{{{\,\mathrm{\varepsilon }\,}}} \right) =0, &{} (t,x,v) \in [0,T] \times \mathbb {R}^{2d}\\ \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{0}=\mu _{0}, \quad u^{{{\,\mathrm{\varepsilon }\,}}}(T,x,v)=g(x,m^{{{\,\mathrm{\varepsilon }\,}}}_{T}), &{} (x,v) \in \mathbb {R}^{2d} \end{array}\right. } \end{aligned}$$(1.1)as the parameter \({{\,\mathrm{\varepsilon }\,}}\) goes to zero. Heuristically, the state equation associated with the above PDEs system is given by
$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{x}(t)=v(t)\\ \dot{v}(t)= \frac{1}{{{\,\mathrm{\varepsilon }\,}}} \alpha (t) \end{array}\right. } \end{aligned}$$(1.2)where \(\alpha : [0,T] \rightarrow \mathbb {R}^{d}\) is a measurable control function and, from [1, 14], we have that for any \({{\,\mathrm{\varepsilon }\,}}> 0\) a typical player aims to minimize a cost functional of the form
$$\begin{aligned} \int _{t}^{T}{\left( \frac{{{\,\mathrm{\varepsilon }\,}}}{2}|\ddot{\gamma }(s)|^{2} + L_{0}(\gamma (s), {\dot{\gamma }}(s), m^{{{\,\mathrm{\varepsilon }\,}}}_{s}) \right) \ \textrm{d}s} + g(\gamma (T), m^{{{\,\mathrm{\varepsilon }\,}}}_{T}). \end{aligned}$$Moreover, still from [1, 14], under suitable assumptions (listed below), we have that for any \({{\,\mathrm{\varepsilon }\,}}> 0\) there exists a unique solution \((u^{{{\,\mathrm{\varepsilon }\,}}}, \mu ^{{{\,\mathrm{\varepsilon }\,}}})\) to (1.1).
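To visualize the singular perturbation at the level of a single trajectory, the following numerical sketch (our own hypothetical illustration, not taken from the paper) minimizes a discretized one-dimensional version of the cost above with \(L_{0}(x,v,m)=\frac{1}{2}|v|^{2}\), fixed endpoints and a prescribed initial velocity \(v_{0}\). As \({{\,\mathrm{\varepsilon }\,}}\downarrow 0\), the optimal velocity adjusts in an initial layer of width of order \(\sqrt{{{\,\mathrm{\varepsilon }\,}}}\) and approaches the constant slope of the straight-line Tonelli minimizer, which forgets the fast variable.

```python
import numpy as np

def minimize_accel_cost(eps, v0=5.0, N=400, T=1.0):
    """Discretized minimization of  int_0^T (eps/2)|g''|^2 + (1/2)|g'|^2 dt
    over curves with g(0)=0, g'(0)=v0, g(T)=1 (terminal velocity left free)."""
    h = T / N
    # forward first differences (N rows) and interior second differences (N-1 rows)
    D1 = (np.eye(N + 1, k=1) - np.eye(N + 1))[:-1] / h
    D2 = (np.eye(N + 1, k=2) - 2.0 * np.eye(N + 1, k=1) + np.eye(N + 1))[:-2] / h**2
    Q = eps * h * D2.T @ D2 + h * D1.T @ D1      # cost(g) = 0.5 * g^T Q g
    fixed = np.array([0, 1, N])                  # g_0 = 0, g_1 = h*v0 (initial slope v0), g_N = 1
    vals = np.array([0.0, h * v0, 1.0])
    free = np.setdiff1d(np.arange(N + 1), fixed)
    g = np.empty(N + 1)
    g[fixed] = vals
    # first-order condition on the free nodes: Q_ff g_f = -Q_fc g_c
    g[free] = np.linalg.solve(Q[np.ix_(free, free)], -Q[np.ix_(free, fixed)] @ vals)
    return g, h

# deviation of the discrete optimal velocity at t = T/2 from the limit slope 1;
# it shrinks as eps decreases, reflecting the O(sqrt(eps)) initial layer
errs = {}
for eps in (1e-1, 1e-3):
    g, h = minimize_accel_cost(eps)
    k = len(g) // 2
    errs[eps] = abs((g[k + 1] - g[k - 1]) / (2.0 * h) - 1.0)
```

The elimination of the fast variable is visible here: the limit curve depends on the initial position only, while the prescribed initial velocity survives only inside a vanishing boundary layer.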
Following the previous considerations on a typical singular perturbation problem, in the case of control of acceleration we expect that the fast variable, namely the velocity of each player, is eliminated in the limit and that all the information is captured by the behavior of the space variable. Moreover, since the aim of this analysis is to establish a rigorous mathematical connection between the classical MFG system and the MFG system with control of acceleration, the data in (1.1) have a particular structure. Indeed, we observe that the function \(L_{0}\) and the terminal cost g only depend on the space marginal of the measures \(\{\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\}_{t \in [0,T]}\). Such a marginal flow possesses all the information on the behavior of the fast variable in the limit, and it is also the object of investigation in classical MFG, since it represents the distribution of players in space at each time \(t \in [0,T]\).
-
(2)
Convergence to MFG of control system. In the second part, we analyze the limit of the solution to the system
$$\begin{aligned} {\left\{ \begin{array}{ll} -\partial _{t} u^{{{\,\mathrm{\varepsilon }\,}}} +\frac{1}{2{{\,\mathrm{\varepsilon }\,}}}|D_{v}u^{{{\,\mathrm{\varepsilon }\,}}}|^{2} - \langle D_{x}u^{{{\,\mathrm{\varepsilon }\,}}}, v \rangle -L_{0}(x, v, \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t})= 0, &{} (t,x,v) \in [0,T] \times \mathbb {R}^{2d}\\ \partial _{t}\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t} - \langle D_{x}\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t},v \rangle - \frac{1}{{{\,\mathrm{\varepsilon }\,}}} {{\,\textrm{div}\,}}_{v}\left( \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}D_{v}u^{{{\,\mathrm{\varepsilon }\,}}} \right) =0, &{} (t,x,v) \in [0,T] \times \mathbb {R}^{2d}\\ \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{0}=\mu _{0}, \quad u^{{{\,\mathrm{\varepsilon }\,}}}(T,x,v)=g(x,\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{T}), &{} (x,v) \in \mathbb {R}^{2d} \end{array}\right. } \end{aligned}$$(1.3) still as the parameter \({{\,\mathrm{\varepsilon }\,}}\) goes to zero. The main issue here is that both the data \(L_{0}\) and g depend on \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}\), so we have to deal with the convergence of the whole measure. Note that, even though the limit control problem no longer has the velocity as a state variable, the second marginal of the limit measure, and thus the Lagrangian function, still depends on it. For this reason, we expect the limit system to be of mean field game of control type.
Next, we briefly explain the main results and the methods of proof.
-
(1)
Toward the classical MFG system. We prove that \((u^{{{\,\mathrm{\varepsilon }\,}}}, m^{{{\,\mathrm{\varepsilon }\,}}})\), where \(m^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) is the space marginal of the solution \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) for any \(t \in [0,T]\), converges (up to subsequence) to a solution \((u^{0}, m^{0})\) of the classical MFG system
$$\begin{aligned} {\left\{ \begin{array}{ll} (i)\,\, -\partial _{t} u^{0}(t,x) + H_{0}(x, D_{x}u^{0}(t,x), m^{0}_{t})=0, &{} \quad (t,x) \in [0,T] \times \mathbb {R}^{d} \\ (ii)\,\, \partial _{t} m^{0}_{t} - {{\,\textrm{div}\,}}\Big ( m^{0}_{t}D_{p}H_{0}(x, D_{x}u^{0}(t,x), m^{0}_{t}) \Big )=0, &{} \quad (t,x) \in [0,T] \times \mathbb {R}^{d} \\ m^{0}_{0}= \pi _{1} \sharp \mu _{0},\,\, u^{0}(T,x)=g(x,m^{0}_{T}), &{} \quad x \in \mathbb {R}^{d} \end{array}\right. } \end{aligned}$$(1.4) where \(H_{0}:\mathbb {R}^d \times \mathbb {R}^d \times \mathcal {P}_{1}(\mathbb {R}^{d}) \rightarrow \mathbb {R}\) is the Legendre transform of the function \(L_0\) w.r.t. the velocity variable. Observe that the main difference between our result and the existing ones concerning the homogenization problem in MFG [9, 19, 34] is that here the limit system is still of MFG type. Indeed, in [19, 34] and [9] it is proved that in the limit the MFG structure of the problem is lost; in particular, an explicit example of an MFG system with a potential coupling function is constructed in [34].
In order to prove our first main convergence result, we begin by showing that \(u^{{{\,\mathrm{\varepsilon }\,}}}\) is equibounded and \(m^{{{\,\mathrm{\varepsilon }\,}}}\) is tight (see Lemma 4.1 and Theorem 4.4). As a first consequence we get that, up to a subsequence, there exists \(m^{0} \in C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) such that \(m^{{{\,\mathrm{\varepsilon }\,}}} \rightarrow m^{0}\) in \(C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\). Then, we proceed with the analysis of the value function \(u^{{{\,\mathrm{\varepsilon }\,}}}\): we show that \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,\cdot ,v)\) is equi-Lipschitz continuous, \(u^{{{\,\mathrm{\varepsilon }\,}}}(\cdot ,x,v)\) is equicontinuous and \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,\cdot )\) has decreasing oscillation w.r.t. \({{\,\mathrm{\varepsilon }\,}}\) (see Lemma 4.6 and Proposition 4.7). We finally address the locally uniform convergence of \(u^{{{\,\mathrm{\varepsilon }\,}}}\), showing that there exists a subsequence \({{\,\mathrm{\varepsilon }\,}}_{k} \downarrow 0\) such that \((u^{{{\,\mathrm{\varepsilon }\,}}_{k}}, m^{{{\,\mathrm{\varepsilon }\,}}_{k}})\) converges to a solution \((u^{0}, m^{0})\) of (1.4) (see Theorem 4.9, Proposition 4.10 and Corollary 4.12). The main issues in proving the above results are due to the lack of strict convexity and coercivity of the Hamiltonian in system (1.1). The technique we use to study our singular perturbation problem combines the variational approach to Hamilton–Jacobi equations with optimal transport methods in order to overcome these issues.
-
(2)
Toward MFG of control system. For simplicity of notation, we restrict our attention to a Lagrangian of the form
$$\begin{aligned} L(x,v,w, m)= \frac{1}{2}|w|^2 +\frac{1}{2}|v|^2 + L_{0} (x, m). \end{aligned}$$In this setting, we prove that \((u^{{{\,\mathrm{\varepsilon }\,}}}, \mu ^{{{\,\mathrm{\varepsilon }\,}}})\) converges (up to subsequence) to a solution \((u^{0}, \mu ^{0})\) to the MFG of control system
$$\begin{aligned} {\left\{ \begin{array}{ll} (i)\,\, -\partial _{t} u^{0}(t,x) + \frac{1}{2} |D_{x}u^{0}(t,x)|^{2} - L_{0}(x,\mu ^{0}_{t})=0, &{} \quad (t,x) \in [0,T] \times \mathbb {R}^{d}\\ (ii)\,\, \partial _{t} m^{0}_{t} - {{\,\textrm{div}\,}}\big ( m^{0}_{t} D_{x}u^{0}(t,x) \big )=0, &{} \quad (t,x) \in [0,T] \times \mathbb {R}^{d}\\ (iii)\,\, \mu ^{0}_{t} = (\text {Id}(\cdot ), Du^{0}(t, \cdot )) \sharp m^{0}_{t}\\ \mu ^{0}_{0}= \mu _{0},\,\, u^{0}(T,x)=g(x, \mu ^{0}_{T}), &{} \quad x \in \mathbb {R}^{d} \end{array}\right. } \end{aligned}$$(1.5) where \(m^{0}_{t} = \pi _1 \sharp \mu ^{0}_{t}\) and \(\text {Id}(\cdot )\) denotes the identity map. As observed before, the main difference with the previous case is the convergence of the whole measure \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}\), which requires a finer study of the Euler–Lagrange flow associated with the problem of control of acceleration (Propositions 5.2, 5.5 and 5.6). We show again that the limit system has an MFG structure; in particular, equations (i) and (ii) are shared with system (1.4), from which they differ only in the measure argument of the function \(L_{0}\). However, system (1.5) has a third equation, (iii), which describes the evolution of the flow \(\{\mu ^{0}_{t}\}_{t \in [0,T]}\): its second marginal, that is, the one w.r.t. the velocity variable, is given by the push-forward of the first marginal \(\{m^{0}_{t}\}_{t \in [0,T]}\) through the optimal feedback \(Du^{0}\). Heuristically, such an equation describes the evolution of the density distribution of controls w.r.t. the state of a typical player. For this reason, system (1.5) is called an MFG of control system.
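Equation (iii) has a transparent particle interpretation: push each state sample through the map \(x \mapsto (x, Du^{0}(t,x))\). The minimal sketch below is our own toy illustration (the quadratic value function \(u(x)=\frac{1}{2}|x|^{2}\), hence feedback \(Du(x)=x\), is a stand-in assumption, not the paper's \(u^{0}\)); it builds such a joint state–control empirical measure and reads off its two marginals.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2000)            # samples of the space marginal m_t
Du = lambda z: z                     # toy feedback Du(t, x) for the stand-in value u(x) = |x|^2 / 2

# samples of mu_t = (Id, Du) # m_t, the joint state-control distribution of (iii)
mu = np.stack([x, Du(x)], axis=1)

pi1 = mu[:, 0]                       # first marginal: recovers m_t exactly
pi2 = mu[:, 1]                       # second marginal: the control distribution Du # m_t
```

The first marginal reproduces the state distribution, while the second marginal is entirely determined by the feedback, which is exactly the structure of (iii).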
In conclusion, we stress that the result can be generalized to any Lagrangian by the same arguments, at the cost of heavier notation that would make the presentation of the ideas harder to follow.
The paper is organized as follows. In Sect. 2, we fix the notation that will be used throughout the paper and we recall the main definitions and results from measure theory. In Sect. 3, we introduce the MFG system associated with the singular perturbation problem, we give the standing assumptions on the data and finally, we state the main results (Theorems 3.2 and 3.5). Sections 4 and 5 are devoted to the proofs of preliminary results needed to demonstrate Theorems 3.2 and 3.5, respectively.
2 Notations and preliminaries
2.1 Notation
We write below a list of symbols used throughout this paper.
-
Denote by \(\mathbb {N}\) the set of positive integers, by \(\mathbb {R}^d\) the d-dimensional real Euclidean space, by \(\langle \cdot ,\cdot \rangle \) the Euclidean scalar product, by \(|\cdot |\) the usual norm in \(\mathbb {R}^d\), and by \(B_{R}\) the open ball with center 0 and radius R.
-
For a Lebesgue-measurable subset A of \(\mathbb {R}^d\), we let \(\mathcal {L}^{d}(A)\) be the d-dimensional Lebesgue measure of A and \(\textbf{1}_{A}:\mathbb {R}^d\rightarrow \{0,1\}\) be the characteristic function of A, i.e.,
$$\begin{aligned} \textbf{1}_{A}(x)= {\left\{ \begin{array}{ll} 1 \ \ \ {} &{}x\in A,\\ 0 &{}x \not \in A. \end{array}\right. } \end{aligned}$$We denote by \(L^p(A)\) (for \(1\le p\le \infty \)) the space of Lebesgue-measurable functions f with \(\Vert f\Vert _{p,A}<\infty \), where
$$\begin{aligned}&\Vert f\Vert _{\infty , A}:={{\,\mathrm{ess\ sup}\,}}_{x \in A} |f(x)|,\\&\Vert f\Vert _{p,A}:=\left( \int _{A}|f|^{p}\ dx\right) ^{\frac{1}{p}}, \quad 1\le p<\infty . \end{aligned}$$For brevity, \(\Vert f\Vert _{\infty }\) and \(\Vert f\Vert _{p}\) stand for \(\Vert f\Vert _{\infty ,\mathbb {R}^d}\) and \(\Vert f\Vert _{p,\mathbb {R}^d}\), respectively.
-
\(C_b(\mathbb {R}^d)\) stands for the function space of bounded uniformly continuous functions on \(\mathbb {R}^d\). \(C^{2}_{b}(\mathbb {R}^{d})\) stands for the space of bounded functions on \(\mathbb {R}^d\) with bounded uniformly continuous first and second derivatives. \(C^k(\mathbb {R}^{d})\) (\(k\in \mathbb {N}\)) stands for the function space of k-times continuously differentiable functions on \(\mathbb {R}^d\), and \(C^\infty (\mathbb {R}^{d}):=\cap _{k=0}^\infty C^k(\mathbb {R}^{d})\). \(C_c^\infty (\mathbb {R}^{d})\) stands for the space of functions in \(C^\infty (\mathbb {R}^{d})\) with compact support. Let \(a<b\in \mathbb {R}\). \(AC([a,b];\mathbb {R}^d)\) denotes the space of absolutely continuous maps \([a,b]\rightarrow \mathbb {R}^d\).
-
For \(f \in C^{1}(\mathbb {R}^{d})\), the gradient of f is denoted by \(Df=(D_{x_{1}}f,\ldots , D_{x_{d}}f)\), where \(D_{x_{i}}f=\frac{\partial f}{\partial x_{i}}\), \(i=1,\ldots ,d\). Let k be a nonnegative integer and let \(\alpha =(\alpha _1,\ldots ,\alpha _d)\) be a multi-index of order k, i.e., \(k=|\alpha |=\alpha _1+\ldots +\alpha _d\), where each component \(\alpha _i\) is a nonnegative integer. For \(f \in C^{k}(\mathbb {R}^{d})\), define \(D^{\alpha }f:= D_{x_{1}}^{\alpha _{1}} \cdots D^{\alpha _{d}}_{x_{d}}f\).
2.2 The Wasserstein spaces
We recall here the notation and definitions of the Wasserstein spaces and the Wasserstein distance; for more details, we refer to [4, 37].
Let \((X,\textbf{d})\) be a metric space (in the paper, we use \(X= \mathbb {R}^d\) or \(X= \mathbb {R}^d\times \mathbb {R}^d\)). Denote by \(\mathcal {B}(X)\) the Borel \(\sigma \)-algebra on X and by \(\mathcal {P}(X)\) the space of Borel probability measures on X. The support of a measure \(\mu \in \mathcal {P}(X)\), denoted by \({{\,\textrm{spt}\,}}(\mu )\), is the closed set defined by
$$\begin{aligned} {{\,\textrm{spt}\,}}(\mu ):= \left\{ x \in X: \mu (A)>0 \text { for every open neighborhood } A \text { of } x\right\} . \end{aligned}$$
We say that a sequence \(\{\mu _k\}_{k\in \mathbb {N}}\subset \mathcal {P}(X)\) is weakly-\(*\) convergent to \(\mu \in \mathcal {P}(X)\), denoted by \(\mu _k {\mathop {\longrightarrow }\limits ^{w^*}}\mu \), if
$$\begin{aligned} \lim _{k \rightarrow \infty }\int _{X} f(x)\,d\mu _{k}(x)=\int _{X} f(x)\,d\mu (x), \qquad \forall f \in C_{b}(X). \end{aligned}$$
For \(p\in [1,+\infty )\), the Wasserstein space of order p is defined as
$$\begin{aligned} \mathcal {P}_{p}(X):=\left\{ m \in \mathcal {P}(X): \int _{X} \textbf{d}(x_{0},x)^{p}\,dm(x) <+\infty \right\} , \end{aligned}$$
for some (and thus all) \(x_0 \in X\). Given any two measures m and \(m^{\prime }\) in \(\mathcal {P}_p(X)\), define the set of couplings
$$\begin{aligned} \Pi (m,m^{\prime }):=\left\{ \lambda \in \mathcal {P}(X \times X): \pi _{1} \sharp \lambda =m,\ \pi _{2} \sharp \lambda =m^{\prime } \right\} . \end{aligned}$$
The Wasserstein distance of order p between m and \(m'\) is defined by
$$\begin{aligned} d_{p}(m,m^{\prime }):=\inf _{\lambda \in \Pi (m,m^{\prime })}\left( \int _{X\times X} \textbf{d}(x,y)^{p}\,d\lambda (x,y)\right) ^{\frac{1}{p}}. \end{aligned}$$
The distance \(d_1\) is also commonly called the Kantorovich–Rubinstein distance and can be characterized by a useful duality formula (see, for instance, [37]) as follows:
$$\begin{aligned} d_{1}(m,m^{\prime })=\sup \left\{ \int _{X} f(x)\,dm(x)-\int _{X} f(x)\,dm^{\prime }(x)\ :\ f:X \rightarrow \mathbb {R} \text { is } 1\text {-Lipschitz}\right\} , \end{aligned}$$
for all m, \(m'\in \mathcal {P}_1(X)\).
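As a concrete sanity check of this duality (our own toy computation, using the SciPy routine `scipy.stats.wasserstein_distance` for one-dimensional empirical measures): for \(m = \frac{2}{3}\delta _{0}+\frac{1}{3}\delta _{1}\) and \(m' = \frac{2}{3}\delta _{1}+\frac{1}{3}\delta _{2}\), translating every atom by 1 is an admissible transport of cost 1, while the 1-Lipschitz function \(f(x)=x\) gives the matching dual lower bound \(\int f\,dm' - \int f\,dm = 1\).

```python
import numpy as np
from scipy.stats import wasserstein_distance

xs = np.array([0.0, 0.0, 1.0])      # empirical samples of m  = (2/3) d_0 + (1/3) d_1
ys = np.array([1.0, 1.0, 2.0])      # empirical samples of m' = (2/3) d_1 + (1/3) d_2

d1 = wasserstein_distance(xs, ys)   # primal value: optimal transport cost between the two measures
dual = np.mean(ys) - np.mean(xs)    # dual lower bound obtained from the 1-Lipschitz f(x) = x
```

Here primal and dual values coincide (both equal 1), which is exactly what the Kantorovich–Rubinstein formula asserts.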
Let \(X_{1}\), \(X_{2}\) be metric spaces, let \(\mu \in \mathcal {P}(X_{1})\) and let \(f: X_{1} \rightarrow X_{2}\) be a \(\mu \)-measurable map. Then, we denote by \(f \sharp \mu \in \mathcal {P}(X_{2})\) the push-forward of \(\mu \) through f, defined by
$$\begin{aligned} f \sharp \mu (B):= \mu \left( f^{-1}(B)\right) , \qquad \forall B \in \mathcal {B}(X_{2}). \end{aligned}$$
More generally, in integral form, it reads as
$$\begin{aligned} \int _{X_{2}} \varphi (y)\,d(f \sharp \mu )(y)=\int _{X_{1}} \varphi (f(x))\,d\mu (x) \end{aligned}$$ for every bounded Borel function \(\varphi : X_{2} \rightarrow \mathbb {R}\).
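For a purely atomic measure, both the set-wise definition \(f \sharp \mu (B)=\mu (f^{-1}(B))\) and the change-of-variables identity can be checked by hand. The snippet below (an illustrative toy example with hypothetical atoms and weights of our choosing) aggregates weights through the projection \(\pi _{1}(x,v)=x\) and compares the two integrals.

```python
from collections import defaultdict

# mu: a discrete measure on X1 = R^2, stored as (atom, weight) pairs
mu = [((0.0, 1.0), 0.25), ((0.0, -1.0), 0.25), ((2.0, 0.5), 0.5)]
f = lambda p: p[0]                   # f = pi_1, projection onto the first factor

# f # mu (B) = mu(f^{-1}(B)): aggregate the weights of atoms sharing the same image
push = defaultdict(float)
for atom, w in mu:
    push[f(atom)] += w

phi = lambda x: x**2                 # a test function on X2 = R (bounded on the finite support)
lhs = sum(w * phi(x) for x, w in push.items())   # integral of phi against f # mu
rhs = sum(w * phi(f(atom)) for atom, w in mu)    # integral of phi(f(.)) against mu
```

The two atoms above x = 0 merge into a single atom of the push-forward with weight 0.5, and the two integrals agree, as the identity predicts.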
3 Setting and main results
3.1 Convergence to classical MFG system
Let \(L_{0}: \mathbb {R}^{2d} \times \mathcal {P}_{1}(\mathbb {R}^{d}) \rightarrow \mathbb {R}\) satisfy the following.
- (M1):
-
\(L_{0}\) is continuous w.r.t. all variables and for any \(m \in \mathcal {P}_{1}(\mathbb {R}^{d})\) the map \((x,v) \mapsto L_{0}(x,v,m)\) belongs to \(C^{1}(\mathbb {R}^{2d})\).
- (M2):
-
The map \(v \mapsto L_0(x, v, m)\) belongs to \(C^2(\mathbb {R}^d)\), and there exists \(M_{0} > 0\) such that for any \((x,v,m) \in \mathbb {R}^{2d} \times \mathcal {P}_{1}(\mathbb {R}^{d})\)
$$\begin{aligned}&|L_0(x, 0, m)| \le \ M_0, \end{aligned}$$(3.1)$$\begin{aligned}&|D_{x}L_{0}(x,v,m)| \le \ M_{0}\big (1+|v|^{2} \big ),&\end{aligned}$$(3.2)$$\begin{aligned}&|D_{v}L_{0}(x,v,m)| \le \ M_{0}\big ( 1 + |v|\big ), \end{aligned}$$(3.3)$$\begin{aligned}&\frac{1}{M_{0}}\text {Id} \le \ D^{2}_{v}L_{0}(x,v,m) \le \ M_0 \text {Id}. \end{aligned}$$(3.4)
- (M3):
-
There exist two moduli \(\theta : \mathbb {R}_{+} \rightarrow \mathbb {R}_{+}\) and \(\omega _{0}: \mathbb {R}_{+} \rightarrow \mathbb {R}_{+}\) such that
$$\begin{aligned} |L_{0}(x,v,m_{1})-L_{0}(x,v,m_{2})| \le \theta (|x|) \omega _{0}(d_{1}(m_{1},m_{2})), \end{aligned}$$for any \((x,v) \in \mathbb {R}^{2d}\) and \(m_{1}\), \(m_{2} \in \mathcal {P}_{1}(\mathbb {R}^{d})\).
Observe that from (M2), by a second-order Taylor expansion at \(v=0\) together with (3.1), (3.3) and (3.4), one easily obtains
$$\begin{aligned} \frac{1}{2M_{0}}|v|^{2}-M_{0}\big (1+|v| \big ) \le L_{0}(x,v,m) \le M_{0}\left( 1+|v|+\frac{1}{2}|v|^{2}\right) \end{aligned}$$
and, without loss of generality, \(L_{0}(x,v,m) \ge 0\) for any \((x,v,m) \in \mathbb {R}^{2d} \times \mathcal {P}_{1}(\mathbb {R}^{d})\). Let \(H_{0}\) be the Legendre transform of the function \(L_{0}\), i.e.,
$$\begin{aligned} H_{0}(x,p,m):=\sup _{v \in \mathbb {R}^{d}}\left\{ -\langle p,v \rangle - L_{0}(x,v,m)\right\} , \qquad (x,p,m) \in \mathbb {R}^{2d} \times \mathcal {P}_{1}(\mathbb {R}^{d}). \end{aligned}$$
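Numerically, the Legendre transform can be approximated by maximizing over a velocity grid. The sketch below is an illustrative computation of ours, under the sign convention consistent with equation (i) of (1.5), namely \(H_{0}(x,p,m)=\sup _{v}\{-\langle p,v\rangle -L_{0}(x,v,m)\}\); on the quadratic model \(L_{0}=\frac{1}{2}|v|^{2}\) the supremum is attained at \(v=-p\), giving \(H_{0}(p)=\frac{1}{2}|p|^{2}\).

```python
import numpy as np

def legendre(L0, p, v_grid):
    """Grid approximation of H0(p) = sup_v { -<p, v> - L0(v) } in one dimension."""
    return np.max(-p * v_grid - L0(v_grid))

L0 = lambda v: 0.5 * v**2                  # quadratic model Lagrangian in the velocity
v_grid = np.linspace(-10.0, 10.0, 200001)  # uniform grid with spacing 1e-4
p = 1.7
H0 = legendre(L0, p, v_grid)               # approximates p**2 / 2, attained near v = -p
```

The grid error is of the order of the squared spacing, since the maximized function is smooth and concave near its maximizer.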
We consider the MFG system
$$\begin{aligned} {\left\{ \begin{array}{ll} -\partial _{t} u^{{{\,\mathrm{\varepsilon }\,}}} +\frac{1}{2{{\,\mathrm{\varepsilon }\,}}}|D_{v}u^{{{\,\mathrm{\varepsilon }\,}}}|^{2} - \langle D_{x}u^{{{\,\mathrm{\varepsilon }\,}}}, v \rangle -L_{0}(x, v, m^{{{\,\mathrm{\varepsilon }\,}}}_{t})= 0, &{} (t,x,v) \in [0,T] \times \mathbb {R}^{2d}\\ \partial _{t}\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t} - \langle D_{x}\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t},v \rangle - \frac{1}{{{\,\mathrm{\varepsilon }\,}}}{{\,\textrm{div}\,}}_{v}\left( \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}D_{v}u^{{{\,\mathrm{\varepsilon }\,}}} \right) =0, &{} (t,x,v) \in [0,T] \times \mathbb {R}^{2d}\\ \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{0}=\mu _{0}, \quad u^{{{\,\mathrm{\varepsilon }\,}}}(T,x,v)=g(x,m^{{{\,\mathrm{\varepsilon }\,}}}_{T}), &{} (x,v) \in \mathbb {R}^{2d} \end{array}\right. } \end{aligned}$$(3.6)
where \(\pi _{1}: \mathbb {R}^{2d} \rightarrow \mathbb {R}^{d}\) denotes the projection onto the first factor, i.e., \(\pi _{1} (x,v)= x\) and, so, \(m^{{{\,\mathrm{\varepsilon }\,}}}_{t}=\pi _{1} \sharp \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) represents the space marginal. We assume the following on the boundary data of the system.
- (BC1):
-
The measure \(\mu _{0} \in \mathcal {P}(\mathbb {R}^{2d})\) is absolutely continuous w.r.t. Lebesgue measure, we still denote by \(\mu _{0}\) its density, and it has compact support.
- (BC2):
-
The terminal cost \(g(\cdot , m)\) belongs to \(C^{1}_{b}(\mathbb {R}^{d})\) with \(M_{0} \ge \max \{\frac{1}{2}, \frac{1}{2}\Vert Dg(\cdot , m)\Vert _{\infty , \mathbb {R}^{d}}\}\), and \(g(x,\cdot )\) is uniformly continuous, uniformly w.r.t. the space variable.
We also recall that \(m_{0}:= \pi _{1} \sharp \mu _{0}\).
Let \(\Gamma \) be the set of \(C^{1}\) curves \(\gamma :[0,T] \rightarrow \mathbb {R}^{d}\), endowed with the local uniform convergence of the curve and its derivative, and given \((t,x,v) \in [0,T] \times \mathbb {R}^{2d}\) let \(\Gamma _{t}(x,v)\) be the subset of \(\Gamma \) such that \(\gamma (t)=x\), \({\dot{\gamma }}(t)=v\). Similarly, let \(\Gamma _{t}(x)\) be the subset of \(\Gamma \) such that \(\gamma (t)=x\). Define the functional \(J^{{{\,\mathrm{\varepsilon }\,}}}_{t,T}: \Gamma \rightarrow \mathbb {R}\),
$$\begin{aligned} J^{{{\,\mathrm{\varepsilon }\,}}}_{t,T}(\gamma ):= \int _{t}^{T}{\left( \frac{{{\,\mathrm{\varepsilon }\,}}}{2}|\ddot{\gamma }(s)|^{2} + L_{0}(\gamma (s), {\dot{\gamma }}(s), m^{{{\,\mathrm{\varepsilon }\,}}}_{s}) \right) \ \textrm{d}s} + g(\gamma (T), m^{{{\,\mathrm{\varepsilon }\,}}}_{T}), \qquad \gamma \in H^{2}(0,T;\mathbb {R}^{d}), \end{aligned}$$
and set \(J^{{{\,\mathrm{\varepsilon }\,}}}_{t,T}(\gamma )=+\infty \) if \(\gamma \not \in H^{2}(0,T;\mathbb {R}^{d})\). Then, from [1, 14] we know that there exists a solution \((u^{{{\,\mathrm{\varepsilon }\,}}}, \mu ^{{{\,\mathrm{\varepsilon }\,}}}) \in W^{1,\infty }_{loc}([0,T] \times \mathbb {R}^{2d}) \times C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{2d}))\) to system (3.6) such that
$$\begin{aligned} u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,v)=\inf _{\gamma \in \Gamma _{t}(x,v)} J^{{{\,\mathrm{\varepsilon }\,}}}_{t,T}(\gamma ) \end{aligned}$$
and for any \(t \in [0,T]\) the probability measure \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) is the image of \(\mu _{0}\) under the flow
$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{x}(s)=v(s)\\ \dot{v}(s)= -\frac{1}{{{\,\mathrm{\varepsilon }\,}}}D_{v}u^{{{\,\mathrm{\varepsilon }\,}}}(s,x(s),v(s)). \end{array}\right. } \end{aligned}$$(3.8)
That is, \(u^{{{\,\mathrm{\varepsilon }\,}}}\) solves the Hamilton–Jacobi equation in the viscosity sense and \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}\) solves the continuity equation in the sense of distributions.
Remark 3.1
Note that for a.e. \((x,v) \in \mathbb {R}^{2d}\) there exists a unique solution to system (3.8), which we will denote by \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}_{(x,v)}\), such that \(\gamma _{(x,v)}^{{{\,\mathrm{\varepsilon }\,}}}(0)=x\) and \({\dot{\gamma }}_{(x,v)}^{{{\,\mathrm{\varepsilon }\,}}}(0)=v\). Moreover, \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}_{(x,v)}(\cdot )\) is optimal for \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,v)\) and satisfies \(\gamma _{(x,v)}^{{{\,\mathrm{\varepsilon }\,}}}(t)=x\), \({\dot{\gamma }}_{(x,v)}^{{{\,\mathrm{\varepsilon }\,}}}(t)=v\).
Theorem 3.2
(Main result 1). Assume (M1)–(M3) and (BC1), (BC2). Let \((u^{{{\,\mathrm{\varepsilon }\,}}}, \mu ^{{{\,\mathrm{\varepsilon }\,}}})\) be a solution to (3.6) and let \(m^{{{\,\mathrm{\varepsilon }\,}}}_{t}=\pi _{1} \sharp \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) for any \(t \in [0,T]\). Then, there exist a sequence \(\{{{\,\mathrm{\varepsilon }\,}}_{k}\}_{k \in \mathbb {N}}\) with \({{\,\mathrm{\varepsilon }\,}}_{k} \downarrow 0\), as \(k \rightarrow \infty \), a function \(u^{0} \in W^{1,\infty }_{loc}([0,T] \times \mathbb {R}^{d})\) and a flow of probability measures \(\{m^{0}_{t}\}_{t \in [0,T]} \in C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) such that for any \(R \ge 0\)
$$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{(t,x,v) \in [0,T] \times \overline{B}_{R} \times \overline{B}_{R}}|u^{{{\,\mathrm{\varepsilon }\,}}_{k}}(t,x,v)-u^{0}(t,x)| =0 \end{aligned}$$
and
$$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{t \in [0,T]} d_{1}(m^{{{\,\mathrm{\varepsilon }\,}}_{k}}_{t}, m^{0}_{t})=0. \end{aligned}$$
Moreover, the following holds.
- (i):
-
\((u^{0}, m^{0}) \in W^{1,\infty }_{loc}([0,T] \times \mathbb {R}^{d}) \times C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) is a solution of
$$\begin{aligned} {\left\{ \begin{array}{ll} -\partial _{t} u^{0}(t,x) + H_{0}(x, D_{x}u^{0}(t,x), m^{0}_{t})=0, &{} \quad (t,x) \in [0,T] \times \mathbb {R}^{d} \\ \partial _{t} m^{0}_{t} - {{\,\textrm{div}\,}}\Big ( m^{0}_{t}D_{p}H_{0}(x, D_{x}u^{0}(t,x), m^{0}_{t}) \Big )=0, &{} \quad (t,x) \in [0,T] \times \mathbb {R}^{d} \\ m^{0}_{0}= m_{0},\,\, u^{0}(T,x)=g(x,m^{0}_{T}), &{} \quad x \in \mathbb {R}^{d}, \end{array}\right. } \end{aligned}$$(3.9)that is, \(u^{0}\) solves the Hamilton–Jacobi equation in the viscosity sense and \(m^{0}\) is a solution of the continuity equation in the sense of distributions.
- (ii):
-
For any \(t \in [0,T]\), the probability measure \(m^{0}_{t}\) is the image of \(m_{0}\) under the Euler flow associated with \(L_{0}\).
Remark 3.3
Note that assumption (M2) is needed to guarantee the well-posedness of the limit system (3.9). Indeed, following [14] and [1], (M2) can be weakened in the analysis of (3.6).
Remark 3.4
Let \((u^{{{\,\mathrm{\varepsilon }\,}}}, \mu ^{{{\,\mathrm{\varepsilon }\,}}})\) be a solution to (3.6). Assume that \(H_{0}\) is of separated form, i.e., there exist a Hamiltonian \(H:\mathbb {R}^{d} \times \mathbb {R}^{d} \rightarrow \mathbb {R}\) and a coupling function \(F:\mathbb {R}^{d} \times \mathcal {P}_{1}(\mathbb {R}^{d}) \rightarrow \mathbb {R}\) such that
$$\begin{aligned} H_{0}(x,p,m)=H(x,p) - F(x,m). \end{aligned}$$
Moreover, assume that F is continuous w.r.t. all variables, that the map \(x \mapsto F(x,m)\) belongs to \(C^{1}_{b}(\mathbb {R}^{d})\) and that the functions F, g are monotone in the sense of Lasry–Lions, i.e.,
$$\begin{aligned} \int _{\mathbb {R}^{d}} \big ( F(x,m_{1})-F(x,m_{2})\big )\,d(m_{1}-m_{2})(x) \ge 0, \qquad \int _{\mathbb {R}^{d}} \big ( g(x,m_{1})-g(x,m_{2})\big )\,d(m_{1}-m_{2})(x) \ge 0, \end{aligned}$$ for any \(m_{1}\), \(m_{2} \in \mathcal {P}_{1}(\mathbb {R}^{d})\).
Then, from [1, 14] we know that there exists a unique solution \((u^{0}, m^{0}) \in W^{1,\infty }_{loc}([0,T] \times \mathbb {R}^{d}) \times C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) of (3.9) and thus, since \((u^{{{\,\mathrm{\varepsilon }\,}}}, m^{{{\,\mathrm{\varepsilon }\,}}})\) is relatively compact, the convergence of \((u^{{{\,\mathrm{\varepsilon }\,}}}, m^{{{\,\mathrm{\varepsilon }\,}}})\) holds for the whole family.
3.2 Convergence to MFG of control
We now consider the function \(L_{0}: \mathbb {R}^{d} \times \mathcal {P}_{1}(\mathbb {R}^{2d}) \rightarrow \mathbb {R}\), and we assume the following.
- (C1):
-
\(L_{0}\) is continuous w.r.t. all variables and for any \(\mu \in \mathcal {P}_{1}(\mathbb {R}^{2d})\) the map \( x \mapsto L_{0}(x, \mu )\) belongs to \(C^{1}(\mathbb {R}^{d})\).
- (C2):
-
There exist two moduli \(\theta : \mathbb {R}_{+} \rightarrow \mathbb {R}_{+}\) and \(\omega _{0}: \mathbb {R}_{+} \rightarrow \mathbb {R}_{+}\) such that
$$\begin{aligned} |L_{0}(x,\mu _{1})-L_{0}(x,\mu _{2})| \le \theta (|x|) \omega _{0}(d_{1}(\mu _{1},\mu _{2})), \end{aligned}$$for any \(x \in \mathbb {R}^{d}\) and \(\mu _{1}\), \(\mu _{2} \in \mathcal {P}_{1}(\mathbb {R}^{2d})\).
We consider the MFG system
$$\begin{aligned} {\left\{ \begin{array}{ll} -\partial _{t} u^{{{\,\mathrm{\varepsilon }\,}}} +\frac{1}{2{{\,\mathrm{\varepsilon }\,}}}|D_{v}u^{{{\,\mathrm{\varepsilon }\,}}}|^{2} - \langle D_{x}u^{{{\,\mathrm{\varepsilon }\,}}}, v \rangle - \frac{1}{2}|v|^{2} - L_{0}(x, \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t})= 0, &{} (t,x,v) \in [0,T] \times \mathbb {R}^{2d}\\ \partial _{t}\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t} - \langle D_{x}\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t},v \rangle - \frac{1}{{{\,\mathrm{\varepsilon }\,}}}{{\,\textrm{div}\,}}_{v}\left( \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}D_{v}u^{{{\,\mathrm{\varepsilon }\,}}} \right) =0, &{} (t,x,v) \in [0,T] \times \mathbb {R}^{2d}\\ \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{0}=\mu _{0}, \quad u^{{{\,\mathrm{\varepsilon }\,}}}(T,x,v)=g(x,\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{T}), &{} (x,v) \in \mathbb {R}^{2d} \end{array}\right. } \end{aligned}$$(3.10)
and we assume
- (A1):
-
the measure \(\mu _{0} \in \mathcal {P}(\mathbb {R}^{2d})\) is absolutely continuous w.r.t. Lebesgue measure, we still denote by \(\mu _{0}\) its density, and it has compact support.
- (A2):
-
The terminal cost \(g(\cdot , \mu )\) belongs to \(C^{1}_{b}(\mathbb {R}^{d})\), \(g(x,\cdot )\) is uniformly continuous, uniformly w.r.t. the space variable, and \(M_{0} \ge \max \{\frac{1}{2}, \frac{1}{2}\Vert Dg(\cdot , \mu )\Vert _{\infty , \mathbb {R}^{d}}\}\).
Similarly to the previous part, we define the functional \(J^{{{\,\mathrm{\varepsilon }\,}}}_{t,T}: \Gamma \rightarrow \mathbb {R}\),
$$\begin{aligned} J^{{{\,\mathrm{\varepsilon }\,}}}_{t,T}(\gamma ):= \int _{t}^{T}{\left( \frac{{{\,\mathrm{\varepsilon }\,}}}{2}|\ddot{\gamma }(s)|^{2} + \frac{1}{2}|{\dot{\gamma }}(s)|^{2} + L_{0}(\gamma (s), \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{s}) \right) \ \textrm{d}s} + g(\gamma (T), \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{T}), \qquad \gamma \in H^{2}(0,T;\mathbb {R}^{d}), \end{aligned}$$
and set \(J^{{{\,\mathrm{\varepsilon }\,}}}_{t,T}(\gamma )=+\infty \) if \(\gamma \not \in H^{2}(0,T;\mathbb {R}^{d})\). Then, from [1, 14] we know that there exists a solution \((u^{{{\,\mathrm{\varepsilon }\,}}}, \mu ^{{{\,\mathrm{\varepsilon }\,}}}) \in W^{1,\infty }_{loc}([0,T] \times \mathbb {R}^{2d}) \times C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{2d}))\) to system (3.10) such that
$$\begin{aligned} u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,v)=\inf _{\gamma \in \Gamma _{t}(x,v)} J^{{{\,\mathrm{\varepsilon }\,}}}_{t,T}(\gamma ) \end{aligned}$$
and for any \(t \in [0,T]\) the probability measure \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) is the image of \(\mu _{0}\) under the flow
$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{x}(s)=v(s)\\ \dot{v}(s)= -\frac{1}{{{\,\mathrm{\varepsilon }\,}}}D_{v}u^{{{\,\mathrm{\varepsilon }\,}}}(s,x(s),v(s)). \end{array}\right. } \end{aligned}$$
That is, \(u^{{{\,\mathrm{\varepsilon }\,}}}\) solves the Hamilton–Jacobi equation in the viscosity sense and \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}\) solves the continuity equation in the sense of distributions.
Theorem 3.5
(Main result 2). Assume (C1), (C2) and (A1), (A2). Let \((u^{{{\,\mathrm{\varepsilon }\,}}}, \mu ^{{{\,\mathrm{\varepsilon }\,}}})\) be a solution to (3.10). Then, there exist a sequence \(\{{{\,\mathrm{\varepsilon }\,}}_{k}\}_{k \in \mathbb {N}}\) with \({{\,\mathrm{\varepsilon }\,}}_{k} \downarrow 0\), as \(k \rightarrow \infty \), a function \(u^{0} \in W^{1,\infty }_{loc}([0,T] \times \mathbb {R}^{d})\) and a flow of probability measures \(\{\mu ^{0}_{t}\}_{t \in [0,T]} \in C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{2d}))\) such that for any \(R \ge 0\)
$$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{(t,x,v) \in [0,T] \times \overline{B}_{R} \times \overline{B}_{R}}|u^{{{\,\mathrm{\varepsilon }\,}}_{k}}(t,x,v)-u^{0}(t,x)| =0 \end{aligned}$$
and
$$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{t \in [0,T]} d_{1}(\mu ^{{{\,\mathrm{\varepsilon }\,}}_{k}}_{t}, \mu ^{0}_{t})=0. \end{aligned}$$
Moreover, we have that the pair \((u^{0}, \mu ^{0}) \in W^{1,\infty }_{loc}([0,T] \times \mathbb {R}^{d}) \times C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{2d}))\) is a solution of the MFG of control system (1.5),
that is: \(u^{0}\) solves the Hamilton–Jacobi equation in the viscosity sense, \(m^{0}_{t}=\pi _{1} \sharp \mu ^{0}_{t}\), for all \(t \in [0,T]\), is a solution of the continuity equation in the sense of distributions and the measure \(\mu ^0\) satisfies equality (iii) for any \(t \in [0,T]\).
4 Proof of Theorem 3.2
In order to prove Theorem 3.2, we proceed in steps, analyzing separately the behavior of the value function \(u^{{{\,\mathrm{\varepsilon }\,}}}\) and that of the flow of probability measures \(\{m^{{{\,\mathrm{\varepsilon }\,}}}_{t}\}_{t \in [0,T]}\). First, we show that \(u^{{{\,\mathrm{\varepsilon }\,}}}\) is equibounded and that, up to a subsequence, \(m^{{{\,\mathrm{\varepsilon }\,}}}\) converges to a flow of probability measures in \(C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\). Then, we address the convergence of the value function, up to a subsequence, to a solution of a suitable Hamilton–Jacobi equation and we study the limit of its minimizing trajectories. Consequently, we are able to characterize the limit flow of measures as a solution of a continuity equation coupled with the Hamilton–Jacobi equation previously constructed; together, they form the limit MFG system (3.9).
Lemma 4.1
Assume (M1)–(M3) and (BC1), (BC2). Then, we have that
for any \((t,x,v) \in [0,T] \times \mathbb {R}^{2d}\) and for any \({{\,\mathrm{\varepsilon }\,}}> 0\).
Proof
First, since \(u^{{{\,\mathrm{\varepsilon }\,}}}\) satisfies (3.7), from (3.5) and (BC) it follows that for any \((t,x,v) \in [0,T] \times \mathbb {R}^{2d}\) there holds
On the other hand, let us recall that \(u^{{{\,\mathrm{\varepsilon }\,}}}\) solves the Hamilton–Jacobi equation
Then, the function
is a supersolution to (4.1) for a suitable choice of the real constant \(C \ge 0\). Indeed, we have that
where the last inequality holds by Young’s inequality. Thus, taking \(C=5M_{0}\) by (BC) we obtain
So, we get the result by the comparison theorem [8, Theorem 2.12]. \(\square \)
Corollary 4.2
Assume (M1)–(M3) and (BC1), (BC2). Let \((t,x,v) \in [0,T] \times \mathbb {R}^{2d}\), and let \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) be a minimizer for \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,v)\). Then, there exists a constant \(Q_{1} \ge 0\) such that
where \(Q_{1}\) is independent of \({{\,\mathrm{\varepsilon }\,}}\), t, x and v.
Proof
On the one hand, from Lemma 4.1 we know that
On the other hand, let \((t,x,v) \in [0,T] \times \mathbb {R}^{2d}\) and let \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) be a minimizer for \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,v)\). Then, by (3.5) we have that
Therefore, combining the above inequalities we get
where \(Q_{1}\) depends only on \(M_{0}\), T and \(\Vert g(\cdot , m^{{{\,\mathrm{\varepsilon }\,}}}_{T})\Vert _{\infty , \mathbb {R}^{d}}\) which is bounded uniformly in \(m^{{{\,\mathrm{\varepsilon }\,}}}_{T}\). \(\square \)
Corollary 4.3
Assume (M1)–(M3) and (BC1), (BC2). Then, there exists a constant \(Q_{2} \ge 0\) such that for any \(s_{1}\), \(s_{2} \in [0,T]\) with \(s_{1} \le s_{2}\) there holds
where \(Q_{2}\) is independent of \({{\,\mathrm{\varepsilon }\,}}\).
Proof
We first recall that for any \(t \in [0,T]\) we know that \(m^{{{\,\mathrm{\varepsilon }\,}}}_{t}=\pi _{1} \sharp \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) where \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) is the image of \(\mu _{0}\) under the flow (3.8) whose space marginal we denote by \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}_{(x,v)}\) for \((x,v) \in \mathbb {R}^{2d}\).
Let \(s_{1}\), \(s_{2} \in [0,T]\) be such that \(s_{1} \le s_{2}\). Then, by (2.2) we have that
and thus, appealing to Corollary 4.2 and the Hölder inequality we obtain
So, since \(\mu _{0}\) has compact support we get the result setting
\(\square \)
We are now ready to prove that the flow of probability measures \(m^{{{\,\mathrm{\varepsilon }\,}}}\) converges, up to a subsequence. First, we recall that for any \(t \in [0,T]\) the measure \(m_{t}^{{{\,\mathrm{\varepsilon }\,}}}\) is the space marginal of \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) which is given by the push-forward of the initial distribution \(\mu _{0}\) by the optimal flow (3.8), that is
Theorem 4.4
Assume (M1)–(M3) and (BC1), (BC2). Then, the flow of measures \(\{m^{{{\,\mathrm{\varepsilon }\,}}}_{t}\}_{t \in [0,T]}\) is tight and there exists a sequence \(\{{{\,\mathrm{\varepsilon }\,}}_{k}\}_{k \in \mathbb {N}}\) such that \(m^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) converges to some probability measure \(m^{0}\) in \(C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\).
Proof
Since \(m_{t}^{{{\,\mathrm{\varepsilon }\,}}}=\pi _{1} \sharp \mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\), for any \(t \in [0,T]\), where \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t} \) is given by the push-forward of \(\mu _{0}\) under the flow (3.8), we know that
So, we are interested in estimating the curve \(\gamma _{(x,v)}^{{{\,\mathrm{\varepsilon }\,}}}\) for any (x, v), uniformly in \({{\,\mathrm{\varepsilon }\,}}>0\). To do so, from Corollary 4.2 we immediately deduce that
Hence, for any \(t \ge 0\) we have that
for some constant \(C_{0} \ge 0\). Thus, since \(\mu _{0}\) has compact support, we deduce that \(\{m^{{{\,\mathrm{\varepsilon }\,}}}_{t}\}_{t \in [0,T]}\) has second-order moments bounded uniformly in \({{\,\mathrm{\varepsilon }\,}}> 0\) and, consequently, that \(\{m^{{{\,\mathrm{\varepsilon }\,}}}_{t}\}_{t \in [0,T]}\) is tight. Therefore, since \(\mu _{0}\) has uniformly bounded support and, by Corollary 4.3, \(m^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) is equicontinuous in time, the Prokhorov Theorem and the Ascoli–Arzelà Theorem yield a sequence \(\{{{\,\mathrm{\varepsilon }\,}}_{k}\}_{k \in \mathbb {N}}\) and a measure \(m^{0} \in C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) such that \(m^{{{\,\mathrm{\varepsilon }\,}}_{k}} \rightarrow m^{0}\) in \(C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\).\(\square \)
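The tightness step can be made explicit. Assuming \(C_{0}\) is the uniform second-moment bound obtained above, the Chebyshev inequality gives, uniformly in \({{\,\mathrm{\varepsilon }\,}}\) and \(t\):

```latex
% Uniform second moments imply tightness (Chebyshev/Markov inequality):
\sup_{t \in [0,T]} m^{\varepsilon}_t\big(\mathbb{R}^d \setminus \overline{B}_R\big)
  \le \frac{1}{R^2} \sup_{t \in [0,T]} \int_{\mathbb{R}^d} |x|^2 \, m^{\varepsilon}_t(dx)
  \le \frac{C_0}{R^2} \xrightarrow[R \to \infty]{} 0,
% uniformly in eps > 0, which is precisely tightness of the family.
```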
Next, we turn to the convergence of the value function \(u^{{{\,\mathrm{\varepsilon }\,}}}\). Before proving it, we need preliminary estimates on the oscillation of the value function, first w.r.t. the velocity variable and then w.r.t. the time and space variables. In particular, we will show that the oscillation of \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,\cdot )\) vanishes as \({{\,\mathrm{\varepsilon }\,}}\downarrow 0\), which will allow us to conclude that the limit function does not depend on v.
Lemma 4.5
Assume (M1)–(M3) and (BC1), (BC2). Let \(R \ge 0\) and let \((x,v_{0})\), \((x,v) \in \mathbb {R}^{d} \times \overline{B}_{R}\). Then, there exists \(C_{R} \ge 0\) and a parametric curve \(\sigma : [0,\sqrt{{{\,\mathrm{\varepsilon }\,}}}] \rightarrow \mathbb {R}^{d}\) such that
and
where \(C_{R}\) is independent of \({{\,\mathrm{\varepsilon }\,}}\), x, v and \(v_{0}\).
Proof
Let \(R \ge 0\) and let \((x,v_{0})\), \((x,v) \in \mathbb {R}^{d} \times \overline{B}_{R}\). Define the curve \(\sigma :[0,\sqrt{{{\,\mathrm{\varepsilon }\,}}}] \rightarrow \mathbb {R}^{d}\) by
with A, \(B \in \mathbb {R}\) satisfying the following conditions
Thus, we obtain
Hence, we get
for some positive constant \(\widehat{C}\) and the proof is thus complete. \(\square \)
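The explicit formula for \(\sigma \) is omitted above. A natural candidate, which we record as an assumption rather than the paper's exact choice, is a cubic interpolation on \([0,\sqrt{{{\,\mathrm{\varepsilon }\,}}}]\):

```latex
% Hedged reconstruction: a cubic starting at (x, v_0) and reaching
% velocity v at time sqrt(eps), with A, B in R^d fixed by the
% endpoint conditions (here we also impose \ddot\sigma(\sqrt\varepsilon)=0):
\sigma(s) = x + s\,v_0 + \frac{s^2}{2}\,A + \frac{s^3}{6}\,B,
\qquad
\dot\sigma(0) = v_0, \quad \dot\sigma(\sqrt{\varepsilon}) = v,
% which, with the extra condition \ddot\sigma(\sqrt\varepsilon)=0, gives
A = \frac{2(v - v_0)}{\sqrt{\varepsilon}}, \qquad
B = -\frac{2(v - v_0)}{\varepsilon}.
% Then \ddot\sigma(s) = A + sB satisfies |\ddot\sigma(s)| \le |A|, so
\varepsilon \int_0^{\sqrt{\varepsilon}} |\ddot\sigma(s)|^2 \, ds
  \le 4\sqrt{\varepsilon}\,|v - v_0|^2,
% a quantity of order sqrt(eps), consistent with the role of Lemma 4.5.
```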
Lemma 4.6
Assume (M1)–(M3) and (BC1), (BC2). Let \(R \ge 0\), let \(T > 1\) and \({{\,\mathrm{\varepsilon }\,}}> 0\). Then, there exists \(\widehat{C}_{R}({{\,\mathrm{\varepsilon }\,}}) \ge 0\) such that for any \(t \in [0,T]\), any \(x \in \mathbb {R}^{d}\), and any v, w in \(\overline{B}_{R}\) there holds
and \(\widehat{C}_{R}({{\,\mathrm{\varepsilon }\,}}) \rightarrow 0\) as \({{\,\mathrm{\varepsilon }\,}}\downarrow 0\).
Proof
Fix \(R \ge 0\) and take (x, v), \((x,w) \in \mathbb {R}^{d} \times \overline{B}_{R}\). Let \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) be a minimizer for \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,v)\) and define the curve
where \(\sigma : [0,\sqrt{{{\,\mathrm{\varepsilon }\,}}}] \rightarrow \mathbb {R}^{d}\) connects, in the sense of Lemma 4.5, (x, w) with (x, v). Then, we obtain
Now, from Lemma 4.5 we know that
and, moreover, from the optimality of \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) we get
Then, as observed before, from Corollary 4.2 we have that
and also that the curve \(\sigma \) is bounded. Hence, by (M3) and Corollary 4.3 we deduce that there exists \(P({{\,\mathrm{\varepsilon }\,}}) \ge 0\), with \(P({{\,\mathrm{\varepsilon }\,}}) \rightarrow 0\) as \({{\,\mathrm{\varepsilon }\,}}\downarrow 0\), such that
where we have used that the modulus \(\theta \) in (M3) is bounded, thanks to the boundedness of \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) and \(\sigma \). Therefore, combining (4.2), (4.3) and (4.4) we get the result. \(\square \)
Proposition 4.7
Assume (M1)–(M3) and (BC1), (BC2). Then, for any \(R \ge 0\) there exists a modulus \(\omega _{R}:\mathbb {R}_{+} \rightarrow \mathbb {R}_{+}\) and a constant \(C_{1} \ge 0\), independent of R, such that for any \({{\,\mathrm{\varepsilon }\,}}> 0\) the following holds:
Proof
We begin by proving (4.6). Let \((t,x,v) \in [0,T] \times \mathbb {R}^{d} \times \mathbb {R}^{d}\) and let \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) be a minimizer for \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,v)\). Then, from (3.2) we get
Hence, Corollary 4.2 yields the conclusion.
Next, we proceed to show (4.5). Let \(R \ge 0\) and take \((t,x,v) \in [0,T] \times \overline{B}_{R} \times \overline{B}_{R}\). Let \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) be a minimizer for \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,v)\) and let \(h \in [0,T-t]\). Then, we have that
where the last inequality holds by (M3). Hence, from Corollary 4.2 we know that
and thus, \(\theta (\cdot )\) turns out to be bounded. Therefore, appealing to Corollary 4.3 we obtain
On the other hand, let \(R \ge 0\) and let \((t,x,v) \in [0,T] \times \overline{B}_{R}\times \overline{B}_{R}\). For \(h \in [0,T-t]\), define the curve \(\gamma :[t,t+h] \rightarrow \mathbb {R}^{d}\) by \(\gamma (s)=x+(s-t)v\). Then, by the Dynamic Programming Principle we deduce that
where we applied (3.5) and (4.6) to get the last inequality. Therefore, combining (4.7) and (4.8) the proof is complete. \(\square \)
Remark 4.8
Next, we study the behavior of the value function \(u^{{{\,\mathrm{\varepsilon }\,}}}\) as \({{\,\mathrm{\varepsilon }\,}}\rightarrow 0\). Before doing so, we recall the following argument, needed to obtain uniform convergence to a function which does not depend on v. Assume that there exists a nonnegative function \(\Theta (\delta _{0}, {{\,\mathrm{\varepsilon }\,}}_{0}, R_{0})\) such that
and assume that for any \(|t_{1}-t_{2}| + |x_{1} - x_{2}| \le \delta _{0}\), any \({{\,\mathrm{\varepsilon }\,}}\le {{\,\mathrm{\varepsilon }\,}}_{0}\) and any \(|x_{i}|\), \(|v_{i}| \le R_{0}\) (\(i=1,2\)) there holds
Then, if \(u^{{{\,\mathrm{\varepsilon }\,}}}\) converges point-wise, it converges locally uniformly and the limit function does not depend on v. \(\square \)
Let \(m^{0} \in C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) be the flow of measures obtained in Theorem 4.4 as limit of the flow \(m^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) in \(C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) for some subsequence \({{\,\mathrm{\varepsilon }\,}}_{k} \downarrow 0\). Define the function \(u^{0}: [0,T] \times \mathbb {R}^{2d} \rightarrow \mathbb {R}\) by
We will prove now that for the subsequence \({{\,\mathrm{\varepsilon }\,}}_{k}\) the sequence of value functions \(u^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) locally uniformly converges to \(u^{0}\).
Theorem 4.9
Assume (M1)–(M3) and (BC1), (BC2). Then, there exists a subsequence \({{\,\mathrm{\varepsilon }\,}}_{k} \downarrow 0\) such that \(u^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) locally uniformly converges to \(u^{0}\).
Proof
We proceed to show first the point-wise convergence of \(u^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) to \(u^{0}\), for some subsequence \({{\,\mathrm{\varepsilon }\,}}_{k} \downarrow 0\), and then, using Remark 4.8, i.e., constructing such a modulus \(\Theta \), we deduce that the convergence is locally uniform.
From Theorem 4.4, let \({{\,\mathrm{\varepsilon }\,}}_{k}\) be the subsequence such that \(m^{{{\,\mathrm{\varepsilon }\,}}_{k}} \rightarrow m^{0}\) in \(C([0,T];\mathcal {P}_{1}(\mathbb {R}^{d}))\) as \(k \rightarrow \infty \). Let \(R \ge 0\), let \((t,x,v) \in [0,T] \times \mathbb {R}^{d} \times \overline{B}_{R}\) and let \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) be a minimizer for \(u^{{{\,\mathrm{\varepsilon }\,}}_{k}}(t,x,v)\). Then, we have that
where the last inequality holds by (M1) and the convergence of \(m^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) in \(C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\).
On the other hand, let \(R \ge 0\) and take \((t,x,v) \in [0,T] \times \mathbb {R}^{d} \times \overline{B}_{R}\). Let \(\gamma ^{0} \in \Gamma _{t}(x)\) be a solution of
Next, we distinguish two cases: first, when \({\dot{\gamma }}^{0}(t)=v\) and then when \({\dot{\gamma }}^{0}(t)\not =v\). Indeed, if \({\dot{\gamma }}^{0}(t)=v\), by the Euler equation and the \(C^{2}\)-regularity of \(L_{0}\), we have that \(\gamma ^{0} \in C^{2}([0,T])\). Hence, we can use \(\gamma ^{0}\) as a competitor for \(u^{{{\,\mathrm{\varepsilon }\,}}_{k}}(t,x,v)\) and we get
where the last inequality follows again from the convergence of \(m^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) in \(C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\). If this is not the case, i.e., \({\dot{\gamma }}^{0}(t)\not =v\), from Lemma 4.6 we deduce that
Thus, in order to conclude it is enough to estimate \(u^{{{\,\mathrm{\varepsilon }\,}}_{k}}(t,x,{\dot{\gamma }}^{0}(t))\) as in (4.10). Therefore, we obtain
which implies that \(u^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) point-wise converges to \(u^{0}\).
Finally, in order to conclude we need to show that the convergence is locally uniform. From (4.5), (4.6) and Lemma 4.6 we have that for any \(R \ge 0\) and any \((t_{1}, x_{1}, v_{1})\), \((t_{2}, x_{2}, v_{2}) \in [0,T] \times \overline{B}_{R} \times \overline{B}_{R}\) there holds
Therefore, setting
by Remark 4.8 we deduce that the convergence is locally uniform and the proof is thus complete. \(\square \)
After proving the convergence of \(u^{{{\,\mathrm{\varepsilon }\,}}}\), we go back to the analysis of the flow of measures and, in particular, we characterize it in terms of the limit function \(u^{0}\). In order to do so, we study the convergence of minimizers for \(u^{{{\,\mathrm{\varepsilon }\,}}}\) and, appealing to such a result, we show that \(m^{0} \in C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) solves a continuity equation with vector field \(D_{p}H_{0}(x,D_{x}u^{0})\), in the sense of distributions.
Proposition 4.10
Assume (M1)–(M3) and (BC1), (BC2). Let \((t, x, v) \in [0,T] \times \mathbb {R}^{2d}\) be such that \(u^{0}\) is differentiable at (t, x) and let \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) be a minimizer for \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,v)\). Then, \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) uniformly converges to a curve \(\gamma ^{0} \in \text {AC}([0,T]; \mathbb {R}^{d})\) and \(\gamma ^{0}\) is the unique minimizer for \(u^{0}(t,x)\) in (4.9).
Proof
Let us start by proving that \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) uniformly converges, up to a subsequence. By Corollary 4.2, we know that
Thus, for any \(s \in [t,T]\), by Hölder inequality we have that
Therefore, \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) is bounded in \(H^{1}(0,T; \mathbb {R}^{d})\), which implies, by the Ascoli–Arzelà Theorem, that there exist a sequence \(\{{{\,\mathrm{\varepsilon }\,}}_{k}\}_{k \in \mathbb {N}}\) and a curve \(\gamma ^{0} \in \text {AC}([0,T]; \mathbb {R}^{d})\) such that \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) converges uniformly to \(\gamma ^{0}\).
We show now that such a limit \(\gamma ^{0}\) is a minimizer for \(u^{0}(t,x)\). First, we observe that
Then, as observed at the beginning of this proof, \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) is uniformly bounded in \(H^{1}(0,T)\). So, by the lower semicontinuity of L and Theorem 4.4, we deduce that
Moreover, for any \(R \ge 0\) taking \((t,x,v) \in [0,T] \times \mathbb {R}^{d} \times \overline{B}_{R}\), from Theorem 4.9 we obtain
and we recall that
Hence, we get
Therefore, passing to the limit as \({{\,\mathrm{\varepsilon }\,}}\downarrow 0\) from (4.11) we obtain
which proves that \(\gamma ^{0}\) is a minimizer for \(u^{0}(t,x)\). Since \(u^{0}\) is differentiable at \((t,x) \in [0,T] \times \mathbb {R}^{d}\), there exists a unique minimizing trajectory and thus the uniform convergence of \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) holds for the whole sequence. \(\square \)
Remark 4.11
Since \(u^{0}\) is locally Lipschitz continuous w.r.t. time and Lipschitz continuous w.r.t. space, we have that Proposition 4.10 holds for a.e. \((t,x) \in [0,T] \times \mathbb {R}^{d}\).
Let \(u^{0}\) be as in (4.9) and let \((\gamma ^{0}_{t}(\cdot ), {\dot{\gamma }}^{0}_{t}(\cdot ))\) be the flow of the Euler–Lagrange equations associated with the minimization problem in (4.9). Note that, since \(u^{0}\) is Lipschitz continuous and \(\mu _{0}\) is absolutely continuous w.r.t. the Lebesgue measure, on \({{\,\textrm{spt}\,}}(\mu _{0})\) the curve \((\gamma ^{0}_{t}(\cdot ), {\dot{\gamma }}^{0}_{t}(\cdot ))\) is a minimizer for \(u^{0}\). We also recall that the measure \(\mu ^{{{\,\mathrm{\varepsilon }\,}}}\) is the image of \(\mu _{0}\) under the flow (3.8), which, as observed in Remark 3.1, is optimal for \(u^{{{\,\mathrm{\varepsilon }\,}}}(0,x,v)\) for a.e. \((x,v) \in \mathbb {R}^{2d}\); thus, for any function \(\varphi \in C^{\infty }_{c}(\mathbb {R}^{d})\) the measure \(m^{{{\,\mathrm{\varepsilon }\,}}}_{t}\) is given by
We finally recall that by assumption \(\mu _{0}\) is absolutely continuous w.r.t. Lebesgue measure.
Corollary 4.12
Assume (M1)–(M3) and (BC1), (BC2). Then, we have that
Moreover, \(m^{0} \in C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) solves
in the sense of distributions.
Proof
From Theorem 4.4, let \({{\,\mathrm{\varepsilon }\,}}_{k} \downarrow 0\) be such that \(m^{{{\,\mathrm{\varepsilon }\,}}_{k}} \rightarrow m^{0}\) in \(C([0,T];\mathcal {P}_{1}(\mathbb {R}^{d}))\). Then, since \(\mu _{0}\) is absolutely continuous w.r.t. the Lebesgue measure, by Proposition 4.10 we have that
Therefore, from (4.12), for \({{\,\mathrm{\varepsilon }\,}}={{\,\mathrm{\varepsilon }\,}}_{k}\), as \(k \rightarrow \infty \), we get
which proves (4.13). Moreover, again by Proposition 4.10, \(\gamma ^{0}_{t}\) is a minimizer for \(u^{0}(0,x)\), since it is the limit of \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}_{(x,v)}\), which is optimal for \(u^{{{\,\mathrm{\varepsilon }\,}}}(0,x,v)\), and we take (x, v) in a subset of full measure w.r.t. \(\mu _{0}\). Therefore, from the optimality of \(\gamma ^{0}\), we get
Hence, for any \(\psi \in C^{\infty }_{c}([0,T) \times \mathbb {R}^{d})\) we obtain
and integrating, in time, over [0, T] we get the result. \(\square \)
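For the reader's convenience, the distributional formulation established here can be written out; this is the standard weak form of the continuity equation with the drift \(D_{p}H_{0}(x,D_{x}u^{0})\) identified above, taking \(\pi _{1}\sharp \mu _{0}\) as initial datum:

```latex
% The limit continuity equation and its weak formulation:
\partial_t m^0 + \mathrm{div}\big( m^0 \, D_p H_0(x, D_x u^0) \big) = 0
  \quad \text{in } (0,T) \times \mathbb{R}^d,
\qquad m^0_0 = \pi_1 \sharp \mu_0,
% i.e., for every test function \psi \in C_c^\infty([0,T) \times \mathbb{R}^d):
\int_0^T \!\! \int_{\mathbb{R}^d}
  \Big[ \partial_t \psi + \big\langle D_x \psi,\, D_p H_0(x, D_x u^0) \big\rangle \Big]
  \, m^0_t(dx)\, dt
  + \int_{\mathbb{R}^d} \psi(0,x)\, \big(\pi_1 \sharp \mu_0\big)(dx) = 0 .
```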
We are now ready to prove the main result.
Proof of Theorem 3.2
Let \(\{{{\,\mathrm{\varepsilon }\,}}_{k}\}_{k \in \mathbb {N}}\) be such that \(m^{{{\,\mathrm{\varepsilon }\,}}_{k}} \rightarrow m^{0}\) in \(C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) and \(u^{{{\,\mathrm{\varepsilon }\,}}_{k}} \rightarrow u^{0}\) locally uniformly on \([0,T] \times \mathbb {R}^{2d}\). Then, appealing to Theorem 4.9 and Corollary 4.12 we deduce that \((u^{0}, m^{0})\) is a solution to the MFG system
which completes the proof. \(\square \)
5 Proof of Theorem 3.5
We recall that, in this section, we consider the MFG system
So, the variational problem associated with such a system is given by
where
with \(J^{{{\,\mathrm{\varepsilon }\,}}}_{t,T}(\gamma )=+\infty \) if \(\gamma \not \in H^{2}(0,T;\mathbb {R}^{d})\).
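Since the displayed functional is omitted, we record a hedged sketch of its structure. The running cost \(\ell \) and terminal cost \(g\) below are placeholders for the precise data fixed in Section 3 (in particular, the terminal cost may also depend on the final measure); only the vanishing acceleration penalty is the point here:

```latex
% Hedged sketch of the acceleration-penalized variational problem:
u^{\varepsilon}(t,x,v)
  = \inf_{\substack{\gamma \in H^2(t,T;\mathbb{R}^d) \\ \gamma(t)=x,\ \dot\gamma(t)=v}}
    J^{\varepsilon}_{t,T}(\gamma),
\qquad
J^{\varepsilon}_{t,T}(\gamma)
  = \int_t^T \Big[ \frac{\varepsilon}{2}\,|\ddot\gamma(s)|^2
      + \ell\big(\gamma(s), \dot\gamma(s), \mu^{\varepsilon}_s\big) \Big]\, ds
  + g\big(\gamma(T)\big),
% with J^eps = +infty outside H^2, as stated in the text.
```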
From the results of the previous section and the assumptions (C1)–(C3) on \(L_{0}: \mathbb {R}^{d} \times \mathcal {P}(\mathbb {R}^{2d}) \rightarrow \mathbb {R}\) given above, we deduce that it only remains to study the tightness of the flow of measures \(\{\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\}_{t \in [0,T]}\) w.r.t. the second marginal. This can be done by a finer analysis of the Euler–Lagrange flow.
Lemma 5.1
Let \((x,v) \in \mathbb {R}^{2d}\) and let \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) be a solution to the variational problem associated with \(u^{{{\,\mathrm{\varepsilon }\,}}}(0,x,v)\). Then, \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) is a solution of the Euler–Lagrange equation
with boundary condition
Proposition 5.2
Assume (C1)–(C3) and (A1), (A2). Let \((x,v) \in \mathbb {R}^{2d}\) and let \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) be a solution to the variational problem associated with \(u^{{{\,\mathrm{\varepsilon }\,}}}(0,x,v)\). Then, there exists a constant \(C \ge 0\) such that for any \(\delta \in (0, 1)\) the following holds
Proof
Fix \((x, v) \in \mathbb {R}^{2d}\) and a solution \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) to the problem associated with \(u^{{{\,\mathrm{\varepsilon }\,}}}(0,x,v)\). In the following, for simplicity of notation, we write \(\gamma \) for \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) and we omit the scalar product notation \(\langle \cdot , \cdot \rangle \).
We begin by multiplying the Euler–Lagrange equation
by \(s^{2}\ddot{\gamma }(s)\) and integrating by parts. We obtain
which reduces to
by using the boundary condition of the Euler–Lagrange equation. From Young's inequality, we get
which yields
So, appealing to (C1) we have that there exists a constant \(C(T) \ge 0\) such that
and, moreover, by the non-negativity of the term \(s^{2}|\gamma ^{(iii)}(s)|^{2}\) we finally get
Integrating by parts again the term \( \int _{0}^{T} {{\,\mathrm{\varepsilon }\,}}\gamma ^{(iii)}(s) s \ddot{\gamma }(s)\ \textrm{d}s \), we obtain
which implies that
and so
Hence, combining (5.1) and (5.2), we get
which implies the result. \(\square \)
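The integration by parts invoked in the proof is elementary but worth recording. The following sketch assumes \(\gamma \) is smooth enough (as guaranteed by the Euler–Lagrange equation); the boundary treatment at \(s=T\) is an assumption on the transversality condition, not a statement from the text:

```latex
% Since  d/ds ( s |\ddot\gamma(s)|^2 / 2 )
%      = |\ddot\gamma(s)|^2 / 2 + s \langle \gamma^{(iii)}(s), \ddot\gamma(s) \rangle,
% integrating over [0, T] gives
\int_0^T \varepsilon\, s\, \big\langle \gamma^{(iii)}(s), \ddot\gamma(s) \big\rangle\, ds
  = \frac{\varepsilon}{2}\, T\, |\ddot\gamma(T)|^2
    - \frac{\varepsilon}{2} \int_0^T |\ddot\gamma(s)|^2\, ds ,
% and the boundary term vanishes if the natural (transversality)
% condition \ddot\gamma(T) = 0 holds for the free-endpoint problem.
```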
Theorem 5.3
Assume (C1)–(C3) and (A1), (A2). The sequence \(\{\mu ^{{{\,\mathrm{\varepsilon }\,}}, \delta }_{t}\}_{\delta > 0}\) is tight, and the sequence \(\{\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\}_{{{\,\mathrm{\varepsilon }\,}}> 0}\) is relatively compact in \(C^{0}([0, T]; \mathcal {P}_{1}(\mathbb {R}^{2d}))\).
Proof
The tightness of the sequence \(\{\mu ^{{{\,\mathrm{\varepsilon }\,}}, \delta }_{t}\}_{\delta > 0}\) follows from Corollary 4.2 and Proposition 5.2.
Next, we show that \(\{\mu ^{{{\,\mathrm{\varepsilon }\,}}}_{t}\}_{{{\,\mathrm{\varepsilon }\,}}> 0}\) is relatively compact. Indeed, still from Corollary 4.3 and Proposition 5.2 we have that for any \(\delta > 0\) and any \(\delta \le s \le t \le T\)
which completes the proof appealing to Prokhorov Theorem and Ascoli–Arzela Theorem. \(\square \)
Remark 5.4
Note that, following the above reasoning, one easily deduces that the main result of this section is not uniform w.r.t. T.
Now, by using techniques similar to those of Theorem 4.9 and Proposition 4.10, one can prove the following.
Proposition 5.5
Assume (C1)–(C3) and (A1), (A2). Then, we have that
(i) there exists a subsequence \({{\,\mathrm{\varepsilon }\,}}_{k} \downarrow 0\) such that \(u^{{{\,\mathrm{\varepsilon }\,}}_{k}}\) locally uniformly converges to \(u^{0}\);
(ii) let \((t,x,v) \in [0,T] \times \mathbb {R}^{2d}\) be such that \(u^{0}\) is differentiable at (t, x) and let \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) be a minimizer of \(u^{{{\,\mathrm{\varepsilon }\,}}}(t,x,v)\). Then, \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}\) uniformly converges to a curve \(\gamma ^{0}\) such that \({\dot{\gamma }}^{{{\,\mathrm{\varepsilon }\,}}} \rightarrow {\dot{\gamma }}^{0}\) as \({{\,\mathrm{\varepsilon }\,}}\downarrow 0\), and \(\gamma ^{0}\) is the unique minimizer for \(u^{0}\) in (4.9).
Proposition 5.6
Assume (C1)–(C3) and (A1), (A2). Then, we have that
Moreover, \(m^{0} \in C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{d}))\) solves
in the sense of distributions.
Proof
From Proposition 5.5, let \({{\,\mathrm{\varepsilon }\,}}_{k} \downarrow 0\) be such that \(\mu ^{{{\,\mathrm{\varepsilon }\,}}_{k}} \rightarrow \mu ^{0}\) in \(C([0,T];\mathcal {P}_{1}(\mathbb {R}^{2d}))\). Then, since \(\mu _{0}\) is absolutely continuous w.r.t. the Lebesgue measure, by Proposition 5.5 we have that
Therefore, from (4.12), for \({{\,\mathrm{\varepsilon }\,}}={{\,\mathrm{\varepsilon }\,}}_{k}\), as \(k \rightarrow \infty \) we get
Moreover, again by Proposition 5.5, \(\gamma ^{0}_{t}\) is a minimizer for \(u^{0}(0,x)\), since it is the limit of \(\gamma ^{{{\,\mathrm{\varepsilon }\,}}}_{(x,v)}\), which is optimal for \(u^{{{\,\mathrm{\varepsilon }\,}}}(0,x,v)\), and we take (x, v) in a subset of full measure w.r.t. \(\mu _{0}\). Therefore, from the optimality of \(\gamma ^{0}\), we get
Hence,
Then, for any \(\psi \in C^{\infty }_{c}([0,T) \times \mathbb {R}^{d})\) we obtain
and integrating, in time, over [0, T] we get the result. \(\square \)
Proof of Theorem 3.5
Let \(\{{{\,\mathrm{\varepsilon }\,}}_{k}\}_{k \in \mathbb {N}}\) be such that \(\mu ^{{{\,\mathrm{\varepsilon }\,}}_{k}} \rightarrow \mu ^{0}\) in \(C([0,T]; \mathcal {P}_{1}(\mathbb {R}^{2d}))\) and \(u^{{{\,\mathrm{\varepsilon }\,}}_{k}} \rightarrow u^{0}\) locally uniformly on \([0,T] \times \mathbb {R}^{2d}\). Then, appealing to Proposition 5.5 and Proposition 5.6, we deduce that \((u^{0}, \mu ^{0})\) is a solution to the MFG system
which completes the proof. \(\square \)
References
Y. Achdou, P. Mannucci, C. Marchi, and N. Tchou. Deterministic mean field games with control on the acceleration. NoDEA, Nonlinear Differ. Equ. Appl., 27(3):32, 2020. Id/No 33.
O. Alvarez and M. Bardi. Ergodic problems in differential games. In Advances in dynamic game theory. Numerical methods, algorithms, and applications to ecology and economics. Most of the papers based on the presentations at the 11th international symposium on dynamics games and application, Tucson, AZ, USA, December 2004, pages 131–152. Boston, MA: Birkhäuser, 2007.
O. Alvarez and M. Bardi. Ergodicity, stabilization, and singular perturbations for Bellman-Isaacs equations, volume 960. Providence, RI: American Mathematical Society (AMS), 2010.
L. Ambrosio, N. Gigli, and G. Savaré. Gradient flows in metric spaces and in the space of probability measures. Basel: Birkhäuser, 2005.
Z. Artstein. Invariant measures of differential inclusions applied to singular perturbations. J. Differ. Equations, 152(2):289–307, 1999.
Z. Artstein and V. Gaitsgory. The value function of singularly perturbed control systems. Appl. Math. Optim., 41(3):425–445, 2000.
Z. Artstein and A. Vigodner. Singularly perturbed ordinary differential equations with dynamic limits. Proc. R. Soc. Edinb., Sect. A, Math., 126(3):541–569, 1996.
M. Bardi and I. Capuzzo-Dolcetta. Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations. Boston, MA: Birkhäuser, 1997.
M. Bardi and P. Cardaliaguet. Convergence of some mean field games systems to aggregation and flocking models. Nonlinear Analysis, 204:112199, mar 2021.
G. Barles and J.-M. Roquejoffre. Ergodic type problems and large time behaviour of unbounded solutions of Hamilton-Jacobi equations. Commun. Partial Differ. Equations, 31(8):1209–1225, 2006.
A. Bensoussan, J. Frehse, and P. Yam. Mean field games and mean field type control theory. New York, NY: Springer, 2013.
P. Cannarsa, W. Cheng, C. Mendico, and K. Wang. Long-time behavior of first-order mean field games on Euclidean space. Dyn. Games Appl., 10(2):361–390, 2020.
P. Cannarsa, W. Cheng, C. Mendico, and K. Wang. Weak kam approach to first-order mean field games with state constraints. Journal of Dynamics and Differential Equations, 2021.
P. Cannarsa and C. Mendico. Mild and weak solutions of mean field game problems for linear control systems. Minimax Theory Appl., 5(2):221–250, 2020.
P. Cardaliaguet. Notes on mean field games (from P.-L. Lions' lectures at Collège de France). Unpublished, 2012.
P. Cardaliaguet. Long time average of first order mean field games and weak KAM theory. Dyn. Games Appl., 3(4):473–488, 2013.
P. Cardaliaguet and C. Mendico. Ergodic behavior of control and mean field games problems depending on acceleration. Nonlinear Anal., Theory Methods Appl., Ser. A, Theory Methods, 203:41, 2021. Id/No 112185.
R. Carmona and F. Delarue. Probabilistic theory of mean field games with applications I. Mean field FBSDEs, control, and games, volume 83. Cham: Springer, 2018.
A. Cesaroni, N. Dirr, and C. Marchi. Homogenization of a mean field game system in the small noise limit. SIAM J. Math. Anal., 48(4):2701–2729, 2016.
E. Cristiani, B. Piccoli, and A. Tosin. Multiscale modeling of granular flows with application to crowd dynamics. Multiscale Model. Simul., 9(1):155–182, 2011.
E. Cristiani, B. Piccoli, and A. Tosin. Multiscale modeling of pedestrian dynamics. Cham: Springer, 2014.
A. Fathi. Weak KAM Theorem and Lagrangian Dynamics. unpublished.
A. Fathi and E. Maderna. Weak KAM theorem on non compact manifolds. NoDEA, Nonlinear Differ. Equ. Appl., 14(1-2):1–27, 2007.
D. A. Gomes, E. A. Pimentel, and V. Voskanyan. Regularity theory for mean-field game systems. Cham: Springer; Rio de Janeiro: Sociedade Brasileira de Matemática Aplicada e Computacional (SBMAC), 2016.
M. Huang, R. P. Malhamé, and P. E. Caines. Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst., 6(3):221–252, 2006.
M. Huang, R. P. Malhamé, and P. E. Caines. Large-population cost-coupled LQG problems with nonuniform agents: individual-mass behavior and decentralized \(\epsilon \)-Nash equilibria. IEEE Trans. Autom. Control, 52(9):1560–1571, 2007.
H. Ishii. Homogenization of the Cauchy problem for Hamilton–Jacobi equations. In Stochastic analysis, control, optimization and applications. A volume in honor of Wendell H. Fleming, on the occasion of his 70th birthday, pages 305–324, 1999.
A. J. Bernoff and C. M. Topaz. Nonlocal aggregation models: a primer of swarm equilibria. SIAM Rev., 55(4):709–747, 2013.
D. V. Khlopin. Uniform Tauberian theorem in differential games. Mat. Teor. Igr Prilozh., 7(1):92–120, 2015.
J.-M. Lasry and P.-L. Lions. Jeux à champ moyen. I: Le cas stationnaire. C. R., Math., Acad. Sci. Paris, 343(9):619–625, 2006.
J.-M. Lasry and P.-L. Lions. Jeux à champ moyen. II: Horizon fini et contrôle optimal. C. R., Math., Acad. Sci. Paris, 343(10):679–684, 2006.
J.-M. Lasry and P.-L. Lions. Mean field games. Jpn. J. Math. (3), 2(1):229–260, 2007.
P.-L. Lions, G. Papanicolaou, and S. R. S. Varadhan. Homogenization of Hamilton–Jacobi equations. Unpublished.
P.-L. Lions and P. E. Souganidis. Homogenization of the backward-forward mean field games systems in periodic environments. Atti Accad. Naz. Lincei, Cl. Sci. Fis. Mat. Nat., IX. Ser., Rend. Lincei, Mat. Appl., 31(4):733–755, 2020.
M. Oliu-Barton and G. Vigeral. A uniform Tauberian theorem in optimal control. In Advances in dynamic games. Theory, applications, and numerical methods for differential and stochastic games. Most papers based on the presentations at the 14th international symposium, Banff, Canada, June 2010, pages 199–215. Boston, MA: Birkhäuser, 2012.
C. M. Topaz, A. L. Bertozzi, and M. A. Lewis. A nonlocal continuum model for biological aggregation. Bull. Math. Biol., 68(7):1601–1623, 2006.
C. Villani. Topics in optimal transportation, volume 58. Providence, RI: American Mathematical Society (AMS), 2003.
Acknowledgements
Cristian Mendico was partly supported by Istituto Nazionale di Alta Matematica (GNAMPA 2020 Research Projects) and by the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Roma Tor Vergata, CUP E83C23000330006. The author would like to thank Pierre Cardaliaguet for his fruitful comments and his careful reading of the manuscript.
Funding
Open access funding provided by Università degli Studi di Roma Tor Vergata within the CRUI-CARE Agreement.
Ethics declarations
Conflict of interest
There are no conflicts of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Mendico, C. A singular perturbation problem for mean field games of acceleration: application to mean field games of control. J. Evol. Equ. 23, 56 (2023). https://doi.org/10.1007/s00028-023-00905-y