Abstract
We study the asymptotic organization among many optimizing individuals interacting in a suitable “moderate” way. We justify this limiting game by proving that its solution provides approximate Nash equilibria for large but finite player games. This proof depends upon the derivation of a law of large numbers for the empirical processes in the limit as the number of players tends to infinity. Because it is of independent interest, we prove this result in full detail. We characterize the solutions of the limiting game via a verification argument.
1 Introduction
The theory of Mean Field Games (MFGs, henceforth) began with the pioneering works of Lasry and Lions [17] and Huang et al. [13] to describe the asymptotic organization among a large population of optimizing individuals interacting with each other in a mean-field way and subject to constraints of economic or energetic type. The mean-field interaction makes it possible to reduce the analysis to a control problem for one single representative player, interacting with, and evolving in, the environment created by the aggregation of the other individuals. Intuitively, the system’s symmetries will force the players to obey a form of law of large numbers and satisfy a propagation of chaos phenomenon as the size of the population grows. The literature on MFGs is rapidly growing and the application of MFG theory is catching on in areas as diverse as Economics, Biology, Physics, and Machine Learning; hence, it is impossible to give an exhaustive account of the activity on the topic. For this reason, we refer the reader to the lecture notes by Cardaliaguet [3] and the two-volume monograph by Carmona et al. [6] for a comprehensive presentation of the MFG theory and its applications; the first reference presents the theory from an analytic perspective, whereas the second does so from a probabilistic point of view.
However, in many practical situations (e.g., in evacuation planning and crowd management at mass gatherings), it stands to reason that a single person interacts only with the few people in the surrounding environment, i.e., each individual has her/his own space. A possible mathematical way to describe this type of interaction is through an appropriate rescaling of a given reference function V, where V is a sufficiently regular probability density function; see, e.g., Oelschläger [21] and Morale et al. [19]. If x and y denote the positions of two individuals (out of a population of N) in a d-dimensional space, then their interaction can be modelled by:
where
The parameter \(\beta \in (0,1)\) describes how V is rescaled with the total number N of individuals and expresses the so-called moderate interaction among the individuals; see Oelschläger [21]. On the other hand, \(\beta = 0\) expresses an interaction of mean-field type, whereas \(\beta = 1\) generates the so-called nearest-neighbour interaction. This paper aims to analyze the asymptotic organization among many optimizing individuals moderately interacting with each other. To the best of our knowledge, the study of this type of asymptotic organization has been performed only in Aurell and Djehiche [1] and Cardaliaguet [4]. In the former work, the authors introduced models for crowd motion, although in a more simplified setting: they account for the moderate interaction among the individuals in the cost functional only, although they consider that the position of each pedestrian (in a crowd of N pedestrians) belongs to \(\mathbb {R}^{d}\). Also, in Cardaliaguet [4] only the payoff of a player depends in an increasingly singular way on the players which are very close to her/him. In addition, to avoid issues related to boundary conditions or problems at infinity, in the latter work data are assumed periodic in space. The fact that data are assumed periodic in space and (mostly) that the moderate interaction enters only in the cost functional has consequences for proving the existence and uniqueness of solutions of the Partial Differential Equation (PDE) MFG system associated with our model; see the discussion below in the introduction and Sect. 4.
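To fix ideas, the rescaling described above (stated precisely in hypothesis (H3), Eq. (2.1)) can be sketched numerically. The triangular reference density V below is a hypothetical choice, and we take d = 1:

```python
import numpy as np

# Sketch of the moderate rescaling V^N(x) = N^beta * V(N^(beta/d) * x) in d = 1.
# The reference density V (a triangular bump) is a hypothetical choice.

def V(x):
    """Reference probability density: triangular bump supported on [-1, 1]."""
    return np.maximum(1.0 - np.abs(x), 0.0)

def V_N(x, N, beta, d=1):
    """Rescaled kernel; its support shrinks like N^(-beta/d) as N grows."""
    return N**beta * V(N**(beta / d) * x)

xs = np.linspace(-2.0, 2.0, 400_001)
dx = xs[1] - xs[0]
for N in (10, 100, 1000):
    mass = np.sum(V_N(xs, N, beta=0.4)) * dx   # Riemann sum of the integral
    print(N, round(float(mass), 4))            # each V^N is still a probability density
```

The total mass stays equal to one while the support shrinks: for \(\beta = 0\) the kernel would not concentrate at all (mean-field regime), while values of \(\beta\) close to 1 concentrate the interaction on nearest neighbours, consistently with the discussion above.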
The model The motion of a single-player \(X^{N,i}_t\), \(t \in [0, T]\), in a population of N individuals is assumed to be modelled as
Here, \(\varvec{\alpha }^{N} \doteq (\alpha ^{N,1},\ldots , \alpha ^{N,N})\) is a vector of strategies that we will specify below, b is a given deterministic function and \(W^{N,1}, \ldots , W^{N,N}\) are independent d-dimensional Wiener processes defined on some filtered probability space \((\Omega , \mathcal {F}, (\mathcal {F}_t)_{t \in [0,T]}, \mathbb {P})\). We will denote by \(\mathbf {X}^{N}_{{t}} \doteq (X^{N,1}_{{t}}, \ldots , X^{N, N}_{{t}})\) the vector of the positions at time t of the N individuals. In addition, \({X_0}^{N,i}\), \(i = 1, \ldots , N\), are \(\mathbb {R}^{d}\)-valued independent and identically distributed (i.i.d.) random variables, independent of the Wiener processes, such that \({X_0}^{N,i}\overset{d}{\sim } \xi \) (notice that \(``\overset{d}{\sim }"\) stands for “distributed as”), where \(\xi \) is an auxiliary random variable with law \(\mu _0\) and density \(p_0\), i.e. \(\mu _0\) is absolutely continuous with respect to (w.r.t.) the Lebesgue measure. Eq. (1.2) says that each individual i partially controls her/his velocity through the strategy \(\alpha ^{N, i}\). However, the velocity also depends on her/his position and on the positions of the other individuals in a neighbourhood of \(X^{N,i}\). Indeed, the functions \(V^{N}(\,\cdot \,)\) (see Eq. (1.1)) are mollifiers (see Appendix A for a precise definition) describing the intermediate regime between the mean-field and the nearest-neighbour interaction. For large N they have a relatively small support and therefore the individual i interacts, via the term \(V^{N}(X_s^{N,i}-X_s^{N,j})\), only with the few players, indexed by j, in a neighbourhood of \(X_s^{N,i}\). In particular, the rate of convergence to zero of the support of \(V^{N}\) will be such that the number of players interacting with player i is still very large, in the limit as N tends to infinity, but very small compared to the full population size N.
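As an illustration of dynamics of the type in Eq. (1.2), the following sketch simulates the moderately interacting system by the Euler–Maruyama scheme in d = 1. All the data (the kernel V, the control, set to zero, and the drift b) are hypothetical choices compatible with the boundedness and Lipschitz conditions assumed later in (H1) and (H3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama sketch of the moderately interacting particle system (Eq. (1.2)),
# d = 1, zero control. Hypothetical data: a bounded Lipschitz drift
# b(x, p) = -tanh(x) / (1 + p) (mean reversion, damped in crowded regions),
# and a triangular reference density V rescaled as in hypothesis (H3).

N, T, n_steps, beta = 200, 1.0, 100, 0.4
dt = T / n_steps

def V(x):                       # reference density: triangular bump on [-1, 1]
    return np.maximum(1.0 - np.abs(x), 0.0)

def V_N(x):                     # moderate rescaling, d = 1
    return N**beta * V(N**beta * x)

def b(x, p):                    # bounded and Lipschitz in (x, p), cf. (H1)
    return -np.tanh(x) / (1.0 + p)

X = rng.normal(size=N)          # i.i.d. initial positions with law mu_0 = N(0, 1)
for _ in range(n_steps):
    # local density seen by player i: (1/N) * sum_j V^N(X_i - X_j)
    local = V_N(X[:, None] - X[None, :]).mean(axis=1)
    X = X + b(X, local) * dt + np.sqrt(dt) * rng.normal(size=N)

print(float(X.mean()), float(X.std()))   # empirical statistics at time T
```

The pairwise term `V_N(X[:, None] - X[None, :])` makes the O(N²) interaction explicit: for large N each particle effectively feels only the (still numerous) particles within the shrinking support of \(V^N\).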
It is worth mentioning that it is also possible to let a common disturbance affect all the individuals [13], commonly referred to in the MFGs literature as common noise; we refer to the second volume by Carmona et al. [6] for an overview of this theory. The common disturbance could be used—as also pointed out by Aurell and Djehiche [1]—to model an evacuation during, for instance, a fire or an earthquake.
We leave, however, the study of this case for future research.
Each player acts to minimize her/his own expected costs according to a given functional over a finite time horizon [0, T]. More precisely, player i evaluates a strategy vector \(\varvec{\alpha }^{N}\) according to the following cost functional
where \(\varvec{X}^{N}_{{t}}\) is the solution of Eq. (1.2) under \(\varvec{\alpha }^{N}\). Notice that the cost coefficients f and g are the same for all players. The cost functional \(J_{i}^{N}(\varvec{\alpha }^{N})\) can be interpreted practically in the following way; see, also, Aurell and Djehiche [1]. The first term penalizes the usage of energy; the second term, instead, penalizes trajectories passing through densely crowded areas. Finally, the terminal cost \(g(\,\cdot \,)\) penalizes deviations from specific target regions. More details on the setting with all the technical assumptions will be given in the next sections.
For the class of games just introduced, we focus on the construction of approximate Nash equilibria [17] for the game with a finite number of individuals (i.e., for the N-player game) via the solution of the corresponding control problem for one single representative player (i.e., through the solution of the corresponding MFG). Hereafter, we will use the expressions “intermediate interactions” and “moderate interactions” interchangeably.
Our main contributions are as follows:
-
We introduce the limit model corresponding to the above N-player games as N tends to infinity, namely the MFG of moderate interaction. We formulate both the PDE approach to MFGs with moderate interaction and the stochastic formulation; see Definitions 4.1 and 4.7, respectively.
-
We prove that the PDE system (or the equivalent mild formulation; see Lemma 4.2) admits a solution for every time horizon \(T \in (0, \infty )\); see Theorem 4.4. Also, we prove that the same system admits a unique solution for T sufficiently small; see Theorem 4.5.
-
We prove the existence of a solution in the feedback form to the MFG of moderate interaction; see Theorem 4.8.
-
We derive, in the limit as the number of different processes in Eq. (1.2) tends to infinity, a law of large numbers for the empirical processes, and we characterize the limit dynamics; see Theorem 5.1.
-
We prove that any feedback solution of the MFG induces a sequence of approximate Nash equilibria for the N-player games with approximation error tending to zero as N tends to infinity; see Theorem 6.1.
The MFG system of PDEs associated with our model takes the form of a backward Hamilton–Jacobi equation coupled with a forward Kolmogorov equation. In particular, it is a second-order MFG system with local coupling or of local type. Many authors have studied this type of system in recent years; see Lasry and Lions [16, 17], Porretta [23], Gomes et al. [12], Cardaliaguet and Porretta [5]. However, the framework in these works deviates from ours for two main reasons. First, the authors consider that the state space is the d-dimensional torus \(\mathbb {T}^{d}\) and not the whole space \(\mathbb {R}^{d}\). Second, and most importantly, they do not consider dependence on the local density in the dynamics; see the term b(x, p(t, x)) in the first equation in Eq. (4.1). We prove the existence of solutions of the PDE MFG system for any \(T >0\) via the Brouwer-Schauder fixed point theorem. Instead, we will not be able to prove the uniqueness of such solutions under the standard monotonicity assumption for any \(T > 0\) but only for small T via the contraction principle, the difficulty arising precisely from the dependence on the local density in the dynamics.
The proof of the existence of a MFG solution is based on a verification argument. We identify the unique solution of the PDE system of the MFG with moderate interaction with the feedback control solution of the MFG in its stochastic formulation. In our case, the value function of the representative player is not “regular enough”, and so, in order to apply Itô’s formula, some work based on standard mollification arguments will be needed; see Appendix 1, Sect. 1.
The proof of Theorem 5.1 on the characterization of the limit dynamics of the empirical processes is one of the main achievements of this work. It represents a version of the remarkable result of Oelschläger [21] on the study of the macroscopic limit of moderately interacting diffusion particles. Unlike us, Oelschläger [21] does not assume the absolute continuity of \(\mu _0\) with respect to the Lebesgue measure. Admittedly, removing this assumption would be an additional technicality that would not add to the present work’s conceptual advancements. On the other hand, we can show the validity of Theorem 5.1 under a more general assumption on the SDE drift in Eq. (1.2). In Oelschläger [21] a stricter Lipschitz condition on the drift (see Eq. (1.5) in his work) is imposed; this condition is used to prove the uniqueness of the solution of a certain (deterministic) equation that characterizes the limit dynamics of the empirical processes. We believe that this paper’s assumptions lead to a much more comprehensive understanding of the problem at hand. Because it is of independent interest, we will devote the entire Sect. 5 to the proof of the propagation of chaos result.
The proof of Theorem 6.1 of approximate Nash equilibria is based on weak convergence arguments and controlled martingale problems, whose use has a longstanding tradition; see, for instance, Funaki [11], Oelschläger [20], Huang et al. [13], as well as Carmona et al. [6], Section 6.1 of the second volume. However, contrary to those works, we have to study the passage to the many player (particle) limit in the presence of a deviating player, which destroys the prelimit systems’ symmetry. We will use an argument based on relaxed controls.
Structure of the paper The rest of this paper is organized as follows. Section 2 introduces some terminology and notation and sets the main assumptions on the dynamics and on the cost functionals. Section 3 describes the setting of N-player games with moderate interaction, while Sect. 4 introduces the corresponding MFG. In Sect. 5, one of the main results, namely the derivation of a law of large numbers for the empirical processes, is stated and proved. Section 6 contains the result on the construction of approximate Nash equilibria for the N-player game from a solution of the limit problem. The technical results used in the paper are all gathered in the Appendix, including the aforementioned existence and uniqueness results for the PDE system and the proof of the existence of a MFG solution in Appendix 1, and bounds on Hölder-type semi-norms to prove the results of Sect. 5 in Appendix 1 and Appendix 1.
2 Preliminaries and Assumptions
Let \(d \in \mathbb {N}\) be the dimension of the space of private states and of the noise. We equip the spaces \(\mathbb {R}^{d}\), \(d \in \mathbb {N}\), with the standard Euclidean norm, which will be denoted by \(|\,\cdot \,|\). Finally, \(T > 0\) denotes the finite time horizon.
For a Polish space \(\mathcal {S}\), we let \(\mathcal {P}(\mathcal {S})\) denote the space of probability measures on \(\mathcal {B}(\mathcal {S})\), the Borel sets of \(\mathcal {S}\). For \(s \in \mathcal {S}\) we let \(\delta _s\) indicate the Dirac measure concentrated at s. If \(\mathcal {P}(\mathcal {S})\) is equipped with the topology of weak convergence of probability measures, then \(\mathcal {P}(\mathcal {S})\) is a Polish space. In particular, \(\text {C}([0,T] ; \mathcal {P}(\mathcal {S}))\) denotes the space of continuous flows of measures.
We set \(\mathcal {X} \doteq \text {C} ([0,T] ; \mathbb {R}^{d})\) and we equip it with the topology of uniform convergence; the space \(\mathcal {X}\) with this topology is a Polish space. Given \(N \in \mathbb {N}\), we will use the usual identification of \(\mathcal {X}^{N} = \times ^{N} \mathcal {X}\) with the space \( \text {C} ([0,T] ; \mathbb {R}^{d\cdot N})\); \(\mathcal {X}^{N}\) is equipped with the topology of uniform convergence. For \(\ell \in \mathbb {R}_{+}\), we denote by \(\text {C}_b^{\ell }(\mathbb {R}^{d} ; \mathbb {R}^{d})\) the set of \(\mathbb {R}^{d}\)-valued functions on \(\mathbb {R}^{d}\) with bounded \(\ell \)-th derivative, and by \(\text {C}_c^{\ell }(\mathbb {R}^{d}; \mathbb {R}^{d})\) the set of \(\mathbb {R}^{d}\)-valued functions on \(\mathbb {R}^{d}\) with compact support and continuous \(\ell \)-th derivative. We will use simply \(\text {C}_b(\mathbb {R}^{d})\), \(\text {C}_b^{\ell }(\mathbb {R}^{d})\) and \(\text {C}_c^{\ell }(\mathbb {R}^{d})\) when the functions are real-valued. Moreover, \(\text {C}^{\ell }([0,T]; \text {C}_b(\mathbb {R}^{d}))\) denotes the space of \(\text {C}_b(\mathbb {R}^{d})\)-valued functions on [0, T] with continuous \(\ell \)-th derivative; analogous definitions hold if \(\text {C}_b(\mathbb {R}^{d})\) is replaced with either \(\text {C}_b^{\ell }(\mathbb {R}^{d})\) or \(\text {C}_c^{\ell }(\mathbb {R}^{d})\).
Similarly, we denote by \(\text {C}([0,T] \times \mathbb {R}^{d}; \mathbb {R}^{d})\) the set of \(\mathbb {R}^{d}\)-valued continuous functions on \([0,T] \times \mathbb {R}^{d}\) and by \(\text {C}^{1, 2}([0,T] \times \mathbb {R}^{d}; \mathbb {R}^{d})\) the set of \(\mathbb {R}^{d}\)-valued continuous functions on \([0,T] \times \mathbb {R}^{d}\) with continuous first (resp. second) derivative with respect to time (resp. space); analogous definitions (cf. the characterizations in the previous paragraph) hold for the spaces \(\text {C}_b^{1, 2}([0,T] \times \mathbb {R}^{d}; \mathbb {R}^{d})\), \(\text {C}_c^{1, 2}([0,T] \times \mathbb {R}^{d}; \mathbb {R}^{d})\). Again, we will simply use \(\text {C}([0,T] \times \mathbb {R}^{d})\), \(\text {C}^{1,2}([0,T] \times \mathbb {R}^{d})\), \(\text {C}_b^{1, 2}([0,T] \times \mathbb {R}^{d})\), \(\text {C}_c^{1, 2}([0,T] \times \mathbb {R}^{d})\) when the functions are real-valued. In particular, notice that \(\text {C}([0,T];\text {C}_b(\mathbb {R}^{d})) \subset \text {C}_b([0,T]\times \mathbb {R}^{d})\).
As usual, \(\nabla \) and \(\Delta \) denote the gradient and the Laplacian operator, respectively. Finally, for the sake of simplicity, we write \(i \in [[N]]\) in place of \(i = 1, \ldots , N\).
Now let
The function b will denote the drift, while f and g will quantify the running and the terminal costs, respectively. Let us make the following assumptions:
-
(H1)
b and f are Borel measurable functions, continuous and such that there exist two constants \(C, L > 0\) for which it holds that
$$\begin{aligned} \begin{aligned}&|b(x, p)| + |f(x, p)| \le C,\\&|b(x,p) - b(y,q)| + |f(x,p) - f(y,q)| \le L(|x-y| + |p-q|) \end{aligned} \end{aligned}$$for all \(x, y \in \mathbb {R}^d\), \(p, q \in \mathbb {R}_{+}\).
-
(H2)
g is a Borel measurable function such that \(g, \partial _{x_i} g \in \text {C}_b(\mathbb {R}^{d})\), \(i \in [[d]]\).
-
(H3)
For each \(N\in \mathbb {N}\), for some \(\beta \in (0, 1/2)\) and some \(V \in \text {C}_{c}^{1}(\mathbb {R}^{d}) \cap \mathcal {P}(\mathbb {R}^{d})\) we have
$$\begin{aligned} V^{N}(x) \doteq N^{\beta } V(N^{\frac{\beta }{d}} x),\quad x \in \mathbb {R}^{{d}}, \end{aligned}$$(2.1)where, we remind, \(\text {C}_{c}^{1}(\mathbb {R}^{d})\) is the space of continuous functions on \(\mathbb {R}^d\) with compact support and continuous first derivatives, while \(\mathcal {P}(\mathbb {R}^{d})\) denotes the probability measures on \(\mathbb {R}^d\). In particular, \(\text {C}_{c}^{1}(\mathbb {R}^{d}) \cap \mathcal {P}(\mathbb {R}^{d})\) denotes the set of probability measures with a density that has compact support and that is differentiable.
-
(H4)
The law \(\mu _0 \in \mathcal {P}(\mathbb {R}^{d})\) is absolutely continuous with respect to the Lebesgue measure on \(\mathbb {R}^{d}\) and with density \(p_0 \in \text {C}_{b}(\mathbb {R}^{d})\) satisfying the following condition:
$$\begin{aligned} \int _{\mathbb {R}^{d}} e^{\lambda |x|} p_0(x)\,dx < \infty \end{aligned}$$for all \(\lambda > 0\).
3 N-Player Games
Let \(N \in \mathbb {N}\) be the number of players. Denote by \(X^{N,i}_t\) the private state of player i at time \(t \in [0,T]\). The evolution of the players’ states depends on the strategies they choose and on the initial distribution of states, which we indicate by \(\mu ^{N}_0\) (thus, \(\mu ^{N}_0 \in \mathcal {P}(\mathbb {R}^{N \times d})\)). We assume that \(\mu ^{N}_0\) factorizes as the N-fold product of a measure \(\mu _0\) for which hypothesis (H4) is in force. Here, we consider players using feedback strategies with full state information, i.e. strategies \(\alpha _t^{N,i} = \alpha (t, \varvec{X}_t^{N})\) where \(\alpha \in \text {C}_b([0,T] \times \mathbb {R}^{d\cdot N} ; \mathbb {R}^{d})\), uniformly bounded by some constant \(C>0\). We let \(\mathcal {A}_{C}^{N, 1, fb}\) denote the set of all such individual strategies. A vector \({\varvec{\alpha }^{N}\doteq }(\alpha ^{N,1},\ldots ,\alpha ^{N,N})\) of individual strategies is called a strategy vector or strategy profile. We denote by \(\mathcal {A}_C^{N, fb}\) the set of all vectors \(\varvec{\alpha }^{N}\) of feedback strategies for the N-player game that are uniformly bounded by some constant \(C>0\). Given a vector of N-player feedback strategies \(\varvec{\alpha }^{N}\), consider the system of equations
where \(\varvec{X}^{N}_{{t}} = (X^{N,1}_{{t}}, \ldots , X^{N,N}_{{t}})\) and \(W^{N,1}, \ldots , W^{N,N}\) are independent Wiener processes defined on some filtered probability space \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P})\) satisfying the usual conditions. The initial conditions \(X_0^{N, i}\) are i.i.d. \(\mathcal {F}_0\)-measurable random variables, each with law \(\mu _0 \in \mathcal {P}(\mathbb {R}^{d})\) and independent of the Wiener processes; the functions \(V^{N}(\,\cdot \,)\) are mollifiers (see hypothesis (H3)) through which we obtain the interaction of moderate type among the players. A solution of Eq. (3.1) under \(\varvec{\alpha }^{N}\) with initial distribution \(\mu _0^{N}\) is a triple \(((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P}), \varvec{W}^{N}, \varvec{X}^{N})\) where \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P})\) is a filtered probability space satisfying the usual hypotheses, \(\varvec{W}^{N} = (W^{N,1},\ldots , W^{N,N})\) is a vector of independent d-dimensional \((\mathcal {F}_t)\)-Wiener processes, and \(\varvec{X}^N = (X^{N,1}, \ldots , X^{N,N})\) is a vector of continuous \(\mathbb {R}^{d}\)-valued \((\mathcal {F}_t)\)-adapted processes such that Eq. (3.1) holds \(\mathbb {P}\)-almost surely with strategy vector \(\varvec{\alpha }^N\) and \(\mathbb {P} \circ (\varvec{{X_0}}^{N})^{-1} = \mu _0^{N}\), each \(X_0^{N, i}\), \(i\in [[N]]\), being independent of the Wiener processes. The i-th player evaluates a (feedback) strategy vector \(\varvec{\alpha }^{N}\) according to the cost functional
where \(\varvec{X}^{N}_{{t}} = (X^{N,1}_{{t}}, \ldots , X^{N,N}_{{t}})\) and \(((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P}), \varvec{W}^{N}, \varvec{X}^{N})\) is a solution of Eq. (3.1) under \(\mu _0^N\). The cost functional is well defined thanks to hypothesis (H1).
Given a strategy vector \(\varvec{\alpha }^{N}\in \mathcal {A}^{N, fb}_C\) and an individual strategy \(\beta \in \mathcal {A}^{N, 1, fb}_C\), let \([\varvec{\alpha }^{N, -i}, \beta ]\in \mathcal {A}^{N, fb}_C\) indicate the strategy vector that is obtained from \(\varvec{\alpha }^N\) by replacing \(\alpha ^{N,i}\), the strategy of player i, with \(\beta \). The classical notion of optimality for the cost functionals \(J_i^{N}(\varvec{\alpha }^{N})\) in Eq. (3.2) would be, as usual in game theory, that of Nash equilibrium. In the case of a large number of players, our goal will be to prove the validity of a weaker equilibrium concept, namely the concept of \(\varepsilon \)-Nash equilibrium, introduced in the theory of MFGs.
Definition 3.1
(\(\varepsilon \)-Nash equilibria) Let \(\varepsilon \ge 0\). A strategy vector \(\varvec{\alpha }^{N}\) is called an \(\varepsilon \)-Nash equilibrium for the N-player game if for every \(i \in [[N]]\)
for all admissible single player strategies \(\beta \), i.e., strategies that belong to \(\mathcal {A}_{C}^{N, 1, fb}\).
If \(\varvec{\alpha }^{N}\) is an \(\varepsilon \)-Nash equilibrium with \(\varepsilon = 0\), then \(\varvec{\alpha }^{N}\) is called a Nash equilibrium.
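The definition can be made concrete in a toy finite setting (entirely hypothetical: finitely many strategies per player and costs given directly on strategy profiles):

```python
# Definition 3.1 in a toy finite game (hypothetical setting): a profile is an
# epsilon-Nash equilibrium if no unilateral deviation improves a player's cost
# by more than epsilon.

def is_epsilon_nash(profile, strategies, costs, eps):
    """costs[i](profile) -> cost of player i under the given strategy profile."""
    n = len(profile)
    for i in range(n):
        for beta in strategies:                  # admissible deviations of player i
            deviated = profile[:i] + (beta,) + profile[i + 1:]
            if costs[i](deviated) < costs[i](profile) - eps:
                return False
    return True

# Two-player toy game: each player is penalized for mismatching the other.
strategies = (0, 1)
costs = [lambda s: abs(s[0] - s[1]), lambda s: abs(s[0] - s[1])]
print(is_epsilon_nash((1, 1), strategies, costs, eps=0.0))   # exact Nash: True
print(is_epsilon_nash((0, 1), strategies, costs, eps=0.5))   # mismatch: False
```

With \(\varepsilon = 0\) this recovers the exact Nash equilibrium; Theorem 6.1 provides such \(\varepsilon \)-equilibria for the N-player game, with \(\varepsilon \) vanishing as N tends to infinity.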
In our framework, we consider strategy vectors \(\varvec{\alpha }^N\) belonging to \(\mathcal {A}^{N, fb}_C\), where we will later in the work fix the constant C to be equal to \(K\left( T,b,f,p_{0},g\right) \) defined in Eq. (4.13). We say that a single player strategy \(\beta \) is admissible (i.e. it is an admissible deviation from equilibrium) for a player \(i\in [[N]]\) if it belongs to \(\mathcal {A}^{N, 1, fb}_C\) where the constant C is intended to be fixed.
4 Mean Field Games
Let \(T>0\) be the finite time horizon and let \(b, f, p_0, g\) be as in Sect. 2. Let us introduce the PDE approach to MFGs with moderate interaction via the following coupled system of a backward Hamilton–Jacobi–Bellman equation and a forward Kolmogorov equation, called PDE system:
for all \((x,p) \in \mathbb {R}^{d}\times \mathbb {R}_{+}\). Precisely, the first equation of the PDE system is the Hamilton–Jacobi–Bellman equation, with a quadratic cost, for the value function u of the representative player. Instead, the second one is the Kolmogorov forward equation for the density \(p(t,\,\cdot \,)\) of the representative player. As said in the introduction, the PDE MFG system is of local type, with the dependence on the local density p(t, x) appearing both in the dynamics, via the term b(x, p(t, x)), and in the running cost, via the term f(x, p(t, x)). In addition, the state space is \(\mathbb {R}^{d}\).
The notion of solution we consider for the PDE system is the one in Definition 4.1 below, where we let \(\mathcal {A}\) denote the following operator:
Definition 4.1
(MFG solution, PDE formulation) A weak solution of the PDE system is a pair (u, p) such that:
- (i):
-
u, \(\partial _i u\) and \(p \in \text {C}_b([0,T] \times \mathbb {R}^{d})\) for all \(i \in [[\,d\,]]\);
- (ii):
-
for all \(\varphi , \psi \in \text {C}^{1,2}_{c}([0,T] \times \mathbb {R}^{d})\) and all \(t \in [0, T]\) the following two equations
$$\begin{aligned}&\quad \left\langle u\left( t \right) ,\varphi \left( t\right) \right\rangle - \left\langle g,\varphi \left( T\right) \right\rangle +\int _{t}^{T}\left\langle u\left( s\right) ,\mathcal {A} \varphi \left( s\right) \right\rangle ds \nonumber \\&=\int _{t}^{T}\left\langle b(\,\cdot \,,p(s))\cdot \nabla u\left( s\right) -\frac{1}{2}\left| \nabla u\left( s\right) \right| ^{2}+f(\,\cdot \,,p(s)),\varphi \left( s\right) \right\rangle ds, \qquad \nonumber \\&\quad \left\langle p\left( t\right) ,\psi \left( t\right) \right\rangle -\left\langle p_{0},\psi \left( 0\right) \right\rangle -\int _{0}^{t}\left\langle p\left( s\right) ,\mathcal {A}\psi \left( s\right) \right\rangle ds \end{aligned}$$(4.3)$$\begin{aligned}&= \int _{0}^{t}\left\langle {p(s)(-\nabla u(s)+b(\,\cdot \,,p(s))),\nabla }\psi \left( s\right) \right\rangle ds. \end{aligned}$$(4.4)hold.
We now state and prove that, under the regularity condition (i) in Definition 4.1, the system in Eqs. (4.3)–(4.4) admits an equivalent mild formulation. To this end, let \(G(t, x-y)\) be the density of \(x + W_t\), where \(W_t\) is a standard Brownian motion, \(t \in [0, T]\) and \(x, y \in \mathbb {R}^{d}\), and introduce the notation \(\mathcal {P}_t\) for the associated semi-group,
defined on functions \(h \in \text {C}_b(\mathbb {R}^{d})\). By taking, for all \(t \in [0,T]\), in Eqs. (4.3) and (4.4) the functions \(\varphi (t)\) and \(\psi ( t )\) as the function \(y \mapsto G(t, x-y)\,h(y)\), with x a given parameter, one can show the equivalence between the weak formulations in Eqs. (4.3) and (4.4) and the following mild formulation. This is the content of the following lemma.
Lemma 4.2
Let (u, p) be a pair with the regularity of point (i) in Definition 4.1. Then (ii) in the same definition is equivalent to the validity, for all \(t\in \left[ 0,T\right] \), of the following system:
and
where in the last integral we understand that
A solution of this integral system with the regularity of point (i) in Definition 4.1 is called a mild solution.
Proof
See Appendix 1, Sect. 1, where we give a sketch of the (less classical) proof for the backward equation (4.6). \(\square \)
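The semigroup \(\mathcal {P}_t\) appearing in the mild formulation is the heat semigroup, \(\mathcal {P}_t h(x) = \mathbb {E}[h(x + W_t)] = \int G(t, x-y)\,h(y)\,dy\). A quick numerical sanity check in d = 1 (the test function cos is a hypothetical choice with the closed form \(\mathcal {P}_t \cos (x) = e^{-t/2}\cos (x)\)):

```python
import numpy as np

# Numerical sketch of the heat semigroup P_t h(x) = int G(t, x - y) h(y) dy,
# d = 1, where G(t, x - y) is the Gaussian density of x + W_t.

def P_t(h, t, x, half_width=12.0, n=200_001):
    """Approximate (P_t h)(x) by a Riemann sum of the Gaussian convolution."""
    ys = np.linspace(x - half_width, x + half_width, n)
    dy = ys[1] - ys[0]
    G = np.exp(-(x - ys)**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return float(np.sum(G * h(ys)) * dy)

t, x = 0.7, 0.3
numeric = P_t(np.cos, t, x)
exact = float(np.exp(-t / 2.0) * np.cos(x))   # closed form for h = cos
print(numeric, exact)   # the two values agree to high accuracy
```

The same convolution (and its gradient \(\nabla \mathcal {P}_t\)) is what enters the Duhamel-type integrals of the mild formulation above.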
Now, we prove that there exists a weak solution (u, p) (cf. Definition 4.1) of the PDE MFG system (4.1) for every time horizon \(T \in (0, \infty )\). In order to do so, we use the Hopf-Cole transform for quadratic Hamiltonians (see, e.g. Remark 1.13 in Cardaliaguet and Porretta [5]) and we consider the following auxiliary system
Notice that if (w, p) is a weak solution of the previous system such that \(p, w, \partial _i w \in \text {C}_b([0,T]\times \mathbb {R}^{d})\), \(i \in [[d]]\), then \(w(t, x) \ge e^{-(\Vert g \Vert _{\infty } + T \Vert f \Vert _{\infty })}\) by strong maximum principle. Therefore, the ratio \(\frac{\nabla w}{w} \in \text {C}_b([0,T] \times \mathbb {R}^{d} ; \mathbb {R}^{d})\) with a bound that depends only on the infinity norms of the coefficients; precisely:
This observation justifies the following definition, analogous to Definition 4.1.
Definition 4.3
(MFG solution, PDE formulation - I) Let \(p_{0} \in \text {C}_{b}\left( \mathbb {R}^{d}\right) \) be a given probability density and let \(g\in \text {C}_{b}\left( \mathbb {R}^{d}\right) \) also be given. A weak solution of the PDE system (4.9) is a pair (w, p) such that \(w, \partial _i w\) and \(p \in \text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R}^{d}\right) \) for all \(i \in [[ d ]]\), \(w\left( t,x\right) \ge e^{-\left( \left\| g\right\| _{\infty }+T\left\| f\right\| _{\infty }\right) }\), and the system is satisfied in the weak sense as in Definition 4.1.
In particular, the weak formulation in Definition 4.3 is equivalent to the validity, for all \(t \in [0,T]\), of the following system
and
where the quantity \(\nabla \mathcal {P}_{t-s}\) is defined in Lemma 4.2, Eq. (4.8). The proof of such equivalence is the same as in Lemma 4.2 and we omit it for the sake of space.
To prove global existence of weak solutions, we need the following additional assumption on \(p_0\):
-
(H5)
There exists a continuous function \(\rho :\mathbb {R}^{d}\rightarrow \left( 0,\infty \right) \) such that
$$\begin{aligned} \lim _{\left\| x\right\| \rightarrow \infty }\rho \left( x\right) =0\quad \text {and}\quad p_{0}\left( x\right) \le \rho \left( x\right) \end{aligned}$$for all \(x\in \mathbb {R}^{d}\). Moreover \(p_{0}\in \text {C}_{b}^{\alpha }(\mathbb {R}^{d})\) for some \(\alpha >0\) and \(\rho ^{-1}\in \text {C}^{2}\left( \mathbb {R}^{d}\right) \) with \(\left\| \Delta \rho ^{-1}\right\| _{\infty }+\left\| \nabla \rho ^{-1}\right\| _{\infty }<\infty \).
Notice that the latter assumption on \(\rho ^{-1}\) is not restrictive. Indeed, smoothness of \(\rho ^{-1}\) can be obtained by regularization and the bounds on \(\left\| \Delta \rho ^{-1}\right\| _{\infty }\) and \(\left\| \nabla \rho ^{-1}\right\| _{\infty }\) hold if \(\rho \) decays slowly, monotonically and radially, which can always be assumed without loss of generality. We are now ready to prove the existence of a weak solution of the PDE system (4.9); this is the content of the following theorem, whose proof is relatively standard, although some details are, to the best of our knowledge, new, owing to the fact that the state space is \(\mathbb {R}^{d}\) instead of a bounded set.
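For the reader's convenience, the Hopf–Cole change of variables used to pass from system (4.1) to the auxiliary system (4.9) can be sketched as a formal computation; the sign conventions for the backward HJB equation below are assumed for illustration:

```latex
% Formal Hopf--Cole computation (quadratic Hamiltonian). Set w \doteq e^{-u}; then
\begin{align*}
  \partial_t w = -w\,\partial_t u, \qquad
  \nabla w = -w\,\nabla u, \qquad
  \Delta w = w\bigl(|\nabla u|^2 - \Delta u\bigr).
\end{align*}
% Hence, if u solves the backward HJB equation
\begin{align*}
  \partial_t u + \tfrac12 \Delta u + b\cdot\nabla u
    - \tfrac12 |\nabla u|^2 + f = 0, \qquad u(T,\cdot) = g,
\end{align*}
% then w solves the *linear* backward equation
\begin{align*}
  \partial_t w + \tfrac12 \Delta w + b\cdot\nabla w = f\,w, \qquad
  w(T,\cdot) = e^{-g},
\end{align*}
% and the optimal feedback is recovered as -\nabla u = \nabla w / w.
```

This is consistent with the lower bound \(w \ge e^{-(\Vert g \Vert _{\infty } + T \Vert f \Vert _{\infty })}\) stated above, which follows from the maximum principle (or Feynman–Kac) applied to the linear equation for w.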
Theorem 4.4
There exists a weak solution \(\left( w,p\right) \) on \(\left[ 0,T\right] \) of system (4.9). Moreover, the pair
is a weak solution of the system (4.1).
Proof
See Appendix 1, Sect. 1. \(\square \)
Now, we prove that the system (4.1) admits a unique solution for T sufficiently small via the contraction principle; indeed, the following theorem holds.
Theorem 4.5
(Local well posedness) There exists a unique weak (or mild) solution of the MFG system (4.6)–(4.7), for T sufficiently small.
Proof
See Appendix 1, Sect. 1. \(\square \)
Next, let \(T>0\) indicate (as before) the finite time horizon, and let \(b, f, p_0, g\) be as in Sect. 2. If the PDE system in Eq. (4.1) has a unique weak (or mild) solution (u, p), then we denote by \(K(T, b, f, p_0, g)\) the following constant:
4.1 Feedback MFG with Given Density
We started the section by formulating the PDE approach to MFGs of moderate interaction. Here, instead, we introduce the corresponding stochastic formulation (feedback here, open-loop in the next subsection).
Let \(K>0\). In order to make precise our definition of (feedback) MFG solution, we introduce the following notation:
- (i):
-
We denote by \(\mathcal {A}^{fb}_K\) the set of feedback controls for the MFG, which is defined as the set of functions \(\alpha \in \text {C}_b([0,T] \times \mathbb {R}^{d} ; \mathbb {R}^{d})\) bounded by K.
- (ii):
-
Next, given the function p as in Definition 4.1, given an admissible control \(\alpha \in \mathcal {A}^{fb}_K\), we consider the equation
$$\begin{aligned} {X_t = X_0 + \int _{0}^{t} (\alpha (s,X_s) + b(X_s, p(s, X_s)))\,ds + W_t,\quad t \in [0, T],} \end{aligned}$$(4.14)where \(X_0\) is a \(\mathcal {F}_0\)-measurable random variable distributed as \(\mu _0\) having density \(p_0\) while W is a d-dimensional Wiener process defined on some filtered probability space \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P})\).
- (iii):
-
Finally, we consider the following cost functional
$$\begin{aligned} {J(\alpha ) \doteq \mathbb {E}\left[ \int _{0}^{T} \frac{1}{2}|\alpha (s,X_s)|^2 + f(X_s, p(s, X_s))\,ds + g(X_T)\right] } \end{aligned}$$and we say that \(\alpha ^{*}\in \mathcal {A}^{fb}_{K}\) is an optimal control if it is a minimizer of J over \(\mathcal {A}^{fb}_{K}\), i.e. if \(J(\alpha ^{*}) = \inf _{\alpha \in \mathcal {A}^{fb}_K} J(\alpha )\).
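As an illustration of items (ii)–(iii), the dynamics (4.14) and the cost \(J\) can be approximated by an Euler–Maruyama scheme together with Monte Carlo averaging. The sketch below uses toy one-dimensional stand-ins for \(b\), \(f\), \(g\), the frozen density \(p\) and the feedback control \(\alpha \) (all of these choices are assumptions for illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's b, f, g, p): d = 1,
# drift b(x, p) = -x*p, running cost f(x, p) = x^2 * p, terminal g(x) = x^2,
# frozen density p(t, x) = standard Gaussian, feedback alpha(t, x) = -x.
def p(t, x):      return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
def b(x, px):     return -x * px
def f(x, px):     return x**2 * px
def g(x):         return x**2
def alpha(t, x):  return np.clip(-x, -5.0, 5.0)   # bounded by K = 5

T, n_steps, n_paths = 1.0, 200, 20_000
dt = T / n_steps
X = rng.standard_normal(n_paths)                  # X_0 ~ p_0 = N(0, 1)
running = np.zeros(n_paths)
for k in range(n_steps):
    t = k * dt
    a = alpha(t, X)
    running += (0.5 * a**2 + f(X, p(t, X))) * dt  # Riemann sum of the running cost
    X = X + (a + b(X, p(t, X))) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

J = np.mean(running + g(X))                       # Monte Carlo estimate of J(alpha)
print(f"estimated cost J(alpha) ~ {J:.3f}")
```

The same scheme, with the drift coefficient and costs replaced by the data of Sect. 2, approximates the cost of any admissible feedback in \(\mathcal {A}^{fb}_K\).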
The notion of solution we will consider in the feedback case is then the following:
Definition 4.6
(MFG solution, stochastic feedback formulation) Let \(T>0\) be the finite time horizon and \(b, f, p_0, g\) as in (H1)-(H2) and (H4); see Sect. 2. Then a feedback MFG solution for bound \(K>0\) is a pair \((\alpha ^*,p)\) such that:
- (i):
-
\(p\in C_b([0,T]\times \mathbb {R}^d)\) and \(\alpha ^* \in \mathcal {A}^{fb}_K\);
- (ii):
-
Given \(p\in C_b([0,T]\times \mathbb {R}^d)\), \(\alpha ^* \in \mathcal {A}^{fb}_K\) is an optimal control for the cost functional \(J(\cdot )\) (in the sense of item (iii) above);
- (iii):
-
For any weak solution \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P},X,W)\) of Eq. (4.14), \(X_t\) has law \(\mu _t\) with density \(p(t,\cdot )\) for every \(t\in [0,T]\).
Assume that the MFG system in Eq. (4.1) has a unique weak solution (u, p) and let K be any constant such that
where \(K(T, b, f, p_0, g)\) is the constant in Eq. (4.13). From an operational point of view, in order to find a (feedback) MFG solution in the sense of Definition 4.6, we look for an optimal control \(\alpha ^*\in \mathcal {A}^{fb}_K\) such that, given \(p\in C_b([0,T]\times \mathbb {R}^d)\) and given any weak solution \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P},X^*,W)\) of Eq. (4.14) (controlled by \(\alpha ^*\) and with density p appearing in the drift), the law of \(X^*_t\) has density \(p^*\in C_b([0,T]\times \mathbb {R}^d)\) such that \(p^*\equiv p\).
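The consistency requirement \(p^*\equiv p\) can be checked numerically in a toy case. Below we assume \(b \equiv 0\) and \(\alpha ^* \equiv 0\) (so these are illustrative assumptions), in which case Eq. (4.14) reduces to \(X_t = X_0 + W_t\) and, for \(p_0\) the standard Gaussian, the candidate density is \(p(t,\,\cdot \,) = \mathcal {N}(0, 1+t)\); a kernel density estimate of the simulated law is compared with it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy consistency check: with b = 0 and alpha* = 0 (assumptions), Eq. (4.14)
# reduces to X_t = X_0 + W_t; for p_0 = N(0, 1) the candidate density is
# p(t, .) = N(0, 1 + t).
T, n_steps, n_paths = 1.0, 100, 200_000
dt = T / n_steps
X = rng.standard_normal(n_paths)                  # X_0 ~ p_0
for _ in range(n_steps):
    X += np.sqrt(dt) * rng.standard_normal(n_paths)

# Gaussian kernel density estimate of Law(X_T), compared with p(T, .).
grid = np.linspace(-4.0, 4.0, 161)
h = 0.1                                           # kernel bandwidth
sub = X[::10]                                     # subsample to keep memory small
kde = np.exp(-(grid[:, None] - sub[None, :])**2 / (2 * h**2)).mean(axis=1) \
      / (h * np.sqrt(2 * np.pi))
p_T = np.exp(-grid**2 / (2 * (1 + T))) / np.sqrt(2 * np.pi * (1 + T))
err = np.max(np.abs(kde - p_T))
print(f"max |KDE - p(T, .)| = {err:.4f}")
```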
Given the environment \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P},W, p)\), i.e. a filtered probability space with Wiener process W and with a given distribution of players specified by its density function p, where p is as in Definition 4.1, we notice that path-wise uniqueness and existence of a strong solution of Eq. (4.14) are provided by Veretennikov [25]. Then, we define the unique solution X of Eq. (4.14) in the given environment \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P},W, p)\), with \(\alpha \doteq -\nabla u\), to be the state of the PDE system in Eq. (4.1) in the given environment with density p. Nevertheless, we choose to introduce and work with weak solutions in view of the approximation result of Sect. 6, where we exploit weak convergence of the laws of the N-player system and provide a stochastic representation of the limiting dynamics by means of the martingale problem of Stroock and Varadhan [24].
4.2 Open-Loop MFG with Given Density
We now introduce a more general notion of control, that of open-loop control, together with what we mean by a solution of the MFG in open-loop form.
Let \(K>0\). In order to make precise our definition of (open-loop) MFG solution, we introduce the following notation:
- (i):
-
We denote by \(\mathcal {A}_{K}\) the set of admissible open-loop controls for the MFG, which is defined as the set of tuples \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P}, X, W, \alpha )\) where \(\alpha = (\alpha (t))_{t \in [0,T]}\) is \(\mathcal {F}_t\)-progressively measurable, continuous and bounded by K a.s. for all \(t\in [0,T]\), while \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P}, X, W)\) is a weak solution of
$$\begin{aligned} {X_t = {X_0} + \int _{0}^{t} (\alpha (s) + b(X_s, p(s, X_s)))\,ds + W_t,\quad t \in [0, T]} \end{aligned}$$(4.15)where \(X_0 \overset{d}{\sim } \mu _0\), having density \(p_0\), is independent of the \(\mathcal {F}_t\)-Wiener process W. For the sake of brevity and where no confusion is possible we will denote a control for the MFG simply with \(\alpha \), in place of the full tuple.
- (ii):
-
We consider the following cost functional
$$\begin{aligned} {J(\alpha ) \doteq \mathbb {E}\left[ \int _{0}^{T} \frac{1}{2}|\alpha (s)|^2 + f(X_s, p(s, X_s))\,ds + g(X_T)\right] } \end{aligned}$$(4.16)and we say that \(\alpha ^{*} \doteq (\alpha ^{*}(t))_{t \in [0, T]} \in \mathcal {A}_{K}\) is an optimal control if it is a minimizer of J over \(\mathcal {A}_{K}\), i.e. if \(J(\alpha ^{*}) = \inf _{\alpha \in \mathcal {A}_K} J(\alpha )\).
Hereafter, we will denote by \(\mathbf {OC}\) the just-introduced optimal control problem. The notion of solution we will consider in the open-loop case is then the following:
Definition 4.7
(MFG solution, stochastic open-loop formulation) Let \(T>0\) be the finite time horizon and \(b, f, p_0, g\) as in (H1)–(H2) and (H4); see Sect. 2. Then an open-loop MFG solution for bound \(K>0\) is a pair \((\alpha ^*,p)\) such that:
- (i):
-
\(p\in C_b([0,T]\times \mathbb {R}^d)\) and \(\alpha ^* \in \mathcal {A}_K\), \(\alpha ^*\) standing for the full tuple:
$$\begin{aligned}(\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P}, X, W, \alpha ^*);\end{aligned}$$ - (ii):
-
Given \(p\in C_b([0,T]\times \mathbb {R}^d)\), \(\alpha ^* \in \mathcal {A}_K\) is an optimal control for problem \(\mathbf {OC}\) (in the sense of item (ii) above);
- (iii):
-
\((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P},X,W)\) is a weak solution of Eq. (4.15) such that \(X_t\) has law \(\mu _t\) with density \(p(t,\cdot )\) for every \(t\in [0,T]\).
As for the feedback case, given the environment \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P},W, p)\) where p is as in Definition 4.1, and given an admissible control \(\alpha \in \mathcal {A}_K\), we notice that path-wise uniqueness and existence of a strong solution of Eq. (4.15) are provided by Veretennikov [25], but we will continue working with weak solutions in view of the approximation result of Sect. 6.
We point out that feedback controls induce stochastic open-loop controls; as a consequence, the computation of the infimum of \(J(\alpha )\) over the class of stochastic open-loop controls would, in principle, lead to a lower value than the same computation over the set of stochastic feedback controls. However, thanks to Proposition 2.6 in El Karoui et al. [10], the two minimization problems are equivalent from the point of view of the value function.
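The first half of this remark, that a feedback control induces an open-loop control, can be made concrete: evaluating a feedback \(\alpha \) along its own controlled trajectory produces a progressively measurable, bounded open-loop control with the same cost. A minimal sketch (toy drift-free dynamics in \(d=1\), an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: a feedback control alpha_fb, evaluated along the controlled path,
# yields an open-loop control alpha_ol(s) := alpha_fb(s, X_s) on the same
# probability space; the two running-cost integrands coincide pathwise.
def alpha_fb(t, x): return -np.tanh(x)            # bounded by K = 1

T, n_steps, n_paths = 1.0, 200, 1_000
dt = T / n_steps
X = rng.standard_normal(n_paths)
cost_fb = np.zeros(n_paths)
cost_ol = np.zeros(n_paths)
for k in range(n_steps):
    a_ol = alpha_fb(k * dt, X)                    # record the open-loop realization
    cost_fb += 0.5 * alpha_fb(k * dt, X)**2 * dt
    cost_ol += 0.5 * a_ol**2 * dt
    X = X + a_ol * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

assert np.allclose(cost_fb, cost_ol)              # same control path => same cost
print("pathwise costs coincide")
```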
We state now the main result of this section, the Verification Theorem, which gives an optimal control for \(\mathbf {OC}\). In particular, we are going to show that \(\alpha ^{*}\) is the optimal feedback control, namely the optimal strategy to play at time t for a given state x.
Theorem 4.8
(Verification Theorem) Consider the PDE system in Eq. (4.1) and let (u, p) be a weak (or mild) solution. Consider the optimal control problem \(\mathbf {OC}\) as in Definition 4.7-(iii) and set \(\alpha ^{*}(t) = \alpha ^{*}(t, x) \doteq -\nabla u(t, x)\). Then,
- (i):
-
\(\alpha ^{*}\) is an optimal control for \(\mathbf {OC}\);
- (ii):
-
For any weak solution \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P},X^*,W)\) of Eq. (4.15) with \(\alpha (s) = \alpha ^{*}(s, X_s^{*})\), the state \(X^{*}_t\) has law \(\mu ^*_t\) with density \(p(t,\,\cdot \,)\) for every \(t \in [0, T]\).
Proof
Let \(\alpha \in \mathcal {A}_K\) and let \(X^{\alpha } \doteq (X_t^{\alpha })_{t \in [0,T]}\) be the solution of Eq. (4.15) controlled by \(\alpha \). Besides, let \(X^{*}\) be as in item (ii) of the statement, i.e.,
Notice that, thanks to boundedness of the drift, the previous equation admits both a weak solution and, in any given environment \((\Omega , \mathcal {F}, (\mathcal {F}_t), \mathbb {P},W, p)\), a strong solution that is path-wise unique [25].
Proof of (i). Heuristically, if the function u were of class \(\text {C}^{1,2}([0,T] \times \mathbb {R}^{d})\), then we could apply Itô formula to \(u(t, X_t^{\alpha })\) and obtain (in expectation)
where we use the fact that the function u satisfies the first equation of the PDE system in Eq. (4.1), which implies
Hence for any admissible control \(\alpha \) we would have \(J(\alpha ) \ge \mathbb {E}[u(0, X_0^{\alpha })]\). In particular, the above inequality becomes an equality for \(\alpha (s) = \alpha ^{*}(s,x) = -\nabla u(s,x)\), i.e. \(J(\alpha ^{*}) = \inf _{\alpha } J(\alpha ) = \mathbb {E}[u(0, X_0^{*})]\). This would prove that \(\alpha ^{*}\) is an optimal control for \(\mathbf {OC}\).
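The elementary step behind this inequality is a pointwise completion of the square, which uses only the quadratic form of the running cost \(\frac{1}{2}|\alpha |^2\) (recorded here as a sketch, independent of the exact shape of the system (4.1)):

```latex
\frac{1}{2}|a|^{2} + a \cdot \nabla u(s,x)
  = \frac{1}{2}\,\bigl|a + \nabla u(s,x)\bigr|^{2} - \frac{1}{2}\,\bigl|\nabla u(s,x)\bigr|^{2}
  \;\ge\; -\frac{1}{2}\,\bigl|\nabla u(s,x)\bigr|^{2},
  \qquad a \in \mathbb{R}^{d},
```

with equality if and only if \(a = -\nabla u(s,x)\); applied with \(a = \alpha (s)\) along the trajectory, this is what singles out \(\alpha ^{*} = -\nabla u\) as the minimizer.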
However, the function u is not "regular enough" to apply Itô formula, and some work is needed to adapt the heuristic argument to u. Given the technicality of this part, which is based on standard mollification arguments, we move the required computations to Appendix 1, Sect. 1.
Proof of (ii). Now, let \(\mu ^{*}_t\) be the law of \(X_t^{*}\) and let \(\varphi \in \text {C}_b^{2}(\mathbb {R}^{d})\) be a test function. By Itô formula,
Hence, taking expectations on both sides, we have
Theorem 4.5 guarantees that this equation has a unique weak (or mild) solution \(\mu _t\) with density \(p(t,\,\cdot \,)\); hence \(\mu \) and \(\mu ^*\) coincide and \(\mu _t^{*}\) has density \(p(t,\,\cdot \,)\) for every \(t \in [0, T]\). This concludes the proof. \(\square \)
5 Moderately Interacting Particles
Let \(N \in \mathbb {N}\) be the number of players and denote by \(X_t^{N,i}\) the private state of player i at time t, \(t \in \left[ 0, T\right] \). In this section, we assume that the evolution of the players’ states is given by Eq. (3.1) and, as mentioned, we consider players using feedback strategies, i.e. \(\alpha ^{N,i}(s) = \alpha (s, {\mathbf {X}}_s^{N})\) with \(\alpha \) sufficiently smooth. In particular, we will assume (with the natural identification) that \(\alpha \in \text {C}_b([0,T] \times \mathbb {R}^{d\cdot N} ; \mathbb {R}^{d})\). Besides, b, \(V^{N}\) and \({X_0}^{N, i}\), \(i \in [[ N ]]\), satisfy hypotheses \(\text {(H1)}\), \(\text {(H3)}\) and \(\text {(H4)}\) in Sect. 2. Before proceeding, notice that the function
defined component-wise as
is continuous and bounded. Since the Brownian motion \(\varvec{W}^{N}_t \in \mathbb {R}^{d\cdot N}\) in Eq. (3.1) is non-degenerate, both the existence of a weak solution and the existence of a pathwise unique strong solution in any given environment \(((\Omega _{N}, \mathcal {F}_{N}, (\mathcal {F}_t^{N}), \mathbb {P}^N), \varvec{W}^{N}, V)\), where now in the N-player case the interaction among players is prescribed by V, hold for this system [25]. Let \(S^{N}_t\) be the empirical measure on \(\mathbb {R}^{d}\) of the players’ private states, that is,
\(S^N=(S_t^{N})\) is a continuous stochastic process with values in \(\mathcal {P}(\mathbb {R}^{d})\); hence it can be seen as a random variable with values in \(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d}))\) (notice that for the sake of notation we do not put the explicit dependence on \(\omega \in \Omega \) in these definitions). Therefore, \(\mathcal {L}(S_{t}^{N})\in \mathcal {P}(\mathcal {P}(\mathbb {R}^{d}))\) and \(\mathcal {L}(S^{N})\in \mathcal {P}(\text {C}([0,T];\mathcal {P}( \mathbb {R}^{d})))\), respectively.
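A quick numerical sketch of \(S^{N}\) and its mollification \(V^{N}*S^{N}\) in \(d = 1\), assuming the moderate scaling \(V^{N}(x) = N^{\beta } V(N^{\beta /d} x)\) (our reading of Eq. (2.1), consistent with \(\epsilon _{N}^{-1}=N^{\beta /d}\) as used in Sect. 5.2):

```python
import numpy as np

rng = np.random.default_rng(3)

# Mollified empirical density p^N = V^N * S^N in d = 1.  We assume the
# moderate scaling V^N(x) = N^beta * V(N^{beta/d} x); V is a smooth,
# compactly supported bump, normalized to integrate to one.
def V(x):
    out = np.zeros_like(x)
    m = np.abs(x) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - x[m] ** 2))
    return out

xs = np.linspace(-1.0, 1.0, 4001)
Z = V(xs).sum() * (xs[1] - xs[0])                 # normalization constant of V

N, beta, d = 1_000, 0.4, 1
X = rng.standard_normal(N)                        # players' positions, S^N
eps_inv = N ** (beta / d)                         # eps_N^{-1} = N^{beta/d}
grid = np.linspace(-6.0, 6.0, 3001)
# (V^N * S^N)(x) = (1/N) * sum_i V^N(x - X_i)
pN = (eps_inv * V(eps_inv * (grid[:, None] - X[None, :])) / Z).mean(axis=1)

mass = pN.sum() * (grid[1] - grid[0])
print(f"integral of V^N * S^N ~ {mass:.4f}")      # ~ 1: a probability density
```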
The main goal of this section is the characterization of the convergence of the laws \((\mathcal {L}(S^{N}))_{N\in \mathbb {N}}\) in \(\mathcal {P}(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d})))\). This characterization is the content of Theorem 5.1 below.
Theorem 5.1
(Moderately interacting particles; cf. [21, 22]) Grant \(\text {(H1)}\) and \(\text {(H3)}-\text {(H4)}\). Let \(\alpha \in \text {C}_b([0,T] \times \mathbb {R}^{d \cdot N} ; \mathbb {R}^{d})\) be given. Then,
- (i):
-
The sequence of laws \((\mathcal {L}(S^{N}))_{N\in \mathbb {N}}\) converges weakly in \(\mathcal {P}(C([0,T];\mathcal {P}(\mathbb {R}^{d})))\) to \(\delta _{\mu }\in \mathcal {P}(C([0,T];\mathcal {P}(\mathbb {R}^{d})))\) for a flow of probability measures \(\mu \in C([0,T];\mathcal {P}(\mathbb {R}^{d}))\); hence also \(S^{N}\) converges in probability to \(\mu \);
- (ii):
-
For each \(t\in [0,T]\), \(\mu _{t}\) is absolutely continuous with respect to the Lebesgue measure on \(\mathbb {R}^{d}\), with density \(p(t,\,\cdot \,)\); the flow of density functions satisfies
$$\begin{aligned} p \in {\text {C}_b([0,T] \times \mathbb {R}^{d})} \end{aligned}$$and it is the unique solution in this space of the equation
$$\begin{aligned} p\left( t\right) =\mathcal {P}_{t}p_0 +\int _{0}^{t}\nabla \mathcal {P} _{t-s}\left( p\left( s\right) \left( \alpha \left( s\right) +{b(\,\cdot \,,p(s))} \right) \right) ds. \end{aligned}$$(5.3)
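The mild equation (5.3) lends itself to a Picard-type numerical scheme: smooth with the heat semigroup, then correct with the time integral. The sketch below works in \(d = 1\) with a toy bounded drift \(v \doteq \alpha + b(\,\cdot \,,p) := -\tanh \) (an assumption), realizes \(\mathcal {P}_t\) as a discrete Gaussian convolution, and keeps the plus sign of (5.3) as written (the sign of the \(\nabla \mathcal {P}_{t-s}\) term depends on the convention for \(\nabla \) acting on the kernel); in either convention the correction terms have zero spatial integral, so total mass is preserved:

```python
import numpy as np

# Picard iteration for the mild equation (5.3) in d = 1 (numerical sketch;
# the drift v = alpha + b(., p) = -tanh is a toy assumption).  P_t is the
# heat semigroup of (1/2)*Laplacian: Gaussian smoothing with variance t;
# nabla P_{t-s} is realized as d/dx after smoothing.
L_box, n_x = 8.0, 801
x = np.linspace(-L_box, L_box, n_x); dx = x[1] - x[0]
T, n_t = 0.5, 20; dt = T / n_t
p0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
v = -np.tanh(x)

def P(t, q):                                      # heat semigroup, variance t
    if t <= 0:
        return q.copy()
    rad = int(4 * np.sqrt(t) / dx) + 1
    g = np.exp(-(np.arange(-rad, rad + 1) * dx) ** 2 / (2 * t))
    g /= g.sum()                                  # discrete normalization keeps mass
    return np.convolve(q, g, mode="same")

p = [P(k * dt, p0) for k in range(n_t + 1)]       # iterate 0: free heat flow
for _ in range(3):                                # a few Picard iterations
    new = [p0.copy()]
    for k in range(1, n_t + 1):
        acc = P(k * dt, p0)
        for j in range(k):                        # left-endpoint Riemann sum
            acc = acc + np.gradient(P((k - j) * dt, p[j] * v), dx) * dt
        new.append(acc)
    p = new

mass = p[-1].sum() * dx
print(f"mass of p(T): {mass:.4f}")
```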
The proof of the previous theorem is divided into four parts. The first one is the tightness of the sequence of laws \((\mathcal {L}(S^{N} ))_{N\in \mathbb {N}}\) in \(\mathcal {P}(\text {C}([0,T];\mathcal {P}(\mathbb {R} ^{d})))\); see Sect. 5.1. The second one is the collection of estimates on \(V^{N}*S_{t}^{N}\); see Sect. 5.2. The third one is the characterization of the limits: all the possible limits are random solutions of the deterministic equation in Eq. (5.3), with the required regularity; see Sect. 5.3. The fourth one is the proof of the uniqueness of solutions of this deterministic equation.
5.1 Tightness of the Empirical Measure
On \(\mathcal {P}(\mathbb {R}^{d})\) the weak topology is generated by the following complete metric:
We refer to Oelschläger [21], Page 285, and Dudley [8], Theorem 18, for a complete proof of the previous result. Also, we consider the regularized empirical measures
In particular, these are probability densities, because they are non-negative functions with
Therefore, we consider the probability measure with density \(V^{N}*S_{t}^{N}\) as a random time-dependent element of \(\mathcal {P}(\mathbb {R}^{d})\) (for each t and a.s. on the probability space). In the next lemma, when we mention the laws \((\mathcal {L}(V^{N}*S^{N}))_{N\in \mathbb {N}}\) on \(\mathcal {P}(C([0,T];\mathcal {P}(\mathbb {R} ^{d})))\), we adopt this interpretation.
Lemma 5.2
(Tightness) The laws \((\mathcal {L}(S^{N}))_{N\in \mathbb {N}}\) are tight in \(\mathcal {P}(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d})))\). Similarly, the laws \((\mathcal {L}(V^{N}*S^{N}))_{N\in \mathbb {N}}\) are tight in \(\mathcal {P}(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d})))\).
Proof
Part 1. Recall that the initial conditions \({X_0}^{N, i}\), \(i \in [[N]]\), admit a density \(p_0\) which is integrable. Therefore,
for some constant \(C>0\), uniformly in \(N \in \mathbb {N}\). To establish the tightness in \(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d}))\), we have to show (see, for instance Karatzas and Shreve, 1998, Problem 2.4.11) that the following two conditions are satisfied:
- (i):
-
\(\mathbb {E}\left[ \sup _{t\in [0,T]}\int _{\mathbb {R}^{d}}|x|S_{t}^{N}(dx)\right] \le C\), \(t\in [0,T]\),
- (ii):
-
\(\mathbb {E}\left[ d_{w}(S_{t}^{N},S_{s}^{N})^{p}\right] \le C|t-s|^{1+\epsilon }\), \(t,s\in [0,T]\)
for some constants \(C>0\), \(p\ge 2\) and \(\epsilon >0\). In order to verify \( (i) \), we compute
where
Hence,
which implies
where we use the boundedness (uniformly in N) of \(\alpha \), b and \(\mathbb {E}\left[ |{X_0}^{N,i}|\right] \); the quantity \(C_{T}^{W}(d)\) only depends on T and d. As regards (ii), instead,
where we apply Jensen’s inequality, the 1-Lipschitz continuity of f, the boundedness of \(\alpha \) and b, and the Burkholder–Davis–Gundy inequality, respectively. To conclude, it suffices to choose \(p>2\).
Part 2. To prove the statement for the random flow of probability measures \(V^{N}*S_{t}^{N}\), let us first notice that, denoting by \(R>0\) a real number such that the support of V is included in \(B_{R}(0)\), the open ball of radius R around the origin, for all \(y\in \mathbb {R}^{d}\) we have
and thus
We conclude by going back to the previous estimate without the mollifier. Moreover, denoting \(V^{N,-}\left( x\right) =V^{N}\left( -x\right) \), if f has Lipschitz constant less than or equal to one, then
namely \(V^{N,-}*f\) also has Lipschitz constant less than or equal to one. Therefore,
and we are again led back to the previous estimate without the mollifier. \(\square \)
5.2 Estimates on Mollified Empirical Measures
In this subsection we obtain estimates on mollified empirical measures. More precisely, we first prove that the empirical measure \(S_t^{N}\) satisfies the following identity for a test function \(\varphi \in \text {C}_b^{1,2}([0, T] \times \mathbb {R}^{d})\):
where \(M_{t}^{N, \varphi }\) is a martingale to be defined below. Then, in Lemma 5.3 we obtain an identity in mild form for the empirical density; the latter is defined as any convolution of the empirical measure with a smooth mollifier. In our paper, we work with the following particular convolution:
where \(t \in [0, T]\) and \(x \in \mathbb {R}^{d}\). Then, in Lemma 5.4 we derive a Hölder-type semi-norm bound for the martingale \(M_t^{N,\varphi }\), while in Lemma 5.6 we derive an analogous bound for the empirical density (5.4). In particular, we will see that, in order to understand the limit of \((\mathcal {L}(S^{N}))_{N \in \mathbb {N}}\), it is crucial to rigorously study the regularity properties of \(p^{N}\) that remain stable in the limit as N tends to infinity.
First, we obtain the identity for the empirical measure. Let \(\varphi \in \text {C}_b^{1,2}([0, T] \times \mathbb {R}^{d})\) be a test function. By Itô formula,
In particular, the previous expression can be rewritten in integral form as:
where \(M_{t}^{N, \varphi }\) is the martingale
Second, we obtain the identity in mild form for the empirical density. Henceforth, we will use the classical notational conventions of semigroup theory; see [22]. Occasionally, we will indicate the explicit dependence on the state variable to clarify the results; see, e.g., the second integral in the lemma below.
Lemma 5.3
Let \(p^{N}\) be as in Eq. (5.4). Grant the assumptions of Theorem 5.1. Then,
where
Proof
For the reader’s convenience, let us first recall the definition of \(\mathcal {P}_{t}\); cf. Eq. (4.5). If we denote by \(G(t, x-y)\) the density of \(x + W_t\), where \(W_t\) is a standard Brownian motion, \(t \in [0, T]\) and \(x, y \in \mathbb {R}^{d}\), then \(\mathcal {P}_t\) is defined on functions \(h \in \text {C}_b(\mathbb {R}^{d})\) as
Now, consider for a given \(t \in [0, T]\) the identity in Eq. (5.5) with the following choice
with \(h\in \text {C}_{b}^{2}(\mathbb {R}^{d})\) and \(V^{N,-}\left( x\right) \doteq V^{N}\left( -x\right) \). Recall that the semigroup commutes with convolution, hence \(\mathcal {P}_t (V^{N,-}*h) = (V^{N,-}*\mathcal {P}_{t}h)\). Besides, it holds that \(\nabla \mathcal {P}_{t}(V^{N,-}*h) = (V^{N,-} *\nabla \mathcal {P}_{t}h)\). Therefore,
By Fubini–Tonelli theorem and stochastic Fubini theorem, we can move the semigroup on the first argument and use integration by parts to obtain:
By the arbitrariness of h, this concludes the proof. \(\square \)
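The commutation \(\mathcal {P}_t(V^{N,-}*h) = V^{N,-}*\mathcal {P}_{t}h\) used above is just associativity and commutativity of convolution; on a periodic grid this can be verified to machine precision with FFT convolutions (the specific test function and mollifier below are illustrative assumptions):

```python
import numpy as np

# Numerical check of P_t (V * h) = V * (P_t h): on a periodic grid both
# sides are FFT convolutions, and discrete convolution is associative
# and commutative.
n = 512
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
h = np.cos(3 * x) + 0.5 * np.sin(x)               # a smooth test function
V = np.exp(-x**2 / 0.02); V /= V.sum()            # mollifier (normalized)
G = np.exp(-x**2 / (2 * 0.1)); G /= G.sum()       # heat kernel, t = 0.1

conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
lhs = conv(G, conv(V, h))                         # P_t (V * h)
rhs = conv(V, conv(G, h))                         # V * (P_t h)
print("max discrepancy:", np.max(np.abs(lhs - rhs)))
```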
Now, let us denote by \(\left[ f\right] _{\gamma }\) the Hölder semi-norm on \(\mathbb {R}^{d}\) and by \(\left\| f\right\| _{\gamma }\) the associated norm, i.e.:
where, as usual, \(\left\| f\right\| _{\infty }=\sup _{x\in \mathbb {R}^{d}}\left| f\left( x\right) \right| \). We state the following lemma.
Lemma 5.4
Let \(M_t^{N}(\,\cdot \,)\) be the martingale in Eq. (5.7) and \(\beta \in (0, 1/2)\) the constant as in the definition of \(V^{N}\); see Eq. (2.1). Then, there exists \(\gamma \in \left( 0,1\right) \) such that, for all \(p\ge 2\), there is a constant \(C_{p}>0\) such that \(\mathbb {E}\left[ \left\| M_{t}^{N}\right\| _{\gamma }^{p}\right] \le C_{p}\), for all \(N\in \mathbb { N}\) and \(t\in \left[ 0,T\right] \).
Proof
It is enough to check the sufficient conditions (C.3)–(C.4) of Lemma C.2 in Appendix 1.
Let \(\epsilon _N^{-1} = N^{\frac{\beta }{d}}\). Using Eq. (C.6), the bound in Eq. (C.3) reads
where, to ease notation, we set \(|| X^{N, i} ||_{\infty , T}\doteq \sup _{s \in [0,T]}| X_s^{N,i} |,\,\,i\in [[N]]\). The last expected value is finite thanks to \((\text {H4})\); therefore,
where (up to a constant) \(g\left( x\right) \doteq e^{-\frac{\left| x\right| }{8T}}\) is integrable at any power. Now, recall that \(\epsilon _{N}^{-1}=N^{\frac{\beta }{d}}\). Then
which is bounded for \(\beta <\frac{1}{2}\) by choosing \(\delta \) (depending on p) small enough.
As regards the bound in Eq. (C.4), we use estimate (C.7) with \( \gamma \) small enough compared to \(\delta \) so as to have \(( \gamma -\delta \left( 1-\gamma \right) ) <0\). To ease notation and for the sake of space, we denote
and, as before, \(|| X^{N, i} ||_{\infty , T}\doteq \sup _{s \in [0,T]}| X_s^{N,i} |,\,\,i\in [[N]]\). We get
and the conclusion is the same as for the previous term. \(\square \)
Remark 5.5
Lemma 5.4 is a non-trivial achievement of this paper. Indeed, the Kolmogorov–Chentsov criterion (see Karatzas and Shreve 1998, Theorem 2.2.8) would provide a similar result on bounded sets with much fewer computations. However, the dominating constant would diverge when passing to the full space, and we will need the passage to the full space in Lemma 5.6 below. For this reason, we use a more involved strategy, summarized by the results in Appendix 1, based on the Sobolev embedding theorem.
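To make the Hölder quantities \([f]_{\gamma }\) and \(\Vert f\Vert _{\gamma }\) concrete, here is a small grid-based estimator; for \(f(x)=\sqrt{x}\) on \([0,1]\) and \(\gamma =1/2\) the semi-norm equals 1, attained at pairs \((x,0)\) (a standalone illustration, not part of the proof):

```python
import numpy as np

# Grid estimate of the Hoelder norm ||f||_gamma = ||f||_inf + [f]_gamma.
# For f(x) = sqrt(x) on [0, 1] and gamma = 1/2 the semi-norm equals 1,
# so the grid estimate recovers it exactly at the pairs (x, 0).
def holder_norm(f_vals, x, gamma):
    dx_mat = np.abs(x[:, None] - x[None, :])
    df_mat = np.abs(f_vals[:, None] - f_vals[None, :])
    np.fill_diagonal(dx_mat, np.inf)              # exclude x = y pairs
    semi = np.max(df_mat / dx_mat**gamma)         # Hoelder semi-norm [f]_gamma
    return np.max(np.abs(f_vals)) + semi, semi

x = np.linspace(0.0, 1.0, 201)
norm, semi = holder_norm(np.sqrt(x), x, 0.5)
print(f"[f]_0.5 ~ {semi:.4f}, ||f||_0.5 ~ {norm:.4f}")
```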
Lemma 5.6
Let \(p^{N}(t)\) be as in Lemma 5.3. If \(\beta \in \left( 0, 1/2 \right) \) and \(\sup _{N}\left\| p^{N}(0)\right\| _{\gamma }^{2}<\infty \), then there exist \(p\ge 2\), \(\gamma \in \left( 0,1\right) \) and a constant \(C_{\gamma }>0\) such that \(\mathbb {E}\left[ \left\| p^{N}(t)\right\| _{\gamma }^{p}\right] \le C_{\gamma }\).
Proof
Lemma 5.3 provides the following bound
where we use the first inequality of Lemma C.3 in Appendix 1 and the bound of Lemma 5.4. Therefore,
At this point, we need to bound the last two expected values. We start with the first:
hence
As regards the second expected value, we similarly obtain
Therefore,
The conclusion follows by a generalized version of Gronwall’s lemma. \(\square \)
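For the reader's orientation, the prototype of the Gronwall-type estimate invoked here is the classical integral inequality below; the generalized version used in the appendix typically allows in addition a weakly singular kernel (which exact variant is meant is our reading, not restated in the text):

```latex
\varphi(t) \le a + \int_{0}^{t} b(s)\,\varphi(s)\,ds
\quad \text{for all } t \in [0,T]
\;\Longrightarrow\;
\varphi(t) \le a \exp\!\Big( \int_{0}^{t} b(s)\,ds \Big),
```

valid for bounded measurable \(\varphi \ge 0\) and integrable \(b \ge 0\).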
We are now ready to prove Theorem 5.1; its proof is the content of the next subsection.
5.3 Identification of the Limit
Let us denote by \(P_{N}\) and \(Q_{N}\) the laws of \(S^{N}\) and \(V^{N}*S^{N}\), respectively, on \(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d}))\), for each \(N\in \mathbb {N}\). By Lemma 5.2, we know that both families \((P_{N})_{N\in \mathbb {N}}\) and \((Q_{N})_{N\in \mathbb {N}}\) are tight in \(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d}))\). In particular, their convergent subsequences have the same limit, in the following strong sense.
Lemma 5.7
Assume a subsequence \((P_{N_{k}})_{k\in \mathbb {N}}\) converges weakly to a probability measure P on \(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d}))\). Then also \((Q_{N_{k}})_{k\in \mathbb {N}}\) converges weakly to P.
Proof
To prove the lemma, we are going to show that every convergent subsequence of \((Q_{N_{k}})_{k\in \mathbb {N}}\) has limit P; indeed, this implies that \((Q_{N_{k}})_{k\in \mathbb {N}}\) converges to P. To this end, let \((Q_{N_{k}^{\prime }})_{k\in \mathbb {N}}\) be a subsequence of \((Q_{N_{k}})_{k\in \mathbb {N}}\) converging to a probability measure Q on \(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d}))\). In particular, for every positive integer m and every finite sequence \(t_{1}<...<t_{m}\in \left[ 0,T\right] \), both \(\pi _{\left( t_{1},...,t_{m}\right) }P_{N_{k}^{\prime }}\) and \(\pi _{\left( t_{1},...,t_{m}\right) }Q_{N_{k}^{\prime }}\) converge weakly on \(\mathcal {P}(\mathbb {R}^{d})^{m}\), where \(\pi _{\left( t_{1},...,t_{m}\right) }\) is the projection on the finite dimensional marginal at times \(\left( t_{1} ,...,t_{m}\right) \). The limits are, respectively, \(\pi _{\left( t_{1},...,t_{m}\right) }P\) and \(\pi _{\left( t_{1},...,t_{m}\right) }Q\). If we prove that they are equal, then \(P=Q\) as a consequence of Kolmogorov extension theorem (see e.g. Stroock and Varadhan, 2007, Theorem 1.1.10).
Now, by Skorokhod representation theorem, on a new probability space \(\left( \widetilde{\Omega },\widetilde{\mathcal {F}},\widetilde{\mathbb {P}}\right) \) we may consider a sequence \(\widetilde{S}_{t}^{N_{k}^{\prime }}\) of continuous processes with values in \(\mathcal {P}(\mathbb {R}^{d})\) and a continuous process \(\widetilde{\mu }_{t}\) with values in \(\mathcal {P}(\mathbb {R}^{d})\) such that their laws on \(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d}))\) are \(P_{N_{k}^{\prime }}\) and P respectively; and \(V^{N_{k}^{\prime }}*\widetilde{S}_{\cdot }^{N_{k}^{\prime }}\) has law \(Q_{N_{k}^{\prime }}\), which we know to be convergent, weakly, to Q. As remarked at the beginning of Appendix 1, given \(t\in \left[ 0,T\right] \), with probability one, \(\left\langle V^{N_{k}^{\prime }}*\widetilde{S} _{t}^{N_{k}^{\prime }},\varphi \right\rangle \) converges to \(\left\langle \widetilde{\mu }_{t},\varphi \right\rangle \) for all \(\varphi \in C_{c}\left( \mathbb {R}^{d}\right) \), and therefore for all \(\varphi \in \text {C}_{b}\left( \mathbb {R}^{d}\right) \) because \(\widetilde{\mu }_{t}\in \mathcal {P}\left( \mathbb {R}^{d}\right) \). Therefore, with \(\widetilde{\mathbb {P}}\)-probability one, \(V^{N_{k}^{\prime }}*\widetilde{S}_{t}^{N_{k}^{\prime }}\) converges to \(\widetilde{\mu }_{t}\) in the topology of \(\mathcal {P}(\mathbb {R}^{d})\). Hence, also the law of \(V^{N_{k}^{\prime }}*\widetilde{S}_{t}^{N_{k}^{\prime }}\) converges weakly to the law of \(\widetilde{\mu }_{t}\) in the topology of \(\mathcal {P}(\mathbb {R}^{d})\); namely \(\pi _{t}Q_{N_{k}^{\prime }}\) converges weakly to \(\pi _{t}P\). Similarly, if \(t_{1}<...<t_{m}\in \left[ 0,T\right] \), the \(\mathcal {P} (\mathbb {R}^{d})^{m}\)-valued random variable \(\left( V^{N_{k}^{\prime }} *\widetilde{S}_{t_{1}}^{N_{k}^{\prime }},...,V^{N_{k}^{\prime }} *\widetilde{S}_{t_{m}}^{N_{k}^{\prime }}\right) \) converges a.s. 
to \(\left( \widetilde{\mu }_{t_{1}},...,\widetilde{\mu }_{t_{m}}\right) \) in the topology of \(\mathcal {P}(\mathbb {R}^{d})^{m}\). Therefore, also the law of \(\left( V^{N_{k}^{\prime }}*\widetilde{S}_{t_{1}}^{N_{k}^{\prime } },...,V^{N_{k}^{\prime }}*\widetilde{S}_{t_{m}}^{N_{k}^{\prime }}\right) \) converges weakly to the law of \(\left( \widetilde{\mu }_{t_{1}} ,...,\widetilde{\mu }_{t_{m}}\right) \) in the topology of \(\mathcal {P} (\mathbb {R}^{d})^{m}\), which means that \(\pi _{\left( t_{1},...,t_{m}\right) }Q_{N_{k}^{\prime }}\) converges weakly to \(\pi _{\left( t_{1},...,t_{m}\right) }P\). \(\square \)
Now, let \((P_{N_{k}})_{k\in \mathbb {N}}\) be a convergent subsequence of \((P_{N})_{N\in \mathbb {N}}\) (which exists thanks to Lemma 5.2) with limit P on \(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d}))\). We shall prove the following two statements.
- (i):
-
The probability measure P is equal to \(\delta _{\mu }\) for a suitable \(\mu \in \text {C}([0,T];\mathcal {P} (\mathbb {R}^{d}))\) which does not depend on the subsequence \(\left( N_{k}\right) _{k\in \mathbb {N}}\); hence the full sequence \((P_{N} )_{N\in \mathbb {N}}\) will converge weakly to \(\delta _{\mu }\) and \(S^{N}\) will converge in probability to \(\mu \).
- (ii):
-
\(\mu \) satisfies the conditions in Theorem 5.1.
To this end, and in order to simplify notation, we shall prove that the original sequence \((P_{N})_{N\in \mathbb {N}}\) admits a subsequence \((P_{N_{k}})_{k\in \mathbb {N}}\) which converges weakly to \(\delta _{\mu }\) for a unique \(\mu \in C([0,T];\mathcal {P}(\mathbb {R}^{d}))\) satisfying all the conditions of Theorem 5.1. The same argument, applied to any subsequence \((P_{N_{k}})_{k\in \mathbb {N}}\) in place of the original sequence \((P_{N})_{N\in \mathbb {N}}\), proves the claim above; this is the content of Proposition 5.8.
Denote by \(\Lambda \subset \text {C}([0,T];\mathcal {P}(\mathbb {R} ^{d}))\) the set of all \(\left( \mu _{t}\right) _{t\in \left[ 0,T\right] }\) such that there exists \(p:[0,T]\times \mathbb {R}^{d}\rightarrow \mathbb {R}\) with the property that \(x\mapsto p\left( t,x\right) \) is continuous, bounded, non-negative, \(\int _{\mathbb {R}^{d}}p\left( t,x\right) dx=1\) and \(\mu _{t}\left( dx\right) =p\left( t,x\right) dx\) for all \(t\in \left[ 0,T\right] \). Since
is continuous for every \(\varphi \in \text {C}_{b}\left( \mathbb {R}^{d}\right) \), p is measurable in \(\left( t,x\right) \) and weakly continuous in t, in the previous sense.
Given \(\alpha \in \text {C}_b([0,T] \times \mathbb {R}^{d} ; \mathbb {R}^{d})\), \(\varphi \in \text {C}_{c}^{1,2}\left( [0,T]\times \mathbb {R}^{d}\right) \) and \(\mu \in \Lambda \), set
where for the sake of space \(b\left( p(s)\right) \) denotes the function \(b\left( \,\cdot \,,p\left( s,\,\cdot \,\right) \right) \) and \(p\left( s,\,\cdot \,\right) \) is the density of \(\mu _{s}\). Moreover, we remind that \(\mathcal {A}\) is the operator defined in Eq. (4.2).
Proposition 5.8
Let \(\left( N_{k}\right) \) be a subsequence such that \(P_{N_{k}}\) converges in law to P on \(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d}))\). Then:
- (i):
-
\(P\left( \Lambda \right) =1\).
- (ii):
-
\(\int \left( \Phi _{\varphi }\left( \mu \right) \wedge 1\right) P\left( d\mu \right) =0\) for every \(\varphi \in \text {C}_{c}^{1,2}\left( [0,T]\times \mathbb {R}^{d}\right) \).
Proof
The proof is divided into four steps. Before proceeding, notice that by Lemma 5.7 also \((Q_{N_{k}})_{k\in \mathbb {N}}\) converges weakly to P.
Step 1 On an auxiliary probability space, let \((\mu _{t})_{0 \le t \le T}\) be a process with law P. Given \(t\in \left[ 0,T\right] \), \(S_{t}^{N_{k}}\) converges in law to \(\mu _{t}\). Moreover, \(V^{N_{k}}*S_{t}^{N_{k}}\) satisfies the assumptions of Lemma D.2 of Appendix 1. Therefore \(P\left( \Lambda \right) =1\).
Step 2 For every \(\delta \in (0,1)\) and \(\mu \in \mathcal {P}(\mathbb {R}^{d})\), let \(\mathcal {P}_{\delta }\mu \) denote the following function:
Moreover, introduce for \(\varphi \in \text {C}_{c}^{1,2}\left( [0,T]\times \mathbb {R}^{d}\right) \) and \(\delta \in \left( 0,1\right) \), the regularized functional, defined on \(\mu \in C([0,T];\mathcal {P}(\mathbb {R}^{d}))\) (instead of \(\Lambda \))
It is easy to check that the previous functional is continuous on \(\text {C}([0,T];\mathcal {P}(\mathbb {R}^{d}))\). Therefore, since \(\Phi _{\varphi ,\delta }\left( \cdot \right) \wedge 1\) is continuous and bounded,
Recall that \(P\left( \Lambda \right) =1\). For each \(\mu \in \Lambda \) and \(s\in \left[ 0,T\right] \) it holds that:
locally in the uniform topology, where p(s) is the density of \(\mu _{s}\); therefore,
locally in the uniform topology, and it is a bounded convergence. Hence, thanks to the local cut-off given by \(\varphi (s)\) we have:
By Lebesgue dominated convergence we conclude that
and thus again, by the same theorem,
Therefore
In the next step, we prove that this double limit, taken in the specified order, is zero.
Step 3 We have the following identity:
Choosing \(V^{N_{k},-}*\varphi \) as test function,
where \(M_t^{N_k, V^{N_k,-} *\varphi }\) denotes the martingale (5.6) in which N and \(\varphi \) have been replaced by \(N_k\) and \(V^{N_k,-} *\varphi \), respectively. Thus,
For the sake of space, we set for \(t\in [0,T]\):
Now, we compute the expected value on the right-hand side of Eq. (5.10).
In the previous equation, we use the following bound
due to Doob’s inequality. We now show that the terms \((i)-(iii)\) in Eq. (5.12) converge to zero as \(N_{k}\rightarrow \infty \). Since \(\nabla \varphi \) is uniformly continuous, \(V^{N_{k},-} *\nabla \varphi \) converges uniformly to \(\nabla \varphi \), and hence \(\Vert V^{N_{k},-}*\nabla \varphi \Vert _{\infty }\) is bounded. This implies that (5.12)-(i) converges to zero. Indeed,
The uniform convergence of \(V^{N_{k},-}*\nabla \varphi \) and \(V^{N_{k},-} *\left( \alpha (\,\cdot \,) \cdot \nabla \varphi \right) \), the weak convergence of \(S_{s}^{N_{k}}\) (realized a.s. on an auxiliary probability space, by the Skorokhod theorem) and the Lebesgue dominated convergence theorem imply that the term (5.12)-(ii) also converges to zero. The convergence to zero of the third term is more delicate and will be proved in Step 4 below.
Step 4 Let us consider
We now compute the following two bounds (notice that we use the explicit expression). The first is given by:
whereas the second follows since V has compact support, say contained in the unit ball, so that the support of \(V^{N_{k}}\) is contained in a ball of radius \(\epsilon _{N_{k}}\), by
Therefore
which converges to zero as \(k\rightarrow \infty \) and then \(\delta \rightarrow \infty \) thanks to the first estimate of Lemma 5.6. \(\square \)
In order to complete the proof of Theorem 5.1 we have to prove that P is supported on a class of solutions of equation (5.3) to which the uniqueness result of Appendix 1 applies. We now know that P is supported on \(\Lambda \) and satisfies \(\int \left( \Phi _{\varphi }\left( \mu \right) \wedge 1\right) P\left( d\mu \right) =0\) for every \(\varphi \in \)C\(_{c}^{1,2}\left( [0,T]\times \mathbb {R}^{d}\right) \). On an auxiliary probability space \(\left( \widetilde{\Omega },\widetilde{\mathcal {F}},\widetilde{\mathbb {P} }\right) \) with expectation \(\widetilde{\mathbb {E}}\), let \((\widetilde{\mu }_{t})_{0 \le t \le T}\) be a process with law P. We know that
hence
with \(\widetilde{\mathbb {P}}\)-probability one. The set C\(_{c}^{1,2}\left( [0,T]\times \mathbb {R}^{d}\right) \) is separable in the natural metric and therefore we may find a dense countable family \(\mathcal {D}\subset \) C\(_{c}^{1,2}\left( [0,T]\times \mathbb {R}^{d}\right) \); it follows that we may reverse the quantifiers and get that with \(\widetilde{\mathbb {P}} \)-probability one identity (5.13) holds for all \(\varphi \in \mathcal {D}\). Obviously we can also write
since \(\widetilde{\mu }_{t}\) has density \(\widetilde{p}_{t}\), and also \(\mu _{0}\) has density \(p_{0}\) by assumption. From the density of \(\mathcal {D}\) and classical limit theorems we get that, with \(\widetilde{\mathbb {P}} \)-probability one, the previous identity holds for every \(\varphi \in \) C\(_{c}^{1,2}\left( [0,T]\times \mathbb {R}^{d}\right) \). Recall that we denote by \(G\left( t,x\right) \) the density of Brownian motion in \(\mathbb {R}^{d}\) and by \(\mathcal {P}_{t}\) the associated heat semigroup. From the previous identity we deduce
for every \(\psi \in \text {C}_{c}^{2}\left( \mathbb {R}^{d}\right) \). Indeed, given \(t\in \left[ 0,T\right] \) and \(\psi \in \text {C}_{c}^{2}\left( \mathbb {R} ^{d}\right) \), consider the test function \(\varphi ^{(t)}(s)=\mathcal {P}_{t-s}\psi \) for \(s \in [0,t]\); by approximation by functions of class C\(_{c}^{1,2}\left( [0,T]\times \mathbb {R}^{d}\right) \), we deduce
which simplifies to
and therefore leads to equation (5.14) by simple manipulations. By the arbitrariness of \(\psi \) and the continuity in x of \(\widetilde{p}_{t}\) and of both \(\mathcal {P}_{t}f\) and \(\nabla \mathcal {P}_{t-s}f\) (this one only for \(s<t\)) for every continuous bounded f (here we also use the bound \(\left\| \nabla \mathcal {P}_{t-s}f\right\| _{\infty }\le \frac{C}{\left( t-s\right) ^{1/2}}\left\| f\right\| _{\infty }\) and the integrability of \(\frac{C}{\left( t-s\right) ^{1/2}}\)) we get
By the same arguments we deduce that \(\widetilde{p}\) is continuous in \(\left( t,x\right) \). Moreover, it is bounded uniformly in \(\left( t,x\right) \) by the identity itself, because \(\mathcal {P}_{\cdot }p_{0}\) is bounded, \(\alpha \) is bounded, b is bounded and again we use \(\left\| \nabla \mathcal {P} _{t-s}f\right\| _{\infty }\le \frac{C}{\left( t-s\right) ^{1/2}}\left\| f\right\| _{\infty }\). In conclusion \(\widetilde{p}\) is of class \(\text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R}^{d}\right) \). In Appendix 1 it is proved that in this class there is a unique solution of the previous mild equation, hence P is supported by a single element. This completes the proof of Theorem 5.1.
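For orientation, the mild (Duhamel) formulation used in the uniqueness argument above has, schematically, the following structure, where we write \(\beta \) for the full drift collecting the control \(\alpha \) and the interaction term b (a sketch of the structure, not a verbatim restatement of the equation in the text):

\[
\widetilde{p}_{t}\;=\;\mathcal {P}_{t}p_{0}\;-\;\int _{0}^{t}\nabla \cdot \,\mathcal {P}_{t-s}\big (\beta (s,\cdot )\,\widetilde{p}_{s}\big )\,ds .
\]

In this form the estimate \(\left\| \nabla \mathcal {P}_{t-s}f\right\| _{\infty }\le \frac{C}{\left( t-s\right) ^{1/2}}\left\| f\right\| _{\infty }\), together with the boundedness of \(\beta \), gives \(\Vert \widetilde{p}_{t}\Vert _{\infty }\le \Vert p_{0}\Vert _{\infty }+C^{\prime }\int _{0}^{t}\left( t-s\right) ^{-1/2}\Vert \widetilde{p}_{s}\Vert _{\infty }\,ds\), and the uniform bound follows from a generalized Gronwall lemma.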
6 Approximate Nash Equilibria from the Mean Field Game
In this section we show that if we have a weak solution (u, p) of the PDE system in Eq. (4.1), then we can construct a sequence of approximate Nash equilibria for the corresponding N-player game. This is the content of the following theorem.
Theorem 6.1
Let \(N \in \mathbb {N}\), \(N > 1\). Grant (H1)–(H4). Suppose (u, p) is a weak solution of the PDE system in Eq. (4.1) and let \(\alpha ^{*}(t, x) \doteq - \nabla u(t, x)\) be the optimal control of the problem OC in the class \(\mathcal {A}^{fb}_{K}\), with K given by Definition 4.13. Set
and \(\varvec{\alpha }^{N} = (\alpha ^{N,1}, \ldots , \alpha ^{N,N}){\in \mathcal {A}^{N;fb}_K}\). Then for every \(\varepsilon > 0\), there exists \(N_0 = N_0(\varepsilon ) \in \mathbb {N}\) such that \(\varvec{\alpha }^{N}\) is an \(\varepsilon \)-Nash equilibrium for the N-player game whenever \(N \ge N_0\).
Proof
The proof is divided into three steps.
Step 1 Let \(((\Omega _{N}, \mathcal {F}_{N}, (\mathcal {F}_t^{N}), \mathbb {P}^N), \varvec{W}^{N}, \varvec{X}^{N})\) be a weak solution of Eq. (3.1) under the strategy vector \(\varvec{\alpha }^{N}\). We note that the function F defined in (5.1) with \(\alpha (s, X_s^{N,i}) = - \nabla u(s, X_s^{N,i})\) is continuous and bounded; this guarantees the existence of a weak solution of the system in Eq. (3.1) for any \(N\in \mathbb {N}\). Let \(S^{N}_t\) (resp. \(S^{N}\)) denote the associated empirical measure on \(\mathbb {R}^{d}\) (resp. on the path space \(\mathcal {X}\)). We are going to show that
Theorem 5.1-(i) enables us to prove the convergence result in Eq. (6.2) for the following simplified cost functional, where we do not change the notation for the sake of simplicity:
Symmetry of the coefficients allows us to re-write the previous cost functional in terms of \(S^{N}_{t}\), \(t \in [0,T]\) as
which converges, as \(N \rightarrow \infty \), to
where \(S^{\infty }\) is the deterministic limit in probability of the sequence of random empirical measures \((S^{N})_{N\in \mathbb {N}}\) given by Theorem 5.1-(i).
We claim that \(S_t^{\infty } \equiv p(t,\,\cdot \,)\), \(t \in [0, T]\), with p the second component of the pair (u, p), i.e. the density of the solution of Eq. (4.15) as stated by the Verification Theorem 4.8. Theorem 5.1 states that, given \(\varvec{\alpha }^{N}\), the empirical measure \(S_t^{N}\) corresponding to the interacting system with this control converges to a flow of measures with density \(p^{\alpha }(t,\,\cdot \,)\), where we stress the dependence on \(\alpha \). In addition, Theorem 5.1-(ii) states that \(p^{\alpha }(t,\,\cdot \,)\) is the mild solution of Eq. (5.3). By applying the previous result to the optimal control we have that the corresponding empirical measure on \(\mathbb {R}^{d}\) converges to \(p^{\alpha ^{*}}(t,\,\cdot \,)\), mild solution of Eq. (5.3). Also p, the second component of (u, p), is a mild solution of this equation. The uniqueness Theorem 4.5 now implies that \(p^{\alpha ^{*}}(t,\,\cdot \,)\) coincides with \(p(t,\,\cdot \,)\). Hence, we can conclude that Eq. (6.2) holds.
Step 2 For each \(N \in \mathbb {N}\) and each \(i \in \{1, \ldots , N\}\), let \(\beta ^{N,i} \in \mathcal {A}^{N;1;fb}_{K}\) be such that
We are going to show the following result:
To this aim, we introduce the N-player dynamics in the case where only the first player deviates from the Nash equilibrium. For \(N \in \mathbb {N}\), consider the system of equations:
where \(\beta ^{N,1} \in {\mathcal {A}^{N;1;fb}_{K}}\). We denote with \(S^{N;\beta }\doteq (S^{N;\beta }_t)_{t \in [0, T]}\) the empirical measure process on \(\mathbb {R}^{d}\) of the previous system.
Now, for each \(N \in \mathbb {N}\), let \(((\Omega _{N}, \mathcal {F}_{N}, (\mathcal {F}_t^{N}), \mathbb {Q}^N), {\varvec{W}^{N;\beta }}, \varvec{X}^{N;\beta })\) be a weak solution of Eq. (6.4). Since the presence of a deviating player destroys the symmetry of the pre-limit system, following Lacker [15] (proof of Theorem 3.10 therein), we perform a change of measure to restore it. More precisely, we define \(\mathbb {P}^{N}\) as the probability measure under which \(\varvec{X}^{N;\beta }\) has the following dynamics:
where the \(\widehat{W}_t^{N,i;\beta }\) are \(\mathbb {P}^N\)-Wiener processes, i.e. \(\mathbb {P}^N\) is defined via \(\frac{d\mathbb {P}^{N}}{d\mathbb {Q}^{N}}\Big \vert _{t=T} \doteq Z_T^N\) where
where \(\varvec{\beta }^N=[\varvec{\alpha }^{N,-1},\beta ^{N,1}]\) and \(\varvec{W}^{N;\beta }=(W^{N,1;\beta },\ldots ,W^{N,N;\beta })\). We notice that \(Z^N\) is a well-defined \(\mathbb {Q}^N\)-martingale thanks to boundedness of the coefficients. Theorem 5.1-(i) ensures the convergence under \(\mathbb {P}^{N}\) of the \(S^{N;\beta }\) to \(S^{\infty } \equiv \delta _p\). Boundedness of the coefficients also gives uniform integrability of the sequence \(((Z^{N}_T)^{-1})_{N \in \mathbb {N}}\); therefore, the probability measures \(\mathbb {Q}^{N}(A) \doteq \mathbb {E}^{\mathbb {P}^{N}}\left[ {(Z^{N}_T)^{-1}}\mathsf {1}_{A}\right] \), \(A \in \mathcal {F}^{N}\), converge to zero whenever \(\mathbb {P}^{N}(A)\) converges to zero in the limit \(N \rightarrow \infty \). So the convergence (in law and also in probability) of \(S^{N;\beta }\) to \(S^{\infty }\) under \(\mathbb {P}^{N}\) implies its convergence (in law and also in probability) under \(\mathbb {Q}^{N}\) to the same (constant) limit.
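Schematically, \(Z^{N}\) is the Doléans–Dade exponential supplied by Girsanov's theorem. With unit diffusion coefficient, and writing \(\Delta ^{N}_{s}\) for the difference between the deviating drift and the equilibrium drift of the first player (a sketch of the generic form; the precise expression is the one defining \(Z^{N}_{T}\) above),

\[
Z^{N}_{T}\;=\;\exp \Big ( \int _{0}^{T}\Delta ^{N}_{s}\cdot dW^{N,1;\beta }_{s}\;-\;\frac{1}{2}\int _{0}^{T}\vert \Delta ^{N}_{s}\vert ^{2}\,ds\Big ) .
\]

Since the coefficients are bounded, Novikov's condition is satisfied, which is precisely why \(Z^{N}\) is a true \(\mathbb {Q}^{N}\)-martingale.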
Now, in order to gain more compactness in the space of admissible controls, we interpret the controls in Eq. (6.4) as stochastic relaxed controls (Appendix 1). To this end, we denote with \({\overline{B}_{K}(0)} \subset \mathbb {R}^{d}\) the closed ball of radius K around the origin and \(\mathcal {R}_K \doteq \mathcal {R}_{\overline{B}_{K}(0)}\). Then \({\mathcal {R}_{K}}\) is compact (Appendix 1). For \(N \in \mathbb {N}\), let \(\tilde{\beta }_{t}^{1}\) and \(\tilde{\alpha }_{t}^{*,i}\), \(i \in \{2, \ldots , N\}\), be \({\mathcal {R}_{K}}\)-valued random measures determined by:
We rewrite Eq. (6.4) in terms of these relaxed controls:
We make the following two claims. Claim a: the family \(\left( \mathbb {P}^{N} \circ (X^{N, 1; \beta }, {\tilde{\beta }^{N,1}}, S^{N;\beta })^{-1}\right) _{N \in \mathbb {N}}\) is tight in \(\mathcal {P}(\mathcal {X} \times {\mathcal {R}_{K}} \times \mathcal {P}(\mathbb {R}^{d}))\) and thus admits a convergent subsequence. We denote by \((X^{\beta ^{*}}, \tilde{\beta }^{*,1},\,p)\) the limit of the subsequence, which can be constructed by means of Skorokhod’s representation theorem on a suitable limiting probability space \((\Omega ^{\beta ^*},\mathcal {F}^{\beta ^*},\mathbb {Q}^{\beta ^*})\). Claim b: the limit \(X^{\beta ^{*}}\) has the following representation:
on \((\Omega ^{\beta ^*},\mathcal {F}^{\beta ^*},\mathbb {Q}^{\beta ^*})\), i.e. there exist a filtration \((\mathcal {F}^{\beta ^*}_t)\) and an \((\mathcal {F}^{\beta ^*}_t)\)-Wiener process \(W^{\beta ^*}\) on \((\Omega ^{\beta ^*},\mathcal {F}^{\beta ^*},\mathbb {Q}^{\beta ^*})\) such that \(X^{\tilde{\beta }^{*}}\) has representation (6.6). If both Claim a and Claim b hold, by setting \(\beta _t^{*} \doteq \int _{\overline{B}_{{K}}(0)} x\, \tilde{\beta }^{*,1}_{t}(dx)\), we have that \(J^{N}_i([\varvec{\alpha }^{N,-i}, {\beta ^{N,i}}])\) converges to
along the selected subsequence with \(J(\beta ^{*}) \ge J(\alpha ^{*})\). Equation (6.3) follows by taking the limit inferior of the sequence.
We now prove the two claims.
Proof of Claim a. Tightness of the laws of \(X^{N,1;\beta }\) and of \(S^{N;\beta }\) under \(\mathbb {Q}^{N}\) follows from their tightness under \(\mathbb {P}^{N}\). On the other hand, \((\mathbb {P}^{N} \circ ({\tilde{\beta }^{N,1}})^{-1})_{N \in \mathbb {N}}\) is tight in \(\mathcal {P}({\mathcal {R}_{K}})\) because \({\mathcal {R}_{K}}\) is compact. This implies that \(\left( \mathbb {P}^{N} \circ (X^{N, 1; \beta }, \tilde{\beta }^{N,1}, S^{N;\beta })^{-1}\right) _{N \in \mathbb {N}}\) is tight in \(\mathcal {P}(\mathcal {X} \times {\mathcal {R}_{K}} \times \mathcal {P}(\mathbb {R}^{d}))\).
Proof of Claim b. We use a characterization of solutions to Eq. (6.6) with fixed measure variable through a martingale problem in the sense of Stroock and Varadhan [24] (see El Karoui and Méléard [9] for a study of the martingale problems we employ). Let \(f \in \text {C}_{c}^{2}(\mathbb {R}^{d})\) and let us define the process \(M^{f}\) on \((\mathcal {X}\times {\mathcal {R}_{K}}, \mathcal {B}(\mathcal {X}\times {\mathcal {R}_{K}}))\) by
where \(t \in [0, T]\). We claim that \(\Theta ^{*}\doteq \mathbb {P} \circ (X^{\tilde{\beta }^{*}}, \tilde{\beta }^{*})^{-1}\in \mathcal {P}(\mathcal {X} \times \mathcal {R}_K)\) is a solution of the martingale problem associated to Eq. (6.7), i.e. such that for all \(f \in \text {C}_{c}^{2}(\mathbb {R}^{d})\), \(M^f\) is a \(\Theta ^*\)-martingale. The martingale property is intended on \((\mathcal {X} \times {\mathcal {R}_{K}}, \mathcal {B}(\mathcal {X} \times {\mathcal {R}_{K}}))\) with respect to the \(\Theta ^{*}\)-augmentation of the canonical filtration made right continuous by a standard procedure. However, to conclude it is sufficient to check that the martingale property holds with respect to the canonical filtration on \(\mathcal {X} \times {\mathcal {R}_{K}}\) (see, for instance, Problem 5.4.13 in Karatzas and Shreve (1998)). We denote by \((\mathcal {G}_t)_{t \in [0,T]}\) this filtration and show that the process in Eq. (6.7), which is bounded, measurable and \(\mathcal {G}_t\)-adapted, is a \(\Theta ^{*}\)-martingale for all \(f \in \text {C}_{c}^{2}(\mathbb {R}^{d})\). This is equivalent to having
for every choice of \((t_1, t_2, Y) \in [0,T]^{2} \times \text {C}_b(\mathcal {X} \times {\mathcal {R}_{K}})\) such that \(t_1 \le t_2\) and Y is \(\mathcal {G}_{t_1}\)-measurable. To this aim, we define and compute the following function \(\Psi ^{p} = \Psi ^{p}_{(t_1, t_2, Y, f)}:\mathcal {P}(\mathcal {X} \times {\mathcal {R}_{K}})\rightarrow \mathbb {R}\):
The previous function, in particular, is continuous with respect to the weak convergence of measures since the integrands are bounded and continuous on \(\mathcal {X} \times {\mathcal {R}_{K}}\). Also, we define:
for \((\varphi ^{N}, \rho ^{N}) \in \mathcal {X}^{\times N} \times \mathcal {R}_{K}^{\times N}\), where \(\varphi ^{N,i}\) and \(\rho ^{N,i}\) denote the \(i\)-th components of \(\varphi ^{N}\) and \(\rho ^{N}\), respectively, and we define the extended empirical measure \(\overline{S}^{N;\beta }\) as
Here, \(X^{N, i; \beta }\) denotes the dynamics of player i in the system where the first player only deviates from the Nash equilibrium written in terms of relaxed controls \(\rho ^{N, i;\beta }\).
Now, by construction, it holds that
where \(\overline{\Theta }^*_N \doteq \mathbb {P}^{N} \circ (X^{N, i; \beta }, \rho ^{N,i;\beta })^{-1}\), for every choice of \((t_1, t_2, \overline{Y}^i) \in [0,T]^{2} \times \text {C}_b(\mathcal {X}^{\times N} \times {\mathcal {R}_{K}^{\times N}})\) such that \(t_1 \le t_2\) and \(\overline{Y}^i\) is \(\mathcal {G}^N_{t_1}\)-measurable, with \((\mathcal {G}^N_t)\) being the canonical filtration on \(\mathcal {B}(\mathcal {X}^{\times N} \times {\mathcal {R}_{K}^{\times N}})\). To conclude, it then suffices to show that the previous term converges to \(\Psi ^{p}(\Theta ^{*})\) in the limit \(N \rightarrow \infty \). Let us set \(\overline{Y}^{i}(\varphi ^{N}) \doteq Y(\varphi ^{N,i})\) and show that the following decomposition for the term in Eq. (6.9) holds:
Indeed, the first term is equal to:
whereas the second reads as:
In particular, \(\Psi _{(t_1, t_2, Y, f)}(\overline{S}^{N; \beta })\) corresponds to the integrals in Eq. (6.8) computed w.r.t. the extended empirical measure \(\overline{S}^{N; \beta }\). The term in Eq. (6.11) converges to \(\Psi ^{p}_{(t_1, t_2, Y, f)}(p)\) in the limit \(N \rightarrow \infty \) thanks to the weak continuity of the involved functional and the weak convergence of measures. The term in Eq. (6.12), instead, vanishes in the limit \(N \rightarrow \infty \) thanks to Lemma D.2, since it can be bounded by:
We conclude that \(\Theta ^{*}\in \mathcal {P}(\mathcal {X} \times \mathcal {R}_K)\) solves the martingale problem associated to Eq. (6.7). By an argument analogous to that in the proofs of Proposition 5.4.6 and Corollary 5.4.8 in Karatzas and Shreve (1998), we finally conclude that there exists a weak solution \(((\Omega ^{\beta ^*},\mathcal {F}^{\beta ^*},\mathbb {Q}^{\beta ^*}), X^{\tilde{\beta }^{*}},W^{\beta ^*})\) of Eq. (6.6).
Step 3 For every \(N \in \mathbb {N}, \)
By Step 1 and Step 2 there exists \(N_0(\varepsilon )\) such that
for all \(N\ge N_0(\varepsilon )\). This concludes the proof. \(\square \)
Notes
The authors warmly thank one of the two anonymous Referees for her/his suggestion to look at the Hopf-Cole reduction, to prove global in time existence, because of the quadratic structure of our Hamiltonian.
References
Aurell, A., Djehiche, B.: Mean-field type modeling of nonlocal crowd aversion in pedestrian crowd dynamics. SIAM J. Control Optim. 56(1), 434–455 (2018)
Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer Science & Business Media, Berlin (2010)
Cardaliaguet, P.: Notes on mean field games (from P.-L. Lions’ lectures at the Collège de France). Technical report (2012)
Cardaliaguet, P.: The convergence problem in mean field games with local coupling. Appl. Math. Optim. 76(1), 177–215 (2017)
Cardaliaguet, P., Porretta, A.: An introduction to mean field game theory. In: Mean Field Games, pp. 1–158. Springer, Berlin (2020)
Carmona, R., Delarue, F., et al.: Probabilistic Theory of Mean Field Games with Applications I–II. Springer, Berlin (2018)
Di Nezza, E., Palatucci, G., Valdinoci, E.: Hitchhiker’s guide to the fractional Sobolev spaces. Bull. Sci. Math. 136(5), 521–573 (2012)
Dudley, R.: Convergence of Baire measures. Stud. Math. 27, 251–268 (1966)
El Karoui, N., Méléard, S.: Martingale measures and stochastic calculus. Probab. Theory Relat. Fields 84(1), 83–101 (1990)
El Karoui, N., Nguyen, D., Jeanblanc-Picqué, M.: Compactification methods in the control of degenerate diffusions: existence of an optimal control. Stochastics 20(3), 169–219 (1987)
Funaki, T.: A certain class of diffusion processes associated with nonlinear parabolic equations. Zeitschrift für Wahrscheinlichkeitstheorie Verwandte Gebiete 67(3), 331–348 (1984)
Gomes, D.A., Pimentel, E.A., Voskanyan, V.: Regularity Theory for Mean-field Game Systems. Springer, Berlin (2016)
Huang, M., Malhamé, R.P., Caines, P.E.: Large population stochastic dynamic games: closed-loop McKean–Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst. 6(3), 221–252 (2006)
Kushner, H.J.: Numerical methods for stochastic control problems in continuous time. SIAM J. Control Optim. 28(5), 999–1048 (1990)
Lacker, D.: On the convergence of closed-loop Nash equilibria to the mean field game limit. Ann. Appl. Probab. 30(4), 1693–1761 (2020)
Lasry, J.-M., Lions, P.-L.: Jeux à champ moyen. ii–horizon fini et contrôle optimal. Comptes Rendus Mathématique 343(10), 679–684 (2006)
Lasry, J.-M., Lions, P.-L.: Mean field games. Jpn. J. Math. 2(1), 229–260 (2007)
Lunardi, A.: Analytic Semigroups and Optimal Regularity in Parabolic Problems. Springer Science & Business Media, Berlin (2012)
Morale, D., Capasso, V., Oelschläger, K.: An interacting particle system modelling aggregation behavior: from individuals to populations. J. Math. Biol. 50(1), 49–66 (2005)
Oelschläger, K.: A martingale approach to the law of large numbers for weakly interacting stochastic processes. Ann. Probab. 12, 458–479 (1984)
Oelschläger, K.: A law of large numbers for moderately interacting diffusion processes. Zeitschrift für Wahrscheinlichkeitstheorie verwandte Gebiete 69(2), 279–322 (1985)
Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations, vol. 44. Springer Science & Business Media, Berlin (2012)
Porretta, A.: Weak solutions to Fokker–Planck equations and mean field games. Arch. Ration. Mech. Anal. 216(1), 1–62 (2015)
Stroock, D.W., Varadhan, S.R.S.: Multidimensional Diffusion Processes. Springer, Berlin (2007)
Veretennikov, A.J.: On strong solutions and explicit formulas for solutions of stochastic integral equations. Math. USSR-Sbornik 39(3), 387 (1981)
Funding
Open access funding provided by Scuola Normale Superiore within the CRUI-CARE Agreement.
M. Ghio and G. Livieri acknowledge the financial support of the UniCredit Bank R&D group through the Dynamical and Information Research Institute at the Scuola Normale Superiore. All authors thank Prof. Fausto Gozzi (LUISS Guido Carli), Prof. Luciano Campi (University of Milan) and Prof. Markus Fischer (University of Padova) for useful suggestions.
Appendices
Appendix A: Some Well Known Results
For the reader's convenience, we collect here some well-known results on convolutions, regularizations and mollifiers that have been used throughout the paper.
First, we recall some properties of convolution and regularization.
Proposition A.1
(Convolution and regularization) [2, Propositions 4.4.15, 4.4.19 and 4.4.20] The following statements on convolution hold true:
- (i):
-
Let \(f\in L^1(\mathbb {R}^d)\) and \(g\in L^p(\mathbb {R}^d)\), \(1\le p\le \infty \). Then \(f*g\) is well defined in \(L^p(\mathbb {R}^d)\).
- (ii):
-
Let \(\theta \in \text {C}_c(\mathbb {R}^d)\) and \(\varphi \in L^1_{loc}(\mathbb {R }^d)\). Then \(\theta *\varphi \) is well defined in \(\text {C}(\mathbb {R}^d)\).
- (iii):
-
Let \(\theta \in \text {C}_c^k(\mathbb {R}^d)\) and \(\varphi \in L^1_{loc}( \mathbb {R}^d)\). Then \(\theta *\varphi \) is well defined in \(\text {C}^k(\mathbb {R}^d)\), \(k\ge 1\), including \(k=\infty \).
In particular, in our work we used convolutions of the type \(\theta *\mu \), where \(\theta \in \text {C}_c^{\infty }(\mathbb {R}^d)\) and \(\mu \in \mathcal {P}(\mathbb {R}^d)\). Therefore, since \(\mu \in L^1(\mathbb {R}^d)\) and \(\theta \in L^p(\mathbb {R}^d)\) for any \(1\le p\le \infty \), by item (i) of Proposition A.1 the convolution \(\theta *\mu \) is well defined in \(L^p(\mathbb {R}^d)\). Moreover, by items (ii) and (iii) of Proposition A.1, \( \theta *\mu \in \text {C}^k(\mathbb {R}^d)\) for any \(k\ge 1\), including \(k=\infty \). We also use scalar products of the type \(\langle \theta *\mu ,\varphi \rangle \), where \(\varphi \in L^2(\mathbb {R}^d)\). In particular, for any function \(g:\mathbb {R}^d\rightarrow \mathbb {R}\), if we denote \(g^-\doteq g(-\cdot )\), then
Second, we give the following definition and proposition.
Definition A.2
(Mollifiers) [2, Chapter 4.4] A sequence of mollifiers is any sequence of functions \((\theta _N)_{N\in \mathbb {N}}\) from \(\mathbb {R}^d\) to \(\mathbb {R}\) such that for each \(N \in \mathbb {N}\): \(\theta _N\in \text {C}^{\infty }_c(\mathbb {R}^d)\) with support in \(\overline{B}_{1/N}(0)\), \(\theta _N \ge 0\) and \(\int _{\mathbb {R}^d}\theta _N(x)\,dx=1\).
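A canonical example of such a sequence is obtained by rescaling the standard bump function:

\[
\theta (x)\doteq \begin{cases} c_{d}\,\exp \big ( -\tfrac{1}{1-\vert x\vert ^{2}}\big ), & \vert x\vert <1,\\ 0, & \vert x\vert \ge 1,\end{cases} \qquad \theta _{N}(x)\doteq N^{d}\,\theta (Nx),
\]

where \(c_{d}>0\) is the constant normalizing \(\int _{\mathbb {R}^d}\theta (x)\,dx=1\); then indeed \(\theta _{N}\in \text {C}^{\infty }_c(\mathbb {R}^d)\), \(\theta _{N}\ge 0\), \(\text {supp}\,\theta _{N}\subset \overline{B}_{1/N}(0)\) and \(\int _{\mathbb {R}^d}\theta _{N}(x)\,dx=1\).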
Proposition A.3
(Mollification) [2, Proposition 4.4.21] Let \(f\in \text {C}(\mathbb {R}^d)\). Then \(\theta _N*f\rightarrow f\) uniformly on compact sets.
Third, we give the following results on weak convergence.
Lemma A.4
(Weak convergence and the double index problem) Let \((\mu _N)_{N \in \mathbb {N}} \subset \mathcal {P}(\mathbb {R}^{d})\) be a sequence converging weakly to \(\mu \in \mathcal {P}(\mathbb {R}^{d})\). Let \((f_N)_{N \in \mathbb {N}} \subset \text {C}_b(\mathbb {R}^d)\) be a sequence converging to \(f \in \text {C}_b(\mathbb {R}^d)\) uniformly on compact sets and such that \(\sup _{N\in \mathbb {N}}\Vert f_N\Vert _{\infty }\le C<\infty \) and \(\Vert f\Vert _{\infty }\le C<\infty \) for some \(C>0\). Then
Proof
The proof is based on the following decomposition, holding for any \(R>0\):
where \(\overline{B}_R(0)\subset \mathbb {R}^d\) is the closed ball of radius R centered at the origin. Hence
where \( \Vert \cdot \Vert _{\infty ,\overline{B}_R(0)}\) is the infinity norm on \(\overline{B}_R(0)\). Now let \(\varepsilon >0\) and choose \(R>0\) be such that
by the tightness of the family \((\mu _N)_{N \in \mathbb {N}}\). Then, by uniform convergence on compact sets of the sequence \((f_N)_{N \in \mathbb {N}}\) to f and by weak convergence of the \((\mu _N)_{N \in \mathbb {N}}\) to \(\mu \), there exists \(N_0\in \mathbb {N}\) such that the first and second terms are smaller than \(\frac{\varepsilon }{4}\) for all \(N\ge N_0\). We conclude that for all \(\varepsilon >0 \) there exists \(N_0\in \mathbb {N}\) such that
for all \(N\ge N_0\). \(\square \)
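Spelled out, the decomposition and the resulting estimate read as follows (a reconstruction consistent with the three terms discussed in the proof):

\[
\Big \vert \int f_{N}\,d\mu _{N}-\int f\,d\mu \Big \vert \;\le \;\Vert f_{N}-f\Vert _{\infty ,\overline{B}_R(0)}\;+\;\Big \vert \int f\,d\mu _{N}-\int f\,d\mu \Big \vert \;+\;2C\,\mu _{N}\big ( \mathbb {R}^d\setminus \overline{B}_R(0)\big ) ,
\]

where the first term is controlled by uniform convergence on compact sets, the second by weak convergence, and the third by tightness combined with the uniform bound \(\sup _{N\in \mathbb {N}}\Vert f_N\Vert _{\infty }\vee \Vert f\Vert _{\infty }\le C\).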
Lemma A.5
Let \((\mu _N)_{N \in \mathbb {N}} \subset \mathcal {P}(\mathbb {R}^{d})\) be a sequence converging weakly to \(\mu \in \mathcal {P}(\mathbb {R}^{d})\). Set \(f_N \doteq \theta _N*\mu _N\) for some mollifiers \(\theta _N\) and assume \( \lim _{N \rightarrow \infty } f_N = f\) in \(L^2(\mathbb {R}^d)\) for some \(f\in L^2(\mathbb {R}^d)\). Then \(\mu \) has density f with respect to the Lebesgue measure on \(\mathbb {R}^d\).
Proof
First, notice that \(\langle f_N, \varphi \rangle =\langle \theta _N *\mu _N,\varphi \rangle = \langle \mu _N,\theta _N^{-}*\varphi \rangle \) for any \(\varphi \in L^2(\mathbb {R}^d)\cap \text {C}(\mathbb {R}^d)\) and for each \(N \in \mathbb {N}\). Set \(\varphi _N \doteq \theta _N^-*\varphi \) for each \(N \in \mathbb {N}\). Now \(\langle f_N, \varphi \rangle \rightarrow \langle f, \varphi \rangle \) for any \(\varphi \in L^2(\mathbb {R}^d)\), by strong convergence in \(L^2(\mathbb {R}^d)\) of the \(f_N\), but also
by weak convergence of the \(\mu _N\) and uniform convergence on compact sets of the \(\varphi _N\) to \(\varphi \) (Lemma A.4). Hence
for any \(\varphi \in L^2(\mathbb {R}^d)\cap \text {C}(\mathbb {R}^d)\). The same reasoning holds for any \(\varphi \in \text {C}_b(\mathbb {R}^d)\), hence we conclude. \(\square \)
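In summary, the two limits computed in the proof combine to give

\[
\int _{\mathbb {R}^d}\varphi (x)\,f(x)\,dx\;=\;\lim _{N\rightarrow \infty }\langle f_{N},\varphi \rangle \;=\;\lim _{N\rightarrow \infty }\langle \mu _{N},\varphi _{N}\rangle \;=\;\int _{\mathbb {R}^d}\varphi (x)\,\mu (dx)
\]

for all \(\varphi \in \text {C}_b(\mathbb {R}^d)\), which identifies \(\mu (dx)=f(x)\,dx\).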
Appendix B: Hamilton–Jacobi Equation, Kolmogorov Equation and Mild Solutions
In Sect. B.1 we study the decoupled Hamilton–Jacobi–Bellman equation and the Kolmogorov equation defining the PDE system in Eq. (4.1) via the mild formulation; see Theorems B.1 and B.2. This enables us to prove the equivalence between the mild and weak formulations; see the proof of Lemma 4.2 in Sect. B.2. In Sect. B.3 we prove Theorem 4.4, i.e. the existence of a global solution of the PDE system (see Theorem 4.4 in Sect. 4). On the other hand, in Sect. B.4 we prove Theorem 4.5, i.e. the local uniqueness of a solution of the PDE system (see Theorem 4.5 in Sect. 4). Finally, in Sect. B.5 we give the proof of Theorem 4.8.
B.1: The Hamilton–Jacobi and the Kolmogorov Equations in Mild Form
Throughout this section, we assume that \(p_0, b, f, g\) satisfy the hypotheses (H1)–(H2) and (H4) in Sect. 2.
Theorem B.1
Given \(p_{0}\in \text {C}_{b}\left( \mathbb {R} ^{d}\right) \), given \(\alpha \in \text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R}^{d};\mathbb {R}^{d}\right) \), there exists at most one solution of equation
in the class \(\text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R}^{d}\right) \).
Proof
Assume by contradiction that \(p^{(i)}(t)\), \(i=1, 2\), are two solutions of Eq. (B.1) of class \(\text {C}_b([0,T]\times \mathbb {R}^{d})\) and let q(t) be their difference. By a generalized form of Gronwall’s lemma one has that \(\left\| q\left( t\right) \right\| _{\infty }=0\) for every \(t\in \left[ 0,T\right] \), from which the conclusion readily follows. For the sake of space, we refer the reader to the proof of Theorem 4.5 later in this appendix for the precise estimates; in particular, one has to use the estimate for the map \(\Gamma _1\), the first component of the map \(\Gamma \) defined in (B.7). \(\square \)
Theorem B.2
Given \(p \in \text {C}_{b}([0,T] \times \mathbb {R}^{d})\) and given \(\alpha \in \text {C}_{b}([0,T]\times \mathbb {R}^{d};\mathbb {R}^{d})\), there exists at most one solution u, in the class \(\text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R} ^{d}\right) \) and such that its partial derivatives are also of class \(\text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R}^{d}\right) \), of the following equation
Proof
Assume by contradiction that \(u^{(i)}(t)\), \(i = 1, 2\), are two solutions of Eq. (B.2) of class \(\text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R} ^{d}\right) \) and such that their partial derivatives are of class \(\text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R}^{d}\right) \). Set \(\theta ^{(i)} = \nabla u^{(i)}\), \(i = 1, 2\), and let q(t) be their difference. Using the estimates for the map \(\Gamma _2\), the second component of the map \(\Gamma \) defined in (B.7), one has that \(\Vert q(t) \Vert _{\infty } = 0\) for every \(t \in [0, T]\), hence \(\theta ^{(1)}(t) = \theta ^{(2)}(t)\) for every \(t \in [0, T]\). Therefore, \(u^{(1)}(t) = u^{(2)}(t)\) because of Eq. (B.2). \(\square \)
B.2: Proof of Lemma 4.2
Proof
Let (u, p) be a weak solution of the PDE system in Eq. (4.1), and consider Eq. (4.3). In particular, for a given \(t \in \left[ 0, T \right] \),
Using on \(\left[ t,T\right] \) the following test function
with \(\phi \in \text {C}^2_{b}(\mathbb {R}^{d}) \cap W^{2,2}(\mathbb {R}^{d})\), we get
Notice that \(\mathcal {A} \mathcal {P} _{s-t}\phi =0\) and that \(\left\langle a,\mathcal {P}_{t}b\right\rangle =\left\langle \mathcal {P}_{t}a,b\right\rangle \) for every pair of functions \( a,b\in \text {C}_{b}\left( \mathbb {R}^{d}\right) \). Then
Because \(\phi \) can be chosen in an arbitrary way, we deduce the mild formulation of Eq. (4.6). The equation for p is similar, as well as the other direction. \(\square \)
B.3: Proof of Theorem 4.4
Throughout this section, we assume that \(p_0, b, f,\) and g satisfy the hypotheses (H1)–(H2) and (H4) in Sect. 2 and (H5) in Sect. 4. In addition, we shall repeatedly use the following well-known inequality:
for all \(f\in L^{\infty }\left( \mathbb {R}^{d}\right) \), with \(C_{d}=d^{1/2}\), which follows for instance from the formula \(\nabla \mathcal {P}_{t}f\left( x\right) =t^{-1}\mathbb {E}\left[ W_{t}f\left( x+W_{t}\right) \right] \) (proved in an elementary way by differentiating the heat kernel):
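In fact, combining this formula with the Cauchy–Schwarz inequality and \(\mathbb {E}\vert W_{t}\vert ^{2}=dt\) yields the stated constant:

\[
\vert \nabla \mathcal {P}_{t}f\left( x\right) \vert =t^{-1}\big \vert \mathbb {E}\left[ W_{t}f\left( x+W_{t}\right) \right] \big \vert \le t^{-1}\,\mathbb {E}\big [ \vert W_{t}\vert \big ] \left\| f\right\| _{\infty }\le t^{-1}\sqrt{\mathbb {E}\vert W_{t}\vert ^{2}}\,\left\| f\right\| _{\infty }=\sqrt{\frac{d}{t}}\,\left\| f\right\| _{\infty }.
\]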
We use the Brouwer–Schauder fixed point theorem to prove Theorem 4.4. It states that if K is a nonempty, closed, bounded and convex subset of a Banach space V and \(\Phi \,:\,K \rightarrow K\) is a continuous map such that \(\Phi \left( K\right) \) is relatively compact in V, then \(\Phi \) has a fixed point in K.
We will apply this theorem to the space \(V=\text {C}_{b}(\left[ 0,T\right] \times \mathbb {R}^{d})\). In order to define the map \(\Phi \), let \(p \in V\) be given and let \(w = w_{p}\) be a weak solution of the first equation of the PDE system (4.9). Existence and uniqueness of such a solution is given by classical parabolic results; e.g., a proof can be obtained by the contraction principle applied to the mild formulation in Eq. (B.5) below. In particular, \(w_p\) satisfies the following properties:
independently of \(p\in V\), with \(C_{1}\left( b,f,g,T\right) >0\) depending only on \(\Vert b \Vert _{\infty }, \Vert f \Vert _{\infty }\) and \(\Vert g \Vert _{\infty }\). One way to prove this fact is by using the following identity
and the gradient estimate (B.4) for the heat semigroup. At this point, we call \(\Phi \left( p\right) \) the solution of the following equation
Notice that this is not the second equation of the PDE system (4.9) with \(w=w_{p}\) because we keep the original p in \(b\left( \cdot ,p\left( s\right) \right) \). Existence of a global solution \(\Phi \left( p\right) \in V\) can be proved by iteration, using (B.4) and \(\left\| \frac{\nabla w_{p}}{w_{p}}\right\| _{\infty }\le C_{w}\left( g,f,b,T\right) \). In addition, one gets
for a suitable constant \(C_{2}\left( b,f,g,p_{0},T\right) >0\) depending, again, only on \(\Vert b \Vert _{\infty }, \Vert f \Vert _{\infty }\) and \(\Vert g \Vert _{\infty }\). Therefore, the set
is bounded, closed, convex and invariant.
We now prove that the map \(\Phi \) satisfies the assumptions of the Brouwer–Schauder fixed point theorem. It is not difficult to prove that the map \(\Phi \) is continuous, using (B.4) again. By contrast, proving that \(\Phi \left( K\right) \) is relatively compact is not straightforward, due to the unboundedness of the space domain. In order to do so, we use the following compactness result, which is an easy variant of the Ascoli–Arzelà theorem.
Theorem B.3
Let \(\alpha (\cdot ), \beta (\cdot )\) and \(C_{1}(\cdot ), C_{2}(\cdot )\) be four positive and non-decreasing functions and let \(\rho \) be as in (H5); see Sect. 4. Let \(C_{3}>0\) be a constant. Then the set \(\Xi _{\alpha ,C_{1},\beta ,C_{2},\rho ,C_{3}}\) of all functions \(f\in \text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R}^{d}\right) \) such that
is relatively compact in \(\text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R} ^{d}\right) \).
Before proceeding with the proof of Theorem B.3, we recall the following version of the Ascoli-Arzelà theorem.
Theorem B.4
Assume that a family of functions \(F\subset \text {C}\left( \left[ 0,T\right] ; \text {C}_{b}\left( B_{M}\right) \right) \) satisfies the following two properties:
- (i): \(\left\{ f\left( t\right) ;f\in F,t\in \left[ 0,T\right] \right\} \subset K_{M}\) for some compact set \(K_{M} \subset \text {C}_{b}\left( B_{M}\right) \);
- (ii): F is uniformly equicontinuous in \(\text {C} \left( \left[ 0,T\right] ; \text {C}_{b}\left( B_{M}\right) \right) \), namely for every \(\epsilon >0\) there exists a \(\delta >0\) such that \(\left\| f\left( t\right) -f\left( s\right) \right\| _{\text {C}_{b}\left( B_{M}\right) }\le \epsilon \) for every \(f\in F\) and \(t, s\in \left[ 0,T\right] \) such that \(\left| t-s\right| \le \delta \).
Then F is relatively compact in \(\text {C}\left( \left[ 0,T\right] ; \text {C}_{b}\left( B_{M}\right) \right) \).
Proof of Theorem B.3
Notice that, given any closed ball \(B_M \doteq \overline{B}_{M}(0) \subset \mathbb {R}^{d}\) of radius M around the origin, the space \(\text {C}_{b}\left( \left[ 0,T\right] \times B_{M}\right) \) and the space \(\text {C}\left( \left[ 0,T\right] ;\text {C}_{b}\left( B_{M}\right) \right) \) are equivalent. This is no longer true for \(\text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R}^{d}\right) \) and \(\text {C}\left( \left[ 0,T\right] ; \text {C}_{b}\left( \mathbb {R}^{d}\right) \right) \); it only holds that \(\text {C}\left( \left[ 0,T\right] ; \text {C}_{b}\left( \mathbb {R}^{d}\right) \right) \subset \text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R}^{d}\right) \). On any \(B_{M}\) we use Theorem B.4. Now, consider a sequence \(\left( p_{n}\right) _{n\in \mathbb {N}}\subset \Xi _{\alpha ,C_{1},\beta ,C_{2},\rho ,C_{3}}\). For every \(B_{M}\), denote by \( p_{n}^{M}\) the restriction of \(p_{n}\) to \(\left[ 0,T\right] \times B_{M}\). These restrictions belong to \(\text {C}_{b}\left( \left[ 0,T\right] \times B_{M}\right) \), which is equivalent to \(\text {C}\left( \left[ 0,T\right] ; \text {C}_{b}\left( B_{M}\right) \right) \). The space \(\text {C}_{b}^{\alpha }\left( B_{M}\right) \) is compactly embedded into \(\text {C}_{b}\left( B_{M}\right) \) by the Ascoli–Arzelà theorem. By (H1.1) in Theorem B.3, the set \(\left\{ p_{n}^{M}\left( t\right) ,n\in \mathbb {N},t\in \left[ 0,T\right] \right\} \) is bounded in \(\text {C}_{b}^{\alpha }\left( B_{M}\right) \), hence assumption (i) of Theorem B.4 is satisfied. On the other hand, by (H2.1) in Theorem B.3 the sequence \(\left( p_{n}^{M}\right) _{n\in \mathbb {N}}\) is uniformly equicontinuous in \(\text {C}\left( \left[ 0,T\right] ; \text {C}_{b}\left( B_{M}\right) \right) \). Hence, by Theorem B.4 we may extract a subsequence which converges in \(\text {C}\left( \left[ 0,T\right] ; \text {C}_{b}\left( B_{M}\right) \right) \). 
By a diagonal argument, we can find a function \(p\in \text {C}\left( \left[ 0,T\right] \times \mathbb {R}^{d}\right) \) and a subsequence \(\left( p_{n_{k}}\right) \) such that \(\left\| (p_{n_{k}}-p)_{|_{\left[ 0,T\right] \times B_{M}}} \right\| _{\infty }\rightarrow 0\) as \(k\rightarrow \infty \), for every M. Given \(\epsilon >0\), let \(M_{\epsilon }\) be such that
Since \(\left| p_{n_{k}}\left( t,x\right) \right| \le C_{3}\rho \left( x\right) \), we also have
In addition, since \(p_{n_{k}}\rightarrow p\) point-wise, we also have \(\left| p\left( t,x\right) \right| \le C_{3}\rho \left( x\right) \) and thus
Then \(p\in \text {C}_{b}\left( \left[ 0,T\right] \times \mathbb {R}^{d}\right) \), and we claim that \(\left\| p_{n_{k}}-p\right\| _{\infty } \le \epsilon \) for k large enough. Indeed, corresponding to \(M_{\epsilon }\), we choose \(k_{0}\) such that for all \( k\ge k_{0}\) we have
Hence, we have proved uniform convergence on the full space \(\mathbb {R}^{d}\). \(\square \)
The following proposition allows us to conclude the proof of Theorem 4.4.
Proposition B.5
There exist four positive and non-decreasing functions \(\alpha (\,\cdot \,) ,C_{1}(\,\cdot \,),\) \(\beta (\,\cdot \,), C_{2}(\,\cdot \,)\), a function \(\rho \) as in (H5) of Sect. 4 and a constant \(C_{3} > 0\) such that \(\Phi \left( K\right) \subset \Xi _{\alpha ,C_{1},\beta ,C_{2},\rho ,C_{3}}\).
Proof
Without loss of generality, we may assume \(\alpha <\frac{1}{2}\). To shorten notation, set
Notice that the following inequalities hold
From Eq. (B.6) we have
where we have used a gradient estimate in Hölder norm similar to those of Lemma C.3 below, but easier. Therefore, (H1.1) in Theorem B.3 is satisfied, even uniformly with respect to R. Let us now check (H2.1) in Theorem B.3, taking \(t>t^{\prime }\):
We use the following property: for small t,
Hence
Therefore, also the second condition in the definition of \(\Xi _{\alpha ,C_{1},\beta ,C_{2},\rho ,C_{3}} \) is satisfied, even uniformly with respect to R. The difficult property is
for every \(x\in \mathbb {R}^{d}\), \(t\in \left[ 0,T\right] ,p\in K\), for a suitable constant \(C_{3}>0\). The idea is to write an equation for \(\pi _{p}\left( t,x\right) :=\rho ^{-1}\left( x\right) \Phi \left( p\right) \left( t,x\right) \) and deduce that \(\left\| \pi _{p}\left( t\right) \right\| _{\infty }\le C_{3}\) for every \(t\in \left[ 0,T\right] ,p\in K\). We use the weak formulation
with a test function \(\varphi \) of the form \(\rho ^{-1}\psi \) with \(\psi \in C_{c}^{\infty }\left( \mathbb {R}^{d}\right) \). Then
namely, formally speaking,
Using
this leads to
Therefore
Recall that \(\left\| \Phi \left( p\right) \right\| _{\infty }\le C_{2}\left( g,f,b,p_{0},T\right) \) independently of \(p\in K\). Moreover, recall that \(\left\| \Delta \rho ^{-1}\right\| _{\infty }+\left\| \nabla \rho ^{-1}\right\| _{\infty }<\infty \). From a generalized form of the Gronwall lemma we deduce a uniform bound for \(\left\| \pi _{p}\left( t\right) \right\| _{\infty }\). \(\square \)
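For intuition, a generalized Gronwall lemma with the singular kernel \((t-s)^{-1/2}\) (the form typically needed after heat-semigroup gradient estimates) can be probed numerically: solving the worst case, the integral equality \(v(t)=a+C\int_0^t (t-s)^{-1/2}v(s)\,ds\), by product integration shows that v stays bounded on \([0,T]\) with a bound depending only on a, C, T. This is an illustrative sketch with arbitrary parameter choices, not the lemma's proof.

```python
import numpy as np

# Product-integration scheme for v(t) = a + C * int_0^t (t - s)^{-1/2} v(s) ds:
# v is taken piecewise constant on each subinterval, and the singular kernel is
# integrated exactly on [t_j, t_{j+1}] to obtain the quadrature weights.

a, C, T, n = 1.0, 1.0, 1.0, 4000
t = np.linspace(0.0, T, n + 1)
v = np.full(n + 1, a)
for i in range(1, n + 1):
    # w_j = int_{t_j}^{t_{j+1}} (t_i - s)^{-1/2} ds, for j = 0, ..., i-1
    w = 2.0 * (np.sqrt(t[i] - t[:i]) - np.sqrt(t[i] - t[1:i + 1]))
    v[i] = a + C * np.dot(w, v[:i])

print(v[-1])   # v is increasing but remains bounded on [0, T]
```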
At this point, we can apply the Brouwer–Schauder fixed point theorem and obtain existence of a weak solution \(\left( w,p\right) \). The proof that \(\left( u, p\right) :=\left( -\log w,p\right) \) satisfies the original system can then be carried out by means of mollifiers.
1.4 B.4: Proof of Theorem 4.5
Throughout this section, we assume that \(p_0, b, f, g\) satisfy the hypotheses (H1)–(H2) and (H4) in Sect. 2.
Proof
We are going to apply the contraction principle to the system in Eqs. (4.6)–(4.7). Setting \(\theta \doteq \nabla u\), for T small enough, it reads as
Now, consider the following Banach space:
and by \(\left\| \,\cdot \,\right\| _{T,\infty }\) the norm in each space \({\text {C}_b( \left[ 0,T\right] \times \mathbb {R}^{d})}\). On the product space \(X_{T}\) consider the norm
Define the map \(\Gamma : X_{T}\rightarrow X_{T}\) as
whose marginals are given by
Notice that the fact that \(\Gamma \left( p,\theta \right) \in X_{T}\) when \(\left( p,\theta \right) \in X_{T}\) is implicit in the computations below and thus will not be verified a priori. It is based on the following estimates of the heat semi-group’s gradient (cf. also the proof of Theorem 4.4 and the reference therein): \(\left\| \nabla \mathcal {P}_{t}F\right\| _{\infty }\le C_{0}t^{-1/2}\left\| F\right\| _{\infty }\) for some constant \(C_0\) and every \(F\in \text {C}_{b}\left( \mathbb {R}^{d}\right) \) and \(\left\| \nabla \mathcal {P}_{t}F\right\| _{\infty }\le C_{0}\left\| \nabla F\right\| _{\infty }\) for every \(F\in \text {C}_{b}\left( \mathbb {R}^{d}\right) \) such that \(\nabla F\in \text {C}_{b}\left( \mathbb {R}^{d}\right) \).
Now, let us investigate when \(\Gamma \) is a contraction. We have
and
respectively.
Summarizing, there exists a constant \(\widetilde{C}>0\), depending only on \( C_{0}\), C, L, such that
Therefore, to have a contraction we need a bound on \(\left\| \left( p,\theta \right) \right\| _{T,\infty }+\left\| \left( p^{\prime },\theta ^{\prime }\right) \right\| _{T,\infty }\). Proceeding as above we have
and
Using the bound on b and f, we get
Therefore, we have proved:
for some constant \(K>0\). Hence setting
if we take \(\left( p,\theta \right) \in \Lambda _{T,R}\) we get
In particular, there exist \(T_{0},R_{0}>0\) such that for every \(0<T\le T_{0}\) and \(0<R\le R_{0}\) we have
With any such choice of \(T,R>0\) we have
If \(\left( p,\theta \right) ,\left( p^{\prime },\theta ^{\prime }\right) \in \Lambda _{T,R}\) we have proved above
Hence, reducing T if necessary, we see that \(\Gamma \), as a map from the metric space \(\Lambda _{T,R}\) into itself, is a contraction. \(\square \)
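The mechanics of the contraction argument can be illustrated on a toy mild formulation (our sketch; the drift b, the horizon T and the integral map below are illustrative stand-ins, not the actual system (4.6)–(4.7)): on a short horizon, the Picard iterates of an integral map with a Lipschitz coefficient converge geometrically in the \(\left\| \,\cdot \,\right\| _{T,\infty }\) norm.

```python
import numpy as np

# Picard iteration for the toy integral map Gamma(u)(t) = u0 + int_0^t b(u(s)) ds.
# With b Lipschitz (constant L) and horizon T, Gamma is a contraction of modulus
# L*T on (C_b([0,T]), sup norm); here L*T = 0.5, so successive sup-norm
# differences shrink at least geometrically.

def gamma(u, grid, b, u0):
    """One application of the integral map (left Riemann sum for the integral)."""
    dt = grid[1] - grid[0]
    integral = np.concatenate(([0.0], np.cumsum(b(u[:-1])) * dt))
    return u0 + integral

T, n = 0.5, 1001                     # short horizon, as in the proof
grid = np.linspace(0.0, T, n)
b = np.cos                           # Lipschitz drift with constant L = 1
u = np.zeros(n)                      # initial guess
sup_diffs = []
for _ in range(8):
    u_new = gamma(u, grid, b, u0=1.0)
    sup_diffs.append(np.max(np.abs(u_new - u)))
    u = u_new

ratios = [sup_diffs[k + 1] / sup_diffs[k] for k in range(3, 7)]
print(sup_diffs)   # strictly decreasing
print(ratios)      # all below the contraction modulus L*T = 0.5
```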
1.5 B.5: Proof of Theorem 4.8-(i)
Proof
Let \((\theta _{\epsilon })_{\epsilon > 0}\) be a family of mollifiers. Now, define the function \(u_{\epsilon }:[0,T] \times \mathbb {R}^{d} \rightarrow \mathbb {R}\) by setting
In particular, taking the convolution of the Hamilton–Jacobi–Bellman equation (4.1) with \(\theta _{\epsilon }\), it is not difficult to see that \(u_{\epsilon }\) satisfies the following equation
on \((0, T) \times \mathbb {R}^{d}\). The smoothing properties of the convolution (see Proposition A.1) guarantee that \(D^2 u_{\epsilon }(t, x)\) is continuous; besides, from the Hamilton–Jacobi–Bellman equation it follows that \(\partial _t u_{\epsilon }\) is also continuous, and therefore that \(u_{\epsilon } \in \text {C}^{1,2}((0,T) \times \mathbb {R}^{d})\). Applying Itô’s formula we obtain
where we defined
Hence,
We claim that by taking the limit as \(\epsilon \rightarrow 0\) in the previous equation we obtain the identity (4.17) as in the heuristic argument.
We first deal with terms that do not explicitly depend on time, then extend the argument to time-dependent terms. To this end, let \(v \in \text {C}(\mathbb {R}^{d})\); then, \(\theta _{\epsilon } * v \rightarrow v\) as \(\epsilon \rightarrow 0\) uniformly on compact sets (see Proposition A.2). Set now \(v_{\epsilon } \doteq \theta _{\epsilon } * v\). If v is bounded by a constant K, then the same holds for \(v_{\epsilon }\) and the constant bounding \(v_{\epsilon }\) is independent of \(\epsilon \). For all \(R > 0\) and for any probability measure \(\mu \in \mathcal {P}(\mathbb {R}^{d})\) we have
where \(\overline{B}_{R}(0) \subset \mathbb {R}^{d}\) denotes the closed ball of radius R around the origin and \(|\overline{B}_{R}(0)|\) its measure. In particular, the last term in (B.8) converges to zero as \(R \rightarrow \infty \).
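The convergence \(\theta _{\epsilon } * v \rightarrow v\), uniformly on compact sets, can be checked numerically; the Gaussian mollifier and the test function v below are convenient stand-ins of our choosing (any approximate identity and any bounded continuous function would do).

```python
import numpy as np

def mollify(v, xs, eps):
    """theta_eps * v on the grid xs, with a Gaussian mollifier of width eps
    (illustrative choice; the argument only needs theta_eps -> delta_0)."""
    ys = np.linspace(-5.0 * eps, 5.0 * eps, 501)
    dy = ys[1] - ys[0]
    theta = np.exp(-ys**2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)
    theta = theta / (np.sum(theta) * dy)          # unit mass on the grid
    return np.array([np.sum(theta * v(x - ys)) * dy for x in xs])

v = lambda x: np.cos(x) / (1.0 + x**2)            # bounded continuous test function
xs = np.linspace(-3.0, 3.0, 301)                  # a compact set, [-3, 3]
errs = [np.max(np.abs(mollify(v, xs, e) - v(xs))) for e in (0.5, 0.1, 0.02)]
print(errs)   # the sup error on the compact set decreases as eps -> 0
```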
Let now \(u(t,\,\cdot \,)\in \text {C}_{b}(\mathbb {R}^{d})\), bounded by a constant K, with u the first component of the solution of the PDE system in Eq. (4.1). Moreover, let \(\mu _t^{\alpha }\) be the law of \(X_t^{\alpha }\). Then
for all \(t \in [0,T]\), so in particular for \(t = T\) and \(u(T) = g\).
Now, we show that a similar argument holds also for terms that have an explicit, continuous, dependence on the time variable. Let \(v \in \text {C}_{b}([0, T] \times \mathbb {R}^{d})\); then for each fixed \(t \in [0, T]\) we have that \(\theta _{\epsilon } * v(t) \rightarrow v(t)\) as \(\epsilon \rightarrow 0\) uniformly on compact sets (see, again, Proposition A.2). In particular, for all \(R > 0\) and for any probability measure \(\mu \in \mathcal {P}(\mathbb {R}^{d})\) we have:
The first term converges to zero as \(\epsilon \rightarrow 0\) provided that both \(v_{\epsilon }\) and v belong to \(\text {C}([0,T] \times \mathbb {R}^{d})\); indeed, in this case we can take the maximum over [0, T]. The second term converges to zero by an argument similar to that used in Eq. (B.8).
Moreover, if \(v \in \text {C}_{b}([0, T] \times \mathbb {R}^{d})\), then \(v(t,\,\cdot \,) \in \text {C}_b(\mathbb {R}^{d})\) and \(v(\,\cdot \,,x) \in \text {C}([0,T])\); the compactness of [0, T] then implies the uniform continuity of \(v(\,\cdot \,,x)\). The continuity of \(v(t,\,\cdot \,)\) and the uniform continuity of \(v(\,\cdot \,,x)\) together imply the joint continuity of v. Indeed, let \((t, x) \in [0,T] \times \mathbb {R}^{d}\). For all \(\epsilon > 0\) there exist \(\delta > 0\) and \( \eta > 0\) such that
More precisely, let \(\delta >0\) be the constant related to the uniform continuity in time associated to \(\epsilon /2\) and \(\eta >0\) be the constant related to the continuity in space associated to \(\epsilon /2\). Then:
Since all the terms involved satisfy the same continuity requirements as v, since the admissible controls are bounded, and choosing \(\mu =\mu ^{\alpha }\), the law of \(X^{\alpha }\), we conclude. \(\square \)
Appendix C: Hölder-Type Seminorm Bounds-1
This section collects some results on the Hölder-type seminorms (see the definition in Eq. (5.8)) used in the proof of Theorem 5.1.
We start by fixing the fractional exponent \(s \in (0,1)\) and for any \(p \in [1, +\infty )\), we define \(W^{s,p}(\mathbb {R}^{d})\) as the space:
endowed with the following norm:
Let \(p\in [1,+\infty )\) and \(s\in (0,1)\) be such that \(sp > d\). Then, there exists a constant \(C > 0\), depending on d, s, p, such that
where \(\gamma \doteq (s p - d)/p\). We refer to [7], Theorem 8.2, for a proof of the previous result.
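For the reader's convenience, the seminorm and the embedding just invoked read, in their standard form (the usual Gagliardo seminorm and fractional Sobolev embedding; cf. [7], Theorem 8.2):

```latex
% Gagliardo seminorm and W^{s,p} norm
[f]_{W^{s,p}(\mathbb{R}^{d})}^{p}
  \doteq \int_{\mathbb{R}^{d}}\!\int_{\mathbb{R}^{d}}
         \frac{|f(x)-f(y)|^{p}}{|x-y|^{d+sp}}\,dx\,dy,
\qquad
\|f\|_{W^{s,p}} \doteq \Big( \|f\|_{L^{p}}^{p} + [f]_{W^{s,p}}^{p} \Big)^{1/p}.

% Embedding into the Hoelder space C^{0,\gamma}, valid when sp > d
\|f\|_{\mathrm{C}^{0,\gamma}(\mathbb{R}^{d})}
  \le C\,\|f\|_{W^{s,p}(\mathbb{R}^{d})},
\qquad \gamma \doteq \frac{sp-d}{p}.
```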
Lemma C.1
Let \(d \in \mathbb {N}\), and let \(p \in [1, +\infty )\) and \(s \in (0,1)\) be such that \(s p > d\). Then,
Proof
We write \([f]_{p, s p}^{p}\) as
Then,
which concludes the proof. \(\square \)
Lemma C.2
Assume there exists a number \(\epsilon >0\) with the following property. For every \(p\ge 2\) there is a function \(g_{p}>0\) such that
for all \(\left| h\right| \le 1\) and \(x\in \mathbb {R}^{d}\). Then, there is \(\gamma >0\) such that, for every \(p\ge 2\), there is a constant \( C_{p}>0\) such that
Proof
It is sufficient to prove the claim for arbitrarily large \(\bar{p}\ge 2\), since for smaller ones it follows from the Hölder inequality. Choose \(s\in (0,\epsilon )\); then take any \(\bar{p}\ge 2\) such that \(s\bar{p} > d\). We have to find \(\gamma > 0\) such that for every such \(\bar{p}\) there is a constant \(C_{\bar{p}}\) such that \(\mathbb {E}\left[ \left\| M_{t}^{N}\right\| _{\gamma }^{\bar{p}}\right] \le C_{\bar{p}}\) uniformly in \(t\in [0,T]\) and \(N\in \mathbb {N}\).
Thanks to the assumptions,
Moreover, thanks to Lemma C.1,
Now, using again the fact that \(\mathbb {E}\left[ \Vert M^N_t \Vert ^{\bar{p}}_{L^{\bar{p}}} \right] \le C\), we may apply inequality (C.1) and deduce the desired bound for \(\bar{\gamma }=(s\bar{p}-d)/\bar{p}\). A priori this value of \(\gamma \) depends on the particular \(\bar{p}\) chosen above. However, it is sufficient to choose first a value \(\bar{p}_0\) such that \(s\bar{p}_0>d\) and prove that \(\mathbb {E}\left[ \Vert M^N_t \Vert ^{\bar{p}_0}_{\bar{\gamma }_0} \right] \le C_{\bar{p}_0}\), with \(\bar{\gamma }_0 = (s\bar{p}_0-d)/\bar{p}_0\); then for all \(\bar{p}>\bar{p}_0\) we prove the inequality with \(\bar{\gamma }=s-d/\bar{p}\), which is larger than \(\bar{\gamma }_0\), hence it holds also with Hölder exponent \(\bar{\gamma }_0\), which can be taken as the value of \(\gamma \) in the statement of the lemma. \(\square \)
Lemma C.3
Let \(N, d \in \mathbb {N}\), let \(\mathcal {P}_t\) be the semi-group associated to the density G(t, x) of \(x + W_t\), where \(W_t\) is a standard Brownian motion, \(x \in \mathbb {R}^{d}\) and \(t \in {(} 0, T]\). Moreover, let \(V \in \text {C}_c^1(\mathbb {R}^d) \cap \mathcal {P}(\mathbb {R}^{d})\). Then
Moreover, if \(R>0\) denotes a number such that the support of V is contained in \(B_{R}(0)\), the open ball of radius R around the origin, and we write \(V^{N}\left( x\right) =\epsilon _{N}^{-d}V\left( \epsilon _{N}^{-1}x\right) \), then there exist two constants \(C_{T,R,V}>0\) and \(\lambda _{T,R,V}>0\) with the following property: for every \(\delta ,\gamma \in \left( 0,1\right) \), \(x\in \mathbb {R}^{d}\), \(\left| h\right| \le 1\) and \(t\in \left[ 0,T\right] \)
Proof
The first inequality is a well-known property of analytic semi-groups (see, for instance, Lunardi [18]). We give a detailed proof of the last two inequalities.
Step 1 We collect some preliminary facts. We recall that
and we find a bound for \(\nabla G_{t}\left( x\right) \) and \(|D^2 G_t(x)|\). Notice that
hence, since \(\sqrt{r}\exp \left( -\frac{1}{2}r\right) \le \exp \left( - \frac{1}{4}r\right) \),
Similarly, for suitable \(\lambda ,C>0\),
Step 2 In this step we prove that
for all \(x\in \mathbb {R}^{d}\) and \(t \in {(} 0, T]\), for a suitable constant \(C_{T,R,V}>0\). From the bound for \(\left| \nabla G_{t}\left( x\right) \right| \) in Step 1 we obtain
If \(\left| x\right| \le R+1\), we bound the integral from above by the integral on the full space, which is equal to one, and deduce
If \(\left| x\right| >R+1\) and \(\left| y\right| \le R\), then (we oversimplify to make expressions easier in the sequel) \(\left| x-y\right| ^{2}\ge \left| x-y\right| \ge \left| x\right| -R\). Therefore, for \(\left| x\right| >R+1\),
One shows that there is \(C_{T}>0\) such that for \(t\in \left[ 0,T\right] \) and \(\left| x\right| >R+1\), one has
Indeed, the left-hand side is controlled (up to a constant) by \(\left( \frac{ \left| x\right| -R}{2t}\right) ^{d/2}e^{-\frac{1}{2}\frac{ \left| x\right| -R}{2t}} \) (because \(\left| x\right| -R\ge 1\)) and the function \(r^{d/2}e^{-\frac{1}{2}r} \) is bounded above by \(e^{-\frac{1}{4}r}\), up to a constant; finally, \(e^{-\frac{1}{4}\frac{\left| x\right| -R}{2t}} \le e^{-\frac{1}{8T}\left( \left| x\right| -R\right) }\).
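The elementary bound used here, \(r^{d/2}e^{-r/2}\le C_{d}\,e^{-r/4}\) with the best constant attained at the maximiser \(r=2d\) of \(r^{d/2}e^{-r/4}\), is easy to verify numerically (our sanity check):

```python
import numpy as np

# Check that r^{d/2} * exp(-r/2) <= C_d * exp(-r/4) on r >= 0, where
# C_d = max_r r^{d/2} exp(-r/4) = (2d)^{d/2} exp(-d/2) (maximiser at r = 2d).
# For d = 1, C_1 = sqrt(2) e^{-1/2} < 1, confirming the cruder bound of Step 1.

r = np.linspace(0.0, 200.0, 200001)
gaps = []
for d in (1, 2, 3):
    C_d = (2.0 * d)**(d / 2.0) * np.exp(-d / 2.0)
    gaps.append(np.max(r**(d / 2.0) * np.exp(-r / 2.0) - C_d * np.exp(-r / 4.0)))
print(gaps)   # every gap is <= 0: the inequality holds on the whole grid
```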
Hence
Renaming the constant \(C_{T,R,V}^{\prime }\), the same bound holds for \( \left| x\right| \le R+1\); hence it holds for all x and all \(t \in {(} 0, T]\).
Step 3 We complete the proof of (C.6). In addition to the bound found in Step 2 we have
Arguing as above we get,
where, if necessary, we have renamed the constant \(C_{T,R,V}\). Now, given \( \delta \in \left( 0,1\right) \), we use both inequalities for \(\left| \left( \nabla \mathcal {P}_{t}V^{N}\right) \left( x\right) \right| \) to get
Step 4 Finally we prove (C.7). We note first that
On the other hand, it holds:
because \(\left| h\right| \le 1\). Therefore, for every (small) \(\gamma \in \left( 0,1\right) \),
which completes the proof. \(\square \)
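Step 4 combines a bound uniform in h with a bound linear in \(\left| h\right| \) through the elementary interpolation \(\min (a,b)\le a^{1-\gamma }b^{\gamma }\) for \(a,b>0\) and \(\gamma \in (0,1)\); this is how the factor \(\left| h\right| ^{\gamma }\) appears. A quick numerical confirmation (our illustration):

```python
import numpy as np

# min(a, b) <= a^{1-g} * b^{g} for positive a, b and g in (0, 1):
# indeed min(a, b) = min(a, b)^{1-g} * min(a, b)^{g} <= a^{1-g} * b^{g}.

rng = np.random.default_rng(1)
a = rng.uniform(0.01, 10.0, 10000)
b = rng.uniform(0.01, 10.0, 10000)
ok = all(np.all(np.minimum(a, b) <= a**(1.0 - g) * b**g + 1e-12)
         for g in (0.05, 0.3, 0.5, 0.7, 0.95))
print(ok)   # True
```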
Appendix D: Hölder-Type Seminorm Bounds-2
Let \(N\in \mathbb {N}\). This section collects some results on Hölder-type semi-norms for convolutions of the type \(V^{N}*\mu _{N}\), where \(V^{N}\) satisfies hypothesis (H3), i.e. \(V^{N}(x)=\epsilon _{N}^{-d}V(\epsilon _{N}^{-1}x)\) with \(\epsilon _{N}>0\), \(\lim _{N\rightarrow \infty }\epsilon _{N}=0\), \(V\in \text {C}^{1}(\mathbb {R} ^{d})\cap \mathcal {P}(\mathbb {R}^{d})\). In addition, \(\mu _{N}\in \mathcal {P} (\mathbb {R}^{d})\). In what follows, for pedagogical reasons, we first treat the case in which the probability measure \(\mu _{N}\) is deterministic; then we analyse the case in which \(\mu _{N}\) is stochastic, where the proofs are less elementary. We make the following remark. If \(\mu \in \mathcal {P}(\mathbb {R}^{d})\), then \(V^{N}*\mu \in \text {C}^{1}(\mathbb {R}^{d})\). Moreover, if \((\mu _{N})_{N\in \mathbb {N} }\subset \mathcal {P}(\mathbb {R}^{d}) \) converges weakly to \(\mu \in \mathcal {P}(\mathbb {R}^{d})\) as \(N\rightarrow \infty \), then
Indeed, \(\left\langle V^{N}*\mu _{N},\varphi \right\rangle =\left\langle \mu _{N},V^{N,-}*\varphi \right\rangle \) where \(V^{N,-}\left( x\right) =V^{N}\left( -x\right) \); then \(V^{N,-}*\varphi \rightarrow \varphi \) uniformly on \(\mathbb {R}^{d}\) as \(N\rightarrow \infty \) and thus \(\left\langle \mu _{N},V^{N,-}*\varphi \right\rangle \) converges to \(\left\langle \mu ,\varphi \right\rangle \). Let, as usual, \(\overline{B} _{R}(0)\) be the closed ball of radius R centred around zero. Spaces like \(\text {C}_{\ell oc}^{\gamma }(\mathbb {R}^{d})\), namely with the \(\ell oc\) specification, are Polish spaces; convergence in these spaces is convergence in the corresponding topologies over \(\overline{B}_{R}(0)\) for each \(R>0\). In addition, let
and endow it with the natural metric which yields convergence in each \(\text {C}_{\ell oc}^{\gamma ^{^{\prime }}}(\mathbb {R}^{d})\). Recall that by \(\left\| f\right\| _{\gamma }\) we mean the sum of the supremum norm \(\left\| f\right\| _{\infty }\) on the full space \(\mathbb {R}^{d}\) and the \(\gamma \)-Hölder seminorm on \(\mathbb {R}^{d}\).
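The convergence of \(V^{N}*\mu _{N}\) to a density can be visualised with a kernel-density sketch (entirely illustrative: the triangular kernel V, the Gaussian samples and the scaling \(\epsilon _{N}=N^{-1/5}\) are our choices, not the hypotheses of (H3)):

```python
import numpy as np

# Sketch: V^N(x) = eps_N^{-1} V(eps_N^{-1} x) with a triangular kernel V, and
# mu_N the empirical measure of N standard Gaussian samples; then
# p_N = V^N * mu_N approximates the Gaussian density as N -> infinity.

rng = np.random.default_rng(0)

def V(x):                          # continuous probability density, compact support
    return np.maximum(1.0 - np.abs(x), 0.0)

def p_N(samples, xs, eps):
    # (V^N * mu_N)(x) = N^{-1} sum_i eps^{-1} V((x - X_i)/eps)
    return np.array([np.mean(V((x - samples) / eps)) / eps for x in xs])

xs = np.linspace(-2.0, 2.0, 101)
true = np.exp(-xs**2 / 2.0) / np.sqrt(2.0 * np.pi)
errs = []
for N in (10**3, 10**5):
    eps = N**(-1.0 / 5.0)          # eps_N -> 0 slowly with N
    errs.append(np.max(np.abs(p_N(rng.standard_normal(N), xs, eps) - true)))
print(errs)   # the sup error on [-2, 2] shrinks as N grows
```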
Lemma D.1
Let \(\left( \mu _{N}\right) _{N\in \mathbb {N}}\subset \mathcal {P}\left( \mathbb {R}^{d}\right) \) be a sequence converging weakly to \(\mu \in \mathcal {P}(\mathbb {R}^{d})\). Set \(p_{N}=V^{N}*\mu _{N}\). Let \(\gamma \in (0,1)\) be such that there exists \(K>0\) for which
for all \(N\in \mathbb {N}\). Then \(\mu \) is absolutely continuous w.r.t. the Lebesgue measure, with density \(p\in \text {C}_{\ell oc}^{\gamma -}\left( \mathbb {R} ^{d}\right) \) and \(\left\| p\right\| _{\infty }\le K\). Moreover, \(p_{N}\rightarrow p\) in \(\text {C}_{\ell oc}^{\gamma -}\left( \mathbb {R} ^{d}\right) \).
Proof
First, notice that for every \(R>0\) and \(\gamma ^{^{\prime }}<\gamma \) the space \(C^{\gamma }(\overline{B}_{R}(0))\) is compactly embedded into \(C^{\gamma ^{^{\prime }}}(\overline{B}_{R}(0))\). Take any subsequence \((p_{N_{k}} )_{k\in \mathbb {N}}\). Thanks to this compactness result, together with a diagonal procedure over a sequence of radii \((R_{i})_{i\in \mathbb {N}}\), with \(R_{i}\rightarrow \infty \) as \(i\rightarrow \infty \), and a sequence of exponents \(\gamma _{i}^{^{\prime }}<\gamma \) such that \(\gamma _{i}^{^{\prime }} \rightarrow \gamma \) as \(i\rightarrow \infty \), we may prove that there exists a further subsequence \(\left( p_{N_{k}^{^{\prime }}}\right) _{k\in \mathbb {N}}\) which converges in \(\text {C}_{\ell oc}^{\gamma ^{^{\prime }}}(\mathbb {R}^{d})\) for every \(\gamma ^{\prime }<\gamma \) to a function \(p\in \text {C}_{\ell oc}^{\gamma -}(\mathbb {R}^{d})\); a priori, the function p depends on the subsequence. Therefore (see the remark above)
for every \(\varphi \in \text {C}_{c}\left( \mathbb {R}^{d}\right) \). Hence, \(\mu \) is absolutely continuous with respect to the Lebesgue measure, with density p. Notice that the properties \(p\ge 0\) a.e. and \(p\in L^{1}(\mathbb {R}^{d})\) follow from the identity \(\left\langle \mu ,\varphi \right\rangle =\left\langle p,\varphi \right\rangle \) for every \(\varphi \in \text {C}_{c}\left( \mathbb {R}^{d}\right) \). This identifies p uniquely, independently of the subsequence. Since the convergence in \(\text {C}_{\ell oc}^{\gamma ^{\prime } }\left( \mathbb {R}^{d}\right) \) is metric, we deduce that the whole sequence \(\left( p_{N}\right) \) converges to p in \(\text {C}_{\ell oc}^{\gamma ^{\prime } }\left( \mathbb {R}^{d}\right) \).
Finally, the previous convergence implies pointwise convergence, hence
This proves \(\left\| p\right\| _{\infty }\le K\). \(\square \)
Now, we state and prove the previous lemma in the case in which \((\mu _{N})_{N \in \mathbb {N}} \subset \mathcal {P}(\mathbb {R}^{d})\) is a random sequence. Recall that a random probability measure is a random variable from \((\Omega , \mathcal {F}, \mathbb {P})\) to \(\mathcal {P}(\mathbb {R}^{d})\), considered as a Polish space with a metric inducing weak convergence of measures. Similarly, a random function p of class \(\text {C}_{\ell oc}^{\gamma }\left( \mathbb {R}^{d}\right) \) is a random variable from \(\left( \Omega ,\mathcal {F},\mathbb {P}\right) \) to \(\text {C}_{\ell oc}^{\gamma }\left( \mathbb {R}^{d}\right) \).
Lemma D.2
Let \(\left( \mu _{N}\right) _{N\in \mathbb {N}} \subset \mathcal {P}\left( \mathbb {R}^{d}\right) \) be a sequence of random probability measures converging in law, in the weak topology of \(\mathcal {P}(\mathbb {R}^{d})\), to a random \(\mu \in \mathcal {P} (\mathbb {R}^{d})\). Introduce the random differentiable functions \(p_{N} \doteq V^{N}*\mu _{N}\). Let \(\gamma \in (0,1)\), \(q\ge 2\) be such that there exists a constant \(K>0\) for which
for all \(N\in \mathbb {N}\). Then there exists a random function p of class \(\text {C}_{\ell oc}^{\gamma -}\left( \mathbb {R}^{d}\right) \) such that, with probability one, \(\mu \left( dx\right) =p\left( x\right) dx\); and for every \(q^{\prime }<q\) we have
Moreover, \(p_{N}\) converges to p in law, in the topology of \(\text {C}_{\ell oc}^{\gamma -}\left( \mathbb {R}^{d}\right) \); and when p is deterministic (so that \(p_{N}\) converges to p also in probability) we have
for every \(q^{\prime }<q\) and \(R>0\).
Proof
Let us denote by \(P_{N}\) the law of \(p_{N}\) on the Borel sets of \(\text {C} ^{\gamma }(\mathbb {R}^{d})\), and by \(\pi _{N}\) and \(\pi \) the laws of \(\mu _{N}\) and \(\mu \) on the Borel sets of \(\mathcal {P}(\mathbb {R}^{d})\), respectively. We know that \(\pi _{N}\) converges weakly to \(\pi \). Set
\(\mathcal {K}_{R}\) is pre-compact in \(\text {C}_{\ell oc}^{\gamma -} (\mathbb {R}^{d})\). By assumption (D.1) and Markov inequality,
Then the family \((P_{N})_{N\in \mathbb {N}}\) is tight in \(\text {C}_{\ell oc}^{\gamma -}(\mathbb {R}^{d})\). Let \((P_{N_{k}})_{k\in \mathbb {N}}\) be any subsequence converging weakly in the topology of \(\text {C}_{\ell oc}^{\gamma -} (\mathbb {R}^{d})\) to some measure P, which a priori depends on the subsequence. More precisely, denote by \(Q_{N}\) the joint law of the vector \(\left( p_{N},\mu _{N}\right) \) on Borel sets of \(\text {C}_{\ell oc} ^{\gamma -}(\mathbb {R}^{d})\times \mathcal {P}(\mathbb {R}^{d})\). Since we already know that \(\mu _{N}\) converges weakly, hence is precompact, we can extract a subsequence \(\left( N_{k}\right) _{k\in \mathbb {N}}\) such that \(Q_{N_{{k}}}\) converges weakly to a probability measure Q on Borel sets of \(\text {C}_{\ell oc}^{\gamma -}(\mathbb {R}^{d})\times \mathcal {P}(\mathbb {R}^{d})\). The second marginal of Q is \(\pi \); the first marginal will be called P, as above. The first marginal of \(Q_{N_{{k}}}\) is \(P_{N_{k}}\) and converges weakly to P; the second marginal is \(\pi _{N_{k}}\) and converges weakly to \(\pi \). Notice that at this stage we do not yet know that \(\mu \) has a density and that P is the law of such a density. Concerning uniqueness, \(\mu \) is the unique limit point (in law) of \(\mu _{N}\), but P a priori is not the unique weak limit point of \(P_{N}\).
By the Skorohod representation theorem, there exist a probability space \((\widetilde{\Omega },\widetilde{\mathcal {F}},\widetilde{\mathbb {P}})\) and random variables \(\left( \widetilde{p}_{N_{{k}}},\widetilde{\mu }_{N_{{k}}}\right) \) and \(\left( \widetilde{p},\widetilde{\mu }\right) \) from \((\widetilde{\Omega },\widetilde{\mathcal {F}},\widetilde{\mathbb {P}})\) to \(C_{\ell oc}^{\gamma -}\left( \mathbb {R}^{d}\right) \times \mathcal {P}\left( \mathbb {R} ^{d}\right) \), with laws \(Q_{N_{{k}}}\) and Q respectively, such that \(\left( \widetilde{p}_{N_{k}},\widetilde{\mu }_{{N_{{k}}}}\right) \rightarrow \left( \widetilde{p},\widetilde{\mu }\right) \) as \(k\rightarrow \infty \) in \(\text {C}_{\ell oc}^{\gamma -}\left( \mathbb {R}^{d}\right) \times \mathcal {P}\left( \mathbb {R}^{d}\right) \), \(\widetilde{\mathbb {P}}\)-a.s. The link \(p_{N_{k}}=V^{N_{k}}*\mu _{N_{k}}\) is preserved under this change of probability space: \(\widetilde{p}_{N_{k}}=V^{N_{k}}*\widetilde{\mu }_{N_{k}}\) with \(\widetilde{\mathbb {P}}\) probability one. Indeed, denoting by \(\widetilde{\mathbb {E}}[\,\cdot \,]\) the mathematical expectation on \((\widetilde{\Omega },\widetilde{\mathcal {F}},\widetilde{\mathbb {P}})\),
(the first identity holds because \(\left( \widetilde{p}_{N_{k} },\widetilde{\mu }_{N_{k}}\right) \) and \(\left( p_{N_{k}},\mu _{N_{k}}\right) \) have the same law; the second because \(p_{N_{k}}=V^{N_{k}} *\mu _{N_{k}}\)). Hence \(\widetilde{p}_{N_{k}}=V^{N_{k}}*\widetilde{\mu }_{N_{k}}\), \(\widetilde{\mathbb {P}}\)-a.s.
The novelty of working on \((\widetilde{\Omega },\widetilde{\mathcal {F}} ,\widetilde{\mathbb {P}})\) is that we have the random variable \(\widetilde{p}\), not only \(\widetilde{\mu }\). Let us prove that the former is the density of the latter. From the remark above, with \(\widetilde{\mathbb {P}}\) probability one, since \(\widetilde{\mu }_{N_{k}}\) converges weakly to \(\widetilde{\mu }\), we have
for all \(\varphi \in \text {C}_{c}(\mathbb {R}^{d})\). But at the same time, since \(V^{N_{k}}*\widetilde{\mu }_{N_{k}}=\widetilde{p}_{N_{k}}\) and \(\widetilde{p}_{N_{k}}\) converges to \(\widetilde{p}\) in \(\text {C}_{\ell oc}^{\gamma -}\left( \mathbb {R}^{d}\right) \), we have
for all \(\varphi \in \text {C}_{c}(\mathbb {R}^{d})\). Therefore,
with \(\widetilde{\mathbb {P}}\) probability one. This implies that, \(\widetilde{\mathbb {P}}\)-a.s., the measure \(\widetilde{\mu }\) has density \(\widetilde{p}\in \text {C}_{\ell oc}^{\gamma -}\left( \mathbb {R}^{d}\right) \); the property that \(\widetilde{p}\) is a probability density follows from the same identity, by a suitable choice of \(\varphi \in \text {C}_{c}\left( \mathbb {R}^{d}\right) \).
Call \(\Lambda \) the subset of \(\text {C}_{\ell oc}^{\gamma -}\left( \mathbb {R}^{d}\right) \times \mathcal {P}\left( \mathbb {R}^{d}\right) \) of pairs whose first element is the density of the second. Call \(\Lambda _{2}\) the set of elements of \(\mathcal {P}\left( \mathbb {R}^{d}\right) \) that have a density of class \(\text {C}_{\ell oc}^{\gamma -}\left( \mathbb {R}^{d}\right) \). The sets \(\Lambda \) and \(\Lambda _{2}\) are in bijection; the two sets are measurable in the corresponding spaces and the bijection is bi-measurable. Therefore a probability measure on \(\text {C}_{\ell oc}^{\gamma -}\left( \mathbb {R}^{d}\right) \times \mathcal {P}\left( \mathbb {R}^{d}\right) \) concentrated on \(\Lambda \) corresponds uniquely, by this bijection, to a probability measure on \(\mathcal {P}\left( \mathbb {R}^{d}\right) \) concentrated on \(\Lambda _{2}\). It follows that Q is uniquely determined by its second marginal \(\pi \), which is unique a priori. This proves that Q is independent of the subsequence \(\left( N_{k}\right) _{k\in \mathbb {N}}\) and thus the full sequence \((Q_{N})_{N\in \mathbb {N}}\) converges to a single Q.
We can now prove that \(\mu \) has a density, \(\mathbb {P}\)-a.s. We have proved that the law of \(\widetilde{\mu }\) is concentrated on \(\Lambda _{2} \); but, since Q is the law of \(\left( \widetilde{p},\widetilde{\mu }\right) \) and the second marginal of Q is \(\pi \), the law of \(\widetilde{\mu } \) is \(\pi \). Hence \(\pi \), which is also the law of \(\mu \), is concentrated on \(\Lambda _{2}\). Namely, \(\mathbb {P}\)-a.e. realization of \(\mu \) has a density p, of class \(\text {C}_{\ell oc}^{\gamma -}(\mathbb {R}^{d})\). The random element \(\left( p,\mu \right) \) is the image of \(\mu \) under the bijection above, hence it has law Q. It follows, from the weak convergence of \((Q_{N})_{N\in \mathbb {N}}\) to Q, that \(p_{N}\) converges to p in law.
It remains to prove (D.2) and (D.3). Let us prove (D.2). The sequence of random variables \(\left\{ \sup _{\left| x\right| \le n}\left| p\left( x\right) \right| ^{q^{\prime }}\right\} _{n\in \mathbb {N}}\) is non-decreasing and non-negative, and converges a.s. to \(\sup _{x\in \mathbb {R}^{d}}\left| p\left( x\right) \right| ^{q^{\prime }}\); hence, by the Beppo Levi (monotone convergence) theorem,
Therefore (using also the fact that \(\widetilde{p}\) and p have the same law, namely the first marginal of Q above) it is sufficient to find a constant \(C>0\), independent of R, such that
$$\begin{aligned} \mathbb {E}\Big [ \sup _{\left| x\right| \le R}\left| \widetilde{p}\left( x\right) \right| ^{q^{\prime }}\Big ] \le C \end{aligned}$$
for every \(R>0\). But we know that \(\sup _{\left| x\right| \le R}\left| \widetilde{p}^{N_{K}}\left( x\right) \right| ^{q^{\prime }}\) converges a.s. to \(\sup _{\left| x\right| \le R}\left| \widetilde{p}\left( x\right) \right| ^{q^{\prime }}\). Moreover, we know that there exists \(\gamma >1\) such that
$$\begin{aligned} \sup _{K\in \mathbb {N}}\mathbb {E}\Big [ \Big ( \sup _{\left| x\right| \le R}\left| \widetilde{p}^{N_{K}}\left( x\right) \right| ^{q^{\prime }}\Big ) ^{\gamma }\Big ] <\infty \end{aligned}$$
(take \(\gamma =q/q^{\prime }\) and use assumption (D.1)). Hence, by the Vitali convergence theorem, we get
$$\begin{aligned} \mathbb {E}\Big [ \sup _{\left| x\right| \le R}\left| \widetilde{p}\left( x\right) \right| ^{q^{\prime }}\Big ] =\lim _{K\rightarrow \infty }\mathbb {E}\Big [ \sup _{\left| x\right| \le R}\left| \widetilde{p}^{N_{K}}\left( x\right) \right| ^{q^{\prime }}\Big ] \le C, \end{aligned}$$
where the uniform bound on the right-hand side, independent of R, comes from assumption (D.1).
Finally, (D.3) is proved similarly, under the additional assumption that p is deterministic. In this case \(p^{N}\) converges to p in probability, not only in law, in \(\text {C}_{\ell oc}^{\gamma -}(\mathbb {R}^{d})\). In particular, \(\sup _{\left| x\right| \le R}\left| p^{N}\left( x\right) -p\left( x\right) \right| ^{q^{\prime }}\) converges to zero in probability. Since this sequence is uniformly integrable, by the Vitali convergence theorem it converges to zero in mean. \(\square \)
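As a purely illustrative aside (not part of the original proof), the Vitali-type argument used above — almost sure convergence plus a uniform bound on a higher moment implies convergence in mean — can be checked on a toy sequence of random variables. The sequences \(Y_n\) and \(W_n\) below are hypothetical examples, unrelated to the densities \(p^N\) of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(size=1_000_000)  # Monte Carlo sample of U ~ Uniform(0, 1)

# Y_n = n * 1_{U < 1/n}: converges a.s. to 0, but E[Y_n] = 1 for every n
# (the family is NOT uniformly integrable, so Vitali does not apply).
def EY(n):
    return float(np.mean(n * (U < 1.0 / n)))

# W_n = sqrt(n) * 1_{U < 1/n}: also converges a.s. to 0, and
# sup_n E[W_n^2] = 1 < infinity, so Vitali applies and E[W_n] = 1/sqrt(n) -> 0.
def EW(n):
    return float(np.mean(np.sqrt(n) * (U < 1.0 / n)))

vals_Y = [EY(n) for n in (10, 100, 1000)]   # stays near 1
vals_W = [EW(n) for n in (10, 100, 1000)]   # decreases towards 0
```

This mirrors the role of the bound with \(\gamma = q/q^{\prime } > 1\) in the proof: the higher-moment bound rules out mass escaping as in the \(Y_n\) example.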
Appendix E: Relaxed Controls
In the proof of Theorem 6.1 we use the concept of relaxed controls. In this section we briefly recall their definition; for more details, see, for instance, El Karoui et al. [10] and Kushner [14]. Let \(\mathcal {S}\) be a Polish space and let \(\mathcal {R}_{\mathcal {S}}\) be the space of all deterministic \(\mathcal {S}\)-valued relaxed controls over the time interval [0, T], that is,
$$\begin{aligned} \mathcal {R}_{\mathcal {S}}:=\left\{ r \text { positive measure on } \mathcal {S}\times [0,T] \,:\, r\left( \mathcal {S}\times [0,t]\right) =t \text { for all } t\in [0,T]\right\} . \end{aligned}$$
If \(r \in \mathcal {R}_{\mathcal {S}}\), then the time derivative of r exists almost everywhere as a measurable mapping \( \overset{\cdot }{r}_t : [0, T] \rightarrow \mathcal {P}(\mathcal {S})\) such that \(r(dy, dt) = \overset{\cdot }{r}_t(dy)\,dt\). The topology of weak convergence of measures turns \(\mathcal {R}_{\mathcal {S}}\) into a Polish space; moreover, \(\mathcal {R}_{\mathcal {S}}\) is compact whenever \(\mathcal {S}\) is compact. Finally, any \(\mathcal {S}\)-valued \((\mathcal {F}_t)\)-adapted process \(\alpha \) defined on some filtered probability space \((\Omega , \mathcal {F}, (\mathcal {F}_t)_{t\in [0,T]}, \mathbb {P})\) induces an \(\mathcal {R}_{\mathcal {S}}\)-valued random variable \(\rho \), the corresponding stochastic relaxed control, according to
$$\begin{aligned} \rho \left( \omega \right) \left( B\times I\right) :=\int _{I}\delta _{\alpha _{t}\left( \omega \right) }\left( B\right) \,dt, \end{aligned}$$
where \(B \in \mathcal {B}(\Gamma )\) with \(\Gamma \) the set of control actions, or action space, \(I \in \mathcal {B}([0, T])\) and \(\omega \in \Omega \). The random measure \(\rho \) is \((\mathcal {F}_t)\)-adapted in the sense that its restriction to \(\mathcal {S} \times [0,t]\) is \(\mathcal {F}_t\)-measurable for every \(t \in [0,T]\).
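As an illustrative aside, the relaxed control induced by an ordinary control \(\alpha \), namely \(\rho (B\times I)=\int _{I}\mathbf {1}_{B}(\alpha _{t})\,dt\), can be approximated numerically by a Riemann sum; the function names below are hypothetical and serve only to check the defining property \(\rho (\mathcal {S}\times [0,t])=t\) on a simple deterministic example:

```python
import numpy as np

def relaxed_mass(alpha, B_indicator, I, n_steps=100_000):
    """Approximate rho(B x I) = int_I 1_B(alpha(t)) dt by a Riemann sum."""
    t = np.linspace(I[0], I[1], n_steps, endpoint=False)
    dt = (I[1] - I[0]) / n_steps
    return float(np.sum(B_indicator(alpha(t))) * dt)

# Hypothetical example: alpha(t) = sin(t) on [0, pi], B = [0, 1].
T = np.pi
alpha = np.sin
B = lambda y: (y >= 0.0) & (y <= 1.0)

# sin(t) lies in [0, 1] for all t in [0, pi], so rho(B x [0, pi]) = pi.
mass_full = relaxed_mass(alpha, B, (0.0, T))

# Defining property of R_S: rho(S x [0, t]) = t for every t (here t = 1.5).
S = lambda y: np.ones_like(y, dtype=bool)
mass_S = relaxed_mass(alpha, S, (0.0, 1.5))
```

The second computation checks numerically that the time marginal of \(\rho \) is Lebesgue measure, which is exactly the constraint defining \(\mathcal {R}_{\mathcal {S}}\).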
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Flandoli, F., Ghio, M. & Livieri, G. N-Player Games and Mean Field Games of Moderate Interactions. Appl Math Optim 85, 38 (2022). https://doi.org/10.1007/s00245-022-09834-7