A singular perturbation problem for mean field games of acceleration: application to mean field games of control

The singular perturbation of mean field game systems arising from minimization problems with control of acceleration is addressed; that is, we analyze the behavior of solutions as the acceleration cost vanishes. In this setting, the Hamiltonian fails to be strictly convex and coercive w.r.t. the momentum variable and, thus, the classical results for Tonelli Hamiltonian systems cannot be applied. Nevertheless, we show that the limit system is of MFG type in two different cases: we first study the convergence to the classical MFG system and then, by a finer analysis of the Euler–Lagrange flow associated with the control of acceleration, we prove the convergence to a class of MFG systems known as MFG of control.


INTRODUCTION
The study of singular perturbation problems for control systems has a long history, going back to [5,6,7] and the references therein. Such a problem concerns the analysis of systems in which some state variables evolve at a much faster time scale than the others. Generally, the solution of a typical singular perturbation problem leads to the elimination of the fast state variable and, consequently, to a reduction of the dimension of the system. Clearly, the limit problem keeps some information on the fast part.
Besides classical control systems, other types of singular perturbation problems have been studied; we refer, for instance, to homogenization (e.g. [33], [27]) and to long time behavior (e.g. [23,22], [10]). More recently, such analyses have been extended to the case of differential games (e.g. [2,3], [35], [29]) and of mean field games (MFG) (e.g. [16], [12,13], [17], [19,34]). Building on this recent literature, in this paper we make a step further. Indeed, the goal of this work is twofold: first, we show a connection between the classical MFG system, where the underlying payoff is a calculus of variations problem, and the MFG with control of acceleration; secondly, we show how an MFG of control system can be recovered from an MFG system with control of acceleration. We will extend such analysis to singular perturbation problems associated with sub-Riemannian structures, and to MFG defined on such structures, in a future work.
We recall that MFG were introduced in [30,31,32] and [26,25] in order to describe the behavior of Nash equilibria in problems with infinitely many rational agents (we refer to [15] and the references therein for more details). Since these pioneering works the MFG theory has grown very fast: we refer, for instance, to the survey papers and the monographs [24,11,18]. The classical MFG system introduced in [30,31,32] describes systems in which the typical payoff is represented by a deterministic calculus of variations problem. MFG systems with control of acceleration, first introduced in [14,1], describe models where agents control their acceleration and the cost functional to minimize depends on higher order derivatives of admissible trajectories. Such problems naturally appear in the study of agent-based models which describe the collective behavior of various animal populations (e.g. [28,36]) or crowd dynamics (e.g. [20,21]). In this framework the study of the singular perturbation problem we perform in this paper finds many applications: for instance, such an analysis can be applied to an MFG system of Cucker–Smale type, see for instance [9], to describe the behavior of a flock in which the control is increasingly cheap.
We describe now the problems we are going to solve in this paper.
(1) Convergence to the classical MFG system. We study the limit of the solution to system (1.3) as the parameter ε goes to zero. Heuristically, the state equation associated with the above PDE system is that of a double integrator, where α : [0, T ] → R d is a measurable control function (the acceleration) and, from [14,1], we have that for any ε > 0 a typical player aims to minimize a cost functional in which the acceleration is penalized with weight ε. Moreover, still from [14,1], under suitable assumptions (listed below) on the function L 0 , for any ε > 0 there exists a unique solution (u ε , µ ε ) to (1.3).
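To make the time-scale separation concrete, here is a minimal numerical sketch (our own illustration, not taken from the paper): the state equation of a player controlling acceleration is the double integrator, and under a hypothetical linear feedback whose strength grows as the control cost ε shrinks, the velocity (the fast variable) relaxes to a target value ever faster.

```python
import numpy as np

# Heuristic sketch (hypothetical feedback, not the paper's optimal control):
# the state equation for control of acceleration is the double integrator
#   x'(t) = v(t),   v'(t) = alpha(t),
# and a typical player pays (eps/2)|alpha|^2 plus a running cost.
# Under the hypothetical feedback alpha = -(v - v_target)/eps, cheap control
# (small eps) makes the velocity, the fast variable, relax almost instantly.

def simulate(eps, v_target=1.0, T=1.0, n=10_000):
    dt = T / n
    x, v = 0.0, 0.0
    for _ in range(n):
        alpha = -(v - v_target) / eps   # cheap control => strong correction
        v += alpha * dt                 # v' = alpha
        x += v * dt                     # x' = v
    return x, v

_, v_big = simulate(eps=0.5)
_, v_small = simulate(eps=0.01)
# The smaller eps is, the closer the terminal velocity sits to its target.
print(abs(v_big - 1.0), abs(v_small - 1.0))
```

For `eps = 0.5` the terminal velocity still carries an O(1) error, while for `eps = 0.01` the error is negligible: in the limit the velocity variable is slaved to the slow dynamics, which is the heuristic behind the elimination of the fast variable described above.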
Following the previous considerations on a typical singular perturbation problem, in the case of control of acceleration we expect that the fast variable, here the velocity of each player, is eliminated in the limit and that all the information is captured by the behavior of the space variable. Moreover, since the aim of this analysis is to establish a rigorous mathematical connection between the classical MFG system and the MFG system with control of acceleration, system (1.3) has this particular form. Indeed, we observe that the function L 0 and the terminal cost g only depend on the space marginal of the measures {µ ε t } t∈[0,T ] . Such a marginal flow of measures captures the behavior of the fast variable in the limit and it is also the object of investigation in classical MFG, since it represents the distribution of players in space at each time t ∈ [0, T ].
(2) Convergence to the MFG of control system. In the second part, we analyze the limit of the solution to the system, again as the parameter ε goes to zero. The main issue here is that both the data L and g depend on µ ε , so we have to deal with the convergence of the whole measure. Note that, even though the limit control problem does not depend on the velocity as a state variable, the second marginal of the limit measure, and hence the Lagrangian function, still depends on it. For this reason, we expect the limit system to be of mean field game of control type. Next, we briefly explain the main results of this work and the methods of proof.
(1) Towards the classical MFG system. We prove that (u ε , m ε ), where m ε t is the space marginal of the solution µ ε t for any t ∈ [0, T ], converges (up to a subsequence) to a solution (u 0 , m 0 ) of the classical MFG system, where H 0 : R d × R d → R is the Legendre transform of the function L 0 . Observe that the main difference between our result and the existing ones concerning the homogenization problem in MFG ([19], [34], [9]) is that here the limit system is still of MFG type. Indeed, in [19], [34] and [9] it has been proved that in the limit the MFG structure of the problem is lost; in particular, an explicit example of an MFG system with potential coupling function is constructed in [34].
In order to prove our first main convergence result, we begin by showing that u ε is equibounded and m ε is tight (see Lemma 4.1 and Theorem 4.4). Thus, as a first consequence we get that, up to a subsequence, there exists a limit m 0 ∈ C([0, T ]; P 1 (R d )). Then, we proceed with the analysis of the value function u ε : we show that u ε (t, •, v) is equi-Lipschitz continuous, u ε (•, x, v) is equicontinuous and u ε (t, x, •) has decreasing oscillation w.r.t. ε (see Lemma 4.6 and Proposition 4.7). We finally address the locally uniform convergence of u ε , showing that there exists a subsequence ε k ↓ 0 such that (u ε k , m ε k ) converges to a solution (u 0 , m 0 ) of (1.4) (see Theorem 4.9, Proposition 4.10 and Corollary 4.12). The main issues in proving the above results are due to the lack of strict convexity and the lack of coercivity of the Hamiltonian in system (1.3). To overcome them, the technique we use to study our singular perturbation problem combines the variational approach to Hamilton–Jacobi equations with tools from optimal transport.
(2) Towards the MFG of control system. For simplicity of notation, we restrict our attention to a Lagrangian of the particular form introduced in Section 3.2. In this setting, we prove that (u ε , µ ε ) converges (up to a subsequence) to a solution (u 0 , µ 0 ) of the MFG of control system (1.5), where Id(•) denotes the identity function. As observed before, the main difference with the previous study is the convergence of the whole measure µ ε , which requires a finer study of the Euler–Lagrange flow associated with the problem of control of acceleration. We observe that equations (i), (ii) are in common with system (1.4) and they differ only in the measure argument of the function L 0 . However, system (1.5) has a third equation, (iii), which describes the evolution of the flow {µ 0 t } t∈[0,T ] : the second marginal, that is, the one w.r.t. the velocity variable, is given by the push-forward of the first marginal {m 0 t } t∈[0,T ] through the optimal feedback built from Du 0 . Heuristically, such an equation describes the evolution of the density distribution of controls w.r.t. the state of a typical player. For this reason system (1.5) is called the MFG of control system.
In conclusion, we stress that the result can be generalized to any Lagrangian following the same arguments, but at the price of heavy notation that would obscure the presentation of the ideas.
The paper is organized as follows. In Section 2 we fix the notation that will be used throughout the paper and we recall the main definitions and results from measure theory. In Section 3 we introduce the MFG system associated with the singular perturbation problem, we give the standing assumptions on the data and, finally, we state the main results (Theorem 3.2 and Theorem 3.4). Section 4 and Section 5 are devoted to the proofs of the preliminary results needed to demonstrate Theorem 3.2 and Theorem 3.4, respectively.
Statements and Declarations: There are no associated data and no conflicts of interest.

NOTATIONS AND PRELIMINARIES
2.1. Notation. We list below the symbols used throughout this paper.
We denote by L p (A) (for 1 ≤ p ≤ ∞) the space of Lebesgue-measurable functions f with ‖f ‖ p,A < ∞. For brevity, ‖f ‖ ∞ and ‖f ‖ p stand for ‖f ‖ ∞,R d and ‖f ‖ p,R d , respectively.
• C b (R d ) stands for the space of bounded uniformly continuous functions on R d ; C 2 b (R d ) stands for the space of bounded functions on R d with bounded uniformly continuous first and second derivatives. C k (R d ) (k ∈ N) stands for the space of k-times continuously differentiable functions on R d .
2.2. The Wasserstein spaces. We recall here the notation and definitions of the Wasserstein spaces and the Wasserstein distance; for more details we refer to [37,4]. Let (X, d) be a metric space (in the paper, X will be R d or R 2d ). Denote by B(X) the Borel σ-algebra on X and by P(X) the space of Borel probability measures on X. The support of a measure µ ∈ P(X), denoted by spt(µ), is the smallest closed set of full µ-measure. We say that a sequence {µ k } k∈N ⊂ P(X) is weakly-* convergent to µ ∈ P(X) if ∫ f dµ k → ∫ f dµ for every f ∈ C b (X). For p ∈ [1, +∞), the Wasserstein space of order p, P p (X), is the set of measures in P(X) with finite p-th moment w.r.t. some (and thus all) x 0 ∈ X. Given any two measures m and m ′ in P p (X), the Wasserstein distance of order p between m and m ′ is defined by minimizing over the transport plans between them. The distance d 1 is also commonly called the Kantorovich–Rubinstein distance and can be characterized by a useful duality formula (see, for instance, [37]). Let X 1 , X 2 be metric spaces, let µ ∈ P(X 1 ) and let f : X 1 → X 2 be a µ-measurable map. Then, we denote by f ♯µ ∈ P(X 2 ) the push-forward of µ through f , defined by (f ♯µ)(B) = µ(f −1 (B)) for every B ∈ B(X 2 ). More generally, in integral form, it reads ∫ X 2 ϕ d(f ♯µ) = ∫ X 1 ϕ(f (x)) dµ(x).
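As a concrete illustration of the push-forward and of d 1 (our own sketch with empirical measures, not from the paper), the change-of-variables identity ∫ ϕ d(f ♯µ) = ∫ (ϕ ∘ f ) dµ can be checked numerically; on R, the distance d 1 between two empirical measures with the same number of atoms reduces to the average gap between sorted samples (the monotone coupling is optimal in one dimension).

```python
import numpy as np

# Empirical sketch of the Section 2.2 objects (our illustration):
# the push-forward f#mu and the Kantorovich-Rubinstein distance d_1 on R.
rng = np.random.default_rng(0)
mu_atoms = rng.normal(size=1000)       # atoms of an empirical measure mu
f = lambda x: 2.0 * x + 1.0            # a Borel map f : R -> R
push_atoms = f(mu_atoms)               # atoms of the push-forward f#mu

# Change of variables: int phi d(f#mu) = int (phi o f) d(mu).
phi = lambda y: y ** 2
lhs = np.mean(phi(push_atoms))         # integral of phi against f#mu
rhs = np.mean(phi(f(mu_atoms)))        # integral of phi o f against mu

def d1(a, b):
    # On R, the Wasserstein-1 distance between equal-size empirical
    # measures is the mean distance between sorted samples.
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))
```

Here `d1(mu_atoms, push_atoms)` is strictly positive while `d1` of a measure with itself vanishes, in agreement with the metric properties recalled above.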

SETTING AND MAIN RESULTS
3.1. Convergence to the classical MFG system. Let L 0 : R 2d × P 1 (R d ) → R satisfy the following.
(M1) L 0 is continuous w.r.t. all variables and for any m the regularity below holds. Observe that from (M2) one easily obtains the growth bound and, without loss of generality, the normalization below. We consider the MFG system, where m ε t = π 1 ♯µ ε t and π 1 : R 2d → R d denotes the projection onto the first factor, i.e., π 1 (x, v) = x. We assume the following on the boundary data of the system. (BC1) The measure µ 0 ∈ P(R 2d ) is absolutely continuous w.r.t. the Lebesgue measure, we still denote by µ 0 its density, and it has compact support.
Let Γ be the set of C 1 curves γ : [0, T ] → R d , endowed with the local uniform convergence of the curves and their derivatives. Then, given the associated cost functional, from [14,1] we know that there exists a solution and, for any t ∈ [0, T ], the probability measure µ ε t is the image of µ 0 under the flow (3.7). That is, u ε solves the Hamilton–Jacobi equation in the viscosity sense and µ ε solves the continuity equation in the sense of distributions.
Remark 3.1. Note that for a.e. (x, v) ∈ R 2d there exists a unique solution to system (3.7), which we will denote by γ ε (x,v) . Moreover, the following holds.
that is, u 0 solves the Hamilton–Jacobi equation in the viscosity sense and m 0 is a solution of the continuity equation in the sense of distributions. (ii) For any t ∈ [0, T ] the probability measure m 0 t is the image of m 0 under the Euler flow associated with L 0 .
Remark 3.3. Let (u ε , µ ε ) be a solution to (3.9). Assume that H 0 is of separated form, i.e., that there exists a coupling function F : R d × P 1 (R d ) → R. Moreover, assume that F is continuous w.r.t. all variables, that the map x → F (x, m) belongs to C 1 b (R d ) and that the functions F , g are monotone in the sense of Lasry–Lions.
Then, from [1,14] we know that there exists a unique solution to the limit system (3.12) and thus, since (u ε , m ε ) is relatively compact, the convergence of (u ε , m ε ) holds for the whole sequence.

3.2. Convergence to MFG of control. We now consider a function L 0 : R d × P 1 (R 2d ) → R and we assume the following.
(C1) L 0 is continuous w.r.t. all variables and for any µ ∈ P 1 (R 2d ) the map x → L 0 (x, µ) belongs to C 1 (R d ). (C2) There exist two moduli θ : R + → R + and ω 0 : R + → R + such that the estimates below hold. Moreover, we assume: (A1) the measure µ 0 ∈ P(R 2d ) is absolutely continuous w.r.t. the Lebesgue measure, we still denote by µ 0 its density, and it has compact support.
(A2) the terminal cost g(•, µ) is uniformly continuous w.r.t. space, and the bound below holds. Similarly to the previous part, we define the cost functional and, from [14,1], we know that there exists a solution; for any t ∈ [0, T ] the probability measure µ ε t is the image of µ 0 under the flow (3.7). That is, u ε solves the Hamilton–Jacobi equation in the viscosity sense and µ ε solves the continuity equation in the sense of distributions.

PROOF OF THEOREM 3.2
In order to prove Theorem 3.2 we proceed by steps, analyzing separately the behavior of the value function u ε and that of the flow of probability measures {m ε t } t∈[0,T ] . First, we show that u ε is equibounded and we prove that, up to a subsequence, m ε converges to a flow of probability measures in C([0, T ]; P 1 (R d )). Then, we address the convergence of the value function, up to a subsequence, to a solution of a suitable Hamilton–Jacobi equation and we study the limit of its minimizing trajectories. Finally, we characterize the limit flow of measures as the solution of a continuity equation which, coupled with the Hamilton–Jacobi equation previously constructed, defines the limit MFG system (3.12).
Lemma 4.1. Assume (M1)-(M3) and (BC1), (BC2). Then, the bounds below hold for any ε > 0.
Proof. First, since u ε satisfies (3.10), it follows from (3.4) and (BC1), (BC2) that the lower bound holds. On the other hand, let us recall that u ε solves the Hamilton–Jacobi equation (4.1) for a suitable choice of the real constant C ≥ 0. Indeed, the last inequality in the corresponding estimate holds by Young's inequality. Thus, taking C = 2M 0 , by (BC1), (BC2) we obtain the upper bound. So, we get the result by the Comparison Theorem [8, Theorem 2.12].
Proof. On the one hand, from Lemma 4.1 we have the uniform bound on u ε . On the other hand, let (t, x, v) ∈ [0, T ] × R 2d and let γ ε be a minimizer for u ε (t, x, v). Then, by (3.4) we get the complementary estimate. Therefore, combining the above inequalities we obtain the claim, where Q 1 depends only on M 0 , T and the sup-norm of g(•, m ε T ) on R d , which is bounded uniformly in m ε T .
Corollary 4.3. Assume (M1)-(M3) and (BC1), (BC2). Then, there exists a constant Q 2 ≥ 0 such that for any s 1 , s 2 ∈ [0, T ] with s 1 ≤ s 2 the time-equicontinuity estimate below holds.
Proof. We first recall that for any t ∈ [0, T ] we have m ε t = π 1 ♯µ ε t , where µ ε t is the image of µ 0 under the flow (3.7), whose space component we denote by γ ε (x,v) for (x, v) ∈ R 2d . Let s 1 , s 2 ∈ [0, T ] be such that s 1 ≤ s 2 . Then, by (2.2), appealing to Corollary 4.2 and the Hölder inequality, we obtain the desired bound.
So, since µ 0 has compact support, we get the result with a suitable choice of the constant Q 2 .
We are now ready to prove that the flow of probability measures m ε converges, up to a subsequence. First, we recall that for any t ∈ [0, T ] the measure m ε t is the space marginal of µ ε t , which is given by the push-forward of the initial distribution µ 0 by the optimal flow (3.7).
Theorem 4.4. Assume (M1)-(M3) and (BC1), (BC2). Then, the flow of measures {m ε t } t∈[0,T ] is tight and there exists a sequence {ε k } k∈N such that m ε k converges to some probability measure m 0 in C([0, T ]; P 1 (R d )).
Proof. Since µ ε t is given by the push-forward of µ 0 under the flow (3.7), we are interested in estimating the curve γ ε (x,v) for any (x, v), uniformly in ε > 0. From Corollary 4.2 we immediately deduce that, for any t ≥ 0, the curve is bounded by some constant C 0 ≥ 0. Thus, since µ 0 has compact support, we deduce that {m ε t } t∈[0,T ] has a second-order moment bounded uniformly in ε > 0 and, consequently, {m ε t } t∈[0,T ] is tight. Therefore, since µ 0 has compact support and, by Corollary 4.3, m ε t is equicontinuous in time, by the Prokhorov and Ascoli–Arzelà Theorems there exist a sequence {ε k } k∈N and a measure m 0 ∈ C([0, T ]; P 1 (R d )) as claimed. Next, we turn to the convergence of the value function u ε . Before proving it, we need preliminary estimates on the oscillation of the value function w.r.t. the velocity variable and then w.r.t. the time and space variables. In particular, we will show that the function u ε (t, x, •) has decreasing oscillation w.r.t. ε, which will allow us to conclude that the limit function does not depend on v.
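The tightness mechanism in the proof above can be summarized by the following chain of estimates (our reconstruction in the paper's notation; C 0 denotes the uniform constant produced by Corollary 4.2):

```latex
% Uniform second moments of m^eps_t via the flow representation:
\int_{\mathbb{R}^d} |x|^2 \, dm^{\varepsilon}_t(x)
  \;=\; \int_{\mathbb{R}^{2d}} \big|\gamma^{\varepsilon}_{(x,v)}(t)\big|^2 \, d\mu_0(x,v)
  \;\le\; C_0 \qquad \text{uniformly in } \varepsilon > 0,\ t \in [0,T],
% and hence, by the Markov inequality, for every R > 0
m^{\varepsilon}_t\big(\mathbb{R}^d \setminus \overline{B}_R\big) \;\le\; \frac{C_0}{R^2}.
```

The mass outside B R is thus small uniformly in ε and t, which is exactly tightness; Prokhorov's Theorem then yields the compactness used in the proof.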
Proof. Fix R ≥ 0 and take (x, v), (x, w) ∈ R d × B R . Let γ ε be a minimizer for u ε (t, x, v) and define the concatenated curve, where σ : [0, √ε] → R 2d connects, in the sense of Lemma 4.5, (x, w) with (x, v). From Lemma 4.5 we know that (4.2) holds and, moreover, from the optimality of γ ε we get (4.3). Then, as observed before, from Corollary 4.2 we have that γ ε is bounded and also that the curve σ is bounded. Hence, by (M3) and Corollary 4.3 we deduce that there exists P (ε) ≥ 0, with P (ε) → 0 as ε ↓ 0, such that (4.4) holds, where we have used that the modulus θ in (M3) is bounded thanks to the boundedness of γ ε and σ. Therefore, combining (4.2), (4.3) and (4.4) we get the result.
Proposition 4.7. Assume (M1)-(M3) and (BC1), (BC2). Then, for any R ≥ 0 there exist a modulus ω R : R + → R + and a constant C 1 ≥ 0, independent of R, such that for any ε > 0 the estimates (4.5) and (4.6) below hold.
Proof. We begin by proving (4.6). Let (t, x, v) ∈ [0, T ] × R d × R d and let γ ε be a minimizer for u ε (t, x, v). Then, from (3.2) we get the desired bound and Corollary 4.2 yields the conclusion. Next, we proceed to show (4.5). Let R ≥ 0; the last inequality in the estimate holds by (M3). Hence, from Corollary 4.2 we know that the minimizers are bounded and thus θ(•) turns out to be bounded. Therefore, appealing to Corollary 4.3 we obtain (4.7). Then, by the Dynamic Programming Principle we deduce (4.8), where we applied (3.4) and (4.6) to get the last inequality. Therefore, combining (4.7) and (4.8) the proof is complete.
Remark 4.8. Before studying the behavior of the value function u ε as ε → 0, we recall the following argument, needed to get uniform convergence to a function which does not depend on v. Assume that there exists a nonnegative function Θ(δ 0 , ε 0 , R 0 ) satisfying the conditions below. Then: if u ε converges pointwise, then u ε converges locally uniformly and the limit function does not depend on v.
Let m 0 ∈ C([0, T ]; P 1 (R d )) be the flow of measures obtained in Theorem 4.4 as the limit of m ε k in C([0, T ]; P 1 (R d )) for some subsequence ε k ↓ 0, and define the function u 0 as in (4.9). We now prove that, along the subsequence ε k , the value functions u ε k converge locally uniformly to u 0 .
Theorem 4.9. Assume (M1)-(M3) and (BC1), (BC2). Then, there exists a subsequence ε k ↓ 0 such that u ε k locally uniformly converges to u 0 .
Proof. We first show the pointwise convergence of u ε k to u 0 , for some subsequence ε k ↓ 0; then, using Remark 4.8, i.e., constructing such a modulus Θ, we deduce that the convergence is locally uniform.
From Theorem 4.4, let ε k be the subsequence such that m ε k → m 0 in C([0, T ]; P 1 (R d )). Then, we have the first estimate, where the last inequality holds by (M1) and the convergence of m ε k . On the other hand, let R ≥ 0 and take a minimizer γ 0 for u 0 (t, x). Next, we distinguish two cases: first, when γ̇ 0 (t) = v and then when γ̇ 0 (t) ≠ v. Indeed, if γ̇ 0 (t) = v, by the Euler equation and the C 2 -regularity of L 0 we have that γ 0 ∈ C 2 ([0, T ]).
Hence, we can use γ 0 as a competitor for u ε k (t, x, v) and we get (4.10), where the last inequality follows again from the convergence of m ε k in C([0, T ]; P 1 (R d )). If this is not the case, i.e., γ̇ 0 (t) ≠ v, from Lemma 4.6 we deduce the corresponding estimate; thus, in order to conclude it is enough to estimate u ε k (t, x, γ̇ 0 (t)) as in (4.10). Therefore, we obtain the two-sided bound, which implies that u ε k converges pointwise to u 0 . Finally, in order to conclude we need to show that the convergence is locally uniform. From (4.5), (4.6) and Lemma 4.6 we have suitable moduli for any R ≥ 0 and any (t, x, v). Therefore, defining Θ accordingly, by Remark 4.8 we deduce that the convergence is locally uniform and the proof is thus complete.
After proving the convergence of u ε , we go back to the analysis of the flow of measures and, in particular, we characterize it in terms of the limit function u 0 . In order to do so, we study the convergence of minimizers for u ε and, appealing to such a result, we show that m 0 ∈ C([0, T ]; P 1 (R d )) solves a continuity equation with vector field D p H 0 (x, D x u 0 ), in the sense of distributions.
Proposition 4.10. Assume (M1)-(M3) and (BC1), (BC2). Let (t, x, v) ∈ [0, T ] × R 2d be such that u 0 is differentiable at (t, x) and let γ ε be a minimizer for u ε (t, x, v). Then, γ ε uniformly converges to a curve γ 0 ∈ AC([0, T ]; R d ) and γ 0 is the unique minimizer for u 0 (t, x) in (4.9).
Proof. Let us start by proving that γ ε uniformly converges, up to a subsequence. By Corollary 4.2 we have a uniform bound and thus, for any s ∈ [t, T ], by the Hölder inequality we obtain an equicontinuity estimate. Therefore, γ ε is bounded in H 1 (0, T ; R d ), which implies, by the Ascoli–Arzelà Theorem, that there exist a sequence {ε k } k∈N and a curve γ 0 ∈ AC([0, T ]; R d ) such that γ ε k converges uniformly to γ 0 . We now show that such a limit γ 0 is a minimizer for u 0 (t, x). As observed at the beginning of this proof, γ ε is uniformly bounded in H 1 (0, T ); so, by lower semicontinuity and Theorem 4.4, we deduce that the liminf of the costs is bounded below by ∫ L 0 (γ 0 (s), γ̇ 0 (s), m 0 s ) ds + g(γ 0 (T ), m 0 T ). Moreover, for any R ≥ 0, taking (t, x, v) ∈ [0, T ] × R d × B R , from Theorem 4.9 we obtain the convergence of the values. Hence, passing to the limit as ε ↓ 0 in (4.11), we conclude that γ 0 is a minimizer for u 0 (t, x). Since u 0 is differentiable at (t, x), there exists a unique minimizing trajectory and thus the uniform convergence of γ ε holds for the whole sequence.
Let u 0 be as in (4.9) and let (γ 0 t (•), γ̇ 0 t (•)) be the flow of the Euler–Lagrange equations associated with the minimization problem in (4.9). Note that, since u 0 is Lipschitz continuous and µ 0 is absolutely continuous w.r.t. the Lebesgue measure, on spt(µ 0 ) the curve (γ 0 t (•), γ̇ 0 t (•)) is a minimizer for u 0 . We also recall that the measure µ ε is the image of µ 0 under the flow (3.7), which, as observed in Remark 3.1, is optimal for u ε (0, x, v) for a.e. (x, v) ∈ R 2d ; thus, for any function ϕ ∈ C ∞ c (R d ) the measure m ε t is given by (4.12). We finally recall that, by assumption, µ 0 is absolutely continuous w.r.t. the Lebesgue measure.
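The convergence of minimizers in Proposition 4.10 can also be observed numerically. The sketch below uses entirely hypothetical data (a one-dimensional quadratic running cost |x|^2/2, kinetic term |γ̇|^2/2, acceleration penalization (ε/2)|γ̈|^2, and the fixed initial condition γ(0) = 1, none of which is prescribed by the paper): since the discretized action is quadratic, its minimizer solves a linear system, and the minimizers approach the ε = 0 (Tonelli) minimizer as ε ↓ 0.

```python
import numpy as np

# Hypothetical quadratic model: discretize the action
#   J_eps(g) = int_0^T eps/2 |g''|^2 + 1/2 |g'|^2 + 1/2 |g|^2 dt,  g(0) = 1,
# on a uniform grid and minimize by solving the (positive definite) normal
# equations. Compare minimizers for small eps with the eps = 0 minimizer.

n, T = 200, 1.0
dt = T / n
I = np.eye(n + 1)
D1 = (np.eye(n, n + 1, k=1) - np.eye(n, n + 1)) / dt            # first differences
D2 = (np.eye(n - 1, n + 1, k=2) - 2 * np.eye(n - 1, n + 1, k=1)
      + np.eye(n - 1, n + 1)) / dt**2                            # second differences

def minimizer(eps):
    # Hessian of the discretized action; the identity term keeps it PD.
    M = dt * (eps * D2.T @ D2 + D1.T @ D1 + I)
    g = np.empty(n + 1)
    g[0] = 1.0                                                   # boundary condition
    g[1:] = np.linalg.solve(M[1:, 1:], -M[1:, 0] * g[0])
    return g

gap_big = np.max(np.abs(minimizer(1e-2) - minimizer(0.0)))
gap_small = np.max(np.abs(minimizer(1e-4) - minimizer(0.0)))
```

In this toy setting `gap_small` is markedly smaller than `gap_big`, mirroring the statement that minimizers of the ε-problems converge to a minimizer of the limit problem.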
Corollary 4.12. Assume (M1)-(M3) and (BC1), (BC2). Then, (4.13) holds.
Proof. Since µ 0 is absolutely continuous w.r.t. the Lebesgue measure, by Proposition 4.10 we can pass to the limit in (4.12), which proves (4.13). Moreover, again by Proposition 4.10, γ 0 t is a minimizer for u 0 (0, x), since it is the limit of γ ε (x,v) , which is optimal for u ε (0, x, v), and we are taking (x, v) in a subset of full measure w.r.t. µ 0 . Therefore, from the optimality of γ 0 we get the desired identity and, integrating in time over [0, T ], we get the result.
We are now ready to prove the main result.
Proof of Theorem 3.2. Let {ε k } k∈N be such that m ε k → m 0 in C([0, T ]; P 1 (R d )) and u ε k → u 0 locally uniformly on [0, T ] × R 2d . Then, appealing to Theorem 4.9 and Corollary 4.12, we deduce that (u 0 , m 0 ) is a solution to the limit MFG system, which completes the proof.

PROOF OF THEOREM 3.4
We recall that, in this section, we consider the MFG system of Section 3.2, together with the variational problem associated with it. From the results of the previous section and the assumptions (C1)-(C3) on L 0 : R d × P(R 2d ) → R given above, we deduce that we only need to study the tightness of the flow of measures {µ ε t } t∈[0,T ] w.r.t. the second marginal. This can be done by a finer analysis of the Euler–Lagrange flow.
Lemma 5.1. Let (x, v) ∈ R 2d and let γ ε be a solution of the variational problem associated with u ε (0, x, v). Then, γ ε is a solution of the Euler–Lagrange equation
−ε ξ (iv) (t) + ξ̈(t) − D x L 0 (ξ(t), µ ε t ) = 0,
with boundary conditions ξ̈(T ) = 0 and −ε ξ (iii) (T ) + ξ̇(T ) + D x g(ξ(T ), µ ε T ) = 0.
Proposition 5.2. Assume (C1)-(C3) and (A1), (A2). Let (x, v) ∈ R 2d and let γ ε be a solution of the variational problem associated with u ε (0, x, v). Then, there exists a constant C ≥ 0 such that for any δ ∈ (0, 1) the estimate below holds.
Proof. Fix (x, v) ∈ R 2d and a solution γ ε of the problem associated with u ε (0, x, v). In the following, for simplicity of notation, we write γ in place of γ ε and we drop the notation ⟨•, •⟩ for the scalar product.
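For the reader's convenience, we sketch how the fourth-order equation in Lemma 5.1 arises. This is our reconstruction: we assume the running cost has the form (ε/2)|ξ̈|^2 + (1/2)|ξ̇|^2 + L 0 (ξ, µ ε t ), which is the form consistent with the equation stated in the lemma.

```latex
% Euler--Lagrange equation for a functional of x, \dot x, \ddot x:
\partial_x F \;-\; \frac{d}{dt}\,\partial_{\dot x} F
  \;+\; \frac{d^2}{dt^2}\,\partial_{\ddot x} F \;=\; 0 .
% With F = \tfrac{\varepsilon}{2}|\ddot\xi|^2 + \tfrac12 |\dot\xi|^2
%          + L_0(\xi,\mu^\varepsilon_t), this gives
D_x L_0(\xi(t),\mu^\varepsilon_t) \;-\; \ddot\xi(t)
  \;+\; \varepsilon\,\xi^{(iv)}(t) \;=\; 0 ,
% i.e. -\varepsilon\,\xi^{(iv)} + \ddot\xi - D_xL_0 = 0, while the natural
% (transversality) conditions at the free endpoint t = T read
\varepsilon\,\ddot\xi(T) \;=\; 0 ,
\qquad
\dot\xi(T) \;-\; \varepsilon\,\xi^{(iii)}(T)
  \;+\; D_x g\big(\xi(T),\mu^\varepsilon_T\big) \;=\; 0 .
```

Note that the ε-dependent terms multiply the highest derivatives, which is precisely the structure of a singular perturbation and explains the boundary-layer analysis carried out in this section.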
Remark 5.4. Note that, following the above reasoning, one easily deduces that the main result of this section is not uniform w.r.t. T . Now, using techniques similar to those of Theorem 4.9 and Proposition 4.10, one can prove the following.

• Denote by N the set of positive integers, by R d the d-dimensional real Euclidean space, by ⟨•, •⟩ the Euclidean scalar product, by | • | the usual norm in R d , and by B R the open ball with center 0 and radius R.
• For a Lebesgue-measurable subset A of R d , we let L d (A) be its d-dimensional Lebesgue measure and 1 A : R d → {0, 1} be the characteristic function of A.