1 Introduction

Consider a stochastic differential game with N players, indexed by \(i \in [[N]] :=\{ 0,\dots , N-1\}\), where the state \(X^i\) of the i-th player evolves according to the \({\mathbb {R}}^d\)-valued SDE

$$\begin{aligned} \textrm{d}X^i_t = \alpha ^i_t \,\textrm{d}t + \sqrt{2} \,\textrm{d}B^i_t. \end{aligned}$$

The i-th player's cost, over the fixed time horizon \([0,T]\), is given by

$$\begin{aligned} J^i(\alpha ) = \frac{1}{2} \, \mathbb {E} \int _0^T\! \big ( |\alpha ^i|^2 + \langle F^i X,X\rangle \big )\,\textrm{d}t + \langle G^i X_T,X_T\rangle , \end{aligned}$$
(1.1)

for some \(F^i = f^i \otimes I_d\) and \(G^i = g^i \otimes I_d\), with \(f^i \in C^0([0,T];\mathscr {S}(N))\) and \(g^i \in \mathscr {S}(N)\), where \(\mathscr {S}(N)\) denotes the space of symmetric \(N \times N\) real matrices. The \(B^i\)’s are independent \({\mathbb {R}}^d\)-valued Brownian motions, and the \(\alpha ^i\)’s are closed-loop controls in feedback form, that is, \(\alpha ^i = \alpha ^i(t,X_t)\).

It is known (see for instance [12, Section 2.1.4], or [22]) that the value functions of the players, \(u^i = u^i(t,x)\) with \(i \in [[N]]\), \(t \in [0,T]\) and \(x = (x^0,\dots ,x^{N-1}) \in ({\mathbb {R}}^d)^N\) solve the so-called Nash system of Hamilton-Jacobi PDEs

$$\begin{aligned} {\left\{ \begin{array}{ll} -\partial _t u^i - \Delta u^i + \frac{1}{2} |D_i u^i |^2 + \sum _{j\ne i} D_j u^j D_j u^i = {\bar{F}}^i \\ u^i(T,\cdot ) = {\bar{G}}^i \end{array}\right. } \end{aligned}$$
(1.2)

where \({\bar{F}}^i = \frac{1}{2} \langle F^i\cdot ,\cdot \rangle \), \({\bar{G}}^i = \langle G^i \cdot ,\cdot \rangle \) and \(i \in [[N]]\). The equilibrium feedbacks are then given by \(\alpha ^i = -D_i u^i\). Since the dynamics of each player are linear, and the costs are quadratic in the state and control variables, it is well known that the previous system can be recast as a system of ODEs of Riccati type, by making the ansatz that the \(u^i\) are quadratic functions of the states. Here, we look for conditions ensuring the solvability of such a system, and we focus in particular on properties that are stable as the number of players goes to infinity, with the aim of addressing the limit problem with infinitely many players, whenever possible. The Laplacian appears in (1.2) due to the presence of the independent noises \(B^i\), but it will not actually play any relevant role in our analysis. We keep it since the purely deterministic system is not known to be well posed beyond the linear-quadratic setting.

The study of differential games with many players has enjoyed a rapid development in the last two decades, since the introduction of the theory of Mean Field Games (MFGs) independently by Lasry and Lions [31] and Huang, Caines and Malhamé [25]. MFG theory provides an effective paradigm for studying dynamic games with many players that are both symmetric (that is, indistinguishable) and negligible. In such framework, one has a limit model, involving the equilibrium between a typical player versus a mass of agents, which is decentralised and symmetric: in fact, it is realised by a feedback of the state of the player only, it is identical for all the players, and it can be computed just by observing the population of players at the initial time. We refer to the book [12] and the lecture notes [11] for a recent account of the theory of MFGs.

If the MFG assumption is not fulfilled, that is, \(F^i\) and \(G^i\) do not depend just on the empirical measure of the other players, one enters the broader framework of network games. In the last few years there has been increasing interest in understanding the large population limits of equilibria among players whose interactions are described by graphs. Roughly speaking, when the number of interactions is “large”, the MFG description, or a suitable generalisation based on the notion of graphon, effectively characterizes large population limits: see for instance [1, 5, 8, 30] and references therein. See also [16] for the analysis of a model on Erdős-Rényi graphs.

On the other hand, when the underlying network structure is sparse, very few results are available in the literature. In the dense regime, one expects all the players to have an individually negligible influence on a given player, because their running costs involve cumulative functions of many variables, while in sparse regimes, most of the players should have a small impact just because they are “far” with respect to the graph distance. This mechanism of independence between players that are far apart in the graph was first observed in [29], by means of correlation estimates.

The framework addressed in [29] is somewhat similar to ours. The linear-quadratic setting is considered, and under some symmetry assumptions on the underlying graph, Nash equilibria are computed explicitly (exploiting the fact that the running costs \(\overline{F}^i\) are identically zero, and that the \({\overline{G}}^i\) have a specific structure). Then, probabilistic information on the covariance of any two players’ equilibrium state processes is derived. A main goal of our work is to derive analogous information in more analytic terms; that is, we wish to quantify the influence of the j-th player on the i-th one by estimating \(D_j u^i\), with the perspective of developing some ideas that could be applied beyond the linear-quadratic framework.

We now describe our results in more detail. The first part of the paper is devoted to the analysis of a sort of sparse regime with a special structure, namely shift-invariance, where basically \(f^i\) and \(g^i\) coincide with \(f^{i-1}\) and \(g^{i-1}\), respectively, after the permutation of variables \(x^i \mapsto x^{i-1}\). Most importantly, we assume players to have nearsighted interactions, which means

$$\begin{aligned} |f^i_{hk}|, |g^i_{hk}| \lesssim \beta _{h-i}\beta _{k-i} \qquad \forall \, {h,k}, \end{aligned}$$

where \((\beta _k)_k\) is a suitable sequence in \(\ell ^1(\mathbb {Z})\). Since \(\beta _{k-i}\) decays as \(|k-i|\rightarrow \infty \), this means that \(f^i_{hk}\), which quantifies the influence of the h-th and k-th players on the i-th one, decreases as \(|h-i|\) and \(|k-i|\) increase. A prototypical example is given by the bi-directed chain, where \(f^{i}_{hk}\) is a sparse matrix with zero entries except for \(|h-i| \le 1\) and \(|k-i| \le 1\), so that the i-th player's cost depends only on the \((i+1)\)-th and the \((i-1)\)-th states. See also [20] for results on models that build upon a similar structure.
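For illustration, the following numerical sketch (not part of the formal development; the majorant \(\beta_k \sim 1/(1+k^2)\) and the chain weights are illustrative choices, not the sequences constructed later in the paper) checks the nearsighted bound on the bi-directed chain, where each player is penalised for deviating from the average of its two neighbours.

```python
import numpy as np

# Hypothetical bi-directed chain on Z/NZ: f^i_{hk} = w_{h-i} w_{k-i} with
# w = (1, -1/2, 0, ..., 0, -1/2), so f^i vanishes unless |h-i|, |k-i| <= 1.
N = 20
w = np.zeros(N)
w[0], w[1], w[-1] = 1.0, -0.5, -0.5

# Illustrative decaying majorant beta_k ~ 1/(1 + k^2), measured in cyclic distance.
def beta(k):
    return 1.0 / (1.0 + min(k % N, N - k % N) ** 2)

C = 4.0  # enough, since |w_a| <= 1 while beta at distance <= 1 is >= 1/2
for i in range(N):
    for h in range(N):
        for k in range(N):
            fi_hk = w[(h - i) % N] * w[(k - i) % N]
            assert abs(fi_hk) <= C * beta(h - i) * beta(k - i)
print("nearsighted bound |f^i_hk| <= C beta_{h-i} beta_{k-i} verified")
```

The same check applies verbatim to any cost supported in a fixed neighbourhood of the diagonal.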

The first statement we get, which sums up Theorems 2.13 and 2.14, is the following.

Theorem

There exists \(T^* > 0\) such that if \(T \le T^*\), then for any \(N \in \mathbb {N}\cup \{\infty \}\) there exists a smooth solution to system (1.2) such that, for any \(i,h,k\) and \(m \in \mathbb {N}\),

$$\begin{aligned} \bigg \Vert \Big (\frac{\textrm{d}}{\textrm{d}t}\Big )^m D_{hk} u^i \bigg \Vert _{\infty } \lesssim \beta _{h-i} \beta _{k-i}. \end{aligned}$$

Note in particular that we obtain existence of a smooth classical solution to the infinite-dimensional Nash system (1.2) with \(N=\infty \), \(i \in \mathbb {Z}\). We stress that the key issue in the analysis of our problem is that, although the cost functions \(f^i, g^i\) may depend on very few variables, the system itself is strongly coupled by the transport terms \(\sum _{j\ne i} D_j u^j D_j u^i\), which in fact become series when \(N = \infty \). The closed-loop structure of equilibria forces the equilibrium feedbacks \(\alpha ^j = -D_j u^j\) to be strongly “nonlocal”, that is, they depend on the full state vector x. Hence, decay estimates on \(D_j u^i\) which are stable as N increases are crucial to pass to the limit \(N \rightarrow \infty \). These are obtained here by a careful choice of \(\beta \), which we require to be self-controlled for the discrete convolution, or briefly c-self-controlled (where the “c” stands for “convolution”), a notion that fits well with the structure of cyclic discrete convolution appearing in our problem. From the game perspective, the equilibrium feedbacks \(\alpha ^j\) turn out to be almost “local”, in the sense that the influence of “far” players is negligible in the sense explained above. In other words, despite the strong coupling given by the full information structure of closed-loop equilibria, the property of “unimportance of distant players” is observed (see Remark 2.15 for further discussions).

While the shift-invariance condition can actually be dropped (see Sect. 2.4), one of the main restrictions of the previous result is that it guarantees short-time existence only. Note that, even with a finite number of players N, Riccati systems may fail to have long-time solutions in general; therefore we look for further conditions on \(f^i\) and \(g^i\) such that existence holds for any time horizon T, independently of N. To achieve this goal, we strengthen the previous assumption of nearsighted interactions, which now become of strongly gathering type, and require further directionality conditions. Section 2.5 contains the precise definition of this notion, together with examples. The main existence result is stated in Theorem 2.26. Here, we exploit the possibility of relating a solution to system (1.2) to a flow of generating functions, which works well when \(N=\infty \) but has no clear adaptation to the finite N setting.

Within the special setting of systems with cost of strongly gathering type and directionality, we are able to push further our analysis and study the long time limit \(T \rightarrow \infty \), that is, we show that the value functions \(u^i\) converge to solutions of the ergodic problem as the time horizon goes to infinity. To complete this program, estimates on solutions of the Riccati system are obtained at the level of the game with \(N=\infty \) players, uniformly with respect to T. The main result of this part of the work is stated by Theorem 2.42.

We conclude this second part by showing that equilibria of the infinite-dimensional game provide \(\epsilon \)-Nash closed-loop equilibria of suitable N player games, see Sect. 2.6.

In the last part of the work, we come back to the dense regime, and work without any symmetry assumption (like shift-invariance). Our goal is again to deduce some estimates on the Riccati system that do not depend on the number of players, under a mean-field-like condition

$$\begin{aligned} \sup _i \Big ( N \sum _{\begin{array}{c} h,k\\ k \ne i \end{array}} | f^i_{hk} |^2 + N \sum _{\begin{array}{c} k, k \ne i \end{array}} |f^k_{ki} |^2 + |f^i_{ii} |^2 \Big ) \lesssim 1, \end{aligned}$$

and the same for \(g^i\); the previous assumption imposes a smallness condition on the coefficients which is typical of the Mean-Field setting, that is, when costs depend on the empirical measure of the players, though here we do not require any symmetry; see Remark 3.2 for further comments on the connection between this condition and the Mean-Field setting. It is crucial to observe that in this dense setting there is no hope to have a limit system of the same form. Indeed, in the sparse case we had an \(\ell ^1\) control of the coefficients independent of N (given by \(\beta \)), so that the series \(\sum _{j\ne i} D_j u^j D_j u^i\) carries over to the limit by dominated convergence, while now dominated convergence cannot be performed. In fact, at least in the symmetric case, where the costs are of the form \(F^i(x) = V(x^i, \frac{1}{N-1}\sum _{j\ne i}\delta _{x^j})\), the correct limiting object is the so-called Master Equation, which has been the object of the seminal work [10]. The convergence of \(u^i\) to a limit function defined on the space of probability measures has been shown under monotonicity assumptions on \(f^i\), and its regularity is obtained as a consequence of the stability properties of the MFG system, which characterizes the limit model.
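As a sanity check (a numerical sketch, not part of the proofs; \(d=1\) and the prototypical symmetric cost \(f^i = v^i (v^i)^{\textsf{T}}\) encoding \(|x^i - \frac{1}{N-1}\sum_{j\ne i} x^j|^2\) are the only inputs), one can verify that the mean-field-like condition above holds uniformly in N:

```python
import numpy as np

def mf_condition_value(N):
    # sup over i of N * sum_{h, k != i} |f^i_{hk}|^2 + N * sum_{k != i} |f^k_{ki}|^2 + |f^i_{ii}|^2
    worst = 0.0
    for i in range(N):
        v = np.full(N, -1.0 / (N - 1))
        v[i] = 1.0
        fi = np.outer(v, v)                       # f^i = v^i (v^i)^T
        s1 = (fi ** 2).sum() - (fi[:, i] ** 2).sum()   # sum over h and k != i
        s2 = (N - 1) * (1.0 / (N - 1)) ** 2            # f^k_{ki} = -1/(N-1) for k != i
        worst = max(worst, N * s1 + N * s2 + fi[i, i] ** 2)
    return worst

for N in (10, 50, 200):
    val = mf_condition_value(N)
    assert val < 4.0          # bounded uniformly in N, as the condition requires
    print(N, round(val, 3))
```

The value stabilises near 3 as N grows, consistent with the smallness condition being of mean-field type.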

Here, we wish to deduce the estimates on \(u^i\) that allow for a passage to the limit \(N\rightarrow \infty \) directly on the Riccati system (that is, on the Nash system). For a short time horizon, one can basically reproduce the same approach as in the previous section (cf. Theorem 2.17). To achieve stability for arbitrary values of T, we impose a bound from below on the matrices \((f^i_{ij})_{ij}\) and \((g^i_{ij})_{ij}\):

$$\begin{aligned} (f^i_{ij})_{ij},\ (g^i_{ij})_{ij} \ge -\kappa I, \quad \kappa > 0. \end{aligned}$$
(1.3)

If \(\kappa \) is small enough, then the existence of a solution to the Nash system is guaranteed for large T, by a mechanism that produces an analogous bound from below on the matrix \((D_{ij} u^i)_{ij}\). Recalling that the equilibrium feedbacks are given by \(\alpha ^j = - D_j u^j\), this is equivalent to the one-sided Lipschitz condition

$$\begin{aligned} \sum _j \langle \alpha ^j(t,x) - \alpha ^j (t,y), x^j-y^j \rangle \le \delta |x-y|^2. \end{aligned}$$
(1.4)

Our main estimate is contained in Theorem 3.1. Interestingly, the monotonicity (or mild non-monotonicity) condition (1.3) generates the nice structural property (1.4) of the equilibrium drift vector \((-D_ju^j)_j\); it is worth observing that, when \(\kappa = 0\) and in the symmetric MFG case \(f^i(x) = V(x^i, (N-1)^{-1} \sum _{j \ne i} \delta _{x^j})\), (1.3) is “almost” equivalent (that is, up to a correction term of order 1/N) to the displacement monotonicity condition used in [23] to get well-posedness of the Master Equation.
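The implication from the matrix lower bound to (1.4) can be checked numerically. In the sketch below (hypothetical data, \(d = 1\)), the matrix \(M\) stands in for \((D_{ij} u^i)_{ij}\): its symmetric part is built with eigenvalues bounded below by \(-\kappa\), an arbitrary antisymmetric part is added, and the one-sided Lipschitz inequality for the linear feedbacks \(\alpha^j(x) = -(Mx)_j\) is then verified with \(\delta = \kappa\).

```python
import numpy as np

rng = np.random.default_rng(0)
N, kappa = 12, 0.3

# symmetric part with spectrum in [-kappa, 1], plus an arbitrary antisymmetric part
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
sym = Q @ np.diag(rng.uniform(-kappa, 1.0, N)) @ Q.T
A = rng.standard_normal((N, N))
M = sym + (A - A.T) / 2

def alpha(x):
    # linear equilibrium-type feedback vector (alpha^j)_j = -(M x)_j
    return -M @ x

for _ in range(100):
    x, y = rng.standard_normal(N), rng.standard_normal(N)
    lhs = np.dot(alpha(x) - alpha(y), x - y)       # sum_j <alpha^j(x)-alpha^j(y), x^j-y^j>
    assert lhs <= kappa * np.dot(x - y, x - y) + 1e-10
print("one-sided Lipschitz bound (1.4) verified with delta = kappa")
```

The antisymmetric part of \(M\) plays no role in the quadratic form, which is why only the lower bound on the symmetric part matters.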

We believe that the estimates obtained here, combined with propagation of chaos arguments, allow for a convergence result of (closed-loop) equilibria in the \(N\rightarrow \infty \) limit. We recall that the lack of such estimates has so far motivated the investigation of the regularity of solutions to the Master Equation, which allows one to bypass the difficult analysis of the Nash system; hence the estimates can be of interest in themselves. Analogous estimates for solutions to the Nash system beyond the linear-quadratic setting, and their application to the convergence problem, will be discussed in the forthcoming work [14].

We now discuss some further related references. In [3, 19], the convergence problem of open-loop equilibria in symmetric N-person linear-quadratic games is addressed, while [17] deals with the selection problem for models without uniqueness. See also [4] for a convergence result in a finite state model. Linear-quadratic MFGs are studied in several works, see for instance [6, 7, 24, 32, 33] and references therein. Further contributions on the convergence problem are contained in [9, 18, 21, 27, 28], while recent results on the structure of many players cooperative equilibria can be found in [13, 15, 26].

As a final comment, rather than pursuing the full generality of the linear-quadratic differential game framework, we have carefully analysed here a few case studies, with the aim of highlighting some properties that could be investigated beyond the linear-quadratic setting, and directly on the Nash system of PDEs. With this purpose, we tried to avoid as much as possible the use of explicit formulas. Even though these were quite crucial to the study of the long time behaviour in the sparse regime, there are a few takeaways here that can be starting points for further investigations. On the one hand, we are addressing the short-time existence for the infinite-dimensional Nash system with non-quadratic cost functions, by estimating the derivatives of \(u^i\) by means of the coefficients \(\beta \) constructed here. On the other hand, analogous bounds on \(u^i\) in the mean-field-like setting, again obtained by working directly on the Nash system and without making use of limit objects such as the Master Equation, will appear in [14].

2 Shift-Invariant Games

2.1 General Setting

For \(a \in \mathbb {Z}\) and \(b \in \mathbb {Z}_+\), we will use the notation \([a]_b\) to identify the unique natural number \(r \in [[b]]\) such that \(r = a \ \textrm{mod}\,b\). The entries of an \(N \times N\) matrix will be indexed over [[N]]. We will denote by \(L^{(N)}\) the lower shift matrix \(\textrm{mod}\,N\), defined by

$$\begin{aligned} L^{(N)}_{hk} = \delta _{h, [k+1]_N} \quad \forall \, h,k \in [[N]], \end{aligned}$$

where \(\delta \) is the Kronecker symbol.

Our main assumption in this section, on the structure of the running cost, is that \(f^i\) and \(g^i\) are shift-invariant, in the sense of the following definition.

Definition 2.1

A collection of matrices \((M^i)_{i\in [[N]]} \subset \mathscr {S}(N)\) is shift-invariant if

$$\begin{aligned} M^{[i+1]_N} = L^{(N)} M^i {L^{(N)}}{}^\textsf{T}\quad \forall \, i \in [[N]]. \end{aligned}$$
(SI)

Note that this is equivalent to requiring that for all \(i \in [[N]]\) and \(x \in {\mathbb {R}}^N\),

$$\begin{aligned}{} & {} \langle M^{[i+1]_N} (x^0, x^1, \ldots , x^{N-1}),(x^0, x^1, \ldots , x^{N-1})\rangle \\{} & {} \qquad = \langle M^{i} (x^1, \ldots , x^{N-1}, x^0),(x^1, \ldots , x^{N-1}, x^0)\rangle . \end{aligned}$$
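Numerically, (SI) amounts to a relabeling of the players: conjugation by the lower shift matrix gives \(M^{[i+1]_N}_{hk} = M^i_{[h-1]_N,[k-1]_N}\), so the quadratic form of \(M^{[i+1]_N}\) at x equals that of \(M^i\) at the rotated vector. A small sketch (illustrative random data):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5

L = np.zeros((N, N))
for k in range(N):
    L[(k + 1) % N, k] = 1.0               # L_{hk} = delta_{h, [k+1]_N}

M0 = rng.standard_normal((N, N))
M0 = (M0 + M0.T) / 2                       # M^0 in S(N)
M1 = L @ M0 @ L.T                          # M^1 defined by (SI)

# entrywise relabeling: M^1_{hk} = M^0_{[h-1], [k-1]}
for h in range(N):
    for k in range(N):
        assert abs(M1[h, k] - M0[(h - 1) % N, (k - 1) % N]) < 1e-12

# quadratic forms: <M^1 x, x> = <M^0 (x^1, ..., x^{N-1}, x^0), same>
x = rng.standard_normal(N)
rot = np.roll(x, -1)
assert abs(x @ M1 @ x - rot @ M0 @ rot) < 1e-10
print("(SI) = relabeling of the players verified")
```

Since L is a permutation matrix, conjugation by L preserves symmetry, so the collection indeed stays in \(\mathscr{S}(N)\).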

Example 2.2

A very basic shift-invariant case is that with \(g^0 = 0\) and

$$\begin{aligned} f^0 = w \otimes w, \end{aligned}$$
(2.1)

with

$$\begin{aligned} w = \frac{1}{\ell -1}\,(\ell -1,\underbrace{-1, \dots , -1}_{\ell -1 \ \text {times}}, \underbrace{0, \dots , 0}_{N-\ell \ \text {times}}), \quad \ell \le N; \end{aligned}$$

that is, by (SI),

$$\begin{aligned} \langle F^iX_t,X_t\rangle = \bigg |X^i_t - \frac{1}{\ell -1} \sum _{j=[i+1]_N}^{[i+\ell -1]_N} X^j_t \bigg |^2. \end{aligned}$$
(2.2)

When \(\ell = N\), the cost is actually of Mean-Field type, that is, it penalizes the deviation of the private state \(X^i\) from the mean of the vector X.

Here \(f^0\) induces an underlying directed circulant graph structure \(G_\ell \) on the problem; indeed, by assumption (SI)

$$\begin{aligned} A = \left( f^i_{ij}\right) _{i,j \in \left[ \left[ N\right] \right] } - \textrm{diag}\,\left( f^i_{ii}\right) _{i\in \left[ \left[ N\right] \right] } = \left( f^i_{ij}\right) _{i,j \in \left[ \left[ N\right] \right] } - I_N \end{aligned}$$

can be considered as the asymmetric and circulant adjacency matrix of \(G_\ell \), so that (2.2) reads

$$\begin{aligned} \langle F^iX_t,X_t\rangle = \bigg |X^i_t - \frac{1}{\#\{ j: (i,j) \in G_\ell \}} \sum _{j:\ (i,j) \in G_\ell } X^j_t \bigg |^2. \end{aligned}$$
(2.3)

The same is true in the more general case when

$$\begin{aligned} w = (1,-w_1,\dots ,-w_{\ell -1}), \quad \text {with}\ w_j \ge 0 \ \, \forall \, j, \ \ \sum _{j=1}^{\ell -1} w_j = 1; \end{aligned}$$
(2.4)

here the \(w_j\)’s are regarded as normalized weights, and we have

$$\begin{aligned} \langle F^iX_t,X_t\rangle = \bigg |X^i_t - \sum _{j:\ (i,j) \in G_\ell } w_j X^j_t \bigg |^2, \end{aligned}$$

which generalizes (2.3).
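The identity between \(f^0 = w \otimes w\) with the normalized weights (2.4) and the weighted deviation cost can be checked directly (a sketch with \(d = 1\) and arbitrary weights; the entries of \(f^i\) are obtained from \(f^0\) by the relabeling encoded in (SI)):

```python
import numpy as np

rng = np.random.default_rng(2)
N, ell = 9, 4

wts = rng.uniform(0.1, 1.0, ell - 1)
wts /= wts.sum()                            # w_1, ..., w_{ell-1} >= 0, summing to 1
w = np.zeros(N)
w[0] = 1.0
w[1:ell] = -wts                             # w = (1, -w_1, ..., -w_{ell-1}, 0, ..., 0)

f0 = np.outer(w, w)
x = rng.standard_normal(N)
for i in range(N):
    # f^i_{hk} = f^0_{[h-i]_N, [k-i]_N} by shift-invariance
    fi = np.array([[f0[(h - i) % N, (k - i) % N] for k in range(N)] for h in range(N)])
    quad = x @ fi @ x
    target = (x[i] - sum(wts[j - 1] * x[(i + j) % N] for j in range(1, ell))) ** 2
    assert abs(quad - target) < 1e-10
print("weighted nearsighted cost recovered for every player")
```

Setting all weights equal to \(1/(\ell-1)\) recovers the cost (2.2).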

Note finally that a sort of “directionality” is encoded in the above examples, that is, each player i is affected by the “following” players \(j > i\) in the chain. This is not yet important at the current stage; indeed, we may allow for

$$\begin{aligned} w = \frac{1}{\ell -1}\,(\ell -1,\underbrace{-1, \dots , -1}_{\ell -1 \ \text {times}}, \underbrace{0, \dots , 0}_{N-\ell -m \ \text {times}}, \underbrace{-1, \dots , -1}_{m \ \text {times}}), \quad \ell +m \le N; \end{aligned}$$
(2.5)

It is only from Sect. 2.5 that \(m = 0\) will be required.

2.2 The Evolutive Infinite-Dimensional Nash System

The main object of our study is the Nash system

$$\begin{aligned} {\left\{ \begin{array}{ll} -\partial _t u^i - \Delta u^i + \frac{1}{2} |D_i u^i |^2 + \sum _{j\ne i} D_j u^j D_j u^i = {\bar{F}}^i &{} \text { on }(0,T) \times ({\mathbb {R}}^d)^N\\ u^i(T,\cdot ) = {\bar{G}}^i \end{array}\right. } \end{aligned}$$
(2.6)

When \(N = \infty \), we need to be careful about the notion of solution. In this case, \(x \in \mathcal {X} = \ell ^\infty (\mathbb {Z};{\mathbb {R}}^d)\), and we understand that (2.6) admits a classical solution in the following sense.

Definition 2.3

A sequence of \({\mathbb {R}}\)-valued functions \((u^i)_{i\in \mathbb {Z}}\) defined on \([0,T] \times \mathcal {X}\) is a classical solution to the Nash system (2.6) on \([0,T] \times \mathcal {X}\) if the following hold:

  1. (S1)

    each \(u^i\) is of class \(C^1\) with respect to \(t \in (0,T)\) and \(C^2\) with respect to \(x \in \mathcal {X}\), in the Fréchet sense;

  2. (S2)

    for each \(i \in \mathbb {Z}\), the Laplacian series \(\Delta u^i = \sum _j \Delta _j u^i\) and the series \(\sum _{j\ne i} D_j u^j D_j u^i\) converge uniformly on all bounded subsets of \([0,T] \times \mathcal {X}\);

  3. (S3)

    system (2.6) is satisfied pointwise for all \((t,x) \in (0,T) \times \mathcal {X}\);

  4. (S4)

    \(u^i(T,\cdot ) = {\bar{G}}^i\) for all \(i \in \mathbb {Z}\).

Remark 2.4

Here we made a choice of Banach space, namely \(\mathcal {X} = \ell ^\infty (\mathbb {Z}; {\mathbb {R}}^d)\). This seems a natural choice for a twofold reason. First, it is the largest \(\ell ^p\) space contained in the limit set \(({\mathbb {R}}^d)^\infty \cong ({\mathbb {R}}^d)^{\mathbb {Z}}\), so it seems to provide a quite general setting for a Nash system for infinitely many players. Second, in a linear-quadratic setting as the one we are considering, it is fairly clear that (see the upcoming discussion about the standard ansatz \(u^i(t,x) = \frac{1}{2} \langle (c^i(t) \otimes I_d)x,x\rangle _{{\mathbb {R}}^{Nd}} + \eta ^i(t)\)) the uniform convergence of \(\Delta u^i\) when \(N = \infty \) is strictly related to \((c^i_{jj})_{j \in \mathbb {Z}}\) being summable; therefore the variable x is expected to live in the dual space of \(\ell ^1\), that is \(\ell ^\infty \). This is even more evident if one considers the case (and the results we present can indeed be adapted to it) of a more general diffusion such as the one arising from common noise, \(\sum _j \Delta _j u^i + \beta \sum _{jk} {{\,\textrm{tr}\,}}(D^2_{jk} u^i)\).

As it is customary in linear-quadratic N-player games, we look for solutions of the form

$$\begin{aligned} u^i(t,x) = \frac{1}{2} \langle (c^i(t) \otimes I_d)x,x\rangle _{{\mathbb {R}}^{Nd}} + \eta ^i(t), \end{aligned}$$
(2.7)

for some functions \(c^i :[0,T] \rightarrow \mathscr {S}(N)\) such that \(c^i(T) = g^i\) and \(\eta ^i :[0,T] \rightarrow {\mathbb {R}}\) which vanish at T. We have \(D_j u^i(t,x) = {e^j}{}^\textsf{T}(c^i(t) \otimes I_d) x\), where \(e^j = e_j \otimes I_d\), \(\{e_j\}_{j=0}^{N-1}\) being the canonical basis of \({\mathbb {R}}^N\). Hence, from (2.6) we obtain

$$\begin{aligned}{} & {} x^\textsf{T}\bigg ( {-\frac{1}{2}}\, \dot{c}^i \otimes I_d \!+\! \frac{1}{2} (c^i \otimes I_d)e^i{e^i}{}^\textsf{T}(c^i \otimes I_d) \!+\! \sum _{j\in [[N]]\setminus \{i\}} (c^j \otimes I_d)e^j{e^j}{}^\textsf{T}(c^i \otimes I_d) \!-\! \frac{1}{2} F^i \bigg ) x\\{} & {} \qquad = {{\,\textrm{tr}\,}}(c^i \otimes I_d) + {\dot{\eta }}^i; \end{aligned}$$

that is,

$$\begin{aligned} x^\textsf{T}\bigg (\bigg ( {-\frac{1}{2} \dot{c}^i} + \frac{1}{2} c^i e_i{e_i}^\textsf{T}c^i + \sum _{j\in [[N]]\setminus \{i\}} c^j e_j{e_j}^\textsf{T}c^i - \frac{1}{2} f^i \bigg ) \otimes I_d \bigg ) x = d {{\,\textrm{tr}\,}}c^i + {\dot{\eta }}^i. \end{aligned}$$

As this must hold for all \(x \in {\mathbb {R}}^{Nd}\), it follows that

$$\begin{aligned} {\dot{\eta }}^i = - d\sum _{j \in [[N]]} c^i_{jj}, \end{aligned}$$

and

$$\begin{aligned} {- \dot{c}^i} + c^i e_i{e_i}^\textsf{T}c^i + \sum _{j\in [[N]]\setminus \{i\}} \big ( c^j e_j{e_j}^\textsf{T}c^i + c^i e_j{e_j}^\textsf{T}c^j \big ) = f^i, \end{aligned}$$
(2.8)

as \(x^\textsf{T}A x = 0\) for all x if and only if \(A + A^\textsf{T}= 0\) for any square matrix A.

Now, given the shift invariance of \(f^i\) and \(g^i\), one expects a solution to (2.8) to enjoy the same property, hence we look for a solution such that

$$\begin{aligned} c^i = \big (L^{(N)}\big )^i\, c\, \big ({L^{(N)}}{}^\textsf{T}\big )^i, \end{aligned}$$
(2.9)

for some \(c :[0,T] \rightarrow \mathscr {S}(N)\). Clearly, this makes \(\eta ^i\) independent of i, as we will have \(\eta ^i = \eta :=d\int _\cdot ^T {{\,\textrm{tr}\,}}c\). By plugging (2.9) into (2.8) and letting \(i=0\) one obtains the following system of ODEs for the entries of c:

$$\begin{aligned} -\dot{c}_{hk} - c_{0h} c_{0k} + \sum _{j \in [[N]]} \big ( c_{0,[h-j]_N} c_{jk} + c_{0,[k-j]_N} c_{jh} \big ) = f_{hk}, \quad c_{hk}(T) = g_{hk},\nonumber \\ \end{aligned}$$
(2.10)

where \(f :=f^0\) and \(g :=g^0\).
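That the symmetry (2.9) is indeed inherited by the solution can be observed numerically: the sketch below (illustrative data, \(d = 1\), small N, explicit Euler marching backward from the terminal condition) integrates the full system (2.8) for shift-invariant \(f^i, g^i\) and checks that \(c^i(t) = (L^{(N)})^i c^0(t) ({L^{(N)}}^{\textsf{T}})^i\) along the flow.

```python
import numpy as np

N, T, steps = 4, 0.3, 1500
dt = T / steps

L = np.zeros((N, N))
for k in range(N):
    L[(k + 1) % N, k] = 1.0

rng = np.random.default_rng(3)
base = rng.uniform(0.0, 0.3, (N, N)); base = (base + base.T) / 2
f = [np.linalg.matrix_power(L, i) @ base @ np.linalg.matrix_power(L.T, i) for i in range(N)]
g = [0.1 * fi for fi in f]                  # shift-invariant terminal data

E = [np.outer(np.eye(N)[j], np.eye(N)[j]) for j in range(N)]

def rhs(c):
    # dc^i/dt from (2.8): c^i E_i c^i + sum_{j != i} (c^j E_j c^i + c^i E_j c^j) - f^i
    out = []
    for i in range(N):
        q = c[i] @ E[i] @ c[i]
        for j in range(N):
            if j != i:
                q = q + c[j] @ E[j] @ c[i] + c[i] @ E[j] @ c[j]
        out.append(q - f[i])
    return out

c = [gi.copy() for gi in g]
for _ in range(steps):                      # march backward from t = T to t = 0
    dc = rhs(c)
    c = [c[i] - dt * dc[i] for i in range(N)]

dev = max(np.abs(c[i] - np.linalg.matrix_power(L, i) @ c[0]
                 @ np.linalg.matrix_power(L.T, i)).max() for i in range(N))
assert dev < 1e-8                           # (2.9) is preserved along the flow
print("shift-invariance propagated; deviation", dev)
```

The Euler scheme preserves the shift symmetry exactly (the dynamics is equivariant under conjugation by L), so the deviation observed is of the order of floating-point round-off.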

As we are interested in the limit problem of infinitely many players, which we expect to be indexed by \(\mathbb {Z}\) since we have an undirected structure, it is convenient to shift the indices in such a way that \(i=0\) “stays in the middle”; that is, we let \(i=-N',\dots ,N''\), instead of \(i=0,\dots ,N\), where, for example, \(N'=N'' = (N-1)/2\) if N is odd, and \(N' = N/2 = N''+1\) if N is even. Therefore, we rewrite system (2.10) as

$$\begin{aligned}{} & {} -\dot{c}_{hk} + c_{0h}c_{0k} + \sum _{j\ne 0} \left( c_{0,h-j}c_{jk} + c_{hj} c_{0,k-j} \right) = f_{hk}, \nonumber \\{} & {} c_{hk}(T) = g_{hk}, \qquad h,k=-N',\dots ,N'', \end{aligned}$$
(2.11)

where all indices are understood \(\textrm{mod} \ N\) and between \(-N'\) and \(N''\). Now it is immediate to identify a convenient limit system by letting \(N \rightarrow \infty \).

2.3 Self-Controlled Sequences for the Discrete Convolution

We can recognise a structure of a cyclic discrete convolution in the sums in (2.10); that is,

$$\begin{aligned} \sum _{j=0}^{N-1} c_{0,[h-j]_N}c_{jk} = (c_{\cdot 0} \star _N c_{\cdot k})_h, \qquad \sum _{j=0}^{N-1} c_{hj} c_{0,[k-j]_N} = (c_{h\cdot } \star _N c_{0\cdot })_k. \end{aligned}$$
(2.12)

We wish to exploit this fact in order to prove existence of the solution to system (2.11) for small T, for any (possibly infinite) N.
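The identification (2.12) can be checked on random data (a sketch with indices \(0,\dots,N-1\) and a symmetric matrix c, as in (2.10)):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 7
c = rng.standard_normal((N, N)); c = (c + c.T) / 2

def cconv(a, b):
    # cyclic discrete convolution: (a *_N b)_h = sum_j a_{[h-j]_N} b_j
    return np.array([sum(a[(h - j) % N] * b[j] for j in range(N)) for h in range(N)])

h, k = 2, 5
lhs1 = sum(c[0, (h - j) % N] * c[j, k] for j in range(N))
assert abs(lhs1 - cconv(c[:, 0], c[:, k])[h]) < 1e-12    # (c_{.0} * c_{.k})_h

lhs2 = sum(c[h, j] * c[0, (k - j) % N] for j in range(N))
assert abs(lhs2 - cconv(c[h, :], c[0, :])[k]) < 1e-12    # (c_{h.} * c_{0.})_k
print("cyclic-convolution identification (2.12) verified")
```

The symmetry of c is used in the first identity (to exchange \(c_{0,[h-j]}\) and \(c_{[h-j],0}\)), while the second follows from a reindexing of the sum.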

Our main tool will be the following.

Definition 2.5

A nonnegative sequence \(\beta \in \ell ^2(\mathbb {Z})\) is said to be convolution-self-controlled, or c-self-controlled, if \(\beta \star \beta \lesssim \beta \); that is,

$$\begin{aligned} \sum _{j\in \mathbb {Z}} \beta _j \beta _{i-j} \le C \beta _i \quad \forall \, i \in \mathbb {Z}, \end{aligned}$$
(2.13)

for some constant \(C>0\) independent of i.

We will mainly consider positive c-self-controlled sequences in \(\ell ^1(\mathbb {Z})\) that are even and “weakly decreasing”, in the sense that \(\beta _i = \beta _{-i}\) and there exists \(K>0\) such that \(\beta _j \le K\beta _i\), for all \(i,j \in \mathbb {N}\) with \(j \ge i\). Such sequences, which we will refer to as regular, indeed exist, as shown by the following result.

Lemma 2.6

For any \(\epsilon > 0\), there exists a positive sequence \(\beta \in \ell ^1(\mathbb {Z})\), with \(\Vert \beta \Vert _2 < \epsilon \) and such that \(\beta \star \beta \le 4\beta \). In particular, one can choose \(\beta \) of the form

$$\begin{aligned} \beta _i = \beta _i(\alpha ) :=\frac{2\alpha }{\alpha ^2 + i^2} \left( 1 - (-1)^i e^{-\alpha \pi } \right) , \quad i \in \mathbb {Z}, \end{aligned}$$

for some \(\alpha = \alpha (\epsilon ) > 0\), so that \(\beta \) is even and \(\beta _j \le \coth (\frac{\alpha \pi }{2}) \beta _i\) for all \(i,j \in \mathbb {N}\) with \(j \ge i\).

Proof

It is well-known that the Fourier coefficients of a function \(f \in L^2((-\pi ,\pi ))\), given by

$$\begin{aligned} {\hat{f}}_j :=\frac{1}{2\pi } \int _{-\pi }^{\pi } f(x) e^{-ijx} \textrm{d}x, \quad j \in \mathbb {Z}, \end{aligned}$$

satisfy the following property: if \(f,g \in L^2((-\pi ,\pi ))\), then \(\widehat{fg}_j = ({\hat{f}} \star {\hat{g}})_j\) for all \(j \in \mathbb {Z}\). Now let \(f_\alpha :=e^{-\alpha |\cdot |}\), for \(\alpha > 0\). It is elementary to compute, for each \(j \in \mathbb {Z}\),

$$\begin{aligned} \widehat{f_\alpha }_j = \frac{2\alpha }{\alpha ^2 + j^2} \left( 1 - (-1)^j e^{-\alpha \pi } \right) > 0, \end{aligned}$$
(2.14)

whence \((\widehat{f_\alpha } \star \widehat{f_\alpha })_j = \widehat{f_\alpha ^2}_{\!j} = \widehat{f_{2\alpha }}_j \le 4 \widehat{f_\alpha }_j\), the last inequality being straightforward to check using the explicit expression (2.14). Also, by Parseval’s identity \(\Vert \widehat{f_\alpha }\Vert _{\ell ^2(\mathbb {Z})}^2 = \Vert f_\alpha \Vert _{L^2(-\pi ,\pi )}^2 = \alpha ^{-1}(1-e^{-2\alpha \pi }) \rightarrow 0\) as \(\alpha \rightarrow +\infty \), so that \(\beta = \widehat{f_\alpha }\) has the desired properties for any choice of \(\alpha = \alpha (\epsilon )\) sufficiently large. Finally, note that \(\beta _j \le \coth (\frac{\alpha \pi }{2}) \beta _i\) for all \(i,j \in \mathbb {N}\) with \(j \ge i\). \(\square \)

Remark 2.7

(Variations on a c-self-controlled sequence) Clearly, any positive multiple of a c-self-controlled sequence remains c-self-controlled. This allows one to obtain c-self-controlled sequences of arbitrarily large \(\ell ^\infty \)-norm, although with a larger constant C in (2.13). On the other hand, one can also build c-self-controlled sequences which decay exponentially faster; indeed, if \(\beta \) is c-self-controlled, then for any \(\gamma > 0\) so is the sequence defined by setting \({\tilde{\beta }}_i :=\beta _i e^{-\gamma |i|}\), with the same implied constant C. This is easily proven as follows. Suppose that \(i\le 0\); then

$$\begin{aligned} \begin{aligned} ({\tilde{\beta }} \star {\tilde{\beta }})_i&= e^{\gamma i} \sum _{j> 0} \beta _j \beta _{i-j} e^{-2\gamma j} + e^{\gamma i} \sum _{i\le j \le 0} \beta _j \beta _{i-j} + e^{-\gamma i} \sum _{j< i} \beta _j \beta _{i-j} e^{2\gamma j} \\&\le e^{\gamma i} \bigg ( \sum _{j > 0} \beta _j \beta _{i-j} + \sum _{i\le j \le 0} \beta _j \beta _{i-j} \bigg ) + e^{-\gamma i} \sum _{j < i} \beta _j \beta _{i-j} e^{2\gamma i} \\&= e^{\gamma i} \sum _{j\in \mathbb {Z}} \beta _j \beta _{i-j} \\&\le C\beta _i e^{\gamma i}. \end{aligned}\end{aligned}$$

The case \(i\ge 0\) is analogous.

The next result will be useful to deal with convolutions of the form (2.12). It essentially states that c-self-controllability is preserved under suitable perturbations, which include all perturbations with compact support.

Lemma 2.8

Let \(\beta \) be c-self-controlled and let \(\theta = (\theta _{hk})_{h,k \in \mathbb {Z}}\) be a nonnegative sequence such that \(\theta \lesssim \beta \otimes \beta \), where \((\beta \otimes \beta )_{hk} :=\beta _h \beta _k\). Let d be the sequence given by \(d :=\beta \otimes \beta + \theta \). Then

$$\begin{aligned} (d_{\cdot 0} \star d_{\cdot k})_h \lesssim \beta _{h}\beta _{k} \quad \forall \, h,k \in \mathbb {Z}. \end{aligned}$$

Proof

It suffices to compute

$$\begin{aligned} (d_{\cdot 0} \star d_{\cdot k})_h = \beta _0(\beta \star \beta )_h \beta _k + (\theta _{\cdot 0} \star \beta )_h \beta _k + \beta _0 ( \beta \star \theta _{\cdot k})_h + ( \theta _{\cdot 0} \star \theta _{\cdot k} )_h \lesssim \beta _h\beta _k. \end{aligned}$$

\(\square \)

Remark 2.9

A straightforward implication of the above inequality is that \((d_{\cdot 0} \star d_{\cdot k})_h \lesssim d_{hk}\).

2.4 Short-Time Existence for Nearsighted Interactions

We are now ready for our first existence and uniqueness results (Theorems 2.13 and 2.14 below), which are a direct consequence of the following proposition.

By a game with nearsighted interactions we mean that \(|f| \vee |g| \le \theta \) pointwise (that is, index-wise) for some \(\theta \in \ell ^1(\mathbb {Z}^2)\) satisfying the hypotheses of Lemma 2.8; said differently, we mean that \(|f| \vee |g| \lesssim \beta \otimes \beta \) for some regular c-self-controlled \(\beta \).

Remark 2.10

Any compactly supported sequence \(h :\mathbb {Z}^2 \rightarrow [0,+\infty )\) is nearsighted. Indeed, given a positive regular c-self-controlled sequence \(\beta \), define \({\tilde{\beta }} :=(\max _{(i,j) \in \textrm{spt}\, h} \frac{h_{ij}}{\beta _i\beta _j})\beta \); then \(h \lesssim {\tilde{\beta }} \otimes {\tilde{\beta }}\).

Proposition 2.11

Let \(N \in \mathbb {N}\) and \(\beta \) be a regular c-self-controlled sequence. Let \(f \in C^0([0,T];\mathscr {S}(N))\) and \(g \in \mathscr {S}(N)\) satisfy \(|f| \vee |g| \lesssim \beta \otimes \beta \). Define \(\mathcal {C}_N :=C^0([0,T])^{(2N+1)^2}\) and write \(c = (c_{ij})_{i,j=-N}^{N} \in \mathcal {C}_N\). For \(d = \beta \otimes \beta + |g| \vee \sup _{[0,T]} |f|\), set

$$\begin{aligned} \mathcal {K}_N :=\prod _{i,j = - N}^N \big \{ w \in C^0([0,T]):\ \Vert w\Vert _\infty \le 2d_{ij} \big \}. \end{aligned}$$
(2.15)

Let \(J_N :\mathcal {K}_N \rightarrow \mathcal {C}_N\) be the map given, for each \(i,j = -N,\dots ,N\), by

$$\begin{aligned} J_N(c)_{ij}(t) :=g_{ij} + \int _t^T \Big ( f_{ij} + c_{0i}c_{0j} - ( c_{\cdot 0} \star _{2N+1} c_{\cdot j})_i - ( c_{i\cdot } \star _{2N+1} c_{0\cdot } )_j \Big ). \end{aligned}$$
(2.16)

Then there exists \(T^* > 0\), depending on \(\beta \) but independent of N, such that

$$\begin{aligned} T \le T^* \quad \implies \quad J_N(\mathcal {K}_N) \subseteq \mathcal {K}_N. \end{aligned}$$

Proof

Let \(c \in \mathcal {K}_N\). If \(i \ge 0\),

$$\begin{aligned} \begin{aligned} \left\| (c_{\cdot 0} \star _{2N+1} c_{\cdot j})_i\right\| _\infty&= \bigg \Vert \sum _{k=-N+i}^N c_{0,i-k}c_{jk} + \sum _{k=-N}^{-N+i-1} c_{0,i-k-2N-1}c_{jk} \bigg \Vert _\infty \\&\le 4\sum _{k=-N+i}^N d_{0,i-k}d_{jk} + 4\sum _{k=-N}^{-N+i-1} d_{0,i-k-2N-1}d_{jk} \\&\le 4(d_{\cdot 0} \star d_{\cdot j})_i + 4(d_{\cdot 0} \star d_{\cdot j})_{i-2N-1} \\&\lesssim {(\beta _i + \beta _{i-2N-1})\beta _j}, \end{aligned} \end{aligned}$$

where the last estimate comes from Lemma 2.8. As \(\beta \) is regular and \(|i-2N-1| = 2N + 1 - i > i\), we have \(\beta _{i-2N-1} \lesssim \beta _i\), and thus \(\left\| (c_{\cdot 0} \star _{2N+1} c_{\cdot j})_i\right\| _\infty \lesssim d_{ij}\), with an implied constant which does not depend on N. The same holds for \(-N \le i < 0\) by a symmetric argument. Analogously, \(\left\| ( c_{i\cdot } \star _{2N+1} c_{0\cdot } )_j \right\| _\infty \lesssim d_{ij}\), and clearly \(\Vert c_{0i}c_{0j}\Vert _\infty \le 4d_{0i}d_{0j} \le 4(d_{\cdot 0} \star d_{\cdot j})_i \lesssim d_{ij}\). Therefore,

$$\begin{aligned} \Vert J_N(c)_{ij}\Vert _\infty \le |g_{ij}| + C T d_{ij} \le (1+C T ) d_{ij}, \end{aligned}$$
(2.17)

where the constant C depends only on \(\beta \). It follows that for \(T>0\) small enough, depending on \(\beta \), one has \(\Vert J_N(c)_{ij}\Vert _{\infty } < 2d_{ij}\) for all \(i,j = -N,\dots ,N\). \(\square \)
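As an illustration (not part of the original proof), one can reproduce the invariance \(J_N(\mathcal {K}_N) \subseteq \mathcal {K}_N\) numerically, bounding the integrand of (2.16) over the ball exactly as in the estimate above; the data \(f = g = \beta \otimes \beta \) with \(\beta _j = (1+|j|)^{-2}\) and the horizon \(T = 0.01\) are sample choices of ours:

```python
N, T = 8, 0.01                 # 2N+1 players, short horizon (sample values)
idx = range(-N, N + 1)
beta = {j: 1.0 / (1 + abs(j))**2 for j in idx}     # hypothetical regular weight
f = {(i, j): beta[i] * beta[j] for i in idx for j in idx}
g = dict(f)
d = {(i, j): beta[i] * beta[j] + max(abs(f[i, j]), abs(g[i, j]))
     for i in idx for j in idx}

def wrap(k):                   # representative of k mod 2N+1 inside [-N, N]
    return (k + N) % (2 * N + 1) - N

def cyc(i, j):                 # (d_{.0} *_{2N+1} d_{.j})_i
    return sum(d[k, 0] * d[wrap(i - k), j] for k in idx)

# Bound the integrand of J_N over the ball ||c_ij|| <= 2 d_ij, as in the proof
ratios = [(abs(g[i, j])
           + T * (abs(f[i, j]) + 4 * d[0, i] * d[0, j] + 4 * cyc(i, j) + 4 * cyc(j, i)))
          / (2 * d[i, j])
          for i in idx for j in idx]
print(max(ratios))             # below 1: J_N maps the ball K_N into itself
```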

Remark 2.12

We stated and proved Proposition 2.11 for an odd number of players. This is merely to make the expressions look more symmetric; there is no actual preference regarding the parity of the number of players, and the above result holds, mutatis mutandis, also when the number of players is even. It is also clear that, with an entirely analogous proof, the conclusion of Proposition 2.11 holds for \(N = \infty \) as well, where one defines, in a natural way,

$$\begin{aligned} \mathcal {K}_\infty :=\prod _{i,j \in \mathbb {Z}} \big \{ w \in C^0([0,T]):\ \Vert w\Vert _\infty \le 2d_{ij} \big \} \end{aligned}$$

and

$$\begin{aligned} J_\infty (c)_{ij}(t) :=g_{ij} + \int _t^T \!\big ( f_{ij} + c_{0i}c_{0j} - ( c_{\cdot 0} \star c_{\cdot j})_i - ( c_{i\cdot } \star c_{0\cdot } )_j \big ). \end{aligned}$$

Theorem 2.13

Under the hypotheses of Proposition 2.11, there exists \(T^* > 0\) such that if \(T \le T^*\) then for any \(N \in \mathbb {N}\cup \{\infty \}\) there exists a unique smooth solution to system (2.11) such that, for any \(i,j \in \{-N',\dots ,N''\}\) and \(m \in \mathbb {N}\),

$$\begin{aligned} \bigg \Vert \Big (\frac{\textrm{d}}{\textrm{d}t}\Big )^m c_{ij} \bigg \Vert _{\infty } \lesssim \beta _{i} \beta _{j}, \end{aligned}$$
(2.18)

where the implied constants depend only on \(T^*\), f, g and m.

Proof

A fixed point of the map \(J_N\) defined in (2.16) is a solution. We deal with the case of \(J_N\) with \(N = \infty \), as the case with \(N \in \mathbb {N}\) can be included therein. Note that \(\mathcal {K}_\infty \) can be considered as a closed ball of the Banach space \(\ell ^\infty _d(\mathbb {Z}^2; C^0([0,T]))\); that is, the space of functions from \(\mathbb {Z}^2\) to \(C^0([0,T])\) with finite norm

$$\begin{aligned} |\!|\!|\cdot |\!|\!|_\infty :=\sup _{i,j \in \mathbb {Z}} d_{ij}^{-1} \Vert \cdot _{ij}\Vert _\infty . \end{aligned}$$

We prove that the map \(J_\infty \) is a contraction on \(\mathcal {K}_\infty \), provided that T is sufficiently small with respect to d. The conclusion will follow from Proposition 2.11, Remark 2.12 and the contraction mapping theorem; then, once one has a continuous solution, by the structure of equations (2.11) one bootstraps its regularity up to \(C^\infty \), while estimate (2.18) for \(m > 1\) follows by induction, differentiating (2.11) and estimating as in the proof of Proposition 2.11. Let now \(c,{\bar{c}} \in \mathcal {K}_\infty \). We have, for \(i,j\) fixed,

$$\begin{aligned} \big \Vert J_\infty ({\bar{c}})_{ij} - J_\infty (c)_{ij} \big \Vert _\infty&\le T \Big ( \big \Vert ( {\bar{c}}_{\cdot 0} \star {\bar{c}}_{\cdot j})_i - ( c_{\cdot 0} \star c_{\cdot j})_i \big \Vert _\infty \\&\quad \ + \big \Vert {\bar{c}}_{0i}{\bar{c}}_{0j} - c_{0i}c_{0j} - \big ((\bar{c}_{i\cdot } \star {\bar{c}}_{0\cdot } )_j - ( c_{i\cdot } \star c_{0\cdot } )_j \big ) \big \Vert _\infty \Big ). \end{aligned}$$
(2.19)

We have

$$\begin{aligned} \begin{aligned} \big \Vert ( {\bar{c}}_{\cdot 0} \star {\bar{c}}_{\cdot j})_i - ( c_{\cdot 0} \star c_{\cdot j})_i \big \Vert _\infty&\le \sum _{k\in \mathbb {Z}} \Big ( \Vert {\bar{c}}_{k0}\Vert _\infty \Vert {\bar{c}}_{i-k,j} - c_{i-k,j}\Vert _\infty + \Vert {\bar{c}}_{k0} - c_{k0}\Vert _\infty \Vert c_{i-k,j}\Vert _\infty \Big ) \\&\lesssim d_{ij} |\!|\!|{\bar{c}}-c|\!|\!|; \end{aligned}\end{aligned}$$

that is, \(|\!|\!|( {\bar{c}}_{\cdot 0} \star {\bar{c}}_{\cdot j})_i - ( c_{\cdot 0} \star c_{\cdot j})_i|\!|\!| \lesssim |\!|\!|{\bar{c}}-c|\!|\!|\) and analogously for the second term in (2.19). Hence \(|\!|\!|J_\infty ({\bar{c}}) - J_\infty (c)|\!|\!| \lesssim T |\!|\!|{\bar{c}} - c|\!|\!|\). \(\square \)
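A minimal Picard iteration for the map \(J_\infty \) (truncated to finitely many indices, with our sample data \(f = g = \beta \otimes \beta \), \(\beta _j = (1+|j|)^{-2}\), and a short horizon) illustrates both the contraction and the decay estimate (2.18) with \(m = 0\):

```python
# Truncated Picard iteration for J_infty (illustration; indices |i| <= M).
M, T, S = 5, 0.02, 30                    # window, horizon, time steps (sample values)
h = T / S
idx = range(-M, M + 1)
beta = {j: 1.0 / (1 + abs(j))**2 for j in idx}
f = {(i, j): beta[i] * beta[j] for i in idx for j in idx}
g = dict(f)

def get(c, i, j, s):                     # coefficients outside the window count as 0
    return c[i, j][s] if (i, j) in c else 0.0

def picard_step(c):                      # one application of the map J
    new = {}
    for i in idx:
        for j in idx:
            integ = []
            for s in range(S + 1):
                conv1 = sum(get(c, k, 0, s) * get(c, i - k, j, s) for k in idx)
                conv2 = sum(get(c, i, k, s) * get(c, 0, j - k, s) for k in idx)
                integ.append(f[i, j] + get(c, 0, i, s) * get(c, 0, j, s) - conv1 - conv2)
            w = [0.0] * (S + 1)          # w[s] ~ g_ij + integral over [t_s, T] (trapezoid)
            w[S] = g[i, j]
            for s in range(S - 1, -1, -1):
                w[s] = w[s + 1] + 0.5 * h * (integ[s] + integ[s + 1])
            new[i, j] = w
    return new

c = {(i, j): [0.0] * (S + 1) for i in idx for j in idx}
for _ in range(25):
    cn = picard_step(c)
    delta = max(abs(cn[p][s] - c[p][s]) for p in cn for s in range(S + 1))
    c = cn
ratio = max(abs(c[i, j][0]) / (beta[i] * beta[j]) for i in idx for j in idx)
print(delta, ratio)                      # delta ~ 0 (contraction), ratio stays O(1)
```

The iterates converge superexponentially, as in the Picard–Lindelöf scheme, and the limit satisfies \(|c_{ij}| \lesssim \beta _i\beta _j\) uniformly.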

Theorem 2.14

Suppose that \(f^i\) and \(g^i\) are shift-invariant and there exists a regular c-self-controlled \(\beta \) such that \(|f^0| \vee |g^0| \le C(\beta \otimes \beta )\), \(C > 0\). There exists \(T^* > 0\), depending only on C and \(\beta \), such that if \(T \le T^*\) then there exists a smooth classical solution to the infinite-dimensional Nash system (2.6) with \(i \in \mathbb {Z}\) on \([0,T] \times \mathcal {X}\). Furthermore, such a solution is unique in the class of functions of the form (2.7).

Proof

Let c be the solution given by Theorem 2.13. For \( x = (x^i)_{i\in \mathbb {Z}} \in \ell ^\infty (\mathbb {Z}; {\mathbb {R}}^d)\), define

$$\begin{aligned} U(t,{ x}) = \frac{1}{2} \sum _{i,j \in \mathbb {Z}} c_{ij}(t) x^i\! \cdot x^j + \int _t^T \sum _{i\in \mathbb {Z}} c_{ii}(s)\,\textrm{d}s, \end{aligned}$$

where we denoted by \(\cdot \) the standard scalar product on \({\mathbb {R}}^d\). U is well-defined for \( x \in \ell ^\infty (\mathbb {Z}; {\mathbb {R}}^d)\), and continuous in t, because the series converge normally thanks to estimate (2.18); for the same reason, also

$$\begin{aligned} t \mapsto \partial _t^k U(t, x) = \frac{1}{2} \sum _{i,j \in \mathbb {Z}} \Big ( \frac{\textrm{d}}{\textrm{d}t} \Big )^k c_{ij}(t) x^i\! \cdot x^j - \sum _{i\in \mathbb {Z}} \Big ( \frac{\textrm{d}}{\textrm{d}t} \Big )^{k-1} c_{ii}(t), \quad k \in \mathbb {N}, \end{aligned}$$

are well-defined and continuous. Finally, for \( h \in \ell ^\infty (\mathbb {Z}; {\mathbb {R}}^d)\), note that (omitting the dependence on t)

$$\begin{aligned} U( x + h) - U( x) = \sum _{i,j \in \mathbb {Z}} c_{ij}\, x^i\! \cdot h^j + \frac{1}{2} \sum _{i,j \in \mathbb {Z}} c_{ij}\, h^i\! \cdot h^j, \end{aligned}$$

thus \(U(t,\cdot )\) is infinitely many times Fréchet-differentiable in \(\ell ^\infty (\mathbb {Z};{\mathbb {R}}^d)\). Define now \( u = (u^i)_{i\in \mathbb {Z}}\) by setting

$$\begin{aligned} u^0 :=U, \qquad u^{i+1}(t, x) :=u^i(t,\sigma x), \quad i \in \mathbb {Z}, \end{aligned}$$

where \((\sigma x)_j :=x_{j+1}\) for \(j \in \mathbb {Z}\). We have

$$\begin{aligned} D_j u^i (t, x) = D_j [ u^0(t,\sigma ^i x) ] = (D_{j-i} u^0)(t, \sigma ^i x) = \sum _{k \in \mathbb {Z}} c_{j-i,k}(t)\, x^{k+i}, \quad i,j \in \mathbb {Z}, \end{aligned}$$

hence \(\sum _{j\in \mathbb {Z}} D_ju^j D_j u^i\) converges locally uniformly by estimate (2.18). By construction, u then solves (2.6) in the desired sense. \(\square \)

Remark 2.15

(Unimportance of distant players) What essentially allows the Nash system to have a solution in infinite dimensions is what we call an unimportance of distant players: the farther a player is from a given one (say the 0-th player), the smaller its impact on the latter. This is reflected in the fact that, on any bounded \(\mathcal {B} \subset \mathcal {X}\),

$$\begin{aligned} \Vert D_j U\Vert _{\infty , [0,T] \times \mathcal {B}} = \bigg \Vert \sum _{i\in \mathbb {Z}} c_{ij} x^i \bigg \Vert _{\infty , [0,T] \times \mathcal {B}} \lesssim \beta _j \end{aligned}$$

vanishes as \(|j| \rightarrow \infty \).

2.5 Beyond Shift-Invariance

We made the shift-invariance hypothesis in order to reduce our system of infinitely many equations for c to a single equation. Nevertheless, the reader should be aware that the above results can be adapted to more general settings.

One can suppose that we only have a shift-invariant control on the data; that is

$$\begin{aligned} |f^i_{hk}| \vee |g^i_{hk}| \lesssim \beta _{h-i}\beta _{k-i}. \end{aligned}$$
(2.20)

In this case, the most natural limit of (2.8) is indexed over \(\mathbb {N}\) and it suffices to replace \(\mathcal {K}_\infty \) with

$$\begin{aligned} \tilde{\mathcal {K}}_\infty :=\big \{ w=(w^i_{hk})_{i,h,k\in \mathbb {N}} \in \ell ^\infty (\mathbb {N}^3;C^0([0,T])):\ \Vert w^i_{hk}\Vert _\infty \le 2d_{|h-i|,|k-i|} \ \forall \, i,h,k \in \mathbb {N}\big \}, \end{aligned}$$

which is a closed ball in \(\ell ^\infty _{{\tilde{d}}}(\mathbb {N}^3;C^0([0,T]))\), letting \({\tilde{d}}^i_{hk} :=d_{|h-i|,|k-i|}\). Then, for instance, one obtains the following result.

Theorem 2.16

Assume (2.20). There exists \(T^* > 0\) such that if \(T \le T^*\) then for any \(N \in \mathbb {N}\cup \{\infty \}\) there exists a unique smooth solution to system (2.8) such that, for any \(i,h,k \in [[N]]\) and \(m \in \mathbb {N}\),

$$\begin{aligned} \bigg \Vert \Big (\frac{\textrm{d}}{\textrm{d}t}\Big )^m c^i_{hk} \bigg \Vert _{\infty } \lesssim \beta _{|h-i|}\beta _{|k-i|}, \end{aligned}$$

where the implied constants depend only on \(T^*\), f, g and m.

Also, one can fix the dimension \(N \ge 2\) and consider \(\beta ^N \in ({\mathbb {R}}_+)^{2N-1}\) given by

$$\begin{aligned} \beta ^N_j :={\left\{ \begin{array}{ll} 1 &{} \text {if} \ j=0 \\ \dfrac{1}{N-1} &{} \text {if} \ |j| \in \{ 1,\dots ,N-1 \}, \end{array}\right. } \end{aligned}$$

which is c-self-controlled in the sense that for \(|j| \in [[N]]\)

$$\begin{aligned} (\beta ^N \star \beta ^N)_j = \sum _{|k| \in [[N]]} \beta ^N_k \beta ^N_{j-k} = \beta ^N_j + \frac{1}{N-1} \sum _{|k| \in [[N]] \setminus \{0\}} \beta ^N_{j-k} \le \beta ^N_j + \frac{3}{N-1} \le 4 \beta ^N_j. \end{aligned}$$
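The claimed self-control constant 4 for \(\beta ^N\) can be checked directly (a quick verification of ours, treating out-of-range indices as zero):

```python
def beta_N(N, j):                    # the sequence beta^N, zero outside |j| <= N-1
    return 1.0 if j == 0 else (1.0 / (N - 1) if abs(j) <= N - 1 else 0.0)

sups = []
for N in (2, 3, 5, 10, 50):
    rng = range(-(N - 1), N)
    sups.append(max(
        sum(beta_N(N, k) * beta_N(N, j - k) for k in rng) / beta_N(N, j)
        for j in rng
    ))
print(sups)   # every ratio is at most 4, uniformly in N
```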

In this case one can look for a solution to (2.8) starting with assumption (2.20) with \(\beta = \beta ^N\); that is,

$$\begin{aligned} |f^i_{hk}| \vee |g^i_{hk}| \lesssim {\left\{ \begin{array}{ll} 1 &{} \text {if} \ h = i= k\\ N^{-1} &{} \text {if} \ h \ne i = k \ \text {or vice versa} \\ N^{-2} &{} \text {if} \ h \ne i \ \text {and}\ k \ne i. \end{array}\right. } \end{aligned}$$
(2.21)

What we get is the following statement.

Theorem 2.17

Let \(N \in \mathbb {N}\), \(N\ge 2\) and assume (2.21). There exists \(T^* > 0\) such that if \(T \le T^*\) then there exists a unique smooth solution to system (2.8) such that, for any \(i,h,k \in [[N]]\) and \(m \in \mathbb {N}\),

$$\begin{aligned} \bigg \Vert \Big (\frac{\textrm{d}}{\textrm{d}t}\Big )^m c^i_{hk} \bigg \Vert _{\infty } \lesssim {\left\{ \begin{array}{ll} 1 &{} \text {if} \ h = i= k\\ N^{-1} &{} \text {if} \ h \ne i = k \ \text {or vice versa} \\ N^{-2} &{} \text {if} \ h \ne i \ \text {and}\ k \ne i. \end{array}\right. } \end{aligned}$$

where the implied constants depend only on \(T^*\), f, g and m.

Notice that this can be regarded as a result in a mean-field-like setting, as assumption (2.21) is consistent with the fact that we expect the j-th derivative of a mean-field-like cost for the i-th player to scale by a factor of \(N^{-1}\) whenever \(j \ne i\). For further discussion and results related to this specific setting, we invite the reader to proceed directly to Sect. 3.

2.6 Long-Time Existence for Shift-Invariant Directed Strongly Gathering Interactions

It is clear that the previous construction follows the standard Cauchy-Lipschitz local (in time) existence argument, and the existence (and uniqueness) of a solution can, as usual, be continued up to a maximal time \(T^*\); that is, the time at which the quantity \(\max _{i,j} \beta _i^{-1}\beta _j^{-1} |c^0_{ij}|\) blows up. So far, we cannot exclude in general that such a blow-up time \(T^*\) is finite.

Upcoming Definition 2.18 introduces an additional assumption on the running and terminal costs which will allow us to prove long-time existence of a solution to the infinite-dimensional Nash system. This assumption requires some rather abstract properties of the generating function built upon \(f_{hk}\). We will state the assumption first, and then discuss its implications on the game structure. We anticipate that, loosely speaking, the i-th player’s cost is affected only by the states of the players \(j>i\) in the chain (directedness), and that the term \(f_{hh}\) is “dominant” with respect to \(f_{hk}\), \(h \ne k\) (strong gathering).

Given \(m \le n\), we will identify \(M \in \mathscr {S}(m) \subset \mathscr {S}(n)\) by considering \({\mathbb {R}}^m \simeq {\mathbb {R}}^m \times \{0\} \subset {\mathbb {R}}^m \times {\mathbb {R}}^{n-m} = {\mathbb {R}}^n\) and then extending \(M = 0\) on \({\mathbb {R}}^{n-m}\); that is, identify \(M \in \mathscr {S}(m)\) with \(\left( \begin{array}{cc} M &{} 0 \\ 0 &{} 0 \end{array}\right) \in \mathscr {S}(n)\). Also, given \(M \in \mathscr {S}(m)\) and \(M' \in \mathscr {S}(n)\) we will say that \(M=M'\) if \(M'\) equals the above-mentioned extension of M over \({\mathbb {R}}^n\).

Definition 2.18

Let \(\mathscr {M} = (M^{(N)})_{N \in \mathbb {N}}\) be a sequence of matrices, with \(M^{(N)} \in \mathscr {S}(N)\). We say that \(\mathscr {M}\) is directed if there exists \(\ell \in \mathbb {N}\) such that \(M^{(N)} = M^{(\ell )} \in \mathscr {S}(\ell )\) for all N large enough.

Given \(\varrho > 1\), we say that \(M \in \mathscr {S}(\ell )\) is \(\varrho \)-strongly gathering if the polynomial

$$\begin{aligned} \mu (z,w) = \sum _{h,k=0}^{\ell -1} M_{hk} z^h w^k, \quad z,w \in \mathbb {C}\end{aligned}$$

is such that \(\mu (z,0) \notin (-\infty , 0]\) if \(z \in \varrho \bar{\mathbb {D}}\).Footnote 3

Remark 2.19

We anticipate here the main idea behind this definition. We wish to consider the generating function of the coefficients \(c_{hk}\) in (2.11) with \(N = \infty \); that is, formally, \(\Xi (t,z,w) :=\sum _{h,k \in \mathbb {Z}} c_{hk}(t)z^hw^k\). This is a priori singular on \(zw=0\), nevertheless we are going to show that if \(((f_{hk})_{|h|,|k| \le N})_{N\in \mathbb {N}}\) and \(((g_{hk})_{|h|,|k| \le N})_{N \in \mathbb {N}}\) are directed, then one can assume that \(c_{hk} = 0\) if \(h \wedge k < 0\), so that \(\Xi \) is analytic and satisfies a “functional Riccati equation” (see (2.28) below).

At this point the strong gathering condition, which will be put on f (see upcoming Assumptions (\(\varvec{\star }\))), has a twofold utility. First, it ensures that the functional Riccati equation has a solution which is defined in a neighborhood of \((z,w) = (0,0)\) and has real coefficients \(c_{hk}\); this is basically due to the fact that the principal branch of the square root of the function \(\mu (\cdot ,0)\) associated to f is well-defined. Second, as we require \(\varrho > 1\) (and not just \(\varrho > 0\)), it ensures that the \(c_{hk}\)’s will be summable by standard properties of the derivatives of an analytic function.

We also point out that \(\varrho > 1\) is not necessary in order for summability to hold, while \(\varrho = 1\) is not a priori sufficient. This makes the latter a limiting condition which we have found as interesting as it is difficult to study in the generality we would have wished for; we devote a brief discussion to it in Sect. 2.9.

Remark 2.20

The reader might at first sight find it quite strange that the crucial condition in the notion of strong gathering regards \(\mu (\cdot ,0)\), which sees only the first column \((M_{h0})_{h \in [[\ell ]]}\) of the matrix M. From a technical point of view, we claimed in Remark 2.19 that it is all we need to solve the functional Riccati equation (2.28). From an interpretative point of view, instead, recalling that we will require f to be strongly gathering, it is interesting to note that the coefficients \(f_{h0}\) quantify the interaction of the 0-th player with the others; in the shift-invariant setting the 0-th player is basically our reference player for the game, hence we are somehow saying that the solvability of the Nash system is related to a condition which sees only the “direct” interactions between the reference player and the others, and not the “indirect” influence of the interactions between pairs of other players.

Remark 2.21

The term directed is related to the fact that the i-th player’s cost is affected by the states of the following players \(j>i\) in the chain; this might not be immediately clear from the previous definition, which just requires the matrices \(M^{(N)}\) to be extensions of a fixed matrix not depending on N. One may then have a look at the matrices of the form \(M = w \otimes w\) in (the end of) Example 2.2. In particular, \(w = w^{(N)}\) in (2.5) gives rise to a directed family only when \(m = 0\).

Moreover, in the situations described in Example 2.2, the associated sequence of matrices as the dimension N diverges is not strongly gathering. Indeed, even though \(\ell \) stays bounded, \(\mu \) is a polynomial with \(\mu (1,0) = 0\). In fact, as said in Remark 2.19, those situations can be seen as limit settings corresponding to taking \(\varrho = 1\), and we will comment on this case later on in Sect. 2.9.

The strong gathering assumption can be satisfied in different ways; two basic settings are given in the following examples.

Example 2.22

If we want to stick to a matrix \(f^0\) of the form (2.1), in order to have \(({f^0}{(N)})_{N \in \mathbb {N}}\) (where N is the dimension) directed with strongly gathering limit, one can require that \(\ell \) is independent of N for N large and

$$\begin{aligned} \sum _{j=1}^{\ell -1} w_j = 1-\epsilon , \quad \epsilon > 0, \end{aligned}$$

so that \(\mu (z,0) \ge \epsilon \) if \(|z| \le 1\). Put differently, it suffices to consider

$$\begin{aligned} w = (\nu , -w_1, \dots , -w_{\ell -1}), \quad \text {with} \ w_j \ge 0 \ \forall \, j, \ \sum _{j=1}^{\ell -1} w_j = 1, \ \text {and} \ \nu > 1; \end{aligned}$$

this means that we are considering an underlying graph where each node is directly connected with itself as well, and such a link has a negative weight. As in this model a positive weight is associated with the tendency of each player to get closer to their neighbors (in order to reduce their cost), a negative connection with oneself is to be interpreted as a drift towards self-annihilation. More prosaically, this means that the state of each player will also tend to the common position \(0 \in {\mathbb {R}}^d\).

Regarding this example, it is also worth pointing out that the common attractive position \(0 \in {\mathbb {R}}^d\) cannot be replaced by an arbitrary point, in the sense that the structure of the problem is not invariant under translations of the coordinates. This is due to the fact that we are considering a graph whose nodes have outdegree different from 1.
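For a concrete instance of this example, take the hypothetical weights \(w = (1, -0.3, -0.3, -0.3)\), so that \(\epsilon = 0.1\); sampling the closed unit disk confirms that \(\textrm{Re}\,\mu (z,0) \ge \epsilon \), hence \(\mu (\cdot ,0)\) avoids \((-\infty ,0]\) there (and, by continuity, on a slightly larger disk, as strong gathering requires \(\varrho > 1\)):

```python
import cmath, math

wts = [1.0, -0.3, -0.3, -0.3]           # hypothetical weights: the w_j, j >= 1, sum to 1 - eps
def mu_z0(z):                           # mu(z, 0) = w_0 * sum_h w_h z^h for M = w (x) w
    return wts[0] * sum(wh * z**h for h, wh in enumerate(wts))

worst = min(mu_z0(r * cmath.exp(2j * math.pi * k / 720)).real
            for r in (0.25, 0.5, 0.75, 1.0) for k in range(720))
print(worst)    # min of Re mu(z,0) over sampled |z| <= 1 is eps = 0.1, attained at z = 1
```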

Example 2.23

Another setting with directedness and strong gathering is that of

$$\begin{aligned} q^0 = \nu E^0 + w \otimes w, \end{aligned}$$

where w can be given by (2.4) with \(\ell \) independent of N, \(\nu > 0\) and \(E^0x = x^0\) for all \(x = (x^0, \dots , x^{N-1}) \in ({\mathbb {R}}^d)^N\). Here

$$\begin{aligned} \langle F^iX_t,X_t\rangle = \nu |X^i_t |^2 + \bigg |X^i_t - \sum _{j:\ (i,j) \in G_\ell } w_j X^j_t \bigg |^2, \end{aligned}$$

and, with a translation of the coordinates, we can also reduce to this situation costs like

$$\begin{aligned} J^i(\alpha ) = \frac{1}{2} \, \mathbb {E} \int _0^T \!\bigg ( |\alpha ^i|^2 + \nu |X^i_t - y|^2 + \bigg |X^i_t - \sum _{j:\ (i,j) \in G_\ell } w_j X^j_t \bigg |^2 \bigg )\,\textrm{d}t, \end{aligned}$$

for any given \(y \in {\mathbb {R}}^d\). This example, more genuinely than the previous one, shows that the strong gathering assumption entails that we are giving some sort of preference about where the players should aggregate. This will strengthen the attractive structure yielded by a graph satisfying only assumption (SI) (Example 2.2), thus providing more stability to our game.

From this point onward in this part, the following assumptions will be in force, in suitable variants which will be specified when needed.

Assumptions

(\(\varvec{\star }\)) The matrices \(f^i\) and \(g^i\) are shift-invariant and \(f^0 = f^0(N)\) and \(g^0 = g^0(N)\) are directed with limits \(f,g \in \mathscr {S}(\ell )\) for some \(\ell \in \mathbb {N}\). The matrix f is \(\varrho \)-strongly gathering for some \(\varrho > 1\) and g is compatible on I with f for some interval \(I \subset {\mathbb {R}}\), in the following sense: given

$$\begin{aligned} \varphi (z,w) :=\sum _{h,k=0}^{\ell -1} f_{hk} z^h w^k, \qquad \Psi (z,w) :=\sum _{h,k=0}^{\ell -1} g_{hk} z^h w^k \end{aligned}$$

and settingFootnote 4

$$\begin{aligned} \xi :=\sqrt{\varphi (\cdot ,0)}, \qquad \psi :=\Psi (\cdot ,0), \end{aligned}$$

we have

$$\begin{aligned} \inf _{t \in I} |\psi \tanh (t\xi ) + \xi | > 0 \quad \text {on} \ \varrho \mathbb {D}. \end{aligned}$$
(2.22)

Remark 2.24

Note that for any \(t \in {\mathbb {R}}\), one has that \(\psi \tanh (t\xi ) + \xi \) is bounded on \(\varrho \mathbb {D}\), as \(\xi (z)t\) can be a singular point of \(\tanh \) only if \(\xi (z)^2 = \varphi (z,0) < 0\), which is excluded by strong gathering for \(z \in \varrho \bar{\mathbb {D}}\). Also note that condition (2.22) holds with \(I = {\mathbb {R}}\) if, e.g., \(g = 0\) or \(|\psi | \ge |\xi |\).
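As a numerical illustration with sample data of ours (\(\varphi (z,0) = 2 - 0.5z\), \(\Psi (z,0) = 0.3 + 0.1z\), \(\varrho = 1.2\)), one can check the compatibility condition (2.22) on a bounded interval by direct sampling:

```python
import cmath, math

phi = lambda z: 2.0 - 0.5 * z        # sample varphi(z, 0): avoids (-inf, 0] on 1.2*D
psi = lambda z: 0.3 + 0.1 * z        # sample Psi(z, 0)
rho = 1.2

zs = [r * cmath.exp(2j * math.pi * k / 180)
      for r in (0.0, 0.5 * rho, rho) for k in range(180)]
inf_val = min(
    abs(psi(z) * cmath.tanh(t * cmath.sqrt(phi(z))) + cmath.sqrt(phi(z)))
    for t in [k / 10.0 - 3.0 for k in range(61)] for z in zs
)
print(inf_val)   # bounded away from 0: condition (2.22) holds on I = [-3, 3] here
```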

Considering system (2.11), we see that for any N sufficiently large with respect to \(\ell \) we have \(f_{hk} = 0 = g_{hk}\) if either h or k is negative; hence the limit system as \(N \rightarrow \infty \) will be given by

$$\begin{aligned} -\dot{c}_{hk} - c_{0h} c_{0k} + \sum _{j \in \mathbb {Z}} \big ( c_{0,h-j} c_{jk} + c_{0,k-j} c_{jh} \big ) = f_{hk}, \quad c_{hk}(T) = g_{hk}, \end{aligned}$$
(2.23)

where \(f_{hk}\) and \(g_{hk}\) are extended to \(h,k \in \mathbb {Z}\) by letting \(f_{hk} = 0 = g_{hk}\) if \((h,k) \notin [[\ell ]]^2\).Footnote 5 Given this system, another reduction is possible. Since \(f_{hk} = 0 = g_{hk}\) whenever \(h \wedge k < 0\), the assumption that \(c_{hk} = 0\) if \(h \wedge k < 0\) is not a priori incompatible with the structure of system (2.23). Therefore we will look for a solution with this additional property; in this way, the coefficients which are not a priori null, \((c_{hk})_{h,k \in \mathbb {N}}\), define in a natural way a symmetric operator \(c(t) \otimes I_d\) on \(\mathcal {X}\) which vanishes on \(\ell ^\infty (\mathbb {Z}_{<0};{\mathbb {R}}^d)\), so that it can also be seen as the trivial extension to \(\mathcal {X}\) of a symmetric operator on \(\ell ^\infty (\mathbb {N};{\mathbb {R}}^d)\). This reduces the system of ODEs in (2.23) to

$$\begin{aligned} -\dot{c}_{hk} - c_{0h} c_{0k} + \sum _{j = 0}^h c_{0,h-j} c_{jk} + \sum _{j = 0}^k c_{0,k-j} c_{jh} = f_{hk}, \quad c_{hk}(T) = g_{hk}, \end{aligned}$$
(2.24)

which will be the object of our following study, and will eventually provide a solution with the particular form presented in the upcoming definition.

Definition 2.25

A quadratic shift-invariant directed (QSD) solution to (2.6) is a classical solution of the form

$$\begin{aligned} u^i(t,x) = \frac{1}{2} \sum _{h,k \in \mathbb {N}} c_{hk}(t) \langle x^{h+i},x^{k+i}\rangle _{{\mathbb {R}}^d} + \int _t^T {{\,\textrm{tr}\,}}c(s)\,\textrm{d}s, \end{aligned}$$
(2.25)

for some \(c :[0,T] \rightarrow \ell ^1(\mathbb {N}^2) \subset \ell ^1(\mathbb {Z}^2)\).Footnote 6

Theorem 2.26

Under Assumptions (\(\star \)) with \([0,T] \subseteq I\), there exists a unique QSD solution to (2.6) on \([0,T] \times \mathcal {X}\).

The following lemmata, in whose statements the hypotheses of Theorem 2.26 are tacitly assumed, provide the steps of our proof of Theorem 2.26.

Lemma 2.27

There exists a unique sequence \((c_{hk})_{h,k \in \mathbb {N}} \subset C^0([0,T]) \cap C^\infty ((0,T))\), with \(c_{hk} = c_{kh}\), which solves the infinite-dimensional system (2.24) on [0, T].

Proof

We perform the change of variables \(t \mapsto T-t\) and prove that there exists a unique solution on [0, T] with initial condition \(c(0)=g\) to the forward system

$$\begin{aligned} \dot{c}_{hk} - c_{0h} c_{0k} + \sum _{j = 0}^h c_{0,h-j} c_{jk} + \sum _{j = 0}^k c_{0,k-j} c_{jh} = f_{hk} \end{aligned}$$
(2.26)

which smoothly extends to \({\mathbb {R}}\). Then the solution to (2.24) on [0, T] with \(c(T)=g\) will be given by the restriction to [0, T] of \({\hat{c}}(T-{\cdot })\), where \({\hat{c}}\) is the unique solution to (2.26) on \({\mathbb {R}}\) with \({\hat{c}}(0)=g\). Note that \({\hat{c}}_{00}\) is the solution to the Riccati equation \(\dot{{\hat{c}}}_{00} + {\hat{c}}_{00}^2 = f_{00}\), where \(f_{00} = \varphi (0,0) > 0\) by strong gathering, hence

$$\begin{aligned} {\hat{c}}_{00}(t) = \nu \, \frac{\nu \sinh (\nu t) + \mathfrak {g} \cosh (\nu t)}{\mathfrak {g} \sinh (\nu t) + \nu \cosh (\nu t)}, \quad \mathfrak {g} :=g_{00}, \ \nu :=\sqrt{f_{00}}. \end{aligned}$$

All the other \({\hat{c}}_{hk}\)’s satisfy linear first-order ODEs whose coefficients and right-hand sides are quadratic expressions in \(f_{hk}\) and the \({\hat{c}}_{h'k'}\)’s with \((h',k') \prec (h,k)\), where \(\prec \) denotes the strict Pareto preference.Footnote 7 Therefore, existence and uniqueness of the solution to the infinite system may be proved by induction. Indeed, suppose \(({\hat{c}}_{0k'})_{0\le k'<k} \subset C^\infty ({\mathbb {R}})\) are given, and note that

$$\begin{aligned} \dot{{\hat{c}}}_{0k} + 2{\hat{c}}_{00}{\hat{c}}_{0k} = f_{0k} - \sum _{j=1}^{k-1} {\hat{c}}_{0,k-j}{\hat{c}}_{0j}; \end{aligned}$$

then \({\hat{c}}_{0k}\) is unique and smooth as well. This proves the existence and uniqueness of \({\hat{c}}_{0k}\) for all \(k \in \mathbb {N}\); then, looking at this argument as the base step for a new induction over h, one proves analogously the existence and uniqueness of \({\hat{c}}_{hk}\) for all \(h \in \mathbb {N}\) and any \(k \in \mathbb {N}\). Finally, \({\hat{c}}_{hk} = {\hat{c}}_{kh}\) since equations (2.26) are invariant with respect to the swap \((h,k) \mapsto (k,h)\). \(\square \)
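The closed-form solution for \({\hat{c}}_{00}\) above is easy to validate against a direct numerical integration of the Riccati equation; here is a short check of ours, with sample data \(f_{00} = 2\), \(g_{00} = 0.5\):

```python
import math

f00, g00 = 2.0, 0.5                      # sample data with f00 > 0 (strong gathering)
nu = math.sqrt(f00)

def c00_exact(t):                        # closed form from the proof above
    return nu * (nu * math.sinh(nu * t) + g00 * math.cosh(nu * t)) \
              / (g00 * math.sinh(nu * t) + nu * math.cosh(nu * t))

# RK4 for the forward Riccati equation c' = f00 - c^2, c(0) = g00
c, n = g00, 2000
h = 1.0 / n
rhs = lambda v: f00 - v * v
for _ in range(n):
    k1 = rhs(c); k2 = rhs(c + 0.5 * h * k1)
    k3 = rhs(c + 0.5 * h * k2); k4 = rhs(c + h * k3)
    c += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
print(abs(c - c00_exact(1.0)))           # the two values agree closely
```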

The arguments below show that the coefficients \(c_{hk}\) can be thought of as derivatives of a generating function \({\hat{\Xi }}\), which will play a fundamental role in the long-time analysis.

Lemma 2.28

The solution to (2.24) on [0, T] is given by

$$\begin{aligned} c_{hk}(t) = \frac{1}{h!\,k!} \, \frac{\partial ^{h+k}}{\partial z^h \partial w^k}\bigg |_{(0,0)} {\hat{\Xi }}(T-t), \quad \forall \, t \in [0,T] \end{aligned}$$
(2.27)

for some function \({\hat{\Xi }} :I \times \varrho \mathbb {D}^2 \rightarrow \mathbb {C}\), of class \(C^\infty \) with respect to \(t \in I\) and analytic in \(\varrho \mathbb {D}^2\), such that \({\hat{\Xi }}(\cdot ,z,w) = {\hat{\Xi }}(\cdot ,w,z)\) and \({\hat{\Xi }}(0,\cdot ,\cdot ) = \Psi \).

Proof

Suppose that, for all \(z,w \in \varrho \mathbb {D}\) fixed, there exists a solution \({\hat{\Xi }}\) on I to

$$\begin{aligned} \partial _t {\hat{\Xi }}(t,z,w) - {\hat{\Xi }}(t,z,0) {\hat{\Xi }}(t,0,w) + \big ( \hat{\Xi }(t,z,0) + {\hat{\Xi }}(t,0,w) \big ) {\hat{\Xi }}(t,z,w) = \varphi (z,w), \end{aligned}$$
(2.28)

with the desired properties of smoothness, invariant with respect to the swap \((z,w) \mapsto (w,z)\) and such that \({\hat{\Xi }}(0,z,w) = \Psi (z,w)\). Then by taking the derivatives \(\partial ^h_z\partial ^k_w|_{(0,0)}\) one recovers equation (2.26), and thus the coefficients given by (2.27) satisfy (2.24) on [0, T].Footnote 8 To see that (2.28) admits such a solution, note that \({\hat{\Xi }}(t,z,0)\) solves the Riccati equation \(\partial _t {\hat{\Xi }}(t,z,0) + {\hat{\Xi }}(t,z,0)^2 = \varphi (z,0)\); hence,

$$\begin{aligned} {\hat{\Xi }}(t,z,0) = \xi (z) \, \frac{\xi (z) \sinh (\xi (z) t) + \psi (z) \cosh (\xi (z) t)}{\psi (z) \sinh (\xi (z) t) + \xi (z) \cosh (\xi (z) t)}, \quad \psi :=\Psi (\cdot ,0). \end{aligned}$$

Note that letting

$$\begin{aligned} \mathcal {E}(t,z;\zeta _1,\zeta _2) :=\zeta _1(z) \sinh (\xi (z) t) + \zeta _2(z) \cosh (\xi (z) t) \end{aligned}$$

one can write

$$\begin{aligned} {\hat{\Xi }}(t,z,0) = \xi (z) \, \frac{\mathcal {E}(t,z;\xi ,\psi )}{\mathcal {E}(t,z;\psi ,\xi )} = \frac{\partial }{\partial t} \log \mathcal {E}(t,z;\psi ,\xi ). \end{aligned}$$

For any \(t \in I\) fixed, this function is well-defined for \(z \in \varrho \mathbb {D}\) and analytic therein by the compatibility assumption (2.22). At this point, (2.28) becomes a first-order ODE in \(t \in I\), for all \(z,w \in \varrho \mathbb {D}\) fixed, whose solution is given by

$$\begin{aligned} {\hat{\Xi }}(t,z,w)= & {} \frac{\xi (z) \xi (w)}{\mathcal {E}(t,z;\psi ,\xi ) \mathcal {E}(t,w;\psi ,\xi )} \bigg ( \Psi (z,w) + \int _0^t \Big ( \mathcal {E}(s,z;\xi ,\psi ) \mathcal {E}(s,w;\xi ,\psi ) \nonumber \\{} & {} + \frac{\varphi (z,w)}{\xi (z) \xi (w)} \mathcal {E}(s,z;\psi ,\xi ) \mathcal {E}(s,w;\psi ,\xi ) \Big )\,\textrm{d}s \bigg ). \end{aligned}$$
(2.29)

Note that \({\hat{\Xi }}(t,\cdot ,\cdot )\) is well-defined for \(z,w \in \varrho \mathbb {D}\) by the same argument we applied to \(\hat{\Xi }(t,\cdot ,0)\). Also, it is trivial that \({\hat{\Xi }}(\cdot ,z,w) \in C^\infty (I)\), and by differentiating under the integral sign one proves the analyticity of \({\hat{\Xi }}(t,\cdot ,\cdot )\). \(\square \)
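The explicit formula for \({\hat{\Xi }}(t,z,0)\) can be verified pointwise at a complex z by finite differences (our own check, with the sample data \(\varphi (z,0) = 2 - 0.5z\) and \(\Psi (z,0) = 0.3 + 0.1z\)):

```python
import cmath

phi = lambda z: 2.0 - 0.5 * z          # sample varphi(z, 0) (hypothetical data)
psi = lambda z: 0.3 + 0.1 * z          # sample Psi(z, 0)

def Xi(t, z):                          # \hat{Xi}(t, z, 0) as in the proof
    xi = cmath.sqrt(phi(z))            # principal branch, well-defined here
    s, c = cmath.sinh(xi * t), cmath.cosh(xi * t)
    return xi * (xi * s + psi(z) * c) / (psi(z) * s + xi * c)

z, t, h = 0.6 + 0.8j, 0.7, 1e-5
dXi = (Xi(t + h, z) - Xi(t - h, z)) / (2 * h)      # central difference in t
residual = abs(dXi + Xi(t, z)**2 - phi(z))         # Riccati residual, ~ 0
print(residual, abs(Xi(0.0, z) - psi(z)))          # initial condition is exact
```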

Remark 2.29

Let \(\tilde{\mathcal {E}}(t,z;\zeta _1,\zeta _2) :={\mathcal {E}}(t,z;\zeta _2,\zeta _1)\). As, omitting the dependence on \(\zeta _1\) and \(\zeta _2\) in \(\mathcal {E}\), we have \(\frac{\partial }{\partial t} \mathcal {E}(t,z) = \xi (z) \tilde{\mathcal {E}}(t,z)\), it is easy to see that

$$\begin{aligned} \frac{\partial }{\partial t} \big ( \tilde{\mathcal {E}}(t,z) \mathcal {E}(t,w) \pm \mathcal {E}(t,z) \tilde{\mathcal {E}}(t,w) \big ) = (\xi (z)\pm \xi (w)) \big ( \mathcal {E}(t,z) \mathcal {E}(t,w) \pm \tilde{\mathcal {E}}(t,z) \tilde{\mathcal {E}}(t,w) \big ) \end{aligned}$$

and thus

$$\begin{aligned} \int _0^t \mathcal {E}(s,z) \mathcal {E}(s,w)\,\textrm{d}s = \frac{1}{2} \Big ( \frac{\tilde{\mathcal {E}}(\cdot ,z) \mathcal {E}(\cdot ,w) + \mathcal {E}(\cdot ,z) \tilde{\mathcal {E}}(\cdot ,w)}{\xi (z)+\xi (w)} + \frac{\tilde{\mathcal {E}}(\cdot ,z) \mathcal {E}(\cdot ,w) - \mathcal {E}(\cdot ,z) \tilde{\mathcal {E}}(\cdot ,w)}{\xi (z)-\xi (w)} \Big )\bigg |_0^t. \end{aligned}$$

Letting \((\zeta _1,\zeta _2) \in \{(\xi ,\psi ),(\psi ,\xi )\}\) one can then compute the integral in (2.29). Set

$$\begin{aligned} \sigma ^\pm (t,z,w) = \frac{\mathcal {L}(t,z) + \mathcal {L}(t,w)}{\xi (z) + \xi (w)} \pm \frac{\mathcal {L}(t,z) - \mathcal {L}(t,w)}{\xi (z) - \xi (w)}, \qquad \mathcal {L}(t,\cdot ) :=\frac{\mathcal {E}(t,\cdot ;\xi ,\psi )}{\tilde{\mathcal {E}}(t,\cdot ;\xi ,\psi )}; \end{aligned}$$
(2.30)

then

$$\begin{aligned} 2\,{\hat{\Xi }}(t,z,w) = {\tilde{\Psi }}(z,w) + \varphi (z,w) \sigma ^+(t,z,w) + \xi (z)\xi (w) \sigma ^-(t,z,w), \end{aligned}$$
(2.31)

where

$$\begin{aligned} {\tilde{\Psi }} (z,w) = \Psi (z,w) - \varphi (z,w) \sigma ^+(0,z,w) + \xi (z)\xi (w) \sigma ^-(0,z,w). \end{aligned}$$

Lemma 2.30

Let \(c=(c_{hk})_{h,k\in \mathbb {N}}\) be the solution to system (2.24) on [0, T]. Then, for any \(r \in (1,\varrho )\),

$$\begin{aligned} \Vert c_{hk} \Vert _{\infty ;[0,T]} \le \frac{K(r,T)}{r^{h+k}} \quad \forall \, h,k \in \mathbb {N}, \end{aligned}$$
(2.32)

for some constant \(K(r,T)\) depending only on T, r, f and g. In particular, \(c \in C^0([0,T]; \ell ^1(\mathbb {N}^2))\) and the same is true for \(\dot{c}\).Footnote 9

Proof

By the strong gathering assumption, the function \(\hat{\Xi }(t,\cdot ,\cdot )\) given in Lemma 2.28 is analytic in a neighborhood of \(\mathcal {Q}_r :=\bar{\mathbb {D}}_r \times \bar{\mathbb {D}}_r\), with \(r \in (1,\varrho )\). Then, by Cauchy’s theorem on derivatives,

$$\begin{aligned} \Vert c_{hk} \Vert _{\infty ;[0,T]} \le \frac{1}{r^{h+k}} \max _{[0,T] \times \partial \mathcal {Q}_r} |{\hat{\Xi }} |. \end{aligned}$$
(2.33)

This proves that \(c \in C^0([0,T]; \ell ^1(\mathbb {N}^2))\), and further regularity is easily proven by induction by exploiting system (2.24). \(\square \)

Lemma 2.31

Let c be the solution to (2.24) on [0, T]. The functions \(u^i\) given by (2.25) are well-defined for \((t,x) \in [0,T] \times \mathcal {X}\), differentiable with respect to t, and twice Fréchet-differentiable with respect to x.

Proof

Since a suitable shift of coordinates in \(\mathcal {X}\) transforms \(u^i(t,\cdot )\) into \(u^0(t,\cdot )\), it is sufficient to prove the result for \(u^0\). The existence of a solution follows from Lemma 2.30, and the differentiability with respect to t follows from the same lemma and the fundamental theorem of calculus. The differentiability with respect to x follows again from Lemma 2.30, since for any small \(h \in \mathcal {X}\) one has

$$\begin{aligned} u^0(t,x+h) - u^0(t,x) - \langle (c(t) \otimes I_d)x,h\rangle _{\mathcal {X}} - \frac{1}{2} \langle (c(t) \otimes I_d)h,h\rangle _{\mathcal {X}} = 0, \end{aligned}$$

where, with an obvious notation, \(\langle (c(t) \otimes I_d)y,z\rangle _{\mathcal {X}} = \sum _{h,k \in \mathbb {N}} c_{hk}(t) \langle y^h,z^k\rangle _{{\mathbb {R}}^d}\). \(\square \)

Lemma 2.32

Let \(u^i\) be given by (2.25), where c is the solution to (2.24) on [0, T]. Let \(\mathcal {B}_R\) be the closed ball of radius R in \(\mathcal {X}\). Then, for any \(r \in (1,\varrho )\),

$$\begin{aligned} \sum _{j \in \mathbb {Z}\setminus \{i\}} \left\Vert D_ju^j D_ju^i \right\Vert _{\infty ;[0,T]\times \mathcal {B}_R} \le \frac{1}{r-1} \bigg ( \frac{R K r}{r-1} \bigg )^2, \end{aligned}$$

where \(K=K(r,T)\) is the constant appearing in Lemma 2.30.

Proof

We have \(D_j u^i(t,x) = 0\) if \(j < i\), and \(D_j u^i(t,x) = \sum _{h \in \mathbb {N}} c_{j-i,h}(t) x^h\) if \(j \ge i\). Then, for \(x \in \mathcal {B}_R\), and \(r \in (1,\varrho )\) fixed, estimate (2.32) yields

$$\begin{aligned} \left|D_ju^j(t,x) D_ju^i(t,x) \right|\le R^2 \sum _{h,k \in \mathbb {N}} |c_{0h}(t) ||c_{j-i,k}(t) |\le \bigg ( \frac{RKr}{r-1} \bigg )^2 \frac{1}{r^{j-i}}, \end{aligned}$$

for all \(t \in [0,T]\). The thesis now follows by computing \(\sum _{j > i} r^{i-j} = \sum _{m \ge 1} r^{-m} = \frac{1}{r-1}\). \(\square \)

Since, by construction, the functions \(u^i\) given by (2.25) satisfy (S1) and (S2) in Definition 2.3 and thus yield a classical solution, the proof of Theorem 2.26 is complete.

2.7 Almost-Optimal Controls for the N-Player Game

We did not prove long-time existence of a solution to the Nash system for the N-player game, with \(N > \ell \) finite; nevertheless, in Corollary 2.36 below we show that on the horizon [0, T] the infinite-dimensional optimal control for the i-th player, given by \({\bar{\alpha }}^{*i}(t,X_t) :=-\sum _{j\ge i} c_{0,j-i}(t) X_t^j\), once suitably “projected” onto \(({\mathbb {R}}^d)^N\), provides an \(\epsilon \)-Nash equilibrium for the N-player game, with \(\epsilon \rightarrow 0\) as \(N \rightarrow \infty \). In doing so, we consider classes of admissible controls in the sense of the following definition.

Definition 2.33

Let \(R,L \ge 0\). A control \(\alpha :[0,T] \times ({\mathbb {R}}^d)^N \rightarrow ({\mathbb {R}}^d)^N\) in feedback form belongs to \(\mathcal {A}_{R,L}\) if, for all \(t \in [0,T]\), \(x,y \in ({\mathbb {R}}^d)^N\),

$$\begin{aligned} |\alpha (t,0)| \le R, \qquad |\alpha (t,x) - \alpha (t,y)| \le L|x-y|. \end{aligned}$$

The Lipschitz constant L is said to be admissible if \(L \ge \Vert c_{0\cdot }\Vert _{C^0([0,T];\ell ^1(\mathbb {N}))}\).

Remark 2.34

For such controls, it is known (cf., e.g., [2, Theorems 9.1 and 9.2]) that

$$\begin{aligned} {\left\{ \begin{array}{ll} \textrm{d}X_t = \alpha (t,X_t)\,\textrm{d}t + \sqrt{2} \,\textrm{d}B_t &{} t \in [0,T]\\ X_0 = x_0 \in ({\mathbb {R}}^d)^N \end{array}\right. } \end{aligned}$$
(2.34)

has a unique solution, satisfying \(\mathbb {E}\,\sup _{[0,T]} |X|^2 \le C\) where C is a locally bounded function of R, L and T, directly proportional to \(1+|x_0|^2\).

Theorem 2.35

Consider the N-player game on [0, T] with time evolution of the state of the players given by (2.34) and costs given by (1.1) with \(f^0,g^0 \ge 0\). Let Assumptions (\(\star \)) be in force with \([0,T] \subseteq I\). Let c solve (2.24) and define the control \(\alpha ^*\) by

$$\begin{aligned} - \alpha ^{*i}(t,X^0_t,\dots ,X^{N-1}_t) :=\sum _{j=0}^{N-i-1} c_{0j}(t) X^{j+i}_t + \sum _{j=N-i}^{N-1} c_{0j}(t) X^{j+i-N}_t, \qquad i \in [[N]]. \end{aligned}$$

Then, for any \(R \ge 0\) and admissible L, for any \(i \in [[N]]\) and any \((\alpha ^{*,-i},\psi ) \in \mathcal {A}_{R,L}\),Footnote 10

$$\begin{aligned} J^i(\alpha ^*) \le J^i((\alpha ^{*,-i},\psi )) + {\hat{C}}(\delta ^M + (\delta ^{-M} + N ) \delta ^N) \qquad \forall \, \delta \in (\varrho ^{-1},1), \ \forall \;\! M \ge \ell , \end{aligned}$$

where \(\ell \) is the dimension appearing in Assumptions (\(\star \)) and the constant \({\hat{C}}\) is a locally bounded function of R, L, T and \(\delta \), directly proportional to \(1+|x_0|^2\).
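Before the proof, it may help to unpack the definition of \(\alpha ^*\): it simply reads the indices of the infinite-dimensional feedback modulo N. A minimal numerical sketch (with \(d = 1\); the function name is ours):

```python
def projected_control(c0, X, i):
    """-alpha*^i: the feedback sum_j c_{0j} X^{j+i}, with the index j+i wrapped mod N."""
    N = len(X)
    return -sum(c0[j] * X[(j + i) % N] for j in range(N))
```

Unrolling the modulus recovers the two sums in the displayed definition of \(\alpha ^{*i}\).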

Proof

Since \(f^i\) and \(g^i\) are shift-invariant and \(\alpha ^*\) is linear in the state variable with \(D\alpha ^*\) circulant, without loss of generality we can prove the thesis for \(i = 0\). We denote by \(X^*\) the solution to (2.34) when \(\alpha = \alpha ^*\) and by X the solution to (2.34) when \(\alpha = (\psi , \alpha ^{*1},\dots ,\alpha ^{{*},{N-1}}) =:{\hat{\alpha }}^{*}\). We wish to estimate from above the quantity

$$\begin{aligned} J^0(\alpha ^{*}) - J^0({\hat{\alpha }}^{*})&= \frac{1}{2} \, \mathbb {E}\bigg [ \int _0^T \!\! \big ( |\alpha ^{*0}(t,X^*_t)|^2- |\psi (t,X_t)|^2 \\&\quad + F(X^*_t) - F(X_t) \big )\,\textrm{d}t + G(X^*_T) - G(X_T) \bigg ], \end{aligned}$$
(2.35)

where \(F :=\langle F^0\cdot ,\cdot \rangle \) and \(G :=\langle G^0\cdot ,\cdot \rangle \). Let u be the QSD solution to (2.6) on \([0,T] \times \mathcal {X}\). Since G is convex and \(DG = Du^0(T,\cdot )\),

$$\begin{aligned}\begin{aligned} G(X^*_T) - G(X_T)&\le \langle Du^0(T,X^*_T),X^*_T - X_T\rangle _{{\mathbb {R}}^{dN}} = \sum _{j=0}^{N-1} \sum _{k = 0}^{\ell -1} c_{jk}(T) \langle X^{*k}_T,X^{*j}_T-X^j_T\rangle _{{\mathbb {R}}^d} \\&= \sum _{j=0}^{N-1} \sum _{k = 0}^{\ell -1} \bigg ( \int _0^T \dot{c}_{jk}(t) \langle X^{*k}_t,X^{*j}_t-X^j_t\rangle _{{\mathbb {R}}^d}\, \textrm{d}t \\&\quad \ + c_{jk}(t) \langle \textrm{d}X^{*k}_t,X^{*j}_t-X^j_t\rangle _{{\mathbb {R}}^d} + c_{jk}(t) \langle X^{*k}_t,\textrm{d}(X^{*j}_t-X^j_t)\rangle _{{\mathbb {R}}^d} \bigg ). \end{aligned}\end{aligned}$$

Note that we can replace \(\ell \) with any \(M \ge \ell \) because \(c_{jk}(T) = g_{jk} = 0\) if \(k \ge \ell \). As c solves (2.24) we obtain

$$\begin{aligned} \begin{aligned} G(X^*_T) - G(X_T)&\le \sum _{j=0}^{N-1} \sum _{k = 0}^{M-1} \bigg ( \int _0^T \sum _{h = 1}^{j} c_{0,j-h}(t) c_{hk}(t) \langle X^{*k}_t,X^{*j}_t-X^j_t\rangle _{{\mathbb {R}}^d} \, \textrm{d}t \\&\quad + \int _0^T \sum _{h = 0}^{k} c_{0,k-h}(t) c_{hj}(t) \langle X^{*k}_t,X^{*j}_t-X^j_t\rangle _{{\mathbb {R}}^d} \,\textrm{d}t \\&\quad - f_{jk} \int _0^T \langle X^{*k}_t,X^{*j}_t-X^j_t\rangle _{{\mathbb {R}}^d}\,\textrm{d}t \\&\quad + \int _0^T c_{jk}(t) \langle \alpha ^{*k}(t,X^*_t),X^{*j}_t-X^j_t\rangle _{{\mathbb {R}}^d} \,\textrm{d}t \\&\quad + \int _0^T c_{jk}(t) \langle X^{*k}_t,\alpha ^{*j}(t,X^*_t)-{\hat{\alpha }}^{*j}(t,X_t)\rangle _{{\mathbb {R}}^d} \,\textrm{d}t \bigg ) + Z_T, \end{aligned} \end{aligned}$$

where \((Z_t)_{0\le t\le T}\) is a martingale starting from 0. Straightforward computations show that, omitting the dependence on t,

$$\begin{aligned} - \sum _{k=0}^{M-1} c_{jk} \alpha ^{*k}(X^*) = \sum _{k=0}^{M-1}\sum _{h=0}^{k} c_{0,k-h} c_{hj} {X^{*k}} + \sum _{h=N-M+1}^{N-1} \sum _{k=N-h}^{M-1} c_{0h} c_{jk} X^{*,h+k-N} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} - \sum _{j=0}^{N-1} c_{jk}(\alpha ^{*j}(X^*)-{\hat{\alpha }}^{*j}(X))&= - c_{0k}(\alpha ^{*0}(X^*) - \psi (X)) + \sum _{j=0}^{N-1} \sum _{h=1}^{j} c_{hk}c_{0,j-h} (X^{*j}-X^j) \\&\quad \ + \sum _{h=N}^{2N-2} \sum _{j=h-N+1}^{N-1} c_{jk} c_{0,h-j} (X^{*,h-N} - X^{h-N}); \end{aligned}\end{aligned}$$

therefore,

$$\begin{aligned} \begin{aligned} G(X^*_T) - G(X_T)&\le - \sum _{k=0}^{M-1} \int _0^T c_{0k}(t) \langle X^{*k}_t,\alpha ^{*0}(X^*) - \psi (X)\rangle _{{\mathbb {R}}^d} \, \textrm{d}t \\&\quad \ - \sum _{j=0}^{N-1} \sum _{k = 0}^{M-1} f_{jk} \int _0^T \langle X^{*k}_t,X^{*j}_t-X^j_t\rangle _{{\mathbb {R}}^d}\,\textrm{d}t - \int _0^T \mathcal {E}_t \,\textrm{d}t, \end{aligned}\end{aligned}$$
(2.36)

where we have set

$$\begin{aligned} \begin{aligned} \mathcal {E}&:=\sum _{j=0}^{N-1} \sum _{h=N-M+1}^{N-1} \sum _{k=N-h}^{M-1} c_{0h} c_{jk} \langle X^{*,h+k-N},X^{*j}-X^j\rangle _{{\mathbb {R}}^d} \\&\quad \ + \sum _{k=0}^{M-1} \sum _{h=N}^{2N-2} \sum _{j=h-N+1}^{N-1} c_{jk} c_{0,h-j} \langle X^{*k},X^{*,h-N} - X^{h-N}\rangle _{{\mathbb {R}}^d}. \end{aligned}\end{aligned}$$

Note now that by the convexity of \(\frac{1}{2}|\cdot |^2\), omitting the dependence on t,

$$\begin{aligned} \frac{1}{2} |\alpha ^{*0}(X^*)|^2 - \frac{1}{2} |\psi (X)|^2 - \langle \alpha ^{*0}(X^*),\alpha ^{*0}(X^*) - \psi (X)\rangle _{{\mathbb {R}}^d} \le 0 \end{aligned}$$
(2.37)

and by the convexity of F

$$\begin{aligned} F(X^*) - F(X) - \langle DF(X^*),X^*-X\rangle _{{\mathbb {R}}^d} \le 0. \end{aligned}$$
(2.38)

Using (2.36), (2.37) and (2.38) in (2.35) we obtain

$$\begin{aligned} J^0(\alpha ^*) - J^0({\hat{\alpha }}^{*}) \le \mathbb {E} \bigg [ \sum _{k=\ell }^{N-1} \int _0^T c_{0k}(t) \langle X^{*k}_t,\alpha ^{*0}(X^*) - \psi (X)\rangle _{{\mathbb {R}}^d} \, \textrm{d}t - \int _0^T \mathcal {E}_t \,\textrm{d}t \bigg ].\nonumber \\ \end{aligned}$$
(2.39)

As, whenever \(\{ Y,Z \} \subseteq \{ X^*,X \}\),

$$\begin{aligned} \sup _{j,k \in [[N]]} \mathbb {E} \int _0^T |\langle Y^{k}_t,Z^{j}_t\rangle _{{\mathbb {R}}^d}| \,\textrm{d}t \le T C, \end{aligned}$$

where \(C = C(|x_0|,R,L,T)\) is the constant appearing in Remark 2.34, we have

$$\begin{aligned} \begin{aligned} \bigg | \,\mathbb {E} \int _0^T \mathcal {E}_t \, \textrm{d}t\, \bigg |&\le 2TC \bigg ( \Vert c\Vert _{C^0([0,T];\ell ^1(\mathbb {N}^2))} \sum _{h \ge N-M} \Vert c_{0h}\Vert _{\infty ;[0,T]} \\&\quad \ + \sum _{h\ge N} \sum _{j=0}^{N-1} \Vert c_{j\cdot }\Vert _{C^0([0,T];\ell ^1(\mathbb {N}))} \Vert c_{0,h-j}\Vert _{\infty ;[0,T]} \bigg ), \end{aligned}\end{aligned}$$

so that by Lemma 2.30, for any \(\delta \in (\varrho ^{-1},1)\), there exists \(K > 0\) such that

$$\begin{aligned} \bigg | \,\mathbb {E} \int _0^T \mathcal {E}_t \, \textrm{d}t\, \bigg | \le {\tilde{C}} ( \delta ^{-M} + N ) \delta ^N, \quad {\tilde{C}} :=\frac{2TCK^2}{(1-\delta )^3}. \end{aligned}$$
(2.40)

On the other hand,

$$\begin{aligned} \sup _k \, \mathbb {E} \int _0^T |\langle X^{*k}_t,\alpha ^{*0}(X^*) - \psi (X)\rangle _{{\mathbb {R}}^d}| \,\textrm{d}t \le 2R\sqrt{TC} + 2LC, \end{aligned}$$
(2.41)

where we used again that \(\alpha ^*, {\hat{\alpha }}^{*} \in \mathcal {A}_{R,L}\). Therefore, from (2.39), (2.40), (2.41) and Lemma 2.30 we obtain

$$\begin{aligned} J^0(\alpha ^*) - J^0({\hat{\alpha }}^{*}) \le {\hat{C}} (\delta ^M + (\delta ^{-M} + N ) \delta ^N), \quad {\hat{C}} :=2(R\sqrt{TC} + LC)\frac{K}{1-\delta } \vee {\tilde{C}}. \end{aligned}$$

This concludes the proof. \(\square \)

Corollary 2.36

Let \(\Omega \subset {\mathbb {R}}^d\) be bounded, and assume the hypotheses of Theorem 2.35, but with \(x_0 \in \Omega ^N\). Let \(R \ge 0\) and L be admissible (in the sense of Definition 2.33); consider as admissible those controls belonging to \(\mathcal {A}_{R,L}\). Then, for any \(\epsilon > 0\) there exists \(N_0 = N_0(\epsilon ,R,L,\varrho , \ell , \Omega )\) such that if \(N \ge N_0\) then the control \(\alpha ^*\) provides an \(\epsilon \)-Nash equilibrium of the game.

Proof

It suffices to require that

$$\begin{aligned} {\hat{C}} (\delta ^M + (\delta ^{-M} + N ) \delta ^N) \le \epsilon \quad \forall \, N \ge N_0, \end{aligned}$$
(2.42)

where, for instance, one sets \(\delta = \frac{1}{2}(1+\varrho ^{-1})\). Choose \(M = M(N)\) such that \(M \rightarrow \infty \) and \(M = o(N)\) as \(N \rightarrow \infty \); for example, \(M = \lfloor \sqrt{N} \rfloor \) for all \(N > \ell ^2\). Since \(x_0 \in \Omega ^N\) there exists a constant \({\hat{C}}'\), independent of N, such that \({\hat{C}} \le N {\hat{C}}'\). Then the conclusion follows from the fact that the left-hand side of (2.42) goes to 0 as \(N \rightarrow \infty \). \(\square \)
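The decay of the right-hand side of (2.42) under these choices can be sketched numerically; the snippet below assumes \(\varrho = 2\) (so \(\delta = \frac{3}{4}\)), \(M = \lfloor \sqrt{N} \rfloor \), and absorbs \({\hat{C}} \le N {\hat{C}}'\) with \({\hat{C}}' = 1\):

```python
import math

def nash_gap_bound(N, rho=2.0):
    """N * (delta^M + (delta^-M + N) * delta^N), with delta = (1 + 1/rho)/2, M = floor(sqrt(N))."""
    delta = 0.5 * (1.0 + 1.0 / rho)
    M = math.isqrt(N)
    return N * (delta ** M + (delta ** -M + N) * delta ** N)
```

The bound is eventually driven by the term \(N\delta ^{\sqrt{N}}\), which vanishes as \(N \rightarrow \infty \).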

2.8 The Ergodic Nash System

Consider now the related ergodic problem, with costs

$$\begin{aligned} {\bar{J}}^i(\alpha ) = \liminf _{T \rightarrow +\infty } \,\frac{1}{2T}\, \mathbb {E} \int _0^T \!\big ( |\alpha ^i |^2 + \langle F^i X,X\rangle \big )\,\textrm{d}t, \end{aligned}$$

where the dynamics and the assumptions on \(F^i\) are the same as before. The corresponding Nash system reads

$$\begin{aligned} \lambda ^i - \Delta v^i + \frac{1}{2} |D_i v^i |^2 + \sum _{j\ne i} D_j v^j D_j v^i = {\bar{F}}^i \qquad \text {on} \ \mathcal {Y}, \end{aligned}$$
(2.43)

where \(\mathcal {Y}\) is either \(({\mathbb {R}}^d)^N\), for the N-player game, or \(\mathcal {X}\), for the limit game with infinitely many players. In the latter case we give notions of classical solution and QSD solution analogous to those in the previous section.

Definition 2.37

A sequence of pairs \(((\lambda ^i, v^i))_{i\in \mathbb {Z}}\) of real numbers \(\lambda ^i\) and \({\mathbb {R}}\)-valued functions \(v^i\) defined on \(\mathcal {X}\) is a classical solution to the ergodic Nash system (2.43) on \(\mathcal {X}\) if the following hold:

  1. (E1)

    each \(v^i\) is of class \(C^2\) with respect to \(x \in \mathcal {X}\), in the Fréchet sense;

  2. (E2)

    for each \(i \in \mathbb {Z}\), the series \(\sum _{j\ne i} D_j v^j D_j v^i\) uniformly converges on all bounded subsets of \(\mathcal {X}\);

  3. (E3)

    system (2.43) is satisfied pointwise for all \(x \in \mathcal {X}\);

A QSD ergodic (QSDE) solution will be a classical solution to (2.43) with

$$\begin{aligned} \lambda ^i \equiv \lambda = {{\,\textrm{tr}\,}}{\bar{c}}, \qquad v^i(x) = \frac{1}{2} \sum _{h,k \in \mathbb {N}} {\bar{c}}_{hk} \langle x^{h+i},x^{k+i}\rangle _{{\mathbb {R}}^d}, \end{aligned}$$
(2.44)

for some \({\bar{c}} \in \ell ^1(\mathbb {N}^2)\).

Remark 2.38

By the structure of (2.43), it is clear that if \(((\lambda ^i, v^i))_{i\in \mathbb {Z}}\) is a classical solution, then so is \(((\lambda ^i, v^i+\mu ^i))_{i\in \mathbb {Z}}\) for any choice of real numbers \(\mu ^i\). We will prove that there exists a special choice of \(\mu \in {\mathbb {R}}\), and of \({\bar{c}} \in \ell ^1(\mathbb {N}^2)\), such that the solution \(((\lambda , v^i+\mu ))_{i\in \mathbb {Z}}\), with \(\lambda \) and \(v^i\) given by (2.44), is in a precise sense the limit of the QSD solution as \(T \rightarrow +\infty \) (see Theorem 2.42).

Arguing as in the previous section, the coefficients \({\bar{c}}_{hk}\) of a QSDE solution are given by the solutions of the following system:

$$\begin{aligned} - c_{0h} c_{0k} + \sum _{j = 0}^h c_{0,h-j} c_{jk} + \sum _{j = 0}^k c_{0,k-j} c_{jh} = f_{hk}. \end{aligned}$$
(2.45)

It is immediate to see that if c solves (2.45), then so does \(- c\), hence we cannot have a unique solution to this limit system.

Lemma 2.39

There are exactly two sequences \((c_{hk}^\pm )_{h,k \in \mathbb {N}}\) which solve (2.45). These sequences are opposite to each other; that is, \(c^- = - c^+\).

Proof

We have \(c_{00}^2 = f_{00} > 0\), hence \(c_{00} \in \{ \pm \sqrt{f_{00}}\, \}\). Once the sign of \(c_{00}\) is chosen, all other \(c_{hk}\) can be uniquely determined by induction on \(h+k\). \(\square \)
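The induction can be made fully explicit: at level \(s = h + k\) the unknown \(c_{hk}\) appears in (2.45) with coefficient \(2c_{00}\), while all remaining terms involve lower levels only. A sketch of the resulting recursion (names ours; f is passed as a symmetric function of (h, k) with \(f_{00} > 0\)):

```python
from math import sqrt

def ergodic_coeffs(f, S):
    """Solve (2.45) for c^+ by induction on the level s = h + k, up to level S."""
    c = {(0, 0): sqrt(f(0, 0))}
    for s in range(1, S + 1):
        for h in range(s // 2 + 1):
            k = s - h
            if h == 0:
                # 2 c_00 c_0s = f_0s - sum_{j=1}^{s-1} c_{0,s-j} c_{0j}
                rhs = f(0, s) - sum(c[0, s - j] * c[0, j] for j in range(1, s))
            else:
                # 2 c_00 c_hk = f_hk + c_0h c_0k minus the lower-level terms of the two sums
                rhs = (f(h, k) + c[0, h] * c[0, k]
                       - sum(c[0, h - j] * c[j, k] for j in range(h))
                       - sum(c[0, k - j] * c[j, h] for j in range(k)))
            c[h, k] = c[k, h] = rhs / (2 * c[0, 0])
    return c
```

For the directed chain of Sect. 2.10 this recursion reproduces the closed form (2.57).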

In this case, too, both solutions can be represented by a generating function.

Lemma 2.40

The two solutions to (2.45) are given by

$$\begin{aligned} c_{hk}^\pm = \pm \,\frac{1}{h!\,k!} \, \frac{\partial ^{h+k}}{\partial z^h \partial w^k}\bigg |_{(0,0)} \bar{\Xi }, \end{aligned}$$
(2.46)

for some analytic function \(\bar{\Xi }:\varrho \mathbb {D}^2 \rightarrow \mathbb {C}\) such that \(\bar{\Xi }(z,w) = \bar{\Xi }(w,z)\).

Proof

A function \(\bar{\Xi }\) is the desired generating function if it solves the equation

$$\begin{aligned} - \bar{\Xi }(z,0)\,\bar{\Xi }(0,w) + \left( \bar{\Xi }(z,0) + \bar{\Xi }(0,w) \right) \bar{\Xi }(z,w) = \varphi (z,w), \end{aligned}$$

for \(z,w \in \varrho \mathbb {D}\). It follows that \(\bar{\Xi }(z,0)^2 = \varphi (z,0)\), so choose \(\bar{\Xi }(\cdot ,0) = \xi \); then

$$\begin{aligned} \bar{\Xi }(z,w) = \frac{\varphi (z,w) + {\xi (z)}{\xi (w)}}{{\xi (z)} + {\xi (w)}}. \end{aligned}$$
(2.47)

Note that the real part of the denominator in (2.47) can vanish only if \(|z|,|w| = \varrho \), hence \(\bar{\Xi }\) is well-defined and analytic for \((z,w) \in \varrho \mathbb {D}^2\). \(\square \)

This is sufficient to prove the existence of exactly two opposite QSDE solutions, and thus that the limiting ergodic Nash system in \(\mathcal {X}\) is solvable. We state this as a theorem.

Theorem 2.41

There exist exactly two QSDE solutions to (2.43) on \(\mathcal {X}\). These solutions are opposite to each other.

Proof

Argue as in the first part of the proof of Lemma 2.30 to see that \(c^+ \in \ell ^1(\mathbb {N}^2)\), then argue as in the proof of Lemma 2.31 to build the QSDE solution determined by choosing \({\bar{c}} = c^+\). Finally note that the solution determined by the choice \({\bar{c}} = c^-\) is the opposite function. \(\square \)

2.9 Long-Time Asymptotics

As expected from KAM theory and ergodic control, we now prove that, up to a constant, the QSDE solution corresponding to \(c^+\) (that is, the solution with \(c_{00} > 0\)) describes the long-time asymptotics of the QSD solution as \(T \rightarrow +\infty \), while the opposite solution should be regarded as the result of considering the limit \(T \rightarrow -\infty \) instead. To highlight the dependence of the QSD solution on T, we will write it as \(u^i_T\); on the other hand, since by the shift-invariance property it will suffice to show the convergence of \(u^0_T\) to the QSDE solution \(v^0\), we will omit the superscript 0.

Theorem 2.42

Let Assumptions (\(\star \)) be in force with \([0,+\infty ) \subseteq I\). Let \(u_T\) be the value function of the 0-th player corresponding to the QSD solution to the Nash system on \([0,T] \times \mathcal {X}\); let v be the value function of the 0-th player corresponding to the QSDE solution on \(\mathcal {X}\) determined by \({\bar{c}} = c^+\). Let \(\lambda :={{\,\textrm{tr}\,}}{\bar{c}}\). Then, for any \(t \ge 0\), as \(t < T \rightarrow +\infty \), the following limits hold, locally uniformly in both \(x \in \mathcal {X}\) and t:

$$\begin{aligned} \frac{u_T(t,x)}{T-t} \rightarrow \lambda \end{aligned}$$
(2.48)

and there exists a constant \(\mu \in {\mathbb {R}}\) such that

$$\begin{aligned} u_T(t,x) - \lambda (T-t) \rightarrow v(x) + \mu . \end{aligned}$$
(2.49)

The proof is based on the following result, which is strictly related to a refinement of Lemma 2.30 (cf. Remark 2.44 below) and relies on the possibility of explicitly computing the integral in formula (2.29), as shown in Remark 2.29.

Lemma 2.43

Under Assumptions (\(\star \)) with \([0,+\infty ) \subseteq I\), there exists a nonnegative function \(\gamma \in C_0^0([0,+\infty )) \cap L^1([0,+\infty ))\), depending only on r, f and g, such that

$$\begin{aligned} \sup _{(z,w) \in \mathcal {Q}_r} \bigg |\sigma ^\pm (t,z,w) - \frac{2}{\xi (z) + \xi (w)} \bigg |\le \gamma (t)\, \end{aligned}$$

for all \(t \ge 0\), where \(\sigma ^\pm \) are defined as in (2.30).

Proof

By the continuity of \(\xi \), given \(r' \in (r,\varrho )\),Footnote 11 there exists \(\epsilon > 0\) such that

$$\begin{aligned} \Re \,\xi \ge \epsilon \quad \text {on}\ \bar{\mathbb {D}}_{r'} \end{aligned}$$
(2.50)

whence

$$\begin{aligned} \inf _{k\in \mathbb {Z}} \,|{\xi (w)t - \big (k+\tfrac{1}{2}\big )i\pi }|\ge \mathfrak {d}(t) \qquad \forall \, w \in \bar{\mathbb {D}}_{r'}, \ t \in [0,+\infty ), \end{aligned}$$
(2.51)

for some function \(\mathfrak {d}\) which is uniformly positive on \([0,+\infty )\).Footnote 12 By (2.51) there exists a uniformly positive function \({\mathfrak {f}} :[0,+\infty ) \rightarrow {\mathbb {R}}_+\), which depends only on r and f, such that \(|\cosh (\cdot \,t) |\ge {\mathfrak {f}}(t)\) on \(\xi (\bar{\mathbb {D}}_r)\); also, by (2.50) we can assume that \({\mathfrak {f}}\) is asymptotic to \(\frac{1}{2} e^{3\epsilon |\cdot |}\) at \(\infty \). Since, with \(\mathcal {L}\) defined as in (2.30),

$$\begin{aligned} \frac{\partial }{\partial \xi } \, \mathcal {L}(t,z;\xi (z),\psi (z)) = \frac{1}{\cosh (\xi (z)t)} \, \frac{(\xi (z)^2-\psi (z)^2)t - \frac{\psi (z)}{\cosh (\xi (z)t)}}{(\psi (z)\tanh (\xi (z) t)+\xi (z))^2}, \end{aligned}$$

we obtain, also using (2.22),

$$\begin{aligned} \bigg | \frac{\mathcal {L}(t,z) - \mathcal {L}(t,w)}{\xi (z) - \xi (w)} \bigg | \lesssim \frac{t}{\mathfrak {f}(t)^2} \quad \forall \, (t,z,w) \in [0,+\infty ) \times \mathcal {Q}_r, \end{aligned}$$
(2.52)

where the implied constant depends only on r, f and g. At this point, it is easy to see that the desired conclusion follows. \(\square \)

Remark 2.44

This proof shows that if \([0,+\infty ) \subseteq I\), then the constant K appearing in Lemma 2.30 is in fact independent of T, since \(\sup _{{\mathbb {R}}\times \mathcal {Q}_r} |{\hat{\Xi }} |\) is finite. In particular, \(c \in C^0([0,T]; \ell ^1(\mathbb {N}^2))\) along with its derivatives, and their norms are bounded uniformly with respect to \(T>0\).

Proof of Theorem 2.42

Fix \(r \in (1,\varrho )\). By comparing expressions (2.31) and (2.47) one sees that

$$\begin{aligned} \sup _{|z|,|w| \le r} |\Xi (t,z,w) - \bar{\Xi }(z,w) |\lesssim \gamma (T-t), \end{aligned}$$
(2.53)

where \(\gamma \) is the function given in Lemma 2.43, and the implied constant depends only on the \(L^\infty \)-norms of \(\varphi \) and \(\xi \) on \(\mathcal {Q}_r\) and \(\mathbb {D}_r\), respectively. By Lemma 2.28 and Cauchy’s theorem on derivatives,

$$\begin{aligned} |c_{hk}^T(t) - {\bar{c}}_{hk} |\le \frac{1}{r^{h+k}} \sup _{|z|,|w| \le r} |\Xi (t,z,w) - \bar{\Xi }(z,w) |\qquad \forall \, h,k \in \mathbb {N}; \end{aligned}$$
(2.54)

where we have used the superscript T to stress the fact that \(c_{hk}(t) = c_{hk}^T(t)\) depends on the horizon T. Plugging (2.53) into (2.54) yields

$$\begin{aligned} \sum _{h,k\in \mathbb {N}} |c_{hk}^T(t) - {\bar{c}}_{hk} |\lesssim \gamma (T-t) \end{aligned}$$
(2.55)

as well as

$$\begin{aligned} |\,{{\,\textrm{tr}\,}}c^T(t) - \lambda \, |\lesssim \gamma (T-t), \end{aligned}$$
(2.56)

where the implied constants depend only on r and f. As \(\gamma \) is integrable on \([0,+\infty )\), by (2.56) so is \({{\,\textrm{tr}\,}}{\hat{c}} - \lambda \), where we use the notation \({\hat{c}} = c(T-\cdot )\) introduced in the proof of Lemma 2.27; thus by the dominated convergence theorem there exists \(\mu \in {\mathbb {R}}\) such that

$$\begin{aligned} \int _t^T {{\,\textrm{tr}\,}}c^T(s)\,\textrm{d}s - (T-t)\lambda \rightarrow \mu \quad \text {as} \ T\rightarrow +\infty , \end{aligned}$$

locally uniformly in t. Along with (2.55), this proves (2.48) and (2.49). \(\square \)

Remark 2.45

The argument of the previous proof also applies to the case when \(t = sT\), with \(s \in [0,1]\). In this case, we can give the following estimate of the rate of convergence of (2.48): for any \(r \in (1,\varrho )\),

$$\begin{aligned} \sup _{\Vert x\Vert _{\mathcal {X}} \le L, \ s \in [0,1]} \bigg |\frac{u_T(sT,x)}{T} - (1-s)\lambda \bigg |\lesssim _{L,r} \frac{1}{T}. \end{aligned}$$

The implied constant can be computed quite explicitly, by retracing the proofs above; we confine ourselves to pointing out that it depends only on L, r and f, and that it explodes as \(L \rightarrow \infty \) or \(r \rightarrow 1\).

2.10 Digression on a Delicate Limit Case

We have noted in Example 2.2 that the case of a cost designed according to an underlying directed circulant graph structure is a limit case among those satisfying our assumptions, in the sense that Definition 2.18 holds with \(\varrho = 1\).

Results like Lemmata 2.27, 2.28, 2.30, 2.39 and 2.40 continue to hold, but the other methods used in the previous sections are not refined enough to prove all the previous theorems in these limit cases. Nevertheless, together with Remark 2.44, they are sufficient to establish \(\ell ^\infty \)-stability at the level of the system for c; that is, convergence in \(\ell ^\infty (\mathbb {N}^2)\) of the solution to (2.24) to a solution of (2.45).

On the other hand, taking a closer look at the simplest limit case, the directed chain given by the choice \(g = 0\), \(f^0_{00} = f^0_{11} = 1 = -f^0_{01} = -f^0_{10}\) and \(f^0_{hk} = 0\) for all other h, k, we note that QSDE solutions can be computed quite easily, thanks to formula (2.47). In this case we have \(\varphi (z,0) = 1 - z\), and we find the expansion

$$\begin{aligned} \bar{\Xi }(z,w) = 1 + \sum _{j \ge 1} (-)^j \left( {\begin{array}{c}\frac{1}{2}\\ j\end{array}}\right) \big ( z^j + w^j \big ) + \sum _{j \ge 2} (-)^j \left( {\begin{array}{c}\frac{3}{2}\\ j\end{array}}\right) \sum _{\begin{array}{c} h,k \ge 1 \\ h+k = j \end{array}} z^hw^k, \end{aligned}$$

yielding

$$\begin{aligned} c^\pm _{hk} = \pm (-)^{h+k} \left( {\begin{array}{c}\frac{3}{2}-\delta _{0,hk}\\ h+k\end{array}}\right) . \end{aligned}$$
(2.57)

As by Stirling’s formula \(|c^\pm _{hk} |\asymp (h+k)^{\delta _{0,hk}-\frac{5}{2}}\), one easily sees that these coefficients indeed define a QSDE solution; hence the limit ergodic Nash system has a solution.
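These coefficients can also be cross-checked numerically: for the directed chain \(\varphi (z,w) = (1-z)(1-w)\) and \(\xi (z) = \sqrt{1-z}\), so the Maclaurin coefficients of (2.47) can be extracted by a double FFT on a polydisc of radius \(r < 1\) and compared with (2.57). A sketch (helper names ours):

```python
import numpy as np

def taylor2_fft(F, r=0.5, n=64):
    """Approximate the double Maclaurin coefficients of F via FFT on |z| = |w| = r."""
    z = r * np.exp(2j * np.pi * np.arange(n) / n)
    coef = np.fft.fft2(F(z[:, None], z[None, :])) / n ** 2   # ~ c_hk * r^(h+k)
    return (coef * r ** -np.add.outer(np.arange(n), np.arange(n))).real

xi = lambda z: np.sqrt(1 - z)
bar_Xi = lambda z, w: ((1 - z) * (1 - w) + xi(z) * xi(w)) / (xi(z) + xi(w))  # formula (2.47)
```

The first entries match (2.57): \(c_{00} = 1\), \(c_{01} = -\frac{1}{2}\), \(c_{02} = -\frac{1}{8}\), \(c_{11} = \frac{3}{8}\).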

One can also note that the coefficients enjoy the property that

$$\begin{aligned} {\bar{c}}_{hk} = {\bar{c}}_{0,h+k} - {\bar{c}}_{0,h+k-1} \quad \text {if} \ hk \ne 0, \end{aligned}$$
(2.58)

where \({\bar{c}}\) is either \(c^+\) or \(c^-\); this can be seen from (2.57) or proved by induction using system (2.45). In fact, the same can be done for system (2.24), so that property (2.58) also holds for theFootnote 13 solution of (2.24) on [0, T], for any fixed \(T>0\). Therefore, the information about the coefficients of a prospective QSD or QSDE solution is encoded in the functions of one complex variable \(\Xi _0(t,\cdot ) :=\Xi (t,\cdot ,0)\) and \(\bar{\Xi }_0 = \bar{\Xi }(\cdot ,0)\), namely

$$\begin{aligned} \Xi _0(t,z) = \sqrt{1-z}\, \tanh \left( \sqrt{1-z}\,(T-t)\right) \quad \text {and} \quad \bar{\Xi }_0(z) = \sqrt{1-z}. \end{aligned}$$

This peculiar fact is specific to the directed chain. It could be useful, as it allows one to conclude existence of a solution to the infinite-dimensional evolutive Nash system provided that the sequence of functions \((c_{0k})_{k\in \mathbb {N}}\) is monotone,Footnote 14 yet this would still not be enough to deal with convergence to an ergodic solution.
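As a further sanity check, (2.58) can be verified directly from (2.57) via Pascal’s rule for generalized binomial coefficients, \(\binom{a}{n} = \binom{a-1}{n} + \binom{a-1}{n-1}\): for \(hk \ne 0\) and \(s = h + k\),

$$\begin{aligned} {\bar{c}}_{0,s} - {\bar{c}}_{0,s-1} = (-1)^s \bigg ( \left( {\begin{array}{c}\frac{1}{2}\\ s\end{array}}\right) + \left( {\begin{array}{c}\frac{1}{2}\\ s-1\end{array}}\right) \bigg ) = (-1)^s \left( {\begin{array}{c}\frac{3}{2}\\ s\end{array}}\right) = {\bar{c}}_{hk} \end{aligned}$$

for \({\bar{c}} = c^+\); the case \({\bar{c}} = c^-\) follows by changing sign.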

Another property is shared by all problems having f of the form (2.1): the polynomial \(\varphi \) factors as \(\varphi (z,w) = \xi ^2(z)\xi ^2(w)\); this makes \(\Xi \) and \(\bar{\Xi }\) functions of \((\xi (z),\xi (w))\), which may help in the analysis of the aforementioned limit cases.

3 Mean-Field-Like Games: Long-Time Existence

In this part we consider the Nash system (2.8) without the shift-invariance assumption. Instead, we make a mean-field-like assumption, and look for conditions that guarantee the existence of solutions on an arbitrarily large time horizon T. Given \((a^i_{jk})_{i,j,k} \in \mathscr {S}(N)^N\) we will use the notation B(a) for the matrix \(B(a)_{hk} = a^{h}_{hk}\). Given the i-th vector \(e^i\) of the canonical basis of \({\mathbb {R}}^N\), we will write \(E^i = e^i (e^i)^\textsf{T}\). The indices will range over [[N]].

Note that system (2.8) can be rewritten in a forward form as

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{c}^i - (c^i)^\textsf{T}E^ic^i + B(c)^\textsf{T}c^i + c^i B(c) = f^i \\ c^i(0) = g^i \end{array}\right. } \quad \ i \in [[N]], \end{aligned}$$
(3.1)

where \(c^i(t), f^i(t),g^i \in \mathscr {S}(N)\) for all \(i \in [[N]]\) and \(t \in [0,T]\).
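For finite N, the forward system (3.1) can be integrated numerically; below is a minimal explicit-Euler sketch (function names ours, with no claim about step-size restrictions):

```python
import numpy as np

def nash_riccati_step(c, f, dt):
    """One explicit Euler step of (3.1): c, f have shape (N, N, N), c[i] = c^i symmetric."""
    N = c.shape[0]
    B = np.array([[c[h, h, k] for k in range(N)] for h in range(N)])  # B(c)_{hk} = c^h_{hk}
    out = np.empty_like(c)
    for i in range(N):
        E = np.zeros((N, N))
        E[i, i] = 1.0  # E^i = e^i (e^i)^T
        out[i] = c[i] + dt * (f[i] + c[i].T @ E @ c[i] - B.T @ c[i] - c[i] @ B)
    return out
```

For \(N = 1\) this reduces to the scalar Riccati step \(c \mapsto c + \textrm{d}t\,(f - c^2)\), and each step preserves the symmetry of the matrices \(c^i\).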

Theorem 3.1

Let \(T>0\) and let \(f \in L^1((0,T);\mathscr {S}(N)^N)\) and \(g \in \mathscr {S}(N)^N\) satisfy

$$\begin{aligned} \sup _i \Big ( N \sum _{\begin{array}{c} h,k\\ k \ne i \end{array}} | \bullet ^i_{hk} |^2 + N \sum _{\begin{array}{c} k \\ k \ne i \end{array}} | \bullet ^k_{ki} |^2 + |\bullet ^i_{ii} |^2 \Big ) \le {\kappa _\bullet }, \quad \kappa _\bullet > 0, \quad \text {for}\ \bullet \in \{ f,g\}. \end{aligned}$$
(3.2)

Suppose that \(B(f) \ge -K_f I\) and \(B(g) \ge -K_g I\) for some constants \(K_f\) and \(K_g\) such that

$$\begin{aligned} K_g< \sup _{M\in {\mathbb {R}}} Me^{-2MT} {= \frac{1}{2eT}}, \qquad K_f < \sup _{M \in {\mathbb {R}}} {\frac{M(Me^{-MT}-K_g)}{T\big (1 \vee \frac{e^{2MT}-1}{2MT}\big )}}. \end{aligned}$$
(3.3)

Then there exists \(N_0 \in \mathbb {N}\) such that if \(N > N_0\) there exists a unique absolutely continuous solution c to (3.1) on [0, T), which satisfies

$$\begin{aligned} \sup _i \Big ( N \sum _{\begin{array}{c} h,k\\ k \ne i \end{array}} | c^i_{hk} |^2 + N \sum _{\begin{array}{c} k \\ k \ne i \end{array}} | c^k_{ki} |^2 + |c^i_{ii} |^2 \Big ) \le C \end{aligned}$$
(3.4)

for some constant C which is independent of N.

Note that (3.3) is automatically satisfied for any \(T > 0\) when \(K_f, K_g \le 0\). Otherwise, for fixed \(K_f\) and \(K_g\), it poses a restriction on the size of T (or, similarly, a restriction on the size of \((K_f)_+\) and \((K_g)_+\) once T is fixed). In terms of the value functions \(u^i\), the previous estimate reads as follows:

$$\begin{aligned} \sup _i \Big ( N \sum _{\begin{array}{c} h,k \\ k \ne i \end{array}} \Vert D^2_{hk}u^i \Vert _\infty ^2 + N \sum _{\begin{array}{c} k \\ k \ne i \end{array}} \Vert D^2_{ki}u^k \Vert _\infty ^2 + \Vert D^2_{ii}u^i \Vert _\infty ^2 \Big ) \le C; \end{aligned}$$

in particular,

$$\begin{aligned} \sup _i \sum _{j:\, j \ne i} \Vert D_j \alpha ^i\Vert _\infty ^2 = \sup _i \sum _{j:\, j \ne i} \Vert D^2_{ij} u^i\Vert _\infty ^2 \le \frac{C}{N}. \end{aligned}$$

Remark 3.2

We use the terminology mean-field-like since any \(f^i\) such that

$$\begin{aligned} \sup _i |f^i_{ii}| + N \sup _{\begin{array}{c} i,j \\ j\ne i \end{array}} |f^i_{ij}| + N^2 \!\!\! \sup _{\begin{array}{c} i,j,k \\ j\ne i,\, k \ne i \end{array}} |f^i_{jk}| \le C \end{aligned}$$

satisfies (3.2), with \(\kappa _f\) depending on C (and not on N). In turn, the previous inequality is satisfied when \(f^i(x) = V^i(x^i, (N-1)^{-1} \sum _{j \ne i} \delta _{x^j})\), where \(V^i\) is a smooth enough function defined over \({\mathbb {R}}^d \times \mathcal {P}({\mathbb {R}}^d)\) (see for instance [10, Proposition 6.1.1]). Note that \(V^i\) here may depend on i: unlike in the standard MFG setting, it need not be the same for all players.
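As a sanity check of this remark, one can build such an f and evaluate the left-hand side of (3.2), observing that it stays bounded as N grows (all names ours; \(C = 1\)):

```python
import numpy as np

def mf_cost(N, C=1.0):
    """Mean-field-like f: f^i_{ii} = C, f^i_{ij} = f^i_{ji} = C/N (j != i), f^i_{jk} = C/N^2."""
    f = np.full((N, N, N), C / N ** 2)
    for i in range(N):
        f[i, i, :] = C / N
        f[i, :, i] = C / N
        f[i, i, i] = C
    return f

def kappa(f):
    """Left-hand side of (3.2) for a given array f, with f[i] = f^i."""
    N = f.shape[0]
    out = 0.0
    for i in range(N):
        mask = np.ones((N, N), dtype=bool)
        mask[:, i] = False                       # restrict to columns k != i
        t1 = N * np.sum(f[i][mask] ** 2)
        t2 = N * sum(f[k, k, i] ** 2 for k in range(N) if k != i)
        out = max(out, t1 + t2 + f[i, i, i] ** 2)
    return out
```

For this family, \(\kappa \) stays below \(3C^2\) uniformly in N.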

Remark 3.3

Theorem 2.17 from the previous section and Theorem 3.1 can be compared as follows: the former is about short-time solutions for shift-invariant mean-field-like problems, while the latter provides (possibly) long-time solutions, under further structural conditions involving B(f) and B(g). Note that the former requires shift-invariance, that is, all players are identical (though they do not necessarily observe the population through the empirical measure). Note also that the former is more precise in terms of estimates, which are pointwise with respect to h, k and i, while the latter involves \(\ell ^2\) norms. Clearly, the lines of the two proofs are rather different.

The proof of the theorem is based on the following lemmata.

Lemma 3.4

Let c be an absolutely continuous solution to (3.1) with \(B(c) \ge -MI\) on [0, T) for some \(M \in {\mathbb {R}}\). Assume (3.2). Then the following estimates hold on [0, T):

$$\begin{aligned} \sum _{\begin{array}{c} h,k \\ k\ne i \end{array}} |c^i_{hk}|^2 \le \kappa _0 N^{-1}, \qquad \kappa _0(t) :=(\kappa _g+ \kappa _f t)e^{(1+4M_+)t}, \end{aligned}$$
(3.5)
$$\begin{aligned} \sup _k \sum _{i\ne k} |c^i_{ik}|^2 \le \kappa _1N^{-1}, \qquad \kappa _1(t) :=2\kappa _0(t) e^{2 \int _0^t\! \sqrt{\kappa _0}\,}, \end{aligned}$$
(3.6)
$$\begin{aligned} \sup _i |c^i_{ii}|^2 \le \kappa _2, \qquad \kappa _2(t) :=(\kappa _g + (\kappa _f + \kappa _0 \kappa _1 N^{-2}) t)e^{(2+M_+)t}, \end{aligned}$$
(3.7)

where \(M_+ = M \vee 0\). As a consequence, c continuously extends on [0, T].

Proof

Multiplying the equation for \(c^i_{hk}\) by \(c^i_{hk}\) and summing over h and \(k\ne i\) we have

$$\begin{aligned} \begin{aligned} \frac{1}{2} \frac{\textrm{d}}{\textrm{d}t} \sum _{\begin{array}{c} h,k \\ k\ne i \end{array}} |c^i_{hk}|^2&= \sum _{\begin{array}{c} h,k \\ k\ne i \end{array}} f^i_{hk} c^i_{hk} - \sum _{\begin{array}{c} j,h,k \\ j,k\ne i \end{array}} c^i_{hk} c^i_{jh} c^j_{jk} - \sum _{\begin{array}{c} j,h,k \\ k\ne i \end{array}} c^i_{hk} c^i_{jk} c^j_{jh} \\&= \sum _{\begin{array}{c} h,k \\ k\ne i \end{array}} f^i_{hk} c^i_{hk} - \textrm{tr}({\hat{c}}^i{}^\textsf{T}B( c) {\hat{c}}^i) - \textrm{tr}(\tilde{c}^i{}^\textsf{T}B( c) {\tilde{c}}^i), \end{aligned} \end{aligned}$$

where we have set \({\hat{c}}^i_{hk} :=c^i_{hk}(1-\delta _{hi})\) and \({\tilde{c}}^i_{hk} = c^i_{hk}(1-\delta _{ki})\). It follows that

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t} \sum _{\begin{array}{c} h,k \\ k\ne i \end{array}} |c^i_{hk}|^2 \le \kappa _f N^{-1} + (1+4M_+) \sum _{\begin{array}{c} h,k \\ k\ne i \end{array}} |c^i_{hk}|^2, \end{aligned}$$

thus, by Gronwall’s inequality, (3.5) is proved. Multiplying the equation for \(c^i_{ik}\) by \(c^i_{ik}\) and summing over \(i \ne k\) we obtain

$$\begin{aligned} \begin{aligned} \frac{1}{2} \frac{\textrm{d}}{\textrm{d}t} \sum _{i\ne k} |c^i_{ik}|^2&= \sum _{i\ne k} f^i_{ik} c^i_{ik} - \sum _{i\ne k} B( c)_{kk} |c^i_{ik}|^2 - \sum _{i,j \ne k} c^i_{ik} c^i_{ji} c^j_{jk} - \sum _{\begin{array}{c} i \ne k \\ j\ne i \end{array}} c^i_{ik} c^i_{jk} c^j_{ji} \\&\le \sum _{i\ne k} f^i_{ik} c^i_{ik} - \sum _{i\ne k} B( c)_{kk} |c^i_{ik}|^2 - {{\,\textrm{tr}\,}}({\hat{B}}(c)^\textsf{T}B(c) {\hat{B}}(c)) - \sum _{\begin{array}{c} i \ne k \\ j\ne i \end{array}} c^i_{ik} c^i_{jk} c^j_{ji}, \end{aligned} \end{aligned}$$

where \({\hat{B}}(c)_{hk} = B(c)_{hk}\) if \(h = k\) and it is null otherwise. By (3.5)

$$\begin{aligned} \bigg | \sum _{\begin{array}{c} i \ne k \\ j\ne i \end{array}} c^i_{ik} c^i_{jk} c^j_{ji} \bigg | \le \sup _\ell \sum _{i\ne \ell } |c^i_{i\ell }|^2 \bigg ( \sum _{\begin{array}{c} i \ne k \\ j\ne i \end{array}} |c^i_{jk}|^2 \bigg )^{\frac{1}{2}} \le \sqrt{\kappa _0}\, \sup _\ell \sum _{i\ne \ell } |c^i_{i\ell }|^2. \end{aligned}$$

We have

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t} \sum _{i\ne k} |c^i_{ik}|^2 \le \kappa _f N^{-1} + (1+4M_+) \sum _{i\ne k} |c^i_{ik}|^2 + 2\sqrt{\kappa _0}\, \sup _\ell \sum _{i\ne \ell } |c^i_{i\ell }|^2; \end{aligned}$$

that is,

$$\begin{aligned} \sup _k \sum _{i\ne k} |c^i_{ik}(t)|^2 \le (\kappa _g + \kappa _f t) N^{-1} + \int _0^t \!(1+4M_++2\sqrt{\kappa _0}\,) \sup _k \sum _{i\ne k} |c^i_{ik}|^2. \end{aligned}$$

By Gronwall’s inequality one finds (3.6). Consider at this point that \(c^i_{ii}\) solves

$$\begin{aligned} \dot{c}^i_{ii} + |c^i_{ii}|^2 + 2\sum _{j\ne i} c^j_{ji} c^i_{ij} = f^i_{ii}, \end{aligned}$$
(3.8)

where by (3.5) and (3.6)

$$\begin{aligned} \Big | \sum _{j\ne i} c^j_{ji} c^i_{ij} \Big |^2 \le {\kappa _0\kappa _1} N^{-2}. \end{aligned}$$

Therefore, multiplying equation (3.8) by \(c^i_{ii}\) one obtains

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t}\, |c^i_{ii}|^2 \le \kappa _f + (2+M_+)|c^i_{ii}|^2 + {\kappa _0\kappa _1} N^{-2} \end{aligned}$$

and thus (3.7) by Gronwall’s inequality. \(\square \)

Lemma 3.5

Under the hypotheses of Theorem 3.1, let c be as in Lemma 3.4. Suppose that

$$\begin{aligned} Me^{-2MT} > K_g, \qquad K_f < {\frac{M(Me^{-MT}-K_g)}{T\big (1 \vee \frac{e^{2MT}-1}{2MT}\big )}}. \end{aligned}$$

Then there exists \(N_*(T)\) such that \(B(c)(T) > -MI\) provided that \(N > N_*(T)\). Furthermore, the map \(T \mapsto N_*(T)\) is continuous on \([0,+\infty )\).

Proof

Consider that B(c) solves the equation

$$\begin{aligned} \dot{B}(c) + B(c)^2 = B(f) - D, \qquad \text {where} \quad D_{ik} = \sum _{j\ne i} c^i_{kj} c^j_{ji}. \end{aligned}$$
(3.9)

By estimates (3.5) and (3.6),

$$\begin{aligned} \Vert D\Vert _2^2 = \sum _{i,k} \bigg ( \sum _{j\ne i} c^i_{kj} c^j_{ji} \bigg )^2 \le \sup _i \sum _{j \ne i} |c^j_{ji}|^2 \cdot \sum _{\begin{array}{c} i,j,k \\ j \ne i \end{array}} |c^i_{jk}|^2 \le \kappa _0 \kappa _1 N^{-1}. \end{aligned}$$

Let now \(\xi \) solve the linear equation \({\dot{\xi }} = B(c)^\textsf{T}\xi \) on [0, T) with terminal condition \(\xi (T) = \zeta \), for some arbitrary \(\zeta \in \mathbb {S}^{N-1}\); note that since \(\frac{\textrm{d}}{\textrm{d}t} |\xi |^2 \ge -4\,M|\xi |^2\) we have

$$\begin{aligned} 1 \wedge e^{2M(T-\cdot )} \le |\xi | \le 1 \vee e^{2M(T-\cdot )} \quad \text { on }[0,T]. \end{aligned}$$

By (3.9) we get

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t}\big ( \xi ^\textsf{T}B(c) \xi \big ) = \xi ^\textsf{T}\big (B(f)-D + B(c)B(c)^\textsf{T}\big ) \xi \ge \xi ^\textsf{T}(B(f)-D) \xi \ge -\big (K_f + \sqrt{\kappa _0\kappa _1}\, N^{-\frac{1}{2}}\big )|\xi |^2. \end{aligned}$$

If \(K_f < 0\) and N is large enough, then \(B(c)(T) > - \nu K_g I\), where

$$\begin{aligned} \nu :={\left\{ \begin{array}{ll} e^{2MT} &{} \text { if }MK_g \ge 0 \\ 1 &{} \text { if } M K_g \le 0, \end{array}\right. } \end{aligned}$$

thus it is easily seen that \(B(c)(T) > - MI\). If \(K_f \ge 0\), then

$$\begin{aligned} B(c)(T) \ge - \Big ( \nu K_g + \int _0^T ( K_f + \sqrt{\kappa _0\kappa _1} N^{-\frac{1}{2}} ) \, {(1 \vee e^{2M(T-\cdot )})} \Big ) I. \end{aligned}$$
(3.10)

It follows that in order to have \(B(c)(T) > -MI\) it suffices that

$$\begin{aligned} \nu K_g + T K_f {(1 \vee h(MT))} + N^{-\frac{1}{2}} T \sqrt{\kappa _0(T)\kappa _1(T)}\,h(MT) < M, \end{aligned}$$

where \(h(z) :=(e^{2z}-1)/(2z)\). This is guaranteed by our assumptions on \(K_g\), \(K_f\) and M (by which \(\nu K_g + T K_f {(1 \vee h(MT))} < M\)), provided that N is large enough. The continuity of \(N_*(T)\) is easily seen as one can write it explicitly using the estimates above. \(\square \)
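The proof notes that \(N_*(T)\) can be written explicitly from the estimates above. The following Python sketch makes one such choice concrete by solving the displayed sufficient condition for N in the case \(K_f \ge 0\); the function name `N_star`, the trapezoidal quadrature for \(\int _0^T \sqrt{\kappa _0}\), and the sample constants are our own illustrative choices, not from the paper.

```python
import numpy as np

def h(z):
    # h(z) = (e^{2z} - 1) / (2z), extended by continuity with h(0) = 1
    return (np.exp(2.0 * z) - 1.0) / (2.0 * z) if z != 0.0 else 1.0

def N_star(T, M, K_f, K_g, kappa_f, kappa_g, quad_pts=2000):
    """Hypothetical explicit threshold: smallest N making
    nu*K_g + T*K_f*(1 v h(MT)) + N^{-1/2}*T*sqrt(k0(T)*k1(T))*h(MT) < M,
    with kappa_0, kappa_1 as in (3.5)-(3.6) (case K_f >= 0)."""
    Mp = max(M, 0.0)
    s = np.linspace(0.0, T, quad_pts)
    k0 = (kappa_g + kappa_f * s) * np.exp((1.0 + 4.0 * Mp) * s)  # kappa_0, (3.5)
    r = np.sqrt(k0)
    integral = np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(s))       # int_0^T sqrt(kappa_0)
    k1 = 2.0 * k0[-1] * np.exp(2.0 * integral)                   # kappa_1(T), (3.6)
    nu = np.exp(2.0 * M * T) if M * K_g >= 0 else 1.0
    margin = M - nu * K_g - T * K_f * max(1.0, h(M * T))
    assert margin > 0.0, "smallness condition of Lemma 3.5 violated"
    return (T * np.sqrt(k0[-1] * k1) * h(M * T) / margin) ** 2

# constants chosen (arbitrarily) to satisfy the hypotheses of Lemma 3.5
print(N_star(T=1.0, M=1.0, K_f=0.05, K_g=0.02, kappa_f=1.0, kappa_g=1.0))
```

The resulting threshold is continuous in T, in line with the last claim of the lemma, since every quantity entering the formula depends continuously on T.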

Proof of Theorem 3.1

Fix M according to Lemma 3.5 (note that this implies that \(M > K_g\)). By the Cauchy–Lipschitz theorem there exists \(\tau > 0\) such that (3.1) has a unique absolutely continuous solution on \([0,\tau )\). By continuity, taking \(\tau \) smaller if necessary, we may suppose that \(B(c) > -M I\) on \([0,\tau )\). Then

$$\begin{aligned} {\bar{\tau }} :=\sup \big \{ \tau > 0:\ (3.1) \text { has a unique absolutely continuous solution with } B(c) > - MI \text { on } [0,\tau ) \big \} \end{aligned}$$

is well-defined. Seeking a contradiction, suppose that \(\bar{\tau }< T\). Let \(N_0 :=\max _{[0,T]} N_*\), where \(N_*\) is given by Lemma 3.5. By Lemma 3.4, c continuously extends on \([0,{\bar{\tau }}]\), with \(B(c)({\bar{\tau }}) > -MI\) as guaranteed by Lemma 3.5, thanks to our choice of \(N_0\). By the Cauchy–Lipschitz theorem one can extend the solution to \([0,\tau ')\) for some \(\tau ' > {\bar{\tau }}\), and by continuity we may suppose that \(B(c) > -MI\) on \([0,\tau ')\). This contradicts the maximality of \({\bar{\tau }}\), thus \({\bar{\tau }} \ge T\). Finally, estimate (3.4) follows from (3.5), (3.6) and (3.7). This concludes the proof. \(\square \)