1 Introduction

The recent COVID-19 pandemic has made clear, on a global scale, the importance of managing large populations in order to control the evolution of a disease. How a pandemic plays out is a consequence of the interplay of many complex processes, e.g., disease-specific spread mechanisms, the network of social interactions, and society-wide efforts to stop or slow the spread. As individuals, we have all made choices during the ongoing pandemic about the extent to which we minimize our personal risk of being infected. However, there is a trade-off between being careful and the pursuit of happiness. As we have all learned by now, our risk is determined not only by our own vigilance but also by others’ choices and our environment.

In the framework of rational agents, each individual anticipates the actions of their neighbours, neighbours’ neighbours, etc., and any other external influence, then selects their action as a best response to others’ actions. In other words, they want to optimize their private outcome while taking into account their surrounding environment, which includes other agents’ actions. This type of strategic interaction is a non-cooperative game. The communication between agents in the game may be restricted by geography, social circles, and other factors. Moreover, people interact with different intensity, depending on their occupation and personality. Hence, the agents in the game, each with their own predisposition for risk, will act in a wide variety of ways and thus naturally form a heterogeneous crowd.

Consider the game discussed above, where all agents anticipate the others’ actions and then selfishly play a best response. Strategy profiles (the collection of all players’ actions) consistent with such behavior are Nash equilibria, i.e., profiles such that no player profits from a unilateral deviation. Computing Nash equilibria in games with a large number of players is particularly hard; under some specific assumptions, approximate equilibria can be found by using the mean field game approach developed independently by Lasry-Lions [45, 46] and Huang-Malhamé-Caines [35]. The approach has found many practical applications; examples include the mathematical modeling of price movement in energy markets, pedestrian crowd motion and evacuation, and epidemic disease spread.

One of the fundamental assumptions in mean field game theory is that agents are indistinguishable and interact homogeneously. However, in some real-world applications, such as the modeling of epidemics, the diversity of individuals and the variation of their interactions are important factors to consider. Examples include the effects of travel restrictions, multiple age groups with distinct social behaviors and risk profiles, and the spectrum of preexisting health conditions. The aspects listed above require the game to have a non-uniform underlying network structure. Games with a large number of non-identical players can be analyzed with so-called graphon games whenever the network specifying the interactions is dense.

Epidemics are driven by the spread of the disease from infected to susceptible agents. The set of susceptible agents is not necessarily the whole non-infected population; for certain diseases, immunity is gained after exposure. The evidence suggests that the COVID-19 virus mainly spreads through close contact. Fortunately, the disease transmission probability can be decreased by the efforts of the individual. For example, an individual can choose to avoid public and closed spaces, wear a protective mask, or do their shopping online. When two people meet, the disease transmission probability depends on both sides’ efforts. Transmission is less likely if both parties wear protective masks than if just one does. However, the decrease in the risk of transmission is not additive in the interacting agents’ efforts. Using this intuition, we assume in this paper that the disease transmission likelihood depends on the efforts in a multiplicative way. The effort of the agents will therefore be given the name “contact factor control”, in line with the game-based model introduced in [2].

Before giving the full description of the epidemiological graphon game model, how it can be approached numerically, and all technical details, we expand upon the heuristics of graphon-based interaction with contact factor control and review the literature of related fields of research in the next sections.

1.1 The SIR Graphon Game with Contact Factor Control

The most famous compartmental model in epidemics is arguably the classical Susceptible-Infected-Removed (SIR) model. In this section, we take advantage of its wide familiarity and compact formulation to further motivate the concept of contact factor control and graphon-type interaction.

In order to describe the rate at which agents become infected, we first need to introduce some notation. Consider N individuals who transition between the states Susceptible (\({\mathsf {S}}\)), Infected (\({\mathsf {I}}\)), and Removed (\({\mathsf {R}}\)). An individual in state \({\mathsf {R}}\) has either gained immunity or is deceased. Denote the state of agent \(j\in \{1,\dots , N\}\) at time t by \(X^{j,N}_t\). A susceptible individual might encounter infected individuals, resulting in disease transmission. Encounters occur pairwise and randomly throughout the population with intensity \(\beta \). The number of encounters with infected agents in a short time interval \([t-\varDelta t,t)\) is approximately proportional to the share of the population in state \({\mathsf {I}}\) at t. Between each agent pair (j, k), we set the interaction strength to \(w(x_j,x_k)\), where w is a graphon (see Definition 1) and \(x_j, x_k\) are random variables uniformly distributed on [0, 1]. Hence agent j’s transition rate from state \({\mathsf {S}}\) to \({\mathsf {I}}\) is scaled by w and of the form \(\beta \frac{1}{N}\sum _{k=1}^N w(x_j,x_k){\mathbf {1}}_{{\mathsf {I}}}(X^{k,N}_{t-})\). Upon infection, an individual starts the path to recovery. The jump from state \({\mathsf {I}}\) to \({\mathsf {R}}\) happens after an exponentially distributed time with rate \(\gamma \). The state \({\mathsf {R}}\) is absorbing.

Denoting \(Z^{j,N}_t := \frac{1}{N}\sum _{k=1}^N w(x_j,x_k){\mathbf {1}}_{{\mathsf {I}}}(X^{k,N}_{t-})\), the transition rate matrix for player \(j\in \{1,\dots , N\}\) is

$$\begin{aligned} Q(Z^{j,N}_t) = \begin{bmatrix} -\beta Z^{j,N}_t & \beta Z^{j,N}_t & 0 \\ 0 & -\gamma & \gamma \\ 0 & 0 & 0 \end{bmatrix}. \end{aligned}$$
(1)

Here we use the order \({\mathsf {S}}, {\mathsf {I}}, {\mathsf {R}}\) for the columns and the rows. For instance, the term \(\beta Z^{j,N}_t\) encodes the rate at which an agent arrives in \({\mathsf {I}}\) coming from \({\mathsf {S}}\). As explained above, this rate is proportional to the weighted average \(Z^{j,N}_t\) of infected agents interacting with player j. At this stage, the distinguishability of the players is seen in the aggregate variables \((Z^{j,N}_t)_{j=1}^N\) which in general differ in value. As \(N\rightarrow \infty \), we expect the probability distribution flow of player j to converge to the solution of the ordinary differential equation (ODE)

$$\begin{aligned} {\dot{p}}^{x_j}(t) = p^{x_j}(t)Q(Z^{x_j}_t),\quad p^{x_j}(0) = p_0^{x_j}, \end{aligned}$$
(2)

where \(p_0^{x_j}\) is some given initial distribution over the states and

$$\begin{aligned} Z^{x_j}_t := \int _I w(x_j,y)p^y(t, {\mathsf {I}})dy. \end{aligned}$$
(3)

Here \(p^{x_j}(t)\) is understood as a vector of length 3 whose coordinates correspond to the probability for player \(x_j\) of being in state \({\mathsf {S}}\), \({\mathsf {I}}\) and \({\mathsf {R}}\), respectively at time t. Equation (2) encodes the evolution of these probabilities. \(Z^{x_j}_t\) is the average number of infected agents around player \(x_j\), weighted by the pairwise interaction strength \(w(x_j,y)\). Scaling \(p^x(t)\), \(x\in [0,1]\), by a population size N, \(Np^x(t) =: (S^x(t), I^x(t), R^x(t))\), we retrieve a formulation of the compartmental SIR model with graphon-based interactions

$$\begin{aligned} {\dot{S}}^x(t)&= -\frac{\beta }{N}\left( \int _I w(x,y)I^y(t)dy\right) S^x(t),&S^x(0)&= Np^x_0({\mathsf {S}}), \\ {\dot{I}}^x(t)&= \frac{\beta }{N}\left( \int _Iw(x,y)I^y(t)dy\right) S^x(t) - \gamma I^x(t),&I^x(0)&= Np^x_0({\mathsf {I}}), \\ {\dot{R}}^x(t)&= \gamma I^x(t),&R^x(0)&= Np^x_0({\mathsf {R}}). \end{aligned}$$
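Although no code accompanies the paper, the system above is straightforward to integrate numerically once the index space is discretized. Below is a minimal sketch in per-capita form (i.e., for the probabilities \(p^x(t)\) as in (2)); the graphon, grid size, and rate values are illustrative assumptions of ours, not values from the paper.

```python
import numpy as np

# Discretize I = [0, 1] into M midpoints and integrate the graphon SIR
# system by forward Euler. All parameter choices here are illustrative.
M, T, dt = 300, 60.0, 0.05
beta, gamma = 0.3, 0.1
x = (np.arange(M) + 0.5) / M           # grid on the index space I
w = np.outer(x, x) ** 0.3              # an example graphon with values in [0, 1]

S = np.full(M, 0.99)                   # p^x(0, S)
I = np.full(M, 0.01)                   # p^x(0, I)
R = np.zeros(M)                        # p^x(0, R)
for _ in range(int(T / dt)):
    Z = w @ I / M                      # Z^x_t = \int_I w(x, y) p^y(t, I) dy
    dS = -beta * Z * S                 # new infections leave S ...
    dR = gamma * I                     # ... and recoveries leave I
    S, I, R = S + dt * dS, I + dt * (-dS - dR), R + dt * dR
```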

So far we have considered a model without actions. Assume now that agents choose a “contact factor” in order to decrease their risk of getting infected, entering the model in line with the discussion above. We express this new feature mathematically as follows: given that the meeting frequency is \(\beta \), pairing is random, the disease spreads from infected agents to susceptible ones, and the transmission probability is scaled multiplicatively by the efforts of the individuals that meet, the transition rate of individual j from \({\mathsf {S}}\) to \({\mathsf {I}}\) is

$$\begin{aligned} \beta \alpha ^j_t \frac{1}{N}\sum _{k=1}^N w(x_j,x_k) \alpha ^k_t {\mathbf {1}}_{{\mathsf {I}}}(X^{k,N}_{t-}), \end{aligned}$$
(4)

where \(\alpha ^k_t\) denotes the (contact factor) action of individual \(k\in \{1,\dots , N\}\) at time t, selected from the set of actions A. Along the lines of the heuristics of mean field game theory, we anticipate that in an appropriate approximation of our interacting system in the limit \(N\rightarrow \infty \), agent x transitions from susceptible to infected with rate

$$\begin{aligned} \beta \alpha ^x_{t} \int _Iw(x,y)\left( \int _A a \rho ^y_t(da,{\mathsf {I}})\right) dy, \end{aligned}$$
(5)

where \(\rho ^y_t\) is the joint distribution of action and state of player y at time t and \(\alpha ^x_{t}\) is the action of individual x.

1.2 Related Literature

1.2.1 Graphon Games and Finite State Space Mean Field Games

Challenges related to large populations arise in game theory, and mean field game (MFG) theory presents a toolbox to compute equilibria in games with a large number of players. MFGs were first developed for continuous state space models [35, 45, 46] and later for finite state space models [32, 33, 42]. The theory for finite state MFGs has been extended in many directions, with contributions including a probabilistic approach [14], the master equation approach [4], minor-major player games [12], and extended games [13]. Further, finite state mean field control, risk-sensitive control, and zero-sum games are treated in [17,18,19], which cover cases of unbounded jump intensities. Graphon games [3, 7,8,9, 23, 30, 49] have recently received increasing research interest. The motivation is the study of strategic decision making in the face of large, dense, but non-complete networks of distinguishable agents. The graphon game’s rising popularity stems from its ability to handle heterogeneity of agents to an extent far beyond MFG theory.

1.2.2 Mean Field Games and Related Models for Epidemics

Decision making in compartmental models (e.g., the SIR model) has been studied intensively for a long time, with interest increasing recently due to the COVID-19 pandemic. In the form of games and optimal control problems, disease-combating efforts ranging from social contact strategies to vaccination have been analyzed in the literature. Here, we focus on work relying on graphon game theory and the mean-field approach.

During an epidemic where the disease is prone to close-contact transmission (one example being COVID-19), control of the contact rate and other social distancing protocols are go-to solutions in the fight against the disease. Such strategies and related variations have been studied in the context of mean field games [16, 28], mean field optimal control [47], and mean field type games [55]. In reality, the population needs to be tested for the disease in order to accurately assess the risks in the decision making process. Two recent papers studying optimal testing policies are [15, 36]. The effects of vaccination and the accompanying decision making problems are not studied in this paper; they have been analyzed in MFG-related settings since before the COVID-19 pandemic [27, 31, 37, 43, 44, 50]. Recently, the interplay between a population in which a disease spreads and a regulator has been studied in the form of Stackelberg games. In such models, the members of the population take actions based on policies issued by the regulator, while the regulator anticipates the population’s reaction and optimizes the policy. The case of a cooperative population has been studied in [36], while [2] studies a population of selfish agents with contact factor control.

An example of a deterministic optimal control problem for centralized decision making during a pandemic in a society with multiple communicating subpopulations is given in [29]. The subpopulations interact over a non-uniform graph. A central planner wants to flatten the (global) curve of infections, leading to an optimal control problem. Sending the number of subpopulations to infinity, we anticipate a limit where each interaction is weighted by a graphon; the limit model would be reminiscent of the interacting system of Kolmogorov equations studied in this paper.

1.2.3 Networks and Graphons in Epidemiology

There is a vast body of literature on epidemiology modeling with network interactions. A review of studies that use idealized networks in epidemiology models can be found in [39]. More closely related to the ideas in this paper, there are recent contributions connecting epidemic models and graphons. In [57], a sensitivity analysis on the graphon SIS epidemic model is conducted. An infinite dimensional SIS model with application to targeted vaccination is considered in [24]. The paper [40] proposes a model with local-density dependent Markov processes interacting through a graphon structure, and considers applications to epidemiology. In a setting similar to, but more general than, the SIR model with graphon interaction, [1] studies the convergence of a stochastic particle system to an SIR-like system of PDEs with spatial interaction. We note that [1] and its continuations [25, 26] may be relevant for a future study of the convergence of N-player Nash equilibria to the equilibria of the finite state graphon game. The works mentioned in this section only consider the dynamics of the population, without taking the agents’ decision making into account.

1.3 Contributions and Paper Structure

This paper is, to the best of our knowledge, the first to address the analysis and numerical resolution of time-dependent graphon games with a discrete state space. The application to epidemiology departs from the traditional literature on epidemiology models and graphon models by the incorporation of a game theoretical aspect: here we go beyond dynamic graphon systems and find Nash equilibria for rational agents. We construct a probabilistic particle model for a continuum of interacting agents and prove that graphon aggregates must be deterministic (as in, e.g., (5)) under a set of natural conditions on the strategies and transition rates. This motivates the study of the deterministic asymptotic model formulation and gives a transparent interpretation of the agent’s control in the applied context. We derive theoretical results for the deterministic model: a verification theorem and an existence theorem for the coupled continuum of forward-backward ordinary differential equations (FBODEs) that characterize the finite state graphon game at equilibrium are proven. This is reminiscent of the mean field game framework, except that here the population is heterogeneous due to the graphon-based interactions, which makes the computation of solutions much more challenging. We then propose a machine learning method to solve the FBODE system. Finally, we consider a graphon game model for epidemic disease spread. Multiple test cases are solved with the proposed numerical method and the experimental results are discussed.

The outline of the rest of the paper is as follows: In Sect. 2, we introduce the model and analyze its deterministic formulation. In Sect. 3, we introduce the numerical approach and give experimental results. In Sect. 4, the theoretical framework for the model’s probabilistic formulation is presented and the graphon game is rigorously defined. For the sake of conciseness, the proofs are postponed to the appendices.

2 Model

2.1 Setup and Preliminaries

Let \(n\in {\mathbb {N}}\) and let E be the finite set \(\{1,\dots , n\}\). For each \(e\in E\) define the difference operator \(\varDelta _e\) acting on functions on E by the formula \([\varDelta _e \phi ](e^{\prime }) = \phi (e^{\prime }) - \phi (e)\). We identify the set of probability measures on E, \({\mathcal {P}}(E)\), with the simplex \(\varDelta (E) := \{x = (x_1,\dots , x_n)\in {\mathbb {R}}^n_+: \sum _i x_i = 1\}\) and endow it with the Euclidean distance. Throughout the paper, the notation \({\mathcal {P}}(\cdot )\) will be used to denote the set of Borel probability measures.

Let \(T>0\) be a finite time horizon. A process \((f_t)_{t\in [0,T]}\) will be denoted by its bold letter symbol \({\varvec{f}}\). Let \({\mathcal {C}} := C([0,T]; {\mathbb {R}})\) be the space of continuous real-valued functions on [0, T], and \({\mathcal {D}} := D([0,T]; {\mathbb {R}})\) the space of real-valued functions on [0, T] that are càdlàg on \([0,T)\) and continuous at \(t=T\). We denote the uniform norm by \(\Vert x\Vert _T := \sup _{s\in [0,T]}|x(s)|\), \(x\in {\mathcal {D}}\). We note that \(({\mathcal {C}}, \Vert \cdot \Vert _T)\) and \(({\mathcal {D}}, \Vert \cdot \Vert _T)\) are both Banach spaces, but only the former is separable. Let \({\mathcal {D}}_E \subset {\mathcal {D}}\) be the set of functions \(f\in {\mathcal {D}}\) such that \(f([0,T])\subset E\). Since \({\mathcal {D}}_E\) is a closed subset of \({\mathcal {D}}\), \(({\mathcal {D}}_E, \Vert \cdot \Vert _T)\) is a complete metric space.

Let I be the unit interval equipped with the Euclidean distance. We denote by \(\lambda _I\) and \({\mathcal {B}}(I)\) the Lebesgue measure and Borel \(\sigma \)-field on I, respectively. The set I is indexing the continuum of players in the graphon game. Throughout the paper, we will employ the notation \({\underline{\phi }} := (\phi (x))_{x\in I}\) for functions with domain I. Furthermore, in most cases we will denote the index argument with a superscript: \(\phi ^x := \phi (x)\).

This paper studies games with heterogeneous interactions. When the players in the game interact, the weight they give to each other’s actions is parameterized by their indices. This weighted averaging is captured as integration with respect to a graphon kernel function (see Sect. 1 for the discussion). Here, we give the most prevalent definition of a graphon.

Definition 1

A graphon is a symmetric Borel-measurable function, \(w : I\times I \rightarrow [0,1]\).

The graphon induces an operator W from \(L^2(I)\) to itself: for any \({\underline{\phi }}\in L^2(I)\),

$$\begin{aligned}{}[W{\underline{\phi }}]^x := \int _I w(x,y)\phi ^ydy. \end{aligned}$$
(6)

Remark 1

The definition of a graphon varies somewhat in the literature. Some authors require neither symmetry nor a non-negative range. As mentioned in the introduction, the graphon as defined here can be used to represent the limit of a sequence of dense random graphs as the number of nodes (players) goes to infinity. For example, the constant graphon \(w(x,y) = p\) is in a sense the limit of a sequence of Erdős-Rényi graphs with parameter \(p \in [0,1]\). Conversely, random graphs can be sampled from a graphon in at least two different ways: either we sample points in I and construct a weighted graph whose weights are given by the graphon, or we sample points in I and then sample edges with probabilities given by the graphon.
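As a small illustration of the two sampling procedures mentioned at the end of Remark 1, the following sketch (our own; the function names and the choice of constant graphon are ours) draws both a weighted graph and a simple graph from a given graphon:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graphs(w, n):
    """Sample an n-node weighted graph and a simple graph from the graphon w."""
    x = rng.uniform(size=n)                     # n points sampled in I
    weights = w(x[:, None], x[None, :])         # weighted graph: weights w(x_i, x_j)
    coins = rng.uniform(size=(n, n)) < weights  # Bernoulli edges, prob. w(x_i, x_j)
    edges = np.triu(coins, 1)
    return weights, edges | edges.T             # symmetrize, no self-loops

# The constant graphon w(x, y) = p yields an Erdos-Renyi graph with parameter p.
w_const = lambda x, y: np.full(np.broadcast(x, y).shape, 0.3)
weights, edges = sample_graphs(w_const, n=100)
```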

A Q-matrix with real-valued entries \(q_{i,j}\), \(i,j\in E\), is an \(n\times n\) matrix with non-negative off-diagonal entries such that:

$$\begin{aligned} q_{i,i} = -\sum _{j=1: j\ne i}^n q_{i,j},\quad i\in E. \end{aligned}$$
(7)
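For concreteness, here is a minimal sketch (all names are ours) that instantiates the controlled SIR rate matrix (1) and verifies the Q-matrix property (7):

```python
import numpy as np

def sir_q_matrix(beta, gamma, z):
    """The rate matrix (1), with rows and columns ordered (S, I, R)."""
    return np.array([[-beta * z, beta * z, 0.0],
                     [0.0, -gamma, gamma],
                     [0.0, 0.0, 0.0]])

def is_q_matrix(Q, tol=1e-12):
    """Check (7): non-negative off-diagonal entries and zero row sums."""
    off_diag = Q - np.diag(np.diag(Q))
    return bool((off_diag >= 0.0).all() and (np.abs(Q.sum(axis=1)) < tol).all())

assert is_q_matrix(sir_q_matrix(beta=0.25, gamma=0.1, z=0.3))
```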

In this paper, we will consider controlled Q-matrices with entries that depend on population aggregates. More specifically, we let for each \(x\in I\), \(q^x_{i,j}: A \times {\mathbb {R}} \rightarrow {\mathbb {R}}\), \(i,j\in E\), be bounded measurable functions such that \(Q^x(\alpha , z) := [q^x_{i,j}(\alpha ,z)]_{i,j=1}^n\) is a Q-matrix for all \((\alpha ,z) \in A\times {\mathbb {R}}\). We are going to work under the following assumption on the rates \(q^x_{i,j}\):

Condition 1

  1. (i)

    There is a finite constant \(q_{\max }>0\) such that for all \(x\in I\), \((i,j)\in E^2\), \(a \in A\), and \(z\in {\mathbb {R}}\):

    $$\begin{aligned} |q^x_{i,j}(a,z)| \le q_{\max }. \end{aligned}$$
    (8)
  2. (ii)

    There is a finite constant \(C>0\), possibly depending on n, such that for \(p = 1,2\) and for all \(x\in I\), \(t\in [0,T]\), \(\alpha \in A\), \((i,j)\in E^2\), k such that \(k+i, k+j \in E\), and \(z,z' \in {\mathbb {R}}\)

    $$\begin{aligned}&|q^x_{i, k+i}(\alpha , z) - q^x_{j, k+j}(\alpha , z')| \le C\left( {\mathbf {1}}_{\{i\ne j\}} + |z - z'|^p\right) . \end{aligned}$$

Since the state space is finite and the rates are assumed to be uniformly bounded in Condition 1.(i), the Hölder-continuity assumption of Condition 1.(ii) is less restrictive than it otherwise would have been.

2.2 The Finite State Graphon Games Model for Epidemiology

In this section, we give a descriptive introduction to the epidemiological graphon game without going into all the technical details. A rigorous mathematical motivation, built on the theory of Fubini extensions to accommodate a continuum of independent jump processes, is presented in Sect. 4.

On a probability space \((\varOmega , {\mathcal {F}}, {\mathbb {P}})\), we consider a continuum of E-valued pure jump processes \(\varvec{X}^x = (X^x_t)_{t\in [0,T]}\) indexed over \(x\in I\). That is, for each \(\omega \in \varOmega \), \({\varvec{X}}^x(\omega )\in {\mathcal {D}}_E\). The stochastic process \({\varvec{X}}^x\) models the state trajectory of player x. The initial state \(X^x_0\) is sampled from a distribution \(p_0^x \in {\mathcal {P}}(E)\). Each player implements a strategy \(\varvec{\alpha }^x\), a process taking values in the compact interval \(A\subset {\mathbb {R}}\) described in more detail below. The players interact and each player’s state trajectory is potentially influenced by the whole strategy profile \((\varvec{\alpha }^x)_{x\in I}\). To emphasize this dependence, we denote the state trajectory of player x by \(\varvec{X}^{\underline{\varvec{\alpha }},x}\) given a strategy profile \(\underline{\varvec{\alpha }}=(\varvec{\alpha }^x)_{x\in I}\). For all \(x\in I\), \({\varvec{X}}^{\underline{\varvec{\alpha }},x}\) is an E-valued pure jump process with rate matrix \(Q^x(\alpha ^x_t, Z^{\underline{\varvec{\alpha }},x}_t)\) at time \(t\in [0,T]\). The rate matrix is controlled by \(\varvec{\alpha }^x\) and influenced by the population aggregate \({\varvec{Z}}^{\underline{\varvec{\alpha }},x}\). The aggregates we consider are averages weighted by a graphon w, more specifically of the form \([W K({\underline{\alpha }}_t, {\underline{X}}^{\underline{\varvec{\alpha }}}_{t-})]\) (cf. (6)) for some function \(K : A\times E \rightarrow {\mathbb {R}}\). In Sect. 4 we prove that the aggregate is a deterministic function of time; henceforth we write

$$\begin{aligned} Z^{\underline{\varvec{\alpha }},x}_t = \int _I w(x,y){\mathbb {E}}\left[ K(\alpha ^y_t, X^{\underline{\varvec{\alpha }},y}_{t-})\right] dy, \end{aligned}$$
(9)

where w is a graphon.

K will be called the impact function since it quantifies how much a player’s joint state and control distribution impacts the aggregate variable. One example is the impact function in Sect. 1.1, \(K(\alpha ,e) = \alpha {\mathbf {1}}_{(e = {\mathsf {I}})}\), with the interpretation that the aggregate is the averaged contact factor control of the infected players. We have the following assumption on K:

Condition 2

There exist finite constants \(L_K,C_K>0\) such that for all \(a,a'\in A\) and \(e\in E\):

$$\begin{aligned} |K(a,e) - K(a',e)| \le L_K |a-a'|,\quad |K(a,e)| \le C_K. \end{aligned}$$

Our (for now formal) probabilistic definition of the interacting system of players is complete. The rigorous analysis of the system, the construction of a continuum of state trajectories, and conditions under which the aggregate is deterministic, i.e., of the form (9), is treated in detail in Sect. 4.

The key feature of the graphon game is that the aggregate variable is in general not the same for two distinct players. The players are therefore distinguishable and there is no “representative agent”, as in MFGs. As a direct consequence, there is no flow of player state distributions common to all players. Instead, each player has their private flow. Denote by \(p^{\underline{\varvec{\alpha }},x}(t,e)\) the probability that player \(x\in I\) is in state \(e\in E\) at time \(t\in [0,T]\), given that the population plays the strategy profile \(\underline{\varvec{\alpha }}\). We shall argue that player x’s state distribution flow \(\varvec{p}^{\underline{\varvec{\alpha }},x}\) solves the Kolmogorov forward equation

$$\begin{aligned}&\frac{d}{dt}p^{{\underline{\varvec{\alpha }},x}}(t) = p^{\underline{\varvec{\alpha }},x}(t) Q^x(\alpha ^x_t, Z^{\underline{\varvec{\alpha }},x}_t),\quad p^{\underline{\varvec{\alpha }},x}(t) := (p^{\underline{\varvec{\alpha }},x}(t,e))_{e\in E} \end{aligned}$$
(10)

with initial condition \(p^{\underline{\varvec{\alpha }},x}(0) = p_0^x\), and the player’s aggregate variable \(Z^{\underline{\varvec{\alpha }},x}_t\) is

$$\begin{aligned} Z^{\underline{\varvec{\alpha }},x}_t = \int _I w(x,y) \left( \int _{A\times E} K(a,e) \rho ^{\underline{\varvec{\alpha }},y}_t(da,de)\right) dy, \end{aligned}$$
(11)

with \(\rho ^{\underline{\varvec{\alpha }},y}_t\) being the joint probability law of control and state, \((\alpha ^y_t, X^{\underline{\varvec{\alpha }},y}_{t-})\).

We turn our focus to the players’ actions. We will make three standing assumptions that directly affect which strategies the players will choose. The first is that the environment that affects the players (and is known to them) varies smoothly over time, with no abrupt changes, for example, in lockdown penalties or expected recovery time. Secondly, if the players can impact their environment with their control, then the environment varies smoothly with their control too. For example, the risk of infection depends continuously on the agent’s level of cautiousness. Finally, the players’ strategies are decentralized, i.e., unaffected by the transition of any agent other than themselves. Under these circumstances, the players have no apparent reason to discontinuously change their action over time, except at times of transition between states. Such strategies (A-valued; decentralized; continuous in time between changes of the player’s own state) will be called admissible, and the set of admissible strategies is denoted by \({\mathbb {A}}\). The setting is further discussed in Sect. 4.

In this paper, we focus on the finite horizon problem where the cost is composed of two components: a running cost and a terminal cost. For each player \(x\in I\), the conditions that the running and terminal cost functions, \(f^x : [0,T]\times E\times {\mathbb {R}}\times A \rightarrow {\mathbb {R}}\) and \(g^x: E\times {\mathbb {R}}\rightarrow {\mathbb {R}}\), satisfy are given later in the text together with the theoretical results. The total expected cost to player x for playing the strategy \(\varvec{\sigma }\in {\mathbb {A}}\) while the population plays the strategy profile \(\underline{\varvec{\alpha }}\) is

$$\begin{aligned}&{\mathcal {J}}^x(\varvec{\sigma }; \underline{\varvec{\alpha }}) = {\mathbb {E}}\left[ \int _0^Tf^x(t, X^{\underline{\varvec{\alpha }},x}_t, Z^{\underline{\varvec{\alpha }},x}_t, \sigma _t)dt + g^x(X^{\underline{\varvec{\alpha }},x}_T, Z^{\underline{\varvec{\alpha }},x}_T)\right] . \end{aligned}$$
(12)

As we shall see, a change in player x’s control has no effect on the aggregate. Hence, the expected cost depends on the strategy profile only indirectly through the value of the aggregate variable. See Sect. 4 for the details. Therefore, hereinafter we shall use the notation \(J^x(\varvec{\sigma }; \varvec{Z}^{\underline{\varvec{\alpha }},x}) \) for the right-hand side of (12). In light of this, we employ the following definition of a Nash equilibrium in the graphon game:

Definition 2

The strategy profile \(\underline{\varvec{\alpha }}\) is a Nash equilibrium if it is admissible and no player can gain from a unilateral deviation, i.e.,

$$\begin{aligned} J^x(\varvec{\alpha }^x; {\varvec{Z}}^{\underline{\varvec{\alpha }},x}) \le J^x(\varvec{\sigma }; {\varvec{Z}}^{\underline{\varvec{\alpha }},x}), \quad \forall x\in I,\ \forall \varvec{\sigma }\in {\mathbb {A}}. \end{aligned}$$

2.3 Analysis of Finite State Graphon Games

By Definition 2, an admissible strategy profile \(\hat{\underline{\varvec{\alpha }}}\) is a Nash equilibrium if there exists an aggregate profile \(\hat{\underline{{\varvec{Z}}}} = (\hat{{\varvec{Z}}}^x)_{x\in I}\) such that

  • for all \(x \in I\), \(\hat{\varvec{\alpha }}^x\) minimizes \(J^x(\cdot ; \hat{{\varvec{Z}}}^x)\);

  • for all \(x \in I\), \(\hat{{\varvec{Z}}}^x\) is the aggregate perceived by player x if the population uses strategy profile \(\hat{\underline{\varvec{\alpha }}}\).

This alternative formulation has the advantage of splitting the characterization of the equilibrium into two parts: in the first part, the optimization problem faced by a single agent is solved while the aggregate is held fixed.

With the flow \({\varvec{Z}}^x\) fixed, we define the value function of player \(x\in I\), for \(t\in [0,T]\) and \(e\in E\), as

$$\begin{aligned}&u^x(t,e) := \\&\inf _{\varvec{\sigma }\in {\mathbb {A}}}{\mathbb {E}}\left[ \int _t^T f^x(s,X^{\varvec{\sigma },{\varvec{Z}}^x,x}_s, Z^x_s, \sigma _s)ds + g^x(X^{\varvec{\sigma },{\varvec{Z}}^x,x}_T, Z^x_T)\ |\ X^{\varvec{\sigma },{\varvec{Z}}^x,x}_t = e \right] \end{aligned}$$

where \({\varvec{X}}^{\varvec{\sigma },{\varvec{Z}}^x,x}\) is an E-valued pure jump process with transition rate matrix \(Q^x(\sigma _t, Z^x_t)\) at time \(t\in [0,T]\) and initial distribution \(p^x_0\).

To derive optimality conditions, we introduce the Hamiltonian of player x:

$$\begin{aligned} H^x(t,e, z, h, a) := \overrightarrow{{\mathbf {1}}_e} Q^x(a,z)h + f^x(t,e,z,a), \end{aligned}$$
(13)

where \(h\in {\mathbb {R}}^{n}\) and \(\overrightarrow{{\mathbf {1}}_e}\) is the coordinate (row) vector in direction e in \({\mathbb {R}}^n\). We assume that \(A \ni \alpha \mapsto H^x(t,e,z,h,\alpha )\) admits a unique measurable minimizer \({\hat{a}}^x_e(t,z,h)\) for all \((t, z, h)\) and define the minimized Hamiltonian of player x:

$$\begin{aligned} \hat{H}^x(t, e, z, h) := H^x(t,e,z,h,\hat{a}^x_e(t,z,h)). \end{aligned}$$
(14)

The dynamic programming principle of optimal control leads to the HJB equation for \(u^x\) that reads

$$\begin{aligned} {\dot{u}}^x(t,e) + \hat{H}^x(t,e,Z^x_t, u^x(t,\cdot )) = 0, \quad u^x(T,e) = g^x(e, Z^x_T), \end{aligned}$$
(15)

where \({\dot{u}}^x(t,e)\) denotes the time derivative of \({u}^x(t,e)\). Noting that \(\overrightarrow{{\mathbf {1}}_e} Q(a,z)h = \overrightarrow{{\mathbf {1}}_e} Q(a,z) \varDelta _e h\), (15) can be equivalently written as

$$\begin{aligned} {\dot{u}}^x(t,e) + \hat{H}^x(t,e,Z^x_t, \varDelta _e u^x(t,\cdot )) = 0, \quad u^x(T,e) = g^x(e, Z^x_T). \end{aligned}$$
(16)

In the following theorem, we verify that the solution of the HJB equation is indeed the value function of the infinitesimal agent’s control problem, and we provide an expression for an optimal Markovian control in terms of this value function and the aggregate.

Theorem 1

If \(u^x : [0,T]\times E \ni (t,e) \mapsto u^x(t,e)\in {\mathbb {R}}\) is a continuously differentiable solution to the HJB equation (15), then \(u^x\) is the value function of the optimal control problem when the flow \({\varvec{Z}}^x\) is given. Moreover, the function

$$\begin{aligned} {\hat{\phi }}^x(t,e) = \hat{a}_e^x(t,Z^x_t, u^x(t,\cdot )) \end{aligned}$$
(17)

gives an optimal Markovian control.

Next, we prove the existence of a solution to the coupled Kolmogorov-HJB system at equilibrium. For that purpose we impose the following condition:

Condition 3

  1. (i)

    There exist two functions \(Q_1\) and \(Q_2\) with \(Q_2\) locally Lipschitz such that \(Q(a,z) = Q_1(z) + aQ_2(z)\) for all \(a \in A, z \in {\mathbb {R}}\).

  2. (ii)

    \(a \mapsto f^x(t,e,z,a)\) is continuously differentiable, and as a function of a it is strongly convex, uniformly in \((t, e, z)\), with constant \(\lambda \); \((t,z) \mapsto \partial _a f^x(t,e,z,a)\) is locally Lipschitz continuous, uniformly in \(a \in A, e \in E\).

  3. (iii)

    f and g are uniformly bounded and \(z \mapsto f^x(t,e,z,a)\) is continuous.

As a consequence of Conditions 3(i) and 3(ii), \(a \mapsto H^x(t,e, z, h, a)\) is once continuously differentiable and strictly convex, and \((t,z,h) \mapsto {\hat{a}}_e(t,z,h)\) is locally Lipschitz continuous (see Lemma 1 in Appendix A.2). We denote the Lipschitz constant of \([-c,c]\ni z \mapsto \hat{a}(t,z,h)\) by \(L_{{\hat{a}}}(c)\); it can be bounded from above using smoothness properties of \(Q^x\) and \(f^x\). More specifically, \(L_{{\hat{a}}}\) depends on the local Lipschitz coefficients of \(z \mapsto \partial _a Q^x(a,z)\) and \((t,z,h) \mapsto \partial _a f^x(t,e,z,a)\) (see the proof of Lemma 1). Recall that \(C_K\) denotes the uniform upper bound on the impact function K guaranteed by Condition 2.

Theorem 2

Assume Conditions 1, 2, 3 hold. If \(\Vert w\Vert _{L^2(I\times I)}L_KL_{{\hat{a}}}(C_K) < 1\), then the coupled Kolmogorov–HJB forward-backward system at equilibrium

$$\begin{aligned} \left\{ \begin{aligned}&{\dot{p}}^x(t, e) = \sum _{e^{\prime } \in E} q^x_{e^{\prime },e}(\hat{\phi }^x(t, e^{\prime }), Z_t^x) p^x(t, e^{\prime }), \quad \forall e \in E,\ t\in [0,T]\\&{\dot{u}}^x(t, e) = -{\hat{H}}^x(t, e, Z_t^x, \varDelta _e u^x(t, \cdot )), \quad \forall e \in E,\ t \in [0,T]\\&Z_t^x = \int _I w(x,y) \left( \sum _{e\in E}K({{\hat{\phi }}}^y(t,e), e)p^y(t,e)\right) dy \\&u^x(T,e) = g^x(e, Z^x_T),\quad p^x(0,e) = p^x_0(e), \\&{\hat{\phi }}^x(t,e) = \hat{a}_e^x(t,Z^x_t, u^x(t,\cdot )) \end{aligned} \right. \end{aligned}$$
(18)

admits a bounded solution \((u, p)\) in \(C([0,T]; L^2(I\times E)\times L^2(I\times E))\) such that for each \((t,x)\in [0,T]\times I\), \(p^x(t, \cdot )\) is a probability mass function on E.

2.4 The Finite State Graphon Game for SIR with Contact Factor Control

Here, we introduce a model which we shall use as a test bed for the numerical algorithm presented in Sect. 3. It is inspired by the first example scenario in [2] and builds on the case discussed in Sect. 1.1. It is a compartmental model with four possible states: \((\mathsf S)\)usceptible, \(({\mathsf {I}})\)nfected, \(({\mathsf {R}})\)ecovered and \(({\mathsf {D}})\)eceased. The agents choose their level of contact factor. A regulator (government or health care authority) recommends state-dependent contact factor levels to the agents, denoted by \(\varvec{\lambda }^{(e)}\), \(e\in \{{\mathsf {S}}, {\mathsf {I}}, {\mathsf {R}}\}\). For enforcement purposes, it also sets penalties for deviating from these levels. The cost has three components: the first penalizes the agent for not following the regulator’s recommended contact factor level, the second is the cost of treatment for an infected agent (this cost can be player specific due to individual differences in health care plan coverage, etc.), and the last is the cost of being deceased. In this setting, the running cost is written as

$$\begin{aligned} \begin{aligned} f^x(t,e,z,\alpha ) = \frac{c_\lambda }{2}\left( \lambda ^{({\mathsf {S}})}(t) - \alpha \right) ^2 {\mathbf {1}}_{(e = {\mathsf {S}})} + \frac{1}{2}\left( \lambda ^{(\mathsf R)}(t)-\alpha \right) ^2 {\mathbf {1}}_{(e = {\mathsf {R}})}&\\ + \left( \frac{1}{2}\left( \lambda ^{({\mathsf {I}})}(t) - \alpha \right) ^2 + c_I(x)\right) {\mathbf {1}}_{(e = {\mathsf {I}})} + c_D(x) {\mathbf {1}}_{(e = {\mathsf {D}})},&\end{aligned} \end{aligned}$$
(19)

where \(c_I\) and \(c_D\) are nonnegative cost functions (of the player index). We set the terminal cost to be identically zero, \(g^x(e,z) = 0\) for all \((e,z)\in E \times {\mathbb {R}}\). The transition rate matrix for player x is given as:

$$\begin{aligned} Q^x(\alpha , z) = \begin{bmatrix} -\beta (x)\alpha z & \beta (x)\alpha z & 0 & 0 \\ 0 & -\gamma (x) & \rho (x)\gamma (x) & (1-\rho (x))\gamma (x) \\ \kappa (x) & 0 & -\kappa (x) & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \end{aligned}$$
(20)

where the rows and columns are ordered as \({\mathsf {S}}, {\mathsf {I}}, {\mathsf {R}}, {\mathsf {D}}\), and \(\beta , \gamma , \kappa ,\rho \) are nonnegative parameter functions, \(0\le \rho \le 1\), determining the rates of infection, recovery, reinfection, and decease; in each row, the diagonal entry is the negative of the sum of the other terms on the same row. In line with the discussion in Sect. 1.1, the transition rate from state \({\mathsf {S}}\) to \({\mathsf {I}}\) depends on the agent’s own decision and the aggregate variable. Furthermore, when an infected agent transitions, she goes to state \({\mathsf {R}}\) with probability \(\rho \) and to state \({\mathsf {D}}\) with probability \((1-\rho )\). Then, for player x the optimality conditions yield

$$\begin{aligned}&{{\hat{\phi }}}^x(t,{\mathsf {S}}) = \lambda ^{({\mathsf {S}})}(t) + \frac{\beta (x)}{c_\lambda }Z^x_t(u^x(t,{\mathsf {S}}) - u^x(t,{\mathsf {I}})) \\&{{\hat{\phi }}}^x(t,{\mathsf {I}}) = \lambda ^{({\mathsf {I}})}(t) \\&{{\hat{\phi }}}^x(t,{\mathsf {R}}) = \lambda ^{({\mathsf {R}})}(t) \end{aligned}$$

and the forward-backward graphon ODE system reads:

$$\begin{aligned}&{\dot{p}}^x(t, \cdot ) = p^x(t, \cdot )Q^x({{\hat{\phi }}}^x(t, {\mathsf {S}}), Z^x_t) \\&{\dot{u}}^x(t,{\mathsf {S}})= \beta (x){{\hat{\phi }}}^x(t,\mathsf S)Z^x_t\big (u^x(t,{\mathsf {S}}) - u^x(t,{\mathsf {I}})\big ) -\frac{c_\lambda }{2} \big (\lambda ^{({\mathsf {S}})}(t)-{{\hat{\phi }}}^x(t, {\mathsf {S}})\big )^2, \\&{\dot{u}}^x(t,{\mathsf {I}}) = \rho (x)\gamma (x)\left( u^x(t,{\mathsf {I}}) - u^x(t,{\mathsf {R}})\right) \\&\qquad \qquad \quad + (1-\rho (x))\gamma (x)\left( u^x(t,{\mathsf {I}}) - u^x(t,{\mathsf {D}})\right) - c_I(x), \\&{\dot{u}}^x(t,{\mathsf {R}})= \kappa (x)\left( u^x(t,{\mathsf {R}}) - u^x(t,{\mathsf {S}})\right) , \\&{\dot{u}}^x(t,{\mathsf {D}})= -c_D(x), \\&u^x(T,e)= 0,\quad p^x(0,e) = p^x_0(e),\quad e\in \{{\mathsf {S}}, {\mathsf {I}}, {\mathsf {R}}\}, \\&Z^x_t= \int _I w(x,y){{\hat{\phi }}}^y(t,{\mathsf {I}})p^y(t,{\mathsf {I}}) dy,\quad t\in [0,T],\ x\in I. \end{aligned}$$
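For the reader’s convenience, the expression for \({{\hat{\phi }}}^x(t,{\mathsf {S}})\) follows from a first-order condition: by (13), (19) and (20), the control in state \({\mathsf {S}}\) minimizes

$$\begin{aligned} a \mapsto \beta (x) a z\big (u^x(t,{\mathsf {I}}) - u^x(t,{\mathsf {S}})\big ) + \frac{c_\lambda }{2}\big (\lambda ^{({\mathsf {S}})}(t) - a\big )^2, \end{aligned}$$

with \(z = Z^x_t\), whose unique critical point is \(a = \lambda ^{({\mathsf {S}})}(t) + \frac{\beta (x)}{c_\lambda }Z^x_t\big (u^x(t,{\mathsf {S}}) - u^x(t,{\mathsf {I}})\big )\). In states \({\mathsf {I}}\) and \({\mathsf {R}}\) the transition rates do not depend on the control, so the quadratic penalties are minimized exactly at the recommended levels.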

We note that with a careful choice of \(\beta (x)\), \(c_\lambda \), and \((\varvec{\lambda }^{(e)})_{e\in E}\) this system will satisfy the sufficient condition for existence from Theorem 2.

3 Numerical Approach

We rewrite the continuum of FBODEs (18) as the solution of a minimization problem: minimize

$$\begin{aligned} {\mathbb {J}}(\theta ) = \int _{I} \sum _{e \in E} |u^x_{\theta }(T,e) - g^x(e, Z_{\theta ,T}^x)|^2 dx, \end{aligned}$$
(21)

where \((p_{\theta }, u_{\theta })\) solve the forward-forward continuous system of ODEs:

$$\begin{aligned} \left\{ \begin{aligned}&{\dot{p}}^x_{\theta }(t, e) = \sum _{e^{\prime } \in E} q^x_{e^{\prime },e}\big ({\hat{a}}^x(t, e^{\prime }, Z^x_{\theta ,t}, u^x_{\theta }(t,\cdot )), Z_{\theta ,t}^x\big ) p^x_{\theta }(t, e^{\prime }), \\&{\dot{u}}^x_{\theta }(t, e) = -{\hat{H}}^x(t, e, Z_{\theta ,t}^x, \varDelta _e u^x_{\theta }(t, \cdot )), \\&Z_{\theta ,t}^x = \int _I w(x,y) \left( \sum _{e \in E} K\big ({\hat{a}}^y(t, e, Z^y_{\theta ,t}, u^y_{\theta }(t, \cdot )), e \big ) p^{y}_{\theta }(t,e) \right) \lambda (dy) \\&u^x_{\theta }(0,e) = \varphi ^x_{\theta }(e), \quad p^x_{\theta }(0,e) = p^x_0(e), \quad e \in E,\ t\in [0,T]. \end{aligned} \right. \end{aligned}$$
(22)

This “shooting” strategy is reminiscent of the one used, e.g., in [20, 21, 41] for stochastic optimal control problems and, e.g., in [2, 11] for mean field games in a numerical context. However, here we deal with a continuum of ODEs rather than a finite number of stochastic differential equations. Here \(\theta \) is the parameter of the function \(\varphi \) that replaces the initial condition of u. Typically, \(\theta \) is a real-valued vector whose dimension is the number of degrees of freedom in the parametric function \(\varphi \). In general, the true initial condition for \(u^x\) is a nonlinear function with a potentially complicated shape, so we need to choose a rich enough class of parametric functions. In the implementation, we used a deep neural network with a feedforward architecture. See, e.g., [10] for a description of the feedforward neural network architecture we used in the implementation.

Our strategy to find \(\theta \) is to run a gradient-descent based method. To alleviate the computational cost and to introduce some randomness, at each iteration we replace the above cost \({\mathbb {J}}\) by an empirical average over a finite set of indices x, which is also used to approximate the value of the aggregate quantities. More precisely, for a finite set \({\mathbf {S}}\) of N indices, we introduce

$$\begin{aligned} {\mathbb {J}}^N(\theta ,{\mathbf {S}}) = \frac{1}{N}\sum _{x \in {\mathbf {S}}} \sum _{e \in E} |u^x_{\theta ,{\mathbf {S}}}(T,e) - g^x(e, Z_{\theta ,{\mathbf {S}},T}^x)|^2, \end{aligned}$$
(23)

where \((p_{\theta ,{\mathbf {S}}}, u_{\theta ,{\mathbf {S}}})\) solves the forward-forward (finite) system of ODEs:

$$\begin{aligned} \left\{ \begin{aligned}&{\dot{p}}^x_{\theta ,{\mathbf {S}}}(t, e) = \sum _{e^{\prime } \in E} q^x_{e^{\prime },e}\big ({\hat{a}}^x(t, e^{\prime }, Z^x_{\theta ,{\mathbf {S}},t}, u^x_{\theta ,{\mathbf {S}}}(t,\cdot )), Z_{\theta ,{\mathbf {S}},t}^x\big ) p^x_{\theta ,{\mathbf {S}}}(t, e^{\prime }), \\&{\dot{u}}^x_{\theta ,{\mathbf {S}}}(t, e) = -{\hat{H}}^x(t, e, Z_{\theta ,{\mathbf {S}},t}^x, \varDelta _e u^x_{\theta ,{\mathbf {S}}}(t, \cdot )), \\&Z_{\theta ,{\mathbf {S}},t}^x = \frac{1}{N}\sum _{y \in {\mathbf {S}}} w(x,y) \left( \sum _{e\in E} K\big ({\hat{a}}^y(t, e, Z^y_{\theta ,{\mathbf {S}},t}, u^y_{\theta ,{\mathbf {S}}}(t,\cdot )), e\big ) p^{y}_{\theta ,{\mathbf {S}}}(t,e) \right) , \\&u^x_{\theta ,{\mathbf {S}}}(0,e) = \varphi ^x_{\theta }(e), \quad p^x_{\theta ,{\mathbf {S}}}(0,e) = p^x_0(e), \quad e \in E,\ x \in {\mathbf {S}}. \end{aligned} \right. \end{aligned}$$
(24)
Algorithm: DeepGraphonGame (pseudocode omitted)
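To fix ideas, the following condensed PyTorch sketch (our own simplification; all parameter values are hypothetical and the graphon is an arbitrary example) implements the DeepGraphonGame loop for the SIR model of Sect. 2.4 with \(g\equiv 0\): a feedforward network \(\varphi _\theta \) parameterizes the initial condition \(u^x_{\theta }(0,\cdot )\), the forward-forward system (24) is integrated by forward Euler on a sampled batch of indices, and \(\theta \) is updated by stochastic gradient descent on the loss (23).

```python
import torch

torch.manual_seed(0)

# Hypothetical model parameters (order of states: S, I, R, D).
T, n_steps, batch = 30.0, 200, 256
dt = T / n_steps
beta, gamma, kappa, rho = 0.25, 0.1, 0.0, 0.9
c_lam, c_I, c_D = 10.0, 0.5, 1.0
lam_S = lam_I = 1.0                             # recommended contact levels
graphon = lambda x, y: torch.outer(x, y) ** 0.3 # an example graphon

# phi_theta : I -> R^4, the learned initial condition u^x_theta(0, .).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 4),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for iteration in range(500):
    x = torch.rand(batch)                       # sampled index set S
    W = graphon(x, x)
    p = torch.tensor([0.95, 0.05, 0.0, 0.0]).repeat(batch, 1)  # p^x_0
    u = net(x.unsqueeze(1))                     # u^x(0, .) = phi_theta(x)
    for _ in range(n_steps):
        uS, uI, uR, uD = u.unbind(1)
        Z = W @ (lam_I * p[:, 1]) / batch       # aggregate; infected play lam_I
        a = lam_S + beta / c_lam * Z * (uS - uI)  # optimal control in state S
        flow_SI = beta * a * Z * p[:, 0]        # infection flow S -> I
        dp = torch.stack([                      # Kolmogorov forward equation
            -flow_SI + kappa * p[:, 2],
            flow_SI - gamma * p[:, 1],
            rho * gamma * p[:, 1] - kappa * p[:, 2],
            (1 - rho) * gamma * p[:, 1],
        ], dim=1)
        du = torch.stack([                      # HJB integrated forward in time
            beta * a * Z * (uS - uI) - 0.5 * c_lam * (lam_S - a) ** 2,
            rho * gamma * (uI - uR) + (1 - rho) * gamma * (uI - uD) - c_I,
            kappa * (uR - uS),
            -c_D * torch.ones_like(uS),
        ], dim=1)
        p, u = p + dt * dp, u + dt * du
    loss = (u ** 2).sum(dim=1).mean()           # match u(T, .) = g = 0
    opt.zero_grad(); loss.backward(); opt.step()
```

Differentiating the loss backpropagates through the whole Euler time-stepping loop; the number of time steps and the batch size therefore trade accuracy against memory and runtime.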

3.1 Piecewise Constant Graphon

Let \(m^1, m^2,\dots , m^K\) be non-negative numbers such that \(\sum _{k=1}^K m^k =1\). We divide the player population into K groups, \(B^1,\dots , B^K\), placing all players with index \(x\in [0,m^1)\) into group \(B^1\), etc. We assume that players belonging to the same group are indistinguishable. For example, all players within a group must have the same recovery rate \(\gamma \) and if \(x,x'\in I\) are the indices of two players in the same group, then \(w(x,y) = w(x',y)\) for all \(y\in I\). In this situation, we only need to specify the graphon’s values on each block of indices corresponding to a group, since the graphon is a constant on each block. Let us identify the group \(B^i\) with its index block (or set). We can compactly represent the interaction weights between the blocks with a connection matrix \([w_{ij}]_{i,j=1}^K\), where \(w_{ij}\) is the connection strength between players in block \(B^i\) and players in block \(B^j\). Then, for all players x in block \(B^i\)

$$\begin{aligned} Z_t^x = \int _I w(x,y){\hat{\phi }}^y(t,{\mathsf {I}})p^y(t,{\mathsf {I}})dy = \sum _{k=1}^K w_{ik} \lambda ^{({\mathsf {I}},k)} p^k(t,{\mathsf {I}}) m^k, \end{aligned}$$
(25)

which is constant over \(x\in B^i\). This constancy over blocks is a feature we see whenever a piecewise constant graphon is used. Furthermore, we assume that \(\beta \), \(\gamma \), and \(\kappa \) are constant over each block but may differ between blocks. This opens up the possibility of solving the graphon game with classical numerical methods and gives us a way of evaluating the DeepGraphonGame algorithm.
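One such classical method, sketched below for the SIR model of Sect. 2.4 with \(g\equiv 0\) (all parameter values hypothetical), is a damped fixed-point (Picard) iteration on the block aggregates: given a guess for \({\varvec{Z}}\), sweep the HJB equations backward and the Kolmogorov equations forward block by block, recompute the aggregate via (25), and mix it with the previous guess. As in Theorem 2, convergence is only guaranteed under a smallness condition on the interactions.

```python
import numpy as np

# Hypothetical block-model data (order of states: S, I, R, D).
K, n_steps, T = 3, 600, 30.0
dt = T / n_steps
W = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.1],
              [0.1, 0.1, 1.0]])             # connection matrix [w_ij]
m = np.array([0.5, 0.3, 0.2])               # block masses m^k
beta, gamma, kappa, rho = 0.25, 0.1, 0.0, 0.9
c_lam, c_I, c_D = 10.0, 0.5, 1.0
lam_S = lam_I = 1.0
p0 = np.array([0.95, 0.05, 0.0, 0.0])

Z = np.zeros((K, n_steps + 1))              # initial guess for the aggregate
for _ in range(200):                        # Picard iterations
    # Backward sweep: u(T, .) = 0 and du/dt = -H_hat, per block.
    u = np.zeros((K, n_steps + 1, 4))
    for i in reversed(range(n_steps)):
        uS, uI, uR, uD = u[:, i + 1].T
        a = lam_S + beta / c_lam * Z[:, i + 1] * (uS - uI)
        du = np.stack([
            beta * a * Z[:, i + 1] * (uS - uI) - 0.5 * c_lam * (lam_S - a) ** 2,
            rho * gamma * (uI - uR) + (1 - rho) * gamma * (uI - uD) - c_I,
            kappa * (uR - uS),
            -c_D * np.ones(K),
        ], axis=1)
        u[:, i] = u[:, i + 1] - dt * du
    # Forward sweep: Kolmogorov equations, refreshing the aggregate via (25).
    p = np.tile(p0, (K, 1))
    Z_new = np.zeros_like(Z)
    for i in range(n_steps + 1):
        Z_new[:, i] = W @ (lam_I * p[:, 1] * m)
        if i == n_steps:
            break
        uS, uI = u[:, i, 0], u[:, i, 1]
        a = lam_S + beta / c_lam * Z_new[:, i] * (uS - uI)
        flow_SI = beta * a * Z_new[:, i] * p[:, 0]
        dp = np.stack([
            -flow_SI + kappa * p[:, 2],
            flow_SI - gamma * p[:, 1],
            rho * gamma * p[:, 1] - kappa * p[:, 2],
            (1 - rho) * gamma * p[:, 1],
        ], axis=1)
        p = p + dt * dp
    if np.abs(Z_new - Z).max() < 1e-9:
        break
    Z = 0.5 * Z + 0.5 * Z_new               # damped aggregate update
```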

Fig. 1

Top: Density of deceased people for age groups 45-65 (left) and 65+ (right) under different policies, where NL: No Lockdown, QI: Quarantine for Infected, AL: Age Specific Lockdown, FL: Full Lockdown. Bottom: Comparison plots for the No Lockdown and Quarantine for Infected policies: density of infected people (left), aggregate \({\varvec{Z}}\) (middle), and control of susceptible people (right), plotted for each of the 4 age groups under both policies

Turning to the remaining example setup, we specify the cost structure as a particular case of the general formulation of Sect. 2.4. We assume that the regulator has set \(\varvec{\lambda }^{({\mathsf {S}},k)}, \varvec{\lambda }^{({\mathsf {I}},k)}, \varvec{\lambda }^{({\mathsf {R}},k)}\), \(k=1,\dots , K\), differently for each block.

In this scenario, we first study the effects of policy on the death ratio in the age groups. The policies compared are no lockdown (NL), quarantine for infected (QI), age specific lockdown (AL), and full lockdown (FL). We can see in Fig. 1 that the death ratio decreases by nearly 30% if infected individuals are quarantined, compared to no lockdown. Furthermore, if an age specific lockdown is implemented, we see that even more lives are saved while not deteriorating the economy. Zooming in on the comparison of no lockdown and quarantine for infected (second row of Fig. 1), we note that susceptible individuals use a smaller contact factor when there is no quarantine in place. Since the risk of getting infected is then higher, they choose to be more cautious (Tables 1 and 2).

Secondly, in the same scenario, we model multiple cities with different attributes and study the effects of travel restrictions. In this experiment, we compare a universal no-travel policy to policies where traveling in or out of one of the cities is restricted (totaling four policies in the comparison). City 1 is a highly populated city with a more contagious virus variant; city 2 also has this variant but is a small city. City 3 is a highly populated city with a less contagious virus variant. For visual simplification, we assume that there are no deaths (i.e. \(\rho =1\)). In Fig. 2, we can see that the infected-density curve is flattened the most if city 1 has travel restrictions. The reason is the existence of the more contagious variant and the large size of city 1. We note that when this restriction is implemented, the susceptible individuals feel relieved and increase their contact factor control (Tables 3 and 4).

Table 1 Parameters of the experiment with different age groups
Table 2 Parameters used in the experiments with age-group specific lockdowns (NL: No Lockdown, QI: Quarantine for Infected, AL: Age Specific Lockdown, FL: Full Lockdown): \(\lambda ^{{\mathsf {S}}}\) and \(\lambda ^{{\mathsf {I}}}\) vary between the age groups.
Fig. 2

Density of infected people in the whole population (including all cities) under 4 different policies, NL: No Lockdown, C1L: City 1 Lockdown, C2L: City 2 Lockdown, C3L: City 3 Lockdown (left). Comparison Plots for No Lockdown and City 1 Lockdown policies: Density of infected people in each city (middle left), Aggregate \({\varvec{Z}}\) in each city (middle right), Control of susceptible people in each city (right)

Table 3 Connection matrix (left) and Parameters (right) used in the experiment with different cities
Table 4 Parameters used in the experiment with different cities

3.1.1 Sanity Check for the Numerical Approach

Here, we test the DeepGraphonGame algorithm by comparing its solution to the solution obtained by solving the ODE system for the cities example when city 1 has travel restrictions (Table 4). As can be seen in Fig. 3, the DeepGraphonGame algorithm approximates the exact result well. A plot of the function \(x\mapsto u^x(0,{\mathsf {S}})\), where \(u^x\) is the numerically computed value function, can be seen on the right side of the bottom row in Fig. 3. We can clearly see that agents in the same block have the same \(u^x(0,{\mathsf {S}})\) values. From this we infer that the DeepGraphonGame algorithm performs well when learning this piecewise constant function.

Fig. 3

Comparison of the ODE and NN results when there is a lockdown for City 1: Top: State densities in City 1 (left), City 2 (middle) and City 3 (right). Middle: Value functions given each state in City 1 (left), City 2 (middle) and City 3 (right). Bottom: Aggregate \({\varvec{Z}}\) (left), Control of susceptible people (middle) and value function at time 0 given state is susceptible as a function of index, \(u^x(0, {\mathsf {S}})\) (right)

3.2 General Graphon

To show the scalability of the proposed numerical approach, we now focus on the second example in [2], the SEIRD model, in which the state \(({\mathsf {E}})\)xposed is added. An individual is in state \({\mathsf {E}}\) when infected but not yet infectious. Hence, agents move from \({\mathsf {S}}\) to \({\mathsf {E}}\) and then to \({\mathsf {I}}\), and the infection rate from \({\mathsf {S}}\) to \({\mathsf {E}}\) depends on the proportion of infected agents. The diagram of the dynamics can be seen in Fig. 4. The cost structure is similar to the one used in Sect. 2.4. After introducing the state \({\mathsf {E}}\), we set

$$\begin{aligned} \begin{aligned} f^x(t,e,z,\alpha )&= \frac{c_\lambda }{2}\left( \lambda ^{({\mathsf {S}})}(t) - \alpha \right) ^2 {\mathbf {1}}_{(e = {\mathsf {S}})} + \frac{c_\lambda }{2}\left( \lambda ^{({\mathsf {E}})}(t) - \alpha \right) ^2 {\mathbf {1}}_{(e = {\mathsf {E}})}\, \\&\quad + \left( \frac{1}{2}\left( \lambda ^{({\mathsf {I}})}(t) - \alpha \right) ^2 + c_I(x)\right) {\mathbf {1}}_{(e = {\mathsf {I}})} + \frac{1}{2}\left( \lambda ^{({\mathsf {R}})}(t)-\alpha \right) ^2 \mathbf{1}_{(e = {\mathsf {R}})} + c_D(x) {\mathbf {1}}_{(e = {\mathsf {D}})}. \end{aligned} \end{aligned}$$

In this example, we focus on an application where the agents are not homogeneous over blocks. The interaction strength between individuals x and y is now given by the power law graphon \(w(x,y) = (xy)^{-g}\), where \(-\infty <g\le 0\) is a constant. Intuitively, the power law graphon models interactions in a population where a small number of individuals are responsible for a large number of the interactions. For example, a population with superspreaders can be modeled with this graphon. The model with an underlying power law graphon interaction requires us to solve a continuum of coupled ODEs, which is not computationally feasible. However, by using the DeepGraphonGame algorithm, the solution can be learned by using simulated particles, i.e., agents.

Fig. 4

Diagram of SEIRD model for individual x

According to the CDC, COVID-19 reinfection is very rare; therefore, we assume that there is no reinfection (i.e. \(\kappa =0\)). Furthermore, the recovery duration is around 10 days from symptom onset; for this reason, we assume \(\gamma =0.1\; \text {days}^{-1}\). According to a recent study conducted by Lauer et al. [56], an exposed person begins to show symptoms after around 5 days. Based on this observation, we choose \(\epsilon = 0.2\; \text {days}^{-1}\). Finally, the basic reproduction number estimate \(R_0=2\) used by the CDC leads us to set \(\beta = R_0 \times \gamma = 0.2\) in our simulations (Table 5).

The experimental results for a sampled finite subset of the agent population are presented in Fig. 5. In the figure, each line corresponds to one agent, and the color of the plot gets darker as the index of the agent (i.e., x) increases. Our first observation is that as the index of the agent increases, the aggregate also increases. In response to this high aggregate, the agent lowers its contact rate in order to protect itself. However, this protection is not enough to neutralize the effects of the high levels of the aggregate, and the agent’s probability of being infected remains elevated.

Table 5 Parameters in the SEIRD model experiments with power law graphon
Fig. 5

Results of agents from a sampled population. In each plot, colors are chosen from a continuous colormap to represent the index of the agents with the following convention: If the index x of a player is higher, the color of the line is darker. Top: Probability of being susceptible (left), exposed (middle) and infected (right). Bottom: Probability of being recovered (left), Aggregate \({\varvec{Z}}\) (middle), Control at the susceptible state (right)

4 The Probabilistic Approach to Finite State Graphon Games

This section contains a closer study of the continuum of interacting jump processes that constitute the graphon game dynamics. Going back to the informal discussion in the introductory Sect. 1.1, it is not clear that there would be an adequate law of large numbers so that (4) converges to (5) since the averaged random variables are dependent. A common approach in economic theory for this situation is to consider the continuum limit and average the continuum of random variables with respect to a non-atomic probability measure over I. Using the example from Sect. 1.1 again, a continuum limit of (4) is

$$\begin{aligned} \beta \alpha ^{x_j}_t \int _I w(x_j, y)\alpha ^y_t{\mathbf {1}}_{{\mathsf {I}}}(X^y_{t-})dy. \end{aligned}$$

However, the integral in the expression above is ill-defined. There is an issue with constructing a continuum of independent random variables (here, the driving Poisson noise) that are jointly measurable in the sample and the index. If the construction is done in the usual way via the Kolmogorov construction, then almost all random variables are essentially equal to an arbitrarily given function on the index space (i.e., as random variables they are constants). Hence, in any interesting case, the function \(y \mapsto X^y_{t-}\) will not be measurable with respect to the Lebesgue measure. One solution proposed by economists is to extend the usual probability space to a so-called Fubini extension [52], a probability space over \(\varOmega \times I\) where Fubini’s theorem holds. On the Fubini extension, a continuum of random variables can be constructed that are essentially pairwise independent (e.p.i.; see Theorem 3 for the definition) and jointly measurable in sample and index. Moreover, there is hope for an exact law of large numbers [53], justifying our assumption about the determinism of the aggregate variable in the previous sections. We will construct a Fubini extension that carries a continuum of e.p.i. Poisson random measures. Then, each player’s path will be defined by representing the counting processes associated with the pure jump process as stochastic integrals with respect to a family of independent Poisson random measures with Lebesgue mean measure on \({\mathbb {R}} \times [0, T]\), as suggested in Skorokhod [51] and Grigelionis [34].

4.1 Theoretical Background and Definitions

4.1.1 Poisson Random Measures

Let us denote by \((\varGamma , {\mathcal {G}}, {\mathbb {G}})\) the measure space \(({\mathbb {R}}\times [0,T], {\mathcal {B}}({\mathbb {R}}\times [0,T]), \text {Leb}_{{\mathbb {R}}\times [0,T]})\). We first recall the definition of a Poisson random measure. A family \(\bigl (N_\cdot (G)\bigr )_{G\in {\mathcal {G}}}\) of random variables defined on some probability space \((H, {\mathcal {H}}, {\mathbb {H}})\) is said to be a Poisson random measure with mean measure \({\mathbb {G}}\) if

  • for all \(G\in {\mathcal {G}}\) such that \({\mathbb {G}}(G)<\infty \), \(N_\cdot (G)\) is a Poisson random variable with rate \({\mathbb {G}}(G)\);

  • the random variables \(N_\cdot (G_1),\dots , N_\cdot (G_n)\) are mutually independent whenever the sets \(G_1,\dots , G_n \in {\mathcal {G}}\) have finite \({\mathbb {G}}\)-measures and are disjoint;

  • for all \(\omega \in H\), \(N_\omega (\cdot )\) is a measure on \((\varGamma ,{\mathcal {G}})\).

We define \( {\tilde{M}} := \{\mu \ |\ \mu \text { is a }\sigma \text {-finite non-negative measure on }(\varGamma ,{\mathcal {G}})\}, \) and the subset of locally finite measures by

$$\begin{aligned} M := \{\mu \in {\tilde{M}}\ |\ \mu ({{\tilde{G}}}) < \infty \text { for all bounded } {{\tilde{G}}} \in {\mathcal {G}}\}. \end{aligned}$$

For all bounded measurable \({{\tilde{G}}}\), define the mappings \( I_{{{\tilde{G}}}} : {\tilde{M}} \ni \mu \mapsto \mu ({{\tilde{G}}}) \in {\mathbb {R}}. \) Let \(\tilde{{\mathcal {M}}}\) and \({\mathcal {M}}\) be the \(\sigma \)-algebras induced by the mappings \(I_{{{\tilde{G}}}}\) on \({\tilde{M}}\) and M respectively. For us, a random measure will be a measurable function from \((H, {\mathcal {H}}, {\mathbb {H}})\) into \(({\tilde{M}}, \tilde{{\mathcal {M}}})\) that almost surely takes values in \((M, {\mathcal {M}})\). We shall also use the fact that M (equipped with the vague topology) is a Polish space [22]. We denote the law of N on \((M, {\mathcal {M}})\) by \({\mathcal {N}}\).

The Poisson random measure N has an accompanying martingale. For all bounded \(A\in {\mathcal {B}}({\mathbb {R}})\), \({\widehat{N}}(A,t) := N(A\times [0,t])- \text {Leb}(A)t\) is a square integrable zero-mean martingale.
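To make the definition concrete, the following minimal Python sketch (illustrative only; the box, sub-box, and sample size are our own choices) samples the atoms of a Poisson random measure with Lebesgue mean measure restricted to a bounded box, and checks empirically that the count on a sub-box G is Poisson with rate \(\text {Leb}(G)\):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson_measure(c, T):
    """Sample the atoms of a Poisson random measure with Lebesgue mean
    measure, restricted to the bounded box [0, c] x [0, T]."""
    n_atoms = rng.poisson(c * T)            # N([0,c] x [0,T]) ~ Poisson(c * T)
    y = rng.uniform(0.0, c, size=n_atoms)   # given the count, atoms are i.i.d. uniform
    s = rng.uniform(0.0, T, size=n_atoms)
    return y, s

# Empirical check of the first defining property: for the sub-box
# G = [0, 1.5] x [0, 0.5] with Leb(G) = 0.75, N(G) ~ Poisson(0.75),
# so its empirical mean and variance should both be close to 0.75.
c, T = 3.0, 1.0
counts = [np.sum((y <= 1.5) & (s <= 0.5))
          for y, s in (sample_poisson_measure(c, T) for _ in range(10_000))]
print(np.mean(counts), np.var(counts))
```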

4.1.2 The Fubini Extension

In the model, state dynamics are given as E-valued jump processes. We will construct such a process from \(2|E|-1=2n-1\) independent Poisson random measures. The admissible jump sizes are those taking the state process between two integers in E, i.e., at most \(n-1\) steps up or down. The initial state of player \(x\in I\) is randomly sampled from a pre-selected distribution \(p^x_0 \in {\mathcal {P}}(E)\).

In order to model idiosyncratic random shocks affecting the dynamics of the individual states, we use the framework of Fubini extensions. It allows us to capture a form of independence for a continuum of random variables, while preserving joint measurability.

Definition 3

If \((\varOmega , {\mathcal {F}}, {\mathbb {P}})\) and \((I, {\mathcal {I}}, \lambda )\) are probability spaces, a probability space \((\varOmega \times I, {\mathcal {W}}, {\mathbb {Q}})\) extending the usual product space \((\varOmega \times I, {\mathcal {F}}\otimes {\mathcal {I}}, {\mathbb {P}}\otimes \lambda )\) is said to be a Fubini extension if for any real-valued \({\mathbb {Q}}\)-integrable function f on \((\varOmega \times I, {\mathcal {W}})\)

  (i) the two functions \(f_x : \omega \mapsto f(\omega ,x)\) and \(f_\omega : x \mapsto f(\omega ,x)\) are integrable, respectively, on \((\varOmega , {\mathcal {F}},{\mathbb {P}})\) for \(\lambda \)-a.e. \(x\in I\), and on \((I,{\mathcal {I}},\lambda )\) for \({\mathbb {P}}\)-a.e. \(\omega \in \varOmega \);

  (ii) \(\int _\varOmega f_x(\omega ) d{\mathbb {P}}\) and \(\int _If_\omega (x)d\lambda (x)\) are integrable, respectively, on \((I,{\mathcal {I}},\lambda )\) and \((\varOmega , {\mathcal {F}},{\mathbb {P}})\), with the Fubini property

    $$\begin{aligned} \int _{\varOmega \times I}f(\omega ,x)d{\mathbb {Q}}(\omega ,x)&= \int _I\left( \int _\varOmega f_x(\omega )d{\mathbb {P}}(\omega )\right) d\lambda (x)\\&= \int _\varOmega \left( \int _If_\omega (x)d\lambda (x)\right) d{\mathbb {P}}(\omega ). \end{aligned}$$
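As a toy illustration of the Fubini property (not of the extension itself, whose content only appears for a genuine continuum of indices), the following Python snippet replaces \((\varOmega , {\mathbb {P}})\) and \((I,\lambda )\) by uniform measures on finite grids, where the property holds automatically, and checks that both iterated integrals agree with the product integral; the grid sizes and the function f are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete stand-in for Definition 3: uniform measures on finite grids for
# (Omega, P) and (I, lambda), and an arbitrary function f on the product.
n_omega, n_x = 500, 300
f = rng.normal(size=(n_omega, n_x))          # f(omega, x) on the product grid

double_int = f.mean()                        # integral against the product measure Q
iterated_x_last = f.mean(axis=0).mean()      # integrate over omega first, then x
iterated_omega_last = f.mean(axis=1).mean()  # integrate over x first, then omega
assert np.isclose(double_int, iterated_x_last)
assert np.isclose(double_int, iterated_omega_last)
```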

The following theorem summarizes the results by Sun and collaborators, see for example [53] and [54], which we use as a foundation for our model.

Theorem 3

There exists a probability space \((I,{\mathcal {I}},\lambda )\) extending \((I,{\mathcal {B}}_I,\lambda _I)\), a probability space \((\varOmega , {\mathcal {F}},{\mathbb {P}})\), and a Fubini extension \((\varOmega \times I, {\mathcal {F}}\boxtimes {\mathcal {I}}, {\mathbb {P}}\boxtimes \lambda )\) such that for any measurable mapping \({\underline{\phi }}\) from \((I,{\mathcal {I}}, \lambda )\) to \({\mathcal {P}}(E \times M^{2n-1})\) there is an \({\mathcal {F}}\boxtimes {\mathcal {I}}\)-measurable process \({\underline{f}} : \varOmega \times I \rightarrow E \times M^{2n-1}\) such that the random variables \(f^x = {\underline{f}}(\cdot , x)\) are essentially pairwise independent (e.p.i.), i.e., for \(\lambda \)-a.e. \(x\in I\), \(f^x\) is independent of \(f^y\) for \(\lambda \)-a.e. \(y\in I\), and \({\mathbb {P}}\circ (f^x)^{-1} = \phi ^x\) for all \(x\in I\), where \(\phi ^x := {\underline{\phi }}(x)\).

We set \(\phi ^x = p^x_0 \otimes (\otimes _{k=-n+1}^{n-1} {\mathcal {N}})\), where \({\mathcal {N}}\) is the probability law of the Poisson random measure introduced above, and \(p^x_0\) is the initial distribution of player x. By Theorem 3 (which applies to \(\underline{\phi }\) since E and M are Polish spaces) there exists a collection of random variables \((\underline{\xi }, {\underline{N}}_{k}; k = -n+1 ,\dots , n-1)\) on a Fubini extension \((\varOmega \times I, {\mathcal {F}}\boxtimes {\mathcal {I}}, {\mathbb {P}}\boxtimes \lambda )\) such that the variables \((\xi ^x, N^x_{k}; k = -n+1,\dots ,n-1)\) are e.p.i. and \(\phi ^x\)-distributed for all \(x\in I\). With the model in Sect. 2 in mind, we assume that the mapping \(x \mapsto p^x_0\) is Lebesgue-measurable (this assumption is, however, not necessary for the analysis that follows).

We denote by \(L^2_\boxtimes (\varOmega \times I; {\mathcal {D}})\) the Bochner space of all (equivalence classes of) strongly \(({\mathbb {P}}\boxtimes \lambda ,{\mathcal {B}}({\mathcal {D}}))\)-measurable functions \({\underline{f}} : \varOmega \times I \rightarrow {\mathcal {D}}\) for which

$$\begin{aligned} {\mathbb {E}}^\boxtimes \left[ \Vert {\underline{f}}\Vert ^2_T\right] = \int _{\varOmega \times I} \Vert \varvec{f}^x(\omega )\Vert ^2_T{\mathbb {P}}\boxtimes \lambda (d\omega , dx) < + \infty . \end{aligned}$$

We define \(L^2_\boxtimes (\varOmega \times I; {\mathcal {C}})\) in the same way, with \({\mathcal {C}}\) replacing \({\mathcal {D}}\) above. By e.g. [38, Ch. 1.2.b], \(L^2_\boxtimes (\varOmega \times I; {\mathcal {C}})\) and \(L^2_\boxtimes (\varOmega \times I; {\mathcal {D}})\) are Banach spaces.

For later reference, we also define the set \({\mathcal {L}}_E\) as the subset of \(L^2_\boxtimes (\varOmega \times I; {\mathcal {D}})\) of \({\mathbb {P}}\boxtimes \lambda \)-a.e. \({\mathcal {D}}_E\)-valued functions. One can show that \({\mathcal {L}}_E\) is a closed subset of \(L^2_\boxtimes (\varOmega \times I; {\mathcal {D}})\); hence \(({\mathcal {L}}_E, \Vert \cdot \Vert _{L^2_\boxtimes (\varOmega \times I; {\mathcal {D}})})\) is a complete metric space.

4.1.3 The Set of Admissible Strategies

We can now give a rigorous definition of the set of admissible strategy profiles. Recall that A is a compact subset of \({\mathbb {R}}\).

Definition 4

We define the set \(\underline{{\mathbb {A}}}\) of admissible strategy profiles in feedback form as the set of A-valued, \({\mathcal {I}}\otimes {\mathcal {B}}([0,T]) \otimes {\mathcal {B}}(E)\)-measurable functions \(\underline{\alpha }\) on \(I\times [0,T] \times E\) such that \(\underline{\alpha }(x,\cdot , e)\) is a continuous function on [0, T] for every \((x,e) \in I\times E\).

We will sometimes use the same notation \(\underline{{\mathbb {A}}}\) for the set of admissible control processes associated with an admissible strategy profile in feedback form \(\underline{\varvec{\alpha }}\). At time t, for player \(x\in I\), the value of such a control process is the action \(\underline{\alpha }(x,t,X^{x,\varvec{\alpha }}_{t-})\), where \(X^{x,\varvec{\alpha }}_{t-}\) is the state of player x just before time t. These control processes are predictable with respect to the filtration generated by the player's private state, and decentralized since they do not depend directly on other players' states.

The continuity in time is a strong assumption and prohibits the player from immediately reacting to abrupt changes in their environment. However, if a player transitions between two states at time t, their control can be discontinuous at that time (as a multivariate function of time and state). In Sect. 2, a rationale was given for restricting our attention to such controls.
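For concreteness, here is a minimal Python sketch of an admissible strategy profile in the sense of Definition 4; the choices \(A=[0,1]\), \(E=\{0,\dots ,4\}\), \(T=1\), and the functional form are ours and purely illustrative. The only structural requirements are joint measurability in (x, t, e) and continuity in t for each fixed (x, e):

```python
import numpy as np

T = 1.0

def alpha(x: float, t: float, e: int) -> float:
    """Illustrative admissible feedback strategy profile: continuous
    (indeed smooth) in t for fixed (x, e), with values in A = [0, 1]."""
    # Players with larger index x exert more effort, effort is ramped
    # down smoothly toward the horizon and damped in higher states.
    return float(np.clip(x * (1.0 - t / T) / (1.0 + e), 0.0, 1.0))

print(alpha(0.7, 0.25, 2))  # action of player x = 0.7 at time t = 0.25 in state 2
```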

4.2 The Finite State Graphon Game in the Fubini Extension

We begin by describing an interacting system of a continuum of particles. First, we define the decoupled system where the aggregate variable vector has been “frozen”. Then, we define the aggregate with a fixed point argument. Finally, we prove that the aggregate is in fact deterministic.

Consider the pure jump stochastic integral equation (here written formally)

$$\begin{aligned} \begin{aligned}&X_t^{\underline{\varvec{\alpha }},\underline{{\varvec{z}}},x} = \xi ^x + \sum _{k=-n+1}^{n-1} k \int _{{\mathbb {R}}\times (0,t]} {\mathbf {1}}_{[0, \kappa ^x(X_{s-}^{\underline{\varvec{\alpha }}, \underline{{\varvec{z}}}, x},k, \alpha ^x_s, z^x_{s-})]}(y)N_k^x(dy, ds), \end{aligned} \end{aligned}$$
(26)

where \(x\in I\), \(t\in [0,T]\), \(\underline{\varvec{\alpha }}(\omega ) = (\varvec{\alpha }^x(\omega ))_{x} := (\underline{\alpha }(x,t,X^{\underline{\varvec{\alpha }},\underline{\varvec{z}}, x}_{t-}(\omega )))_{t,x}\) for some admissible strategy profile in feedback form \(\underline{\alpha }\in \underline{{\mathbb {A}}}\), \(\underline{{\varvec{z}}}\in L^2_\boxtimes (\varOmega \times I; {\mathcal {D}})\), and where, for any \(k\in {\mathbb {Z}}\), \(x\in I\), \(s\in [0,T]\), \(a\in A\), \(z\in {\mathbb {R}}\), and \(i\in E\),

$$\begin{aligned} \kappa ^x(i,k,a,z) := {\left\{ \begin{array}{ll} q^x_{i, i+k}(a,z), &{} i+k\in E, \\ 0, &{} i + k \not \in E, \end{array}\right. } \end{aligned}$$
(27)

is the rate of jumps from state i to state \(i+k\) given the action \(a\in A\) and the aggregate value z. The proposition below asserts that (26) has a unique solution in \({\mathcal {L}}_E\), the subset of \(L^2_\boxtimes (\varOmega \times I; {\mathcal {D}})\) defined at the end of Sect. 4.1.2.

Proposition 1

Assume that Condition 1 holds. Let \(\underline{{\varvec{z}}}\in L^2_\boxtimes (\varOmega \times I; {\mathcal {D}})\) and \(\underline{\alpha } \in \underline{{\mathbb {A}}}\) be fixed. Then there is a unique strong solution \(\underline{\varvec{X}}^{\underline{\varvec{\alpha }},\underline{{\varvec{z}}}} \in {\mathcal {L}}_E\) to (26), i.e., a \({\mathbb {P}}\boxtimes \lambda \)-a.e. \({\mathcal {D}}_E\)-valued process satisfying (26) \({\mathbb {P}}\boxtimes \lambda \)-a.s.

Note that the quantity

$$\begin{aligned} M^x_k(t)=\int _{{\mathbb {R}}\times (0,t]} {\mathbf {1}}_{[0, \kappa ^x(X_{s-}^{\underline{\varvec{\alpha }}, \underline{{\varvec{z}}}, x},k, \alpha ^x_s, z^x_{s-})]}(y)N_k^x(dy, ds), \end{aligned}$$

appearing in the right-hand side of (26), is a counting process with intensity \(\kappa ^x(X_{t-}^{\underline{\varvec{\alpha }}, \underline{{\varvec{z}}}, x},k, \alpha ^x_t, z^x_{t-})\) at time \(t\in [0,T]\); hence, by construction, the solution to (26) (granted by Proposition 1) is almost surely an E-valued pure jump process with intensity matrix \(Q^x(\alpha ^x_t(\omega ), z^x_{t-}(\omega ))\) at time \(t\in [0,T]\).
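The counting-process representation suggests a simulation by Poisson thinning. The following Python sketch (a toy of our own design, with the control and the frozen aggregate absorbed into the time argument of \(\kappa \) for brevity, and the rates assumed bounded by a constant \(\kappa _{\max }\)) simulates one player's trajectory from (26): for each jump size k, the stochastic integral only sees the atoms of \(N^x_k\) lying in \([0, \kappa _{\max }]\times (0,T]\), and an atom (y, s) triggers a jump of size k if and only if \(y \le \kappa ^x(X_{s-},k,\dots )\):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_jump_process(x0, kappa, kappa_max, n, T):
    """Thinning simulation of the stochastic integral in (26): atoms of N_k
    in [0, kappa_max] x (0, T] are candidates; a candidate (y, s) produces a
    jump of size k iff y <= kappa(X_{s-}, k, s)."""
    candidates = []                                 # pooled atoms (s, y, k)
    for k in range(-(n - 1), n):
        if k == 0:                                  # size-0 jumps never move the state
            continue
        m = rng.poisson(kappa_max * T)              # N_k has Lebesgue mean measure
        s = rng.uniform(0.0, T, size=m)
        y = rng.uniform(0.0, kappa_max, size=m)
        candidates += list(zip(s, y, [k] * m))
    candidates.sort()                               # visit atoms chronologically
    X, path = x0, [(0.0, x0)]
    for s, y, k in candidates:
        if y <= kappa(X, k, s):                     # acceptance = state-dependent rate
            X += k
            path.append((s, X))
    return path

# Toy rates of our own choosing: unit rate for every admissible jump,
# truncated to zero outside E = {0, ..., n-1}, exactly as in (27).
n = 5
kappa = lambda i, k, s: 1.0 if 0 <= i + k <= n - 1 else 0.0
print(simulate_jump_process(x0=2, kappa=kappa, kappa_max=1.0, n=n, T=2.0))
```

Pooling the candidate atoms of all the effective measures and scanning them in chronological order reproduces the left limits \(X_{s-}\) appearing in the indicator of (26).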

For a fixed admissible strategy profile \(\underline{\varvec{\alpha }}\), consider now the coupled system

$$\begin{aligned} \begin{aligned}&X_t^{\underline{\varvec{\alpha }},x} = \xi ^x + \sum _{k=-n+1}^{n-1} k \int _{{\mathbb {R}}\times (0,t]} {\mathbf {1}}_{[0, \kappa ^x(X_{s-}^{\underline{\varvec{\alpha }},x},k, \alpha ^x_s, Z^{\underline{\varvec{\alpha }},x}_{s-})]}(y)N_k^x(dy, ds),\\&Z_t^{\underline{\varvec{\alpha }},x} = \int _I w(x,y)K(\alpha ^y_t, X^{\underline{\varvec{\alpha }}, y}_{t-})\lambda (dy). \end{aligned} \end{aligned}$$
(28)

The next theorem shows that (28) is well-posed, with a unique solution in the \(L^2_\boxtimes \)-sense. It further specifies the regularity of the solution: the aggregate variable \(\underline{{\varvec{Z}}}^{\underline{\varvec{\alpha }}}\) must \({\mathbb {P}}\boxtimes \lambda \)-a.s. be a deterministic and continuous function of time.

Theorem 4

Let Conditions 1 and 2 hold, and let \(\underline{\varvec{\alpha }} \in \underline{{\mathbb {A}}}\).

  (i) There exists a unique solution \(\underline{{\varvec{X}}}^{\underline{\varvec{\alpha }}}\in {\mathcal {L}}_E\) to (28). The corresponding aggregate \(\underline{{\varvec{Z}}}^{\underline{\varvec{\alpha }}}\) is a random variable in \(L^2_\boxtimes (\varOmega \times I; {\mathcal {C}})\).

  (ii) The aggregate \(\underline{\varvec{Z}}^{\underline{\varvec{\alpha }}}\) is \({\mathbb {P}}\boxtimes \lambda \)-a.s. equal to a deterministic (i.e., constant in \(\omega \)) function in \(L^2_\boxtimes (\varOmega \times I; {\mathcal {C}})\).

  (iii) There is a unique pair \(\underline{\varvec{\check{X}}}^{\underline{\varvec{\alpha }}}\) and \(\underline{\varvec{\check{Z}}}^{\underline{\varvec{\alpha }}}\) of versions of \(\underline{{\varvec{X}}}^{\underline{\varvec{\alpha }}}\) and \(\underline{{\varvec{Z}}}^{\underline{\varvec{\alpha }}}\), respectively, solving (28) for all \(x\in I\) in the standard \(L^2\)-sense. Moreover, \(\underline{\varvec{\check{Z}}}^{\underline{\varvec{\alpha }}}\) is deterministic and continuous in time for all \(x\in I\).

Theorem 4 justifies working with a model defined for all \(x\in I\) with a deterministic, continuous-in-time aggregate in Sect. 2. From here on, we will represent the \(L^2_\boxtimes \)-elements solving system (28) with the version defined for all \(x\in I\) and drop the check in the notation.
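Theorem 4 (ii) is a continuum phenomenon, but it can be previewed numerically: with N players on a grid, the empirical aggregate concentrates around a deterministic value as N grows. The Python sketch below uses toy choices of ours for the graphon w, the kernel K, and the state and action distributions; it is an illustration of the concentration effect, not of the full coupled system:

```python
import numpy as np

rng = np.random.default_rng(3)

def w(x, y):                                 # a dense, non-constant graphon
    return 1.0 - 0.5 * (x + y)

def K(a, e):                                 # toy interaction kernel K(action, state)
    return a * e

def empirical_aggregate(x, N):
    """Finite-N analogue of Z^x in (28): average of w(x, y) K(alpha^y, X^y)
    over a grid approximation of I = [0, 1]."""
    ys = (np.arange(N) + 0.5) / N            # grid points standing in for I
    states = rng.integers(0, 5, size=N)      # idiosyncratic player states
    actions = rng.uniform(0.0, 1.0, size=N)  # players' current actions
    return np.mean(w(x, ys) * K(actions, states))

for N in (10, 100, 10_000):
    samples = [empirical_aggregate(0.3, N) for _ in range(200)]
    print(N, float(np.std(samples)))         # fluctuations shrink like 1/sqrt(N)
```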

Remark 2

If admissible strategy profiles did not have the prescribed continuity property, we could not expect the aggregate to be a continuous function of time. One example of such a case is found in [2], where a regulator imposes a penalty that is discontinuous in time, resulting in equilibrium controls and aggregates that are discontinuous in time. We leave the analysis of the more general case to future work.

We now turn to the notion of player costs and equilibrium. Denote by \({\mathbb {A}}\) the set of A-valued, \({\mathcal {B}}([0,T])\otimes {\mathcal {B}}(E)\)-measurable functions on \([0,T]\times E\) that are continuous in \(t\in [0,T]\) for every \(e\in E\). If the player population plays according to an admissible strategy profile \(\underline{\alpha }\in \underline{{\mathbb {A}}}\) and player \(x\in I\) decides to play the strategy \(\varvec{\sigma }= (\sigma (t, X^{\underline{(\varvec{\alpha }^{-x}, \varvec{\sigma })},x}_{t-}))_{t\in [0,T]}\), where \(\sigma \in {\mathbb {A}}\) and

$$\begin{aligned} (\varvec{\alpha }^{-x},\varvec{\sigma })^y := {\left\{ \begin{array}{ll} \varvec{ \alpha }^y, &{}\text {if }y\ne x \\ \varvec{\sigma }, &{}\text {if }y=x, \end{array}\right. } \end{aligned}$$

then \(\underline{(\varvec{\alpha }^{-x},\varvec{\sigma })}\) is an admissible strategy profile and the player’s expected cost for using \(\varvec{\sigma }\) is

$$\begin{aligned} {\mathcal {J}}^x(\varvec{\sigma };\varvec{\underline{\alpha }}) := {\mathbb {E}} \Big [ \int _0^T f^x\big (t, X^{\underline{(\varvec{\alpha }^{-x},\varvec{\sigma })},x}_t, \sigma (t, X^{\underline{(\varvec{\alpha }^{-x},\varvec{\sigma })},x}_{t-}), Z^{\varvec{\underline{\alpha }},x}_t\big )dt + h^x\big (X^{\underline{(\varvec{\alpha }^{-x},\varvec{\sigma })},x}_T, Z^{\varvec{\underline{\alpha }},x}_T\big ) \Big ]. \end{aligned}$$

In fact, \({\mathbb {A}}\) is the set of strategies a player can deviate to without destroying the admissibility of the strategy profile. Therefore, we say that if \(\underline{\varvec{{{\hat{\alpha }}}}} = (\varvec{{{\hat{\alpha }}}}^x)_{x\in I}\in \underline{{\mathbb {A}}}\) satisfies

$$\begin{aligned} {\mathcal {J}}^x(\varvec{{{\hat{\alpha }}}}^x; \underline{\varvec{{{\hat{\alpha }}}}}) \le {\mathcal {J}}^x(\varvec{\sigma }; \underline{\varvec{{{\hat{\alpha }}}}}),\qquad \sigma \in {\mathbb {A}},\ x\in I, \end{aligned}$$

then \(\underline{\varvec{{{\hat{\alpha }}}}}\) is a Nash equilibrium of the graphon game. The dependence of the cost on the whole strategy profile is unnecessarily complicated, as the following reasoning shows. Notice that \(\varvec{Z}^{\underline{(\varvec{\alpha }^{-x},\varvec{\sigma })}, x} = {\varvec{Z}}^{\underline{\varvec{\alpha }},x}\) since \((\varvec{\alpha }^{-x},\varvec{\sigma })^y = \varvec{\alpha }^y\) for \(\lambda \)-a.e. \(y\in I\). The other players' actions appear in player x's cost only indirectly, through the aggregate \(\varvec{Z}^{\underline{\varvec{\alpha }},x}\), which is unaffected when one specific player changes their control (it is an integral with respect to a non-atomic measure). Thus, we write \({\mathcal {J}}^x(\varvec{\sigma };\underline{\varvec{\alpha }})\) as \(J^x(\varvec{\sigma };\varvec{Z}^{\underline{\varvec{\alpha }},x})\), a function of an admissible strategy and an aggregate variable trajectory:

$$\begin{aligned} {\mathbb {A}} \times {\mathcal {C}} \ni (\varvec{\sigma },\varvec{\zeta }) \mapsto J^x(\varvec{\sigma }; \varvec{\zeta }) \in {\mathbb {R}}. \end{aligned}$$

In light of this, we obtain an equivalent characterization: a strategy profile \(\underline{\varvec{{{\hat{\alpha }}}}} = (\varvec{{{\hat{\alpha }}}}^x)_{x\in I}\) is a Nash equilibrium of the graphon game if it satisfies

$$\begin{aligned} J^x(\varvec{{{\hat{\alpha }}}}^x; \varvec{Z}^{\underline{\varvec{{{\hat{\alpha }}}}},x}) \le J^x(\varvec{\sigma }; {\varvec{Z}}^{\underline{\varvec{{{\hat{\alpha }}}}},x}),\qquad \sigma \in {\mathbb {A}},\ x\in I, \end{aligned}$$

further justifying the game setup in Sect. 2.
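To connect the reduced cost to computation, the following Monte Carlo sketch estimates \(J^x(\varvec{\sigma }; \varvec{\zeta })\) for a frozen deterministic aggregate trajectory \(\varvec{\zeta }\). The running cost, terminal cost, strategy, and the toy state simulator are all illustrative stand-ins of our own choosing, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)

T, n_grid = 1.0, 1000
t_grid = np.linspace(0.0, T, n_grid)
dt = t_grid[1] - t_grid[0]

zeta = lambda t: 0.2 + 0.1 * t            # frozen deterministic aggregate
sigma = lambda t, e: 0.5 * (1.0 - t / T)  # deviating feedback strategy in A = [0, 1]
f = lambda t, e, a, z: a**2 + e * z       # toy running cost
h = lambda e, z: e * z                    # toy terminal cost

def simulate_state():
    """Toy E-valued path on the time grid: rate-1 jumps of size +-1,
    truncated to stay in E = {0, ..., 4}."""
    X = np.empty(n_grid, dtype=int)
    X[0] = 2
    for i in range(1, n_grid):
        step = rng.choice([-1, 1]) if rng.random() < dt else 0
        X[i] = min(max(X[i - 1] + step, 0), 4)
    return X

costs = []
for _ in range(500):
    X = simulate_state()
    a = sigma(t_grid, X)                  # control values along the path
    running = np.sum(f(t_grid, X, a, zeta(t_grid))[:-1] * dt)  # left Riemann sum
    costs.append(running + h(X[-1], zeta(T)))
print(float(np.mean(costs)))              # Monte Carlo estimate of J^x(sigma; zeta)
```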

5 Conclusion and Outlook

In this paper, we introduced stochastic graphon games in which the agents evolve in a finite state space. We provided optimality conditions in the form of a continuum of coupled forward-backward ODE systems, for which we established existence of solutions. We proposed a numerical method based on a neural network approximation of the initial condition of the FBODE. We then applied our theoretical framework and numerical method to a class of models from epidemiology and provided several test cases. From here, several directions can be considered for future work. An interesting one would be to incorporate a regulator through a Stackelberg-type model, as was done in [2] without the graphon structure. The theoretical analysis would probably combine the tools developed in the present work with tools from optimal contract theory. However, important challenges would arise depending on the class of controls admissible for the regulator: their controls could lead to discontinuities in the incentives given to the population, which in turn would raise subtle measurability questions. This is left for future work. Another direction is to consider more realistic epidemiological models (e.g., with more compartments). Such models would be more complex, and we expect our proposed machine-learning-based numerical method to be helpful in this regard. Furthermore, to be able to use graphon games for epidemiological predictions, it would be interesting to investigate further how to incorporate real data into the model and the numerical method.