Mean-Field-Game Model of Corruption

Abstract

A simple model of corruption is developed that takes into account the effect of the interaction of a large number of agents, both by rational decision making and by myopic behavior. Its stationary version turns out to be a rare example of an exactly solvable model of mean-field-game type. The results show clearly how the presence of interaction (including social norms) influences the spread of corruption by creating a certain phase transition from one to three equilibria.

Introduction

Analysis of the spread of corruption in bureaucracy is a well-recognized area of application of game theory, which has attracted the attention of many researchers. General surveys can be found in [2, 25, 36]. In his Prize Lecture [24], Hurwicz gives an introduction, in layman's terms, to various problems arising in the attempt to find out 'who will guard the guardians?' and which mechanisms can be exploited to enforce legal behavior.

In a series of papers [32, 33], the authors analyze a dynamic game in which entrepreneurs have to apply to a set of bureaucrats (in a prescribed order) in order to obtain permission for their business projects. For an approval, the bureaucrats ask for bribes, with the amounts of the bribes being the strategies of the bureaucrats. This model is often referred to as petty corruption, as each bureaucrat is assumed to ask for a small bribe, so that the large bureaucratic losses of the entrepreneurs arise from the large number of bureaucrats involved. This is an extension of the classical ultimatum game, because the game stops whenever an entrepreneur declines to pay the required graft. The existence of an intermediary undertaking the contacts with the bureaucrats for a fee may essentially affect the outcomes of this game.

In the series of works [39, 44, 45], the authors develop a hierarchical model of corruption, in which the inspectors of each level audit the inspectors of the level below and report their findings to the inspector of the level above. For a graft, they may choose to submit a falsified report. The inspector at the highest level is assumed to be honest but very costly for the government. The strategy of the government consists in optimally arranging the audits at the various levels so as to achieve the minimal level of corruption at minimal cost.

Paper [42] develops a simple model to get insight into the problem of when unifying efforts lead to strength and when to corruption. In paper [37], a model of a network corruption game is introduced and analyzed, with the corrupted services between entrepreneurs and corrupt bureaucrats propagating via a chain of intermediaries. In [38], the dichotomy between public monitoring and governmental corruptive pressure on economic growth was modeled. In [35], an evolutionary model of corruption is developed for ecosystem management and biodiversity conservation.

The research on the political aspects of corruption develops around Acton's dictum that 'power corrupts,' with elections usually serving as a major tool of public control; see [14] and references therein. Closely related are the so-called inspection games; see the surveys, e.g., in [3, 4, 30, 31].

On the other hand, one of the central trends in the modern theory of games and optimal control is the analysis of systems with a large number of agents, providing a strong link with the study of interacting particles in statistical mechanics. It is therefore natural to start applying these methods to games of corruption, which until recently were mostly studied via classical game-theoretic models with two or three players. A model of corruption with a large number of agents interacting in a myopic way (agents trying to copy the more successful behavior of their peers) was developed in [28], as an example of a general model of pressure and resistance that extends the approach of evolutionary games to players interacting in response to pressure exerted by a distinguished big player (a principal). In the present paper, we consider each player of a large group to be a rational optimizer, thus bringing the model into the realm of mean-field games.

Mean-field games represent a quickly developing area of game theory. The field was initiated by Lasry–Lions [34] and Huang–Malhamé–Caines [21–23]; see [5, 7, 9, 19, 20] for recent surveys, as well as [10–12, 17, 29] and references therein.

New trends concern the theory of mean-field games with a major player [40], numerical analysis [1], risk-sensitive games [43], games with a discrete state space (see [6, 18] and references therein), as well as games and control with a centralized controller of a large pool of agents (see [13] and [27]).

Here we develop a concrete mean-field-game model with a finite state space of individual players describing the distribution of corrupted and honest agents under the pressure of both an incorruptible governmental representative (often referred to, in the literature, as ‘benevolent principal’; see, e.g., [2]) and the ‘social norms’ of the society. This game represents a rare example of an exactly solvable model. In particular, it reveals explicitly the non-uniqueness of solutions which is widely discussed in the general mean-field-game theory. On the other hand, we hope this example can serve as a natural toy model to analyze the link between stationary and dynamic models, again an important non-trivial problem of the general theory. From the point of view of the application to corruption, our contribution is in a systematic study of the interaction of a large number of (potentially corrupted) agents, each one of them being considered as a rational optimizer. This mean-field-interaction component of our model can be used to enrich the settings of the majority of papers cited above.

The paper is organized as follows. In the next two sections, we present our model and formulate the main results. Then we discuss its shortcomings and perspectives. The two final sections contain the proofs.

The Model and the Objectives of Analysis

The model we introduce is an instance of the finite-state-space mean-field games of [15, 16].

An agent is supposed to be in one of three states: honest H, corrupted C, and reserved R, where R is a reserved low-salary job that an agent receives as a punishment if her corrupt behavior is discovered.

The change between H and C is subject to the decisions of the agents (though the precise time of the execution of their intent is noisy); the change from C to R is random, with distribution depending on the level of effort (say, a budget used) b of the principal (a government representative) invested in chasing corrupt behavior; and the change from R to H (so to say, a new recruitment) may be possible and is included as a random event with a certain rate.

Let \(n_H, n_C, n_R\) denote the numbers of agents in the corresponding states with \(N=n_H+n_C+n_R\) the total number of agents. By a state of the system, we shall mean either the 3-vector \(n=(n_H, n_C, n_R)\) or its normalized version \(x=(x_H, x_C, x_R)=n/N\).

The control parameter u of each player in states H or C may take two values, 0 and 1, meaning that the player is happy with her current state (H or C) or prefers to switch to the other one; there is no control in the state R. When the switching decision 1 is made, the switch effectively occurs with a certain rate \(\lambda \). The recovery rate, that is, the rate of change from R to H (we assume that once recruited, agents start by being honest), is a given constant r.

Remark 1

The choice of bang–bang controls with two values seems easiest to interpret: You are either happy with your state or want to change it. A control \(u\in (0,1)\) would be vaguer (and more difficult to evaluate) here. However, since u enters the HJB equation linearly, the maximizers would belong to \(\{0,1\}\) anyway, even if all \(u\in [0,1]\) were allowed.

Apart from taking a rational decision to swap H and C, an honest agent can be pushed to become corrupt by her corrupt peers, the effect being proportional to the fraction of corrupted agents with a certain coefficient \(q_\mathrm{inf}\), analogous to the infection rate in epidemiological models. On the other hand, honest agents can contribute to chasing and punishing corrupt behavior, this effect of a desirable social norm being proportional to the fraction of honest agents with a certain coefficient \(q_\mathrm{soc}\). The presence of the coefficients \(q_\mathrm{inf}\), \(q_\mathrm{soc}\), reflecting the social interaction, makes the dynamics of individual agents dependent on the distribution of the other agents, thus bringing the model into the setting of mean-field games. Our major concern is to find out how the presence of this interaction influences the spread of corruption.

Thus, if all agents use the strategy \(u_H, u_C \in \{0,1\}\) and the effort of the principal is b, the evolution of the state x is clearly given by the ODE

$$\begin{aligned} \left\{ \begin{aligned}&\dot{x}_R =(b+q_\mathrm{soc} x_H) x_C -r x_R, \\&\dot{x}_H =r x_R -\lambda (x_H u_H - x_Cu_C)-q_\mathrm{inf} x_H x_C, \\&\dot{x}_C =-(b+q_\mathrm{soc} x_H) x_C +\lambda (x_H u_H - x_Cu_C)+q_\mathrm{inf} x_H x_C. \end{aligned} \right. \end{aligned}$$
(1)

Here \(u_H,u_C\) can be considered as arbitrary measurable functions of t.
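As a quick numerical illustration (not part of the original analysis), ODE (1) can be integrated by a simple forward-Euler scheme. The sketch below is hypothetical: the function names and all parameter values are invented for demonstration only.

```python
import numpy as np

def corruption_ode(x, u_H, u_C, b=0.3, r=0.5, lam=1.0, q_soc=0.5, q_inf=0.5):
    """Right-hand side of ODE (1); x = (x_R, x_H, x_C)."""
    x_R, x_H, x_C = x
    dx_R = (b + q_soc * x_H) * x_C - r * x_R
    dx_H = r * x_R - lam * (x_H * u_H - x_C * u_C) - q_inf * x_H * x_C
    dx_C = -(b + q_soc * x_H) * x_C + lam * (x_H * u_H - x_C * u_C) + q_inf * x_H * x_C
    return np.array([dx_R, dx_H, dx_C])

def integrate(x0, u_H, u_C, T=100.0, dt=1e-3):
    """Forward-Euler integration of (1) under a fixed common strategy."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * corruption_ode(x, u_H, u_C)
    return x

# everyone plays the 'corrupt' strategy: u_C = 0, u_H = 1
x_final = integrate([0.2, 0.5, 0.3], u_H=1, u_C=0)
print(x_final, x_final.sum())
```

Since the three right-hand sides of (1) sum to zero, the trajectory stays on the simplex \(x_R+x_H+x_C=1\), and for these illustrative parameters the trajectory settles at a fixed point of (1).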

It is instructive to see how this ODE can be rigorously deduced from the Markov model of interaction. Namely, if all agents use the strategy \(u_H, u_C \in \{0,1\}\) and the effort of the principal is b, the generator of the Markov evolution on the states n is

$$\begin{aligned} L_NF(n_H, n_C, n_R)= & {} n_C \left( b+q_\mathrm{soc}\frac{n_H}{N}\right) [F(n_H,n_C-1, n_R+1)-F(n)]\\&+\,n_R r [F(n_H+1, n_C, n_R-1)-F(n)]\\&+\,n_H \left( \lambda u_H +q_\mathrm{inf}\frac{n_C}{N} \right) [F(n_H-1, n_C+1,n_R)-F(n)]\\&+\,\lambda n_C u_C [F(n_H+1,n_C-1, n_R)-F(n)]. \end{aligned}$$

For any N, this generator describes a Markov chain on the finite state space \(\{n=(n_H,n_C,n_R): n_H+n_C+n_R=N\}\), where any agent, independently of the others, can be recruited with rate r (if in state R) or change from C to H or vice versa if desired (with rate \(\lambda \)), and where the changes of state due to binary interactions are taken into account by the terms containing \(q_\mathrm{soc}\) and \(q_\mathrm{inf}\).
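This chain can be simulated exactly by the standard Gillespie algorithm, with the four event rates corresponding term by term to the four terms of the generator. The following sketch is illustrative only; the function name and all parameter values are hypothetical.

```python
import random

def simulate_chain(n0, T, u_H, u_C, b=0.3, r=0.5, lam=1.0,
                   q_soc=0.5, q_inf=0.5, seed=0):
    """Gillespie simulation of the N-agent chain; n0 = (n_H, n_C, n_R)."""
    rng = random.Random(seed)
    n_H, n_C, n_R = n0
    N = n_H + n_C + n_R
    t = 0.0
    while True:
        # one rate per term of the generator
        rates = [
            n_C * (b + q_soc * n_H / N),          # C -> R: corruption detected
            n_R * r,                              # R -> H: new recruitment
            n_H * (lam * u_H + q_inf * n_C / N),  # H -> C: decision or 'infection'
            n_C * lam * u_C,                      # C -> H: decision to become honest
        ]
        total = sum(rates)
        if total == 0.0:
            return n_H, n_C, n_R
        t += rng.expovariate(total)                # exponential waiting time
        if t > T:
            return n_H, n_C, n_R
        k = rng.choices(range(4), weights=rates)[0]
        if k == 0:
            n_C, n_R = n_C - 1, n_R + 1
        elif k == 1:
            n_R, n_H = n_R - 1, n_H + 1
        elif k == 2:
            n_H, n_C = n_H - 1, n_C + 1
        else:
            n_C, n_H = n_C - 1, n_H + 1

state = simulate_chain((500, 300, 200), T=20.0, u_H=1, u_C=0)
print(state, sum(state))  # the total number of agents N = 1000 is conserved
```

For large N, the empirical fractions n/N obtained this way stay close to the trajectory of ODE (1), in line with the convergence result discussed below.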

In terms of x, the generator \(L_NF\) takes the form

$$\begin{aligned} L_NF(x)= & {} N x_C (b+q_\mathrm{soc} x_H) [F(x-e_C/N+e_R/N)-F(x)]+N x_R r [F(x-e_R/N+e_H/N)-F(x)]\nonumber \\&+\,N x_H (\lambda u_H+q_\mathrm{inf}x_C) [F(x-e_H/N+e_C/N)-F(x)]\nonumber \\&+\,N \lambda x_C u_C [F(x-e_C/N+e_H/N)-F(x)], \end{aligned}$$
(2)

where \(\{e_j\}\) is the standard basis in \(\mathbf {R}^3\).

If F is a differentiable function, \(L_NF\) converges to

$$\begin{aligned} LF(x)= & {} x_C (b+q_\mathrm{soc}x_H) \left( \frac{\partial F}{\partial x_R}-\frac{\partial F}{\partial x_C}\right) +x_R r \left( \frac{\partial F}{\partial x_H}-\frac{\partial F}{\partial x_R}\right) \nonumber \\&+x_H (\lambda u_H+q_\mathrm{inf} x_C) \left( \frac{\partial F}{\partial x_C}-\frac{\partial F}{\partial x_H}\right) + \lambda x_C u_C \left( \frac{\partial F}{\partial x_H}-\frac{\partial F}{\partial x_C}\right) , \end{aligned}$$
(3)

as \(N\rightarrow \infty \), which follows from the Taylor formula. This is a first-order partial differential operator, and its characteristics are given by ODE (1). A rigorous proof that the Markov chain generated by (2) weakly converges to the solutions of ODE (1) is carried out in a more general setting in papers [27, 28].

The Markov model is not only important as a tool to derive (1); it also helps to understand the dynamics of individual players (corresponding, in statistical mechanics terms, to the so-called tagged particles), which are central for the mean-field-game analysis of agents trying to deviate from the behavior of the crowd. Namely, if x(t) and b(t) are given, the dynamics of each individual player is the Markov chain on the 3 states with the generator

$$\begin{aligned} \left\{ \begin{aligned}&L^\mathrm{ind}g(R)=r (g(H)-g(R)) \\&L^\mathrm{ind}g(H)=(\lambda u^\mathrm{ind}_H+q_\mathrm{inf} x_C) (g(C)-g(H)) \\&L^\mathrm{ind}g(C)=\lambda u^\mathrm{ind}_C(g(H)-g(C))+(b+q_\mathrm{soc} x_H) (g(R)-g(C)) \end{aligned} \right. \end{aligned}$$
(4)

depending on the individual control \(u^\mathrm{ind}\in \{0,1\}\), so that \(\dot{g}=L^\mathrm{ind}g\) is the Kolmogorov backward equation of this chain.

Assume that an employed agent receives a wage \(w_H\) per unit of time and, if corrupted, an average payoff \(w_C\) (which includes \(w_H\) plus some additional illegal reward); she has to pay a fine f whenever her illegal behavior is discovered; the reserved wage for fired agents is \(w_R\). Thus, the total payoff of a player over the time period [t, T] is \(\int _t^T w_{S(\tau )} \, d\tau - f M(t,T)\), where \(S(\tau )\) denotes the state (which is either R, or H, or C) and M(t,T) is the number of transitions from C to R during this period. If the distribution of the other players is \(x(t)=(x_R,x_H,x_C)(t)\), the HJB equation describing the expected optimal payoff \(g=g_t\) (starting at time t with time horizon T) of an agent is

$$\begin{aligned} \left\{ \begin{aligned}&\dot{g}(R)+ w_R +r (g(H)-g(R))=0 \\&\dot{g}(H)+w_H +\max _u (\lambda u+q_\mathrm{inf} x_C) (g(C)-g(H)) =0 \\&\dot{g}(C)+w_C -(b+q_\mathrm{soc}x_H)f +\max _u \lambda u\, (g(H)-g(C))\\ {}&+(b+q_\mathrm{soc} x_H) (g(R)-g(C))=0. \end{aligned} \right. \end{aligned}$$
(5)

Therefore, starting with some control

$$\begin{aligned} u^\mathrm{com}(t)=\left( u^\mathrm{com}_C(t), u^\mathrm{com}_H(t)\right) , \end{aligned}$$

used by all players, we can find the dynamics x(t) from Eq. (1) (with \(u^\mathrm{com}\) used for u). Then each individual should solve the Markov control problem (5), thus finding the individually optimal strategy

$$\begin{aligned} u^\mathrm{ind}(t)=\left( u^\mathrm{ind}_C(t), u^\mathrm{ind}_H(t)\right) . \end{aligned}$$

The basic MFG consistency equation can now be explicitly written as

$$\begin{aligned} u^\mathrm{ind}(t)=u^\mathrm{com}(t). \end{aligned}$$
(6)

Instead of analyzing this rather complicated dynamic problem, we shall address a simpler and practically more relevant problem: that of consistent stationary strategies.

There are two standard stationary problems arising from HJB (5), one being the search for the average payoff

$$\begin{aligned} g =\lim _{T\rightarrow \infty } \frac{1}{T}\int _0^T g_t \, \hbox {d}t \end{aligned}$$

for long-period games, and the other the search for the discounted optimal payoff. The first is governed by solutions of the HJB equation of the form \((T-t)\mu +g\), linear in t (with \(\mu \) describing the optimal average payoff), so that g satisfies the stationary HJB equation:

$$\begin{aligned} \left\{ \begin{aligned}&w_R +r (g(H)-g(R))=\mu \\&w_H +\max _u (\lambda u+q_\mathrm{inf} x_C) (g(C)-g(H)) =\mu \\&w_C -(b+q_\mathrm{soc} x_H) f +\max _u \lambda u\, (g(H)-g(C))+(b+q_\mathrm{soc} x_H) (g(R)-g(C))=\mu , \end{aligned} \right. \end{aligned}$$
(7)

and the discounted optimal payoff (with the discounting coefficient \(\delta \)) satisfies the stationary HJB

$$\begin{aligned} \left\{ \begin{aligned}&w_R +r (g(H)-g(R))=\delta g (R) \\&w_H +\max _u (\lambda u+q_\mathrm{inf} x_C) (g(C)-g(H)) =\delta g(H) \\&w_C -(b+q_\mathrm{soc} x_H) f +\max _u \lambda u\, (g(H)-g(C))\\ {}&+(b+q_\mathrm{soc} x_H) (g(R)-g(C))=\delta g(C). \end{aligned} \right. \end{aligned}$$
(8)
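For a fixed distribution x, system (8) is the dynamic-programming equation of a three-state continuous-time Markov decision problem and can be solved numerically, e.g., by value iteration after uniformization. The sketch below is a hypothetical illustration (the function name and all parameter values are invented), not part of the paper's analysis.

```python
def solve_discounted_hjb(x_H, x_C, w=(0.0, 1.0, 2.0), b=0.3, r=0.5, lam=1.0,
                         f=1.0, q_soc=0.5, q_inf=0.5, delta=0.2, iters=2000):
    """Value iteration for (8) with fixed x; returns g = (g_R, g_H, g_C) and controls."""
    w_R, w_H, w_C = w
    beta = b + q_soc * x_H              # detection rate in state C
    Lam = r + lam + q_inf + beta + 1.0  # uniformization rate, dominating all exit rates
    g_R = g_H = g_C = 0.0
    for _ in range(iters):
        n_R = (w_R + r * g_H + (Lam - r) * g_R) / (delta + Lam)
        # in states H and C, maximize the Bellman right-hand side over u in {0, 1}
        n_H = max((w_H + (lam * u + q_inf * x_C) * g_C
                   + (Lam - lam * u - q_inf * x_C) * g_H) / (delta + Lam)
                  for u in (0, 1))
        n_C = max((w_C - beta * f + lam * u * g_H + beta * g_R
                   + (Lam - lam * u - beta) * g_C) / (delta + Lam)
                  for u in (0, 1))
        g_R, g_H, g_C = n_R, n_H, n_C
    u_H = 1 if g_C > g_H else 0  # switch H -> C iff being corrupt is more valuable
    u_C = 1 if g_H > g_C else 0  # switch C -> H iff being honest is more valuable
    return (g_R, g_H, g_C), (u_H, u_C)

g, u = solve_discounted_hjb(x_H=0.5, x_C=0.3)
print(g, u)
```

The iteration is a contraction with modulus \(\Lambda /(\delta +\Lambda )<1\), so the computed g satisfies (8) to high accuracy and directly yields the individually optimal controls.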

The analysis of these two settings is mostly analogous, as they are in some sense equivalent; see, e.g., [41]. (They are analogous for constructing the MFG equilibria, but quite different for the analysis of the precise links with the finite-N problem; see the Discussion below.) We shall concentrate on the first one.

For a fixed b, the stationary MFG consistency problem consists in finding \((x,u_C,u_H)=(x,u_C(x),u_H(x))\), where x is a stationary point of evolution (1), that is,

$$\begin{aligned} \left\{ \begin{aligned}&(b+q_\mathrm{soc} x_H) x_C -r x_R =0 \\&r x_R -\lambda (x_H u_H(x) - x_Cu_C(x))-q_\mathrm{inf} x_H x_C=0 \\&-(b+q_\mathrm{soc} x_H) x_C +\lambda (x_H u_H(x) - x_Cu_C(x))+q_\mathrm{inf} x_H x_C=0, \end{aligned} \right. \end{aligned}$$
(9)

where \(u_C(x), u_H(x)\) are the maximizers in (7). Thus, x is a fixed point of the limiting dynamics of the distribution of a large number of agents, such that the corresponding stationary control is individually optimal subject to this distribution.

Fixed points can practically model a stationary behavior only if they are stable. Thus, we are interested in stable solutions \((x,u_C,u_H)=(x,u_C(x),u_H(x))\) to the stationary MFG consistency problem, where a solution is called stable if the corresponding stationary distribution \(x=(x_R,x_H,x_C)\) is a stable equilibrium of (1) (with \(u_C,u_H\) fixed by this solution). By stability of a fixed point of a dynamics we mean the usual dynamic stability: if the dynamics starts in a sufficiently small neighborhood of this point, then it converges to it as time tends to infinity. A fixed point is called unstable if any of its neighborhoods contains initial points from which the dynamics does not converge to the fixed point. As mentioned above, our major concern is to find out how the presence of interaction (specified by the coefficients \(q_\mathrm{soc}, q_\mathrm{inf}\)) affects the stable equilibria.

Results

Our first result describes explicitly all solutions to the stationary MFG consistency problem stated above, and the second result deals with the stability of these solutions.

We shall say that, in a solution to the stationary MFG consistency problem, the optimal individual behavior is corruption if \(u_C=0, u_H=1\): if you are corrupt, stay corrupt, and if you are honest, start corrupt behavior as soon as possible. The optimal individual behavior is honesty if \(u_C=1, u_H=0\): if you are honest, stay honest, and if you are involved in corruption, try to clean yourself of corruption as soon as possible.

The basic assumptions on our coefficients are

$$\begin{aligned} \lambda >0, r> 0, b>0, \quad f\ge 0, q_\mathrm{soc} \ge 0, q_\mathrm{inf}\ge 0, \quad w_C > w_H > w_R \ge 0. \end{aligned}$$
(10)

The key parameter for our model turns out to be the quantity

$$\begin{aligned} \bar{x} =\frac{1}{q_\mathrm{soc}}\left[ \frac{r(w_C-w_H)}{w_H-w_R+rf}-b\right] \end{aligned}$$
(11)

(which can take values \(\pm \infty \) if \(q_\mathrm{soc}=0\)).

Theorem 3.1

Assume (10).

(i) If \(\bar{x} >1\), then there exists a unique solution \(x^*=(x^*_R, x^*_C, x^*_H)\) to the stationary MFG problem (9), (7), where

$$\begin{aligned} x_C^*=\frac{(1-x_H^*)r}{r+b+q_\mathrm{soc} x_H^*} \end{aligned}$$
(12)

and \(x_H^*\) is the unique solution on the interval (0, 1) of the quadratic equation \(Q(x_H)=0\), where

$$\begin{aligned} Q(x_H)=[(r+\lambda ) q_\mathrm{soc} -r q_\mathrm{inf}] x_H^2 +[r(q_\mathrm{inf}-q_\mathrm{soc})+\lambda r+\lambda b +r b]x_H -rb. \end{aligned}$$
(13)

Under this solution, the optimal individual behavior is corruption: \(u_C=0, u_H=1\).

(ii) If \(\bar{x} <1\), there may be 1, 2, or 3 solutions to the stationary MFG problem (9), (7). Namely, the point \(x_H=1, x_C=x_R=0\) is always a solution, under which the optimal individual behavior is honesty: \(u_C=1, u_H=0\).

Moreover, if

$$\begin{aligned} \max (\bar{x},0) \le \frac{b+\lambda }{q_\mathrm{inf} -q_\mathrm{soc}} <1, \end{aligned}$$
(14)

then there is another solution with the optimal individual behavior being honest, that is \(u_C=1, u_H=0\):

$$\begin{aligned} x^{**}_H= \frac{b+\lambda }{q_\mathrm{inf} -q_\mathrm{soc}}, \quad x_C^{**}=\frac{r(q_\mathrm{inf}-q_\mathrm{soc}-b-\lambda )}{(r+b)q_\mathrm{inf}+(\lambda -r) q_\mathrm{soc}}. \end{aligned}$$
(15)

Finally, if

$$\begin{aligned} \bar{x} > 0, \quad Q(\bar{x})\ge 0, \end{aligned}$$
(16)

there is a solution with the corruptive optimal behavior of the same structure as in (i), that is, with \(x_H^*\) being the unique solution to \(Q(x_H)=0\) on \((0,\bar{x}]\) and \(x_C^*\) given by (12).

Remark 2

As seen by inspection, \(Q[(b+\lambda )/(q_\mathrm{inf}-q_\mathrm{soc})]>0\) (if \(q_\mathrm{inf}-q_\mathrm{soc}>0\)), so that for \(\bar{x}\) slightly less than \(x_H^{**}=(b+\lambda )/(q_\mathrm{inf}-q_\mathrm{soc})\) one also has \(Q(\bar{x})>0\), in which case one indeed has three equilibria, given by \(x_H^*, x_H^{**}, x_H=1\), with \(0< x_H^* <\bar{x} < x_H^{**} <1\).
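A hypothetical numerical example of this three-equilibria regime: with the invented parameter values below, condition (14) holds and \(Q(\bar{x})>0\), so the three equilibria \(x_H^*, x_H^{**}, x_H=1\) can be computed directly from (11), (13), and (15).

```python
import math

# all parameter values are invented for illustration only
b, r, lam, f = 0.1, 0.5, 0.1, 1.0
q_soc, q_inf = 0.1, 0.5
w_R, w_H, w_C = 0.0, 1.0, 1.435

x_bar = ((r * (w_C - w_H)) / (w_H - w_R + r * f) - b) / q_soc   # formula (11)

def Q(x):
    """Quadratic (13)."""
    return ((r + lam) * q_soc - r * q_inf) * x**2 \
        + (r * (q_inf - q_soc) + lam * r + lam * b + r * b) * x - r * b

a2 = (r + lam) * q_soc - r * q_inf
a1 = r * (q_inf - q_soc) + lam * r + lam * b + r * b
x_star = (-a1 + math.sqrt(a1**2 + 4 * a2 * r * b)) / (2 * a2)   # root of Q in (0, 1)
x_dstar = (b + lam) / (q_inf - q_soc)                           # x_H^{**} from (15)

print(f"x_bar = {x_bar:.3f}, x_H* = {x_star:.3f}, x_H** = {x_dstar:.3f}")
# three equilibria: the corrupt one x_H*, the honest one x_H**, and x_H = 1
```

For these values one gets \(0<x_H^*<\bar{x}<x_H^{**}<1\), exactly the configuration described in the remark.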

Remark 3

In case of the stationary problem arising from the discounting payoff, that is from Eq. (8), the role of the classifying parameter \(\bar{x}\) from (11) is played by the quantity

$$\begin{aligned} \bar{x} =\frac{1}{q_\mathrm{soc}}\left[ \frac{(r+\delta )(w_C-w_H)}{w_H-w_R+(r+\delta )f}-b\right] . \end{aligned}$$
(17)

Theorem 3.2

Assume (10).

(i) The solution \(x^*=(x^*_R, x^*_C, x^*_H)\) (given by Theorem 3.1) with individually optimal behavior being corruption is stable if

$$\begin{aligned} -\frac{\lambda q_\mathrm{soc}}{r}\le q_\mathrm{soc}-q_\mathrm{inf} \le \frac{rq_\mathrm{inf} +(r+b)(br +r\lambda +b\lambda )}{r^2}. \end{aligned}$$
(18)

(ii) Suppose \(\bar{x} <1\). If \(q_\mathrm{inf}-q_\mathrm{soc}\le 0\) or

$$\begin{aligned} q_\mathrm{inf}-q_\mathrm{soc}> 0, \quad \frac{b+\lambda }{q_\mathrm{inf} -q_\mathrm{soc}} >1, \end{aligned}$$

then \(x_H=1\) is the unique stationary MFG solution with the individually optimal strategy being honesty; this solution is stable. If (14) holds, there are two stationary MFG solutions with the individually optimal strategy being honesty, one with \(x_H=1\) and another with \(x_H=x_H^{**}\) given by (15); the first solution is unstable, and the second is stable.

We do not present necessary and sufficient conditions for the stability of solutions with optimally corrupt behavior. Condition (18) is only sufficient, but it covers a reasonable range of parameters in which the 'epidemic' spread of corruption and the social cleaning are effects of comparable order.

As a trivial consequence of our theorems, we can conclude that in the absence of interaction, that is, for \(q_\mathrm{inf}=q_\mathrm{soc}=0\), corruption is individually optimal if

$$\begin{aligned} w_C-w_R \ge bf +(w_H-w_R) \left( 1+\frac{b}{r}\right) \end{aligned}$$
(19)

and honesty is individually optimal otherwise (which is of course a reformulation of the standard result for a basic model of corruption; see, e.g., [2]). In the first case, the unique equilibrium is

$$\begin{aligned} x_H^*=\frac{rb}{\lambda r +\lambda b +rb}, \quad x_C^*=\frac{r(1-x_H^*)}{r+b}, \end{aligned}$$
(20)

and in the second case, the unique equilibrium is \(x_H=1\). Both are stable.
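These formulas are easy to check numerically. In the following sketch (all parameter values are hypothetical), condition (19) holds, so corruption is individually optimal, and the point (20) is verified to be a fixed point of (1) with \(u_C=0, u_H=1\) and \(q_\mathrm{inf}=q_\mathrm{soc}=0\).

```python
# illustrative, invented parameter values
b, r, lam, f = 0.3, 0.5, 1.0, 1.0
w_R, w_H, w_C = 0.0, 1.0, 2.0

corruption_optimal = w_C - w_R >= b * f + (w_H - w_R) * (1 + b / r)  # condition (19)

x_H = r * b / (lam * r + lam * b + r * b)   # equilibrium (20)
x_C = r * (1 - x_H) / (r + b)
x_R = 1 - x_H - x_C

# right-hand side of (1) with u_H = 1, u_C = 0 and q_soc = q_inf = 0
dx_R = b * x_C - r * x_R
dx_H = r * x_R - lam * x_H
dx_C = -b * x_C + lam * x_H

print(corruption_optimal, (dx_R, dx_H, dx_C))  # True and a vanishing vector field
```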

Discussion

The results above show clearly how the presence of interaction (including social norms) influences the spread of corruption. When \(q_\mathrm{inf}=q_\mathrm{soc}=0\), there is a single equilibrium, corresponding to corrupt or honest behavior depending on a certain relation (19) between the parameters of the game. If social norms or 'epidemic' myopic behavior are allowed in the model, which is quite natural for a realistic process, the situation becomes much more complicated. In particular, in a certain range of parameters, one has two stable equilibria, one corresponding to optimally honest and the other to optimally corrupt behavior. This means, in particular, that similar strategies of a principal (defined by the choice of parameters \(b,f,w_H\)) can lead to quite different outcomes depending on the initial distribution of honest and corrupted agents, or even on small random fluctuations in the process of evolution. The phase transition from one to three equilibria (as in the van der Waals gas model) is governed by the parameter \(\bar{x}\) from (11).

The coefficients b and f enter our system exogenously and can be used as tools for shifting the (precalculated) stable equilibria in the desired direction. These coefficients are not chosen strategically, which is an appropriate assumption for situations when the principal may have only poor information about the overall distribution of the states of the agents. It is of course natural to extend the model by treating the principal as a strategic optimizer who chooses b (or even f) in each state to optimize a certain payoff. This would place the model in the group of MFG models with a major player, which is being actively studied in the current literature.

Classifying agents only as corrupt or honest is a strong simplification of reality. In the spirit of [39] and [28], it is natural to consider a hierarchy \(i=1, \ldots , n\) of the possible positions of agents on a bureaucratic staircase, with both the basic wages \(w_H^i\) and the illegal payoffs \(w_C^i\) in the corresponding states \(H_i\) and \(C_i\) increasing with i. Once the corrupt behavior of an agent in state \(C_i\) is detected, she is supposed to be downgraded to the reserved state \(R=H_0\), and the upgrading from i to \(i+1\) can be modeled as a random event with a given rate. This multilayer model of corruption could bring insights into the spread of corruption among the representatives of different levels of power. The 'vertical' discrimination of agents can also be supplemented by a 'horizontal' one, that is, by assuming that agents can be involved in various levels of corruption. These levels can also be considered as a continuous parameter characterizing agents' strategies.

Another direction of extension would be the inclusion of small noise, that is, unpredictable random mutations (for instance, a wrongful accusation of an honest agent), in the spirit of paper [26], which can be used to introduce another kind of stability (statistical, rather than dynamic) for the approximating game with N players. With a continuous set of strategies (as suggested above), these random disturbances can be modeled as Gaussian white noise, providing a link with the original mean-field-game models based on diffusion processes.

Theoretically, the main questions left open by our analysis are the precise link between the stationary and dynamic MFG solutions and the precise statement of the law of large numbers. Namely: (1) Can we solve the dynamic MFG consistency problem (6), and will its solutions approach the solutions of the stationary problems described by our theorems? (2) Considering a stochastic game of N players in the Markov model, where each player evolves according to (4) with chosen controls \(u_C,u_H\) and the distribution \(x_t\) reflects the aggregated distribution so obtained, do our stationary MFG solutions represent approximate Nash equilibria of this game? The latter question is an MFG version of the well-known problem of evolutionary game theory about the correspondence between the results of taking the limits \(N\rightarrow \infty \) and \(t \rightarrow \infty \) in different orders, where rather deep results have been obtained; see, e.g., [8] and references therein, in particular for the practically important question of 'how long' the 'long run' is.

Notice that the problem simplifies essentially when, instead of the average payoff, one considers the discounted one. In this case, the proof that the corresponding equilibria represent \(\epsilon \)-Nash equilibria for finite-N games reduces to finite times, since all payoffs are bounded and one can choose the horizon T in such a way that the behavior beyond it contributes an arbitrarily small amount to the total payoff, uniformly over all strategies. And for finite T, the convergence can be obtained in a standard way (see, e.g., [29]). For such an analysis, the dynamic stability of equilibria is essentially irrelevant, in sharp contrast with the long-term average model. Recently, a slightly different setting of stationary problems was suggested, in which a finite lifetime of agents is explicitly introduced via a given death rate, leading to interesting results on the convergence of optimal payoffs as \(N\rightarrow \infty \); see [46].

Proof of Theorem 3.1

Clearly, solutions of (7) are defined up to an additive constant. Thus, we can and will assume that \(g(R)=0\). Moreover, we can reduce the analysis to the case \(w_R=0\) by subtracting \(w_R\) from all equations of (7), thus shifting the values \(w_H,w_C, \mu \) by \(w_R\). Under these simplifications, the first equation of (7) is \(\mu =rg(H)\), so that (7) becomes the system

$$\begin{aligned} \left\{ \begin{aligned}&w_H +\lambda \max (g(C)-g(H),0) +q_\mathrm{inf} x_C (g(C)-g(H)) =rg(H) \\&w_C -(b+q_\mathrm{soc} x_H) f +\lambda \max (g(H)-g(C),0)-(b+q_\mathrm{soc} x_H) g(C)=rg(H) \end{aligned} \right. \end{aligned}$$
(21)

for the pair (g(H), g(C)) with \(\mu =rg(H)\).

Assuming \(g(C)\ge g(H)\), that is, \(u_C=0, u_H=1\), so that corrupt behavior is optimal, system (21) turns into

$$\begin{aligned} \left\{ \begin{aligned}&w_H +\lambda (g(C)-g(H)) +q_\mathrm{inf} x_C (g(C)-g(H)) =rg(H) \\&w_C -(b+q_\mathrm{soc} x_H) f -(b+q_\mathrm{soc} x_H) g(C)=r g(H). \end{aligned} \right. \end{aligned}$$
(22)

Solving this system of two linear equations, we get

$$\begin{aligned} g(C)= & {} \frac{(r+\lambda +q_\mathrm{inf}x_C)[w_C-(b+q_\mathrm{soc} x_H) f] -rw_H}{r(\lambda +q_\mathrm{inf}x_C+b+q_\mathrm{soc}x_H)+(\lambda +q_\mathrm{inf}x_C)(b+q_\mathrm{soc}x_H)},\\ g(H)= & {} \frac{(\lambda +q_\mathrm{inf}x_C)[w_C-(b+q_\mathrm{soc} x_H) f]+(b+q_\mathrm{soc}x_H) w_H}{r(\lambda +q_\mathrm{inf}x_C+b+q_\mathrm{soc}x_H)+(\lambda +q_\mathrm{inf}x_C)(b+q_\mathrm{soc}x_H)}, \end{aligned}$$

so that \(g(C)\ge g(H)\) is equivalent to

$$\begin{aligned} w_C-(b+q_\mathrm{soc} x_H) f \ge w_H \left( 1+\frac{b+q_\mathrm{soc}x_H}{r}\right) , \end{aligned}$$

or, in other words,

$$\begin{aligned} x_H \le \frac{1}{q_\mathrm{soc}}\left[ \frac{r(w_C-w_H)}{w_H+rf}-b\right] , \end{aligned}$$
(23)

which by restoring \(w_R\) (shifting \(w_C,w_H\) by \(w_R\)) gives

$$\begin{aligned} x_H \le \bar{x}=\frac{1}{q_\mathrm{soc}}\left[ \frac{r(w_C-w_H)}{w_H-w_R+rf}-b\right] . \end{aligned}$$
(24)

Since \(x_H\in (0,1)\), this is automatically satisfied if \(\bar{x} >1\), that is under the assumption of (i). On the other hand, it definitely cannot hold if \(\bar{x} <0\).
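The displayed closed-form expressions for g(C) and g(H) can be verified numerically against the linear system (22); in the sketch below, all parameter values are hypothetical, chosen so that \(x_H\le \bar{x}\) and hence \(g(C)\ge g(H)\).

```python
# invented parameter values (with w_R normalized to 0)
b, r, lam, f = 0.3, 0.5, 1.0, 1.0
q_soc, q_inf = 0.5, 0.5
w_H, w_C = 1.0, 3.0
x_H, x_C = 0.2, 0.4

A = lam + q_inf * x_C        # effective H -> C rate
B = b + q_soc * x_H          # effective detection rate
D = r * (A + B) + A * B      # common denominator of the displayed formulas

gC = ((r + A) * (w_C - B * f) - r * w_H) / D
gH = (A * (w_C - B * f) + B * w_H) / D

# residuals of the two equations of (22)
res1 = w_H + A * (gC - gH) - r * gH
res2 = w_C - B * f - B * gC - r * gH
print(res1, res2, gC > gH)   # both residuals vanish, and g(C) > g(H)
```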

Assuming \(g(C)\le g(H)\), that is, \(u_C=1, u_H=0\), so that honest behavior is optimal, system (21) turns into

$$\begin{aligned} \left\{ \begin{aligned}&w_H +q_\mathrm{inf} x_C (g(C)-g(H)) =rg(H) \\&w_C-(b+q_\mathrm{soc} x_H) f +\lambda (g(H)-g(C))-(b+q_\mathrm{soc} x_H) g(C)=rg(H). \end{aligned} \right. \end{aligned}$$
(25)

Solving this system of two linear equations, we get

$$\begin{aligned} g(C)= & {} \frac{(r+q_\mathrm{inf}x_C)[w_C-(b+q_\mathrm{soc} x_H) f]+(\lambda -r)w_H}{r(\lambda +q_\mathrm{inf}x_C+b+q_\mathrm{soc}x_H)+q_\mathrm{inf}x_C(b+q_\mathrm{soc}x_H)}\\ g(H)= & {} \frac{q_\mathrm{inf}x_C [w_C-(b+q_\mathrm{soc} x_H) f]+(\lambda + b+q_\mathrm{soc}x_H) w_H}{r(\lambda +q_\mathrm{inf}x_C+b+q_\mathrm{soc}x_H)+q_\mathrm{inf}x_C(b+q_\mathrm{soc}x_H)} \end{aligned}$$

so that \(g(C)\le g(H)\) is equivalent to the inverse of condition (23).

If \(g(C)\ge g(H)\), that is \(u_C=0, u_H=1\), the fixed point Eq. (9) becomes

$$\begin{aligned} \left\{ \begin{aligned}&(b+q_\mathrm{soc} x_H) x_C -r x_R =0 \\&r x_R -\lambda x_H -q_\mathrm{inf} x_H x_C=0 \\&-(b+q_\mathrm{soc} x_H) x_C +\lambda x_H +q_\mathrm{inf} x_H x_C=0. \end{aligned} \right. \end{aligned}$$
(26)

Since \(x_R=1-x_H-x_C\), the third equation is a consequence of the first two equations, which yields the system

$$\begin{aligned} \begin{aligned}&(b+q_\mathrm{soc} x_H) x_C -r(1-x_H-x_C)=0 \\&r (1-x_H-x_C)-\lambda x_H -q_\mathrm{inf} x_H x_C=0. \end{aligned} \end{aligned}$$
(27)

From the first equation, we have

$$\begin{aligned} x_C=\frac{(1-x_H)r}{r+b+q_\mathrm{soc} x_H}. \end{aligned}$$
(28)

From this, it is seen that if \(x_H\in (0,1)\) (as it should be), then also \(x_C\in (0,1)\) and

$$\begin{aligned} x_C+x_H=\frac{r+x_H(b+q_\mathrm{soc} x_H)}{r+b+q_\mathrm{soc}x_H} \in (0,1). \end{aligned}$$

Plugging \(x_C\) in the second equation of (27), we find for \(x_H\) the quadratic equation \(Q(x_H)=0\) with Q given by (13).

Since \(Q(0)<0\) and \(Q(1)>0\), the quadratic \(Q\) has exactly one positive root \(x_H^*\), and it lies in \((0,1)\). Hence, \(x_H^*\) satisfies (23) if and only if either \(\bar{x}>1\) (that is, we are under the assumption of (i)) or (16) holds, proving the last statement of (ii).
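This root can also be located numerically. The sketch below (with illustrative parameter values of our own choosing) substitutes (28) into the second equation of (27) and bisects on the resulting sign change:

```python
# Locate the fixed point x_H* in (0,1) for the corrupt-optimal case:
# substitute x_C from Eq. (28) into the second equation of (27) and
# bisect on the resulting one-variable equation. Parameters are
# illustrative choices, not values from the paper.
r, b, lam, q_soc, q_inf = 1.0, 0.1, 0.2, 0.5, 2.0

def x_C_of(x_H):
    # Eq. (28): x_C expressed through x_H from the first equation of (27)
    return (1.0 - x_H) * r / (r + b + q_soc * x_H)

def res(x_H):
    # second equation of (27)
    c = x_C_of(x_H)
    return r * (1.0 - x_H - c) - lam * x_H - q_inf * x_H * c

lo, hi = 0.0, 1.0
assert res(lo) * res(hi) < 0           # sign change => a root in (0, 1)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    (lo, hi) = (mid, hi) if res(lo) * res(mid) > 0 else (lo, mid)
x_H_star = 0.5 * (lo + hi)
x_C_star = x_C_of(x_H_star)

# the root is a genuine probability profile: all proportions in (0, 1)
assert abs(res(x_H_star)) < 1e-9
assert 0 < x_H_star < 1 and 0 < x_C_star < 1 and x_H_star + x_C_star < 1
```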

If \(g(C)\le g(H)\), that is \(u_C=1, u_H=0\), the fixed point Eq. (9) becomes

$$\begin{aligned} \left\{ \begin{aligned}&(b+q_\mathrm{soc} x_H) x_C -x_R r=0 \\&x_R r+\lambda x_C -q_\mathrm{inf} x_H x_C=0 \\&-x_C (b+q_\mathrm{soc} x_H) -\lambda x_C +q_\mathrm{inf} x_H x_C=0. \end{aligned} \right. \end{aligned}$$
(29)

Again, \(x_R=1-x_H-x_C\), and the third equation is a consequence of the first two, which yields the system

$$\begin{aligned} \left\{ \begin{aligned}&(b+q_\mathrm{soc} x_H) x_C -r(1-x_H-x_C)=0 \\&r (1-x_H-x_C)+\lambda x_C -q_\mathrm{inf} x_H x_C=0. \end{aligned} \right. \end{aligned}$$
(30)

From the first equation, we again get (28). Plugging this \(x_C\) into the second equation of (30), we find the equation

$$\begin{aligned} r(1-x_H)=(r-\lambda +q_\mathrm{inf}x_H)\frac{(1-x_H)r}{r+b+q_\mathrm{soc} x_H}, \end{aligned}$$

with the two explicit solutions \(x_H=1\) and \(x_H=(b+\lambda )/(q_\mathrm{inf}-q_\mathrm{soc})\), yielding the first and the second statements of (ii).
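Both explicit solutions can be verified directly against system (30); the parameter values below are illustrative choices for which the interior root lies in \((0,1)\):

```python
# Check the two explicit solutions of the honest-optimal fixed-point
# system (30): x_H = 1 (with x_C = 0) and x_H = (b + lam)/(q_inf - q_soc).
# Parameters are illustrative, chosen so the second root lies in (0, 1).
r, b, lam, q_soc, q_inf = 1.0, 0.1, 0.2, 0.5, 2.0

def equations_30(x_H, x_C):
    # residuals of the two equations of system (30)
    eq1 = (b + q_soc * x_H) * x_C - r * (1 - x_H - x_C)
    eq2 = r * (1 - x_H - x_C) + lam * x_C - q_inf * x_H * x_C
    return eq1, eq2

# root 1: the corruption-free state x_H = 1, x_C = 0
assert all(abs(e) < 1e-9 for e in equations_30(1.0, 0.0))

# root 2: interior fixed point, with x_C recovered from Eq. (28)
x_H2 = (b + lam) / (q_inf - q_soc)                 # = 0.2 here
x_C2 = (1 - x_H2) * r / (r + b + q_soc * x_H2)
assert all(abs(e) < 1e-9 for e in equations_30(x_H2, x_C2))
```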

Proof of Theorem 3.2

Notice that \((d/dt) (x_R+x_H+x_C)=0\) according to (1), so that the normalization condition \(x_R+x_H+x_C=1\) is consistent with this evolution.

(i) When the individually optimal behavior is to be corrupt, that is \(u_C=0, u_H=1\), system (1) written in terms of \((x_H,x_C)\) becomes

$$\begin{aligned} \left\{ \begin{aligned}&\dot{x}_H =(1-x_H-x_C) r-\lambda x_H-q_\mathrm{inf} x_H x_C, \\&\dot{x}_C =-x_C (b+q_\mathrm{soc} x_H) +\lambda x_H +q_\mathrm{inf} x_H x_C. \end{aligned} \right. \end{aligned}$$
(31)

Written in terms of \(y=x_H-x_H^*, z=x_C-x_C^*\), it takes the form

$$\begin{aligned} \left\{ \begin{aligned}&\dot{y} =-y\left( r+\lambda +q_\mathrm{inf}x_C^*\right) -z \left( r+q_\mathrm{inf}x_H^*\right) -q_\mathrm{inf}yz, \\&\dot{z} =y \left[ \lambda +(q_\mathrm{inf}-q_\mathrm{soc}) x_C^*\right] +z\left[ x_H^*(q_\mathrm{inf}-q_\mathrm{soc})-b\right] +(q_\mathrm{inf}-q_\mathrm{soc})yz. \end{aligned} \right. \end{aligned}$$
(32)

The well-known condition for stability by linear approximation (the Hartman–Grobman theorem) states that if both eigenvalues of the linearization around the fixed point have negative real parts, the point is stable, while if at least one eigenvalue has a positive real part, the point is unstable. For a \(2\times 2\) matrix, the requirement that both eigenvalues have negative real parts is equivalent to the trace being negative and the determinant positive:

$$\begin{aligned} \begin{aligned}&x_H^*(q_\mathrm{inf}-q_\mathrm{soc})-b -r-\lambda -q_\mathrm{inf}x_C^* <0, \\&\lambda \left( r+q_\mathrm{soc} x_H^* +b\right) -rx_H^*(q_\mathrm{inf}-q_\mathrm{soc})+br +x_C^* \left[ r(q_\mathrm{inf}-q_\mathrm{soc})+q_\mathrm{inf}b\right] >0 \end{aligned} \end{aligned}$$
(33)

(note that the quadratic terms in \(x_C,x_H\) cancel in the second inequality). By (12), this rewrites in terms of \(x_H^*\) as

$$\begin{aligned} \begin{aligned}&\left[ x_H^*(q_\mathrm{inf}-q_\mathrm{soc})-b -r-\lambda \right] \left( r+b+q_\mathrm{soc} x_H^*\right) -q_\mathrm{inf}r\left( 1-x_H^*\right) <0, \\&\quad \left[ \lambda (r+q_\mathrm{soc} x_H^* +b)-rx_H^*(q_\mathrm{inf}-q_\mathrm{soc})+br\right] \left( r+b+q_\mathrm{soc}x_H^*\right) \\&\quad +r\left( 1-x_H^*\right) \left[ r(q_\mathrm{inf}-q_\mathrm{soc})+q_\mathrm{inf}b\right] >0 \end{aligned} \end{aligned}$$

or in a more concise form as

$$\begin{aligned} \begin{aligned}&(x_H^*)^2(q_\mathrm{inf}-q_\mathrm{soc})q_\mathrm{soc} +x_H^*\left[ (q_\mathrm{inf}-q_\mathrm{soc})(2r+b)\right. \\&\left. \quad -q_\mathrm{soc} (b+\lambda )\right] -(r+b)(r+b+\lambda )-rq_\mathrm{inf}<0, \\&\quad (x_H^*)^2q_\mathrm{soc}[(q_\mathrm{inf}-q_\mathrm{soc})r-\lambda q_\mathrm{soc}]+ 2x_H^*(r+b)[r(q_\mathrm{inf}-q_\mathrm{soc})-\lambda q_\mathrm{soc}] \\&\quad -r^2(q_\mathrm{inf}-q_\mathrm{soc})-rbq_\mathrm{inf}-(r+b)(br+r\lambda +b\lambda )<0. \end{aligned} \end{aligned}$$
(34)
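The trace-determinant criterion invoked above is easy to sanity-check numerically; the matrices below are arbitrary examples, unrelated to the model parameters:

```python
# The 2x2 criterion used above: both eigenvalues of A have negative real
# parts iff tr(A) < 0 and det(A) > 0. Quick illustration on two matrices.
import cmath

def eigenvalues(a11, a12, a21, a22):
    # roots of xi^2 - tr*xi + det = 0, via the quadratic formula
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    d = cmath.sqrt(tr * tr - 4 * det)
    return (tr + d) / 2, (tr - d) / 2

# tr = -3 < 0, det = 8 > 0: both real parts negative (stable)
l1, l2 = eigenvalues(-1.0, 2.0, -3.0, -2.0)
assert l1.real < 0 and l2.real < 0

# det = -2 < 0: one eigenvalue with positive real part (unstable)
m1, m2 = eigenvalues(1.0, 0.0, 0.0, -2.0)
assert max(m1.real, m2.real) > 0
```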

Suppose first that

$$\begin{aligned} 0 \le q_\mathrm{soc}-q_\mathrm{inf} \le \frac{rq_\mathrm{inf} +(r+b)(br +r\lambda +b\lambda )}{r^2}. \end{aligned}$$

Then both inequalities in (34) hold for any positive \(x_H^*\).

Assume now that

$$\begin{aligned} 0< r(q_\mathrm{inf}-q_\mathrm{soc})\le \lambda q_\mathrm{soc}. \end{aligned}$$

Then the second condition in (34) again holds for any positive \(x_H^*\). Moreover, it follows from \(Q(x_H^*)=0\) that

$$\begin{aligned} x_H^* \le \frac{rb}{r (q_\mathrm{inf}-q_\mathrm{soc})+\lambda r +\lambda b +rb} \le \tilde{x}=\frac{b}{q_\mathrm{inf}-q_\mathrm{soc}}. \end{aligned}$$

Now the left-hand side of the first inequality of (34) evaluated at \(\tilde{x}\) is negative, because it equals

$$\begin{aligned} -\frac{bq_\mathrm{soc} \lambda }{q_\mathrm{inf}-q_\mathrm{soc}}-r^2-\lambda (r+b)-r q_\mathrm{inf}, \end{aligned}$$

and, since this left-hand side is a convex function of \(x_H^*\) that is also negative at \(x_H^*=0\), it is negative for all \(x_H^* \le \tilde{x}\).

(ii) When the individually optimal behavior is to be honest, that is \(u_C=1, u_H=0\), system (1) written in terms of \((x_H,x_C)\) becomes

$$\begin{aligned} \begin{aligned}&\dot{x}_H =(1-x_H-x_C) r+\lambda x_C-q_\mathrm{inf} x_H x_C, \\&\dot{x}_C =-x_C (b+q_\mathrm{soc} x_H) -\lambda x_C +q_\mathrm{inf} x_H x_C. \end{aligned} \end{aligned}$$
(35)

To analyze the stability of the fixed point \(x_H=1, x_C=0\), we write it in terms of \(x_C\) and \(y=1-x_H\) as

$$\begin{aligned} \begin{aligned}&\dot{y} =-ry +x_C(r-\lambda +q_\mathrm{inf}) -q_\mathrm{inf} y x_C, \\&\dot{x}_C =x_C (q_\mathrm{inf} -q_\mathrm{soc} -\lambda -b) -y x_C(q_\mathrm{inf}-q_\mathrm{soc}). \end{aligned} \end{aligned}$$

The eigenvalues of the linearization of this system at the fixed point \(y=0,x_C=0\) are \(-r\) and \(q_\mathrm{inf} -q_\mathrm{soc} -\lambda -b\). Hence, the fixed point is stable if \(q_\mathrm{inf} -q_\mathrm{soc} -\lambda -b<0\), proving the first statement in (ii).

Assume (14) holds. To analyze the stability of the fixed point \(x_H^{**}\), we write system (35) in terms of the variables

$$\begin{aligned} y=x_H-x_H^{**}=x_H-\frac{b+\lambda }{q_\mathrm{inf} -q_\mathrm{soc}}, \quad z=x_C-x_C^{**}=x_C-\frac{r(q_\mathrm{inf}-q_\mathrm{soc}-b-\lambda )}{(r+b)q_\mathrm{inf}+(\lambda -r) q_\mathrm{soc}}, \end{aligned}$$

which is

$$\begin{aligned} \begin{aligned}&\dot{y} =-y \frac{r(q_\mathrm{inf}-q_\mathrm{soc})(r+q_\mathrm{inf}-\lambda )}{(r+b)q_\mathrm{inf} +(\lambda -r) q_\mathrm{soc}} -z \frac{(r+b)q_\mathrm{inf} +(\lambda -r) q_\mathrm{soc}}{q_\mathrm{inf}-q_\mathrm{soc}}-q_\mathrm{inf} yz, \\&\dot{z} =y \frac{r(q_\mathrm{inf}-q_\mathrm{soc}-b-\lambda )(q_\mathrm{inf}-q_\mathrm{soc})}{(r+b)q_\mathrm{inf} +(\lambda -r) q_\mathrm{soc}} +(q_\mathrm{inf}-q_\mathrm{soc})yz. \end{aligned} \end{aligned}$$

The characteristic equation of the matrix of linear approximation is seen to be

$$\begin{aligned} \xi ^2+ \frac{r(q_\mathrm{inf}-q_\mathrm{soc})(r+q_\mathrm{inf}-\lambda )}{(r+b)q_\mathrm{inf} +(\lambda -r) q_\mathrm{soc}}\xi +r(q_\mathrm{inf}-q_\mathrm{soc}-b-\lambda )=0. \end{aligned}$$

Under (14), the free term is positive; moreover, since (14) forces \(q_\mathrm{inf}>q_\mathrm{soc}\) and \(q_\mathrm{inf}>\lambda \), the coefficient at \(\xi \) is positive as well. Hence, both roots have negative real parts, implying stability.
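The stability claim can be checked numerically by forming the Jacobian of (35) at \((x_H^{**},x_C^{**})\) by finite differences; the parameter values below are illustrative choices satisfying the positivity condition:

```python
# Numerical check of the stability of the interior fixed point of system
# (35): compute (x_H**, x_C**) from their closed forms, build the Jacobian
# by central finite differences, and test tr < 0, det > 0. Parameters are
# illustrative, chosen so that q_inf - q_soc - b - lam > 0.
r, b, lam, q_soc, q_inf = 1.0, 0.1, 0.2, 0.5, 2.0
assert q_inf - q_soc - b - lam > 0

x_H = (b + lam) / (q_inf - q_soc)
x_C = r * (q_inf - q_soc - b - lam) / ((r + b) * q_inf + (lam - r) * q_soc)

def rhs(xH, xC):
    # right-hand side of system (35)
    dH = (1 - xH - xC) * r + lam * xC - q_inf * xH * xC
    dC = -xC * (b + q_soc * xH) - lam * xC + q_inf * xH * xC
    return dH, dC

# (x_H**, x_C**) is indeed a fixed point
assert all(abs(v) < 1e-9 for v in rhs(x_H, x_C))

h = 1e-6   # central finite differences for the Jacobian entries
a11 = (rhs(x_H + h, x_C)[0] - rhs(x_H - h, x_C)[0]) / (2 * h)
a12 = (rhs(x_H, x_C + h)[0] - rhs(x_H, x_C - h)[0]) / (2 * h)
a21 = (rhs(x_H + h, x_C)[1] - rhs(x_H - h, x_C)[1]) / (2 * h)
a22 = (rhs(x_H, x_C + h)[1] - rhs(x_H, x_C - h)[1]) / (2 * h)

tr, det = a11 + a22, a11 * a22 - a12 * a21
assert tr < 0 and det > 0                        # stability criterion
# the determinant matches the free term r(q_inf - q_soc - b - lam)
assert abs(det - r * (q_inf - q_soc - b - lam)) < 1e-6
```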

References

1. Achdou Y, Camilli F, Capuzzo-Dolcetta I (2013) Mean field games: convergence of a finite difference method. SIAM J Numer Anal 51(5):2585–2612
2. Aidt TS (2003) Economic analysis of corruption: a survey. Econ J 113(491):F632–F652
3. Avenhaus R, Canty MD, Kilgour DM, von Stengel B, Zamir S (1996) Inspection games in arms control. Eur J Oper Res 90(3):383–394
4. Avenhaus R, von Stengel B, Zamir S (2002) Inspection games. In: Aumann R, Hart S (eds) Handbook of game theory with economic applications, vol 3. North-Holland, Amsterdam, pp 1947–1987
5. Bardi M, Caines P, Capuzzo Dolcetta I (2013) Preface: DGAA special issue on mean field games. Dyn Games Appl 3(4):443–445
6. Basna R, Hilbert A, Kolokoltsov V (2014) An epsilon-Nash equilibrium for non-linear Markov games of mean-field-type on finite spaces. Commun Stoch Anal 8(4):449–468
7. Bensoussan A, Frehse J, Yam P (2013) Mean field games and mean field type control theory. Springer Briefs in Mathematics. Springer, New York
8. Binmore K, Samuelson L (1997) Muddling through: noisy equilibrium selection. J Econ Theory 74(2):235–265
9. Caines PE (2014) Mean field games. In: Samad T, Ballieul J (eds) Encyclopedia of systems and control. Springer Reference 364780. Springer, London, pp 30–31. doi:10.1007/978-1-4471-5102-9
10. Cardaliaguet P, Lasry J-M, Lions P-L, Porretta A (2013) Long time average of mean field games with a nonlocal coupling. SIAM J Control Optim 51(5):3558–3591
11. Carmona R, Delarue F (2013) Probabilistic analysis of mean-field games. SIAM J Control Optim 51(4):2705–2734
12. Carmona R, Lacker D (2015) A probabilistic weak formulation of mean field games and applications. Ann Appl Probab 25(3):1189–1231
13. Gast N, Gaujal B, Le Boudec J-Y (2012) Mean field for Markov decision processes: from discrete to continuous optimization. IEEE Trans Autom Control 57(9):2266–2280
14. Giovannoni F, Seidmann DJ (2014) Corruption and power in democracies. Soc Choice Welf 42:707–734
15. Gomes DA, Mohr J, Souza RR (2010) Discrete time, finite state space mean field games. J Math Pures Appl 93(3):308–328
16. Gomes DA, Mohr J, Souza RR (2013) Continuous time finite state space mean field games. Appl Math Optim 68(1):99–143
17. Gomes DA, Patrizi S, Voskanyan V (2014) On the existence of classical solutions for stationary extended mean field games. Nonlinear Anal 99:49–79
18. Gomes D, Velho RM, Wolfram M-T (2014) Socio-economic applications of finite state mean field games. Philos Trans R Soc Lond Ser A Math Phys Eng Sci 372(2028):20130405
19. Gomes DA, Saude J (2014) Mean field games models—a brief survey. Dyn Games Appl 4(2):110–154
20. Guéant O, Lasry J-M, Lions P-L (2011) Mean field games and applications. In: Paris-Princeton Lectures on Mathematical Finance 2010. Lecture Notes in Mathematics, vol 2003. Springer, Berlin, pp 205–266
21. Huang M, Malhamé R, Caines P (2006) Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Commun Inf Syst 6:221–252
22. Huang M, Caines P, Malhamé R (2007) Large-population cost-coupled LQG problems with nonuniform agents: individual-mass behavior and decentralized \(\epsilon \)-Nash equilibria. IEEE Trans Autom Control 52(9):1560–1571
23. Huang M (2010) Large-population LQG games involving a major player: the Nash certainty equivalence principle. SIAM J Control Optim 48:3318–3353
24. Hurwicz L (2007) But who will guard the guardians? Prize lecture. www.nobelprize.org
25. Jain AK (2001) Corruption: a review. J Econ Surv 15(1):71–121
26. Kandori M, Mailath GJ, Rob R (1993) Learning, mutation, and long run equilibria in games. Econometrica 61(1):29–56
27. Kolokoltsov VN (2012) Nonlinear Markov games on a finite state space (mean-field and binary interactions). Int J Stat Probab 1(1):77–91. http://www.ccsenet.org/journal/index.php/ijsp/article/view/16682
28. Kolokoltsov VN. The evolutionary game of pressure (or interference), resistance and collaboration. arXiv:1412.1269
29. Kolokoltsov V, Troeva M, Yang W (2014) On the rate of convergence for the mean-field approximation of controlled diffusions with large number of players. Dyn Games Appl 4(2):208–230
30. Kolokoltsov VN, Malafeyev OA (2010) Understanding game theory. World Scientific, Singapore
31. Kolokoltsov V, Passi H, Yang W (2013) Inspection and crime prevention: an evolutionary perspective. arXiv:1306.4219
32. Lambert-Mogiliansky A, Majumdar M, Radner R (2008) Petty corruption: a game-theoretic approach. Int J Econ Theory 4(2):273–297
33. Lambert-Mogiliansky A, Majumdar M, Radner R (2009) Strategic analysis of petty corruption with an intermediary. Rev Econ Des 13(1–2):45–57
34. Lasry J-M, Lions P-L (2006) Jeux à champ moyen. I. Le cas stationnaire. CR Math Acad Sci Paris 343(9):619–625 (in French)
35. Lee J-H, Sigmund K, Dieckmann U, Iwasa Y (2015) Games of corruption: how to suppress illegal logging. J Theor Biol 367:1–13
36. Levin MI, Tsirik ML (1998) Mathematical modeling of corruption. Ekon i Matem Metody 34(4):34–55 (in Russian)
37. Malafeyev OA, Redinskikh ND, Alferov GV. Electric circuits analogies in economics modeling: corruption networks. In: Proceedings of ICEE-2014 (2nd international conference on emission electronics). doi:10.1109/Emission.2014.6893965
38. Ngendafuriyo F, Zaccour G (2013) Fighting corruption: to precommit or not? Econ Lett 120:149–154
39. Nikolaev PV (2014) Corruption suppression models: the role of inspectors' moral level. Comput Math Model 25(1):87–102
40. Nourian M, Caines P (2013) \(\epsilon \)-Nash mean field game theory for nonlinear stochastic dynamical systems with major and minor agents. SIAM J Control Optim 51(4):3302–3331
41. Ross SM (1983) Introduction to stochastic dynamic programming. Wiley, Hoboken
42. Starkermann R (1989) Unity is strength or corruption! (a mathematical model). Cybern Syst Int J 20(2):153–163
43. Tembine H, Zhu Q, Basar T (2014) Risk-sensitive mean-field games. IEEE Trans Autom Control 59(4):835–850
44. Vasin AA (2005) Noncooperative games in nature and society. MAKS Press, Moscow (in Russian)
45. Vasin AA, Kartunova PA, Urazov AS (2010) Models of organization of state inspection and anticorruption measures. Matem Model 22(4):67–89
46. Wiecek P (2015) Total reward semi-Markov mean-field games with complementarity properties. Preprint


Author information


Corresponding author

Correspondence to V. N. Kolokoltsov.

Additional information

V. N. Kolokoltsov—Associate member of Institute of Informatics Problems, FRC CSC RAS.

Supported by RFFI Grant No. 14-06-00326.

http://arxiv.org/abs/1507.03240.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Kolokoltsov, V.N., Malafeyev, O.A. Mean-Field-Game Model of Corruption. Dyn Games Appl 7, 34–47 (2017). https://doi.org/10.1007/s13235-015-0175-x


Keywords

  • Corruption
  • Mean-field games
  • Stable equilibria
  • Social norms
  • Phase transition

Mathematics Subject Classification

  • 91A06
  • 91A15
  • 91A40
  • 91F99