Cost optimisation of hybrid institutional incentives for promoting cooperation in finite populations

In this paper, we rigorously study the problem of cost optimisation of hybrid (mixed) institutional incentives, which are a plan of actions involving the use of reward and punishment by an external decision-maker, for maximising the level (or guaranteeing at least a certain level) of cooperative behaviour in a well-mixed, finite population of self-regarding individuals who interact via cooperation dilemmas (Donation Game or Public Goods Game). We show that a mixed incentive scheme can offer a more cost-efficient approach for providing incentives while ensuring the same level or standard of cooperation in the long run. We establish the asymptotic behaviour of the cost function in the limits of neutral drift, strong selection, and infinite population size. We prove the existence of a phase transition, obtaining the critical threshold of the strength of selection at which the monotonicity of the cost function changes, and provide an algorithm for finding the optimal value of the individual incentive cost. Our analytical results are illustrated with numerical investigations. Overall, our analysis provides novel theoretical insights into the design of cost-efficient institutional incentive mechanisms for promoting the evolution of cooperation in stochastic systems.


Introduction
Literature review. Evolutionary Game Theory made its debut in 1973 with John Maynard Smith's and George R. Price's work on the formalisation of animal contests, thus successfully using Classical Game Theory to create a new framework that could predict the evolutionary outcomes of the interaction between competing individuals. Ever since, it has been widely used to study myriad questions in various disciplines like Evolutionary Biology, Ecology, Physics, Sociology and Computer Science, including what the mechanisms underlying the emergence and stability of cooperation are [19] and how to mitigate climate change and its risks [28].
Cooperation is the act of paying a cost in order to convey a benefit to somebody else. Although it initially seems against the Darwinian theory of natural selection, cooperation has been, is, and will be a vital part of life, from cellular clusters to bees to humans [31,20]. Several mechanisms for promoting the evolution of cooperation have been identified, including kin selection, direct reciprocity, indirect reciprocity, network reciprocity, group selection and different forms of incentives [19,31,24,25,37]. The current work focuses on institutional incentives [29,32,39,6,4,33,37,8], which are a plan of actions involving the use of reward (i.e., increasing the payoff of cooperators) and punishment (i.e., reducing the payoff of defectors) by an external decision-maker, in particular, how they can be used in combination (i.e. hybrid or mixed incentives) in a cost-efficient way for maximising the levels of cooperative behaviour in a well-mixed, finite population of self-regarding individuals.
Promoting and implementing cooperation via incentives is costly to the incentive-providing institution, such as the United Nations or the European Union [23,37]. Thus, it is crucial to understand how to minimise the said cost while ensuring a desirable standard of cooperation. In this work, players interact via cooperation dilemmas, both the pairwise Donation Game (DG) and its multi-player version, the Public Goods Game (PGG) [31,19]. These games have been widely adopted to model social dilemma situations in which individual rationality leads to collective irrationality.
Several theoretical models studied how to combine institutional reward and punishment for enhancing the emergence and stability of cooperation [2,7,1,14,33,8]. However, little attention has been given to addressing the cost optimisation of providing incentives. Chen et al. [30] looked at an incentive policy that switches from reward to punishment when the frequency of cooperators exceeds a certain threshold. This policy establishes cooperation at a lower cost and under a wider range of conditions than either reward or punishment alone, in both well-mixed and spatial populations. Others have applied the 'carrot and stick' idea to criminal recidivism and rehabilitation, as the justice system is now shifting its focus to reintegrating wrongdoers into society after the penalty has been served. Berenji et al. [1] studied a game in which players decide to permanently reform or continue engaging in criminal activity, eventually reaching a state where they are considered incorrigible. Since resources may be limited, they fixed the combined rehabilitation and penalty costs per crime. The most successful strategy in reducing crime is to optimally allocate resources so that, after having served the penalty, criminals are reintroduced into society via impactful programs. Wang et al. [39] explored the optimal incentive that not only minimises the total cost, but also guarantees a sufficient level of cooperation in an infinite and well-mixed population, via optimal control theory.
This work, however, does not take into account various stochastic effects of evolutionary dynamics, such as mutation and non-deterministic behavioural update. In a deterministic system of cooperators and defectors, once the latter disappear, there is no further change in the system and hence no further interference is needed. When mutation is present, defectors can reappear and become numerous over time, requiring the external institution to spend more on providing further incentives. Moreover, the intensity of selection - how strongly an individual bases their decision to copy another individual's strategy on their fitness difference - is overlooked. When selection is weak, providing incentives makes little difference in causing behavioural change, as no individual is motivated to copy another and all changes in strategy are due to noise. When selection is strong, incentives that ensure a minimum fitness advantage to cooperators would ensure a positive behavioural change, as the players would be more likely to copy one another. Recently, by simulating the weak prisoner's dilemma in finite populations, the authors of [42] found that a combination of appropriate punishment and reward mechanisms can promote cooperation regardless of how large or small the temptation to defect is.
Whilst the above works were concerned with well-mixed populations, the following two studies deal with spatial ones. Szolnoki and Perc [35] looked at whether there are evolutionary advantages in correlating positive and negative reciprocity, as opposed to adopting only reward or punishment.
Their work supports others that use empirical data. In those studies, the data fails to support the central assumption of the strong reciprocity model that negative and positive reciprocity are correlated. In a different work [34], the authors showed how second-order free-riding on antisocial punishment restores the effectiveness of prosocial punishment, providing an evolutionary escape from adverse effects of antisocial punishment. Both these works use Monte Carlo simulations.
Moreover, several works have provided insights into how best to promote the emergence of collective behaviours such as cooperation and fairness while also considering the institutional costs of providing incentives [17,2,6,4,3,12,5], see also [40] for a recent survey on these papers.
Indeed, most relevant to our work are Duong and Han [6] and Han and Tran-Thanh [12], which derived analytical conditions for which a general incentive scheme can guarantee a given level of cooperation while at the same time minimising the total cost of investment. These results are highly sensitive to the intensity of selection. They also studied a class of incentive strategies that make an investment whenever the number of players with a desired behaviour reaches a certain threshold t ∈ {1, . . . , N − 1} (N is the population size), showing that there is a wide range of values for the threshold that outperforms standard institutional incentive strategies -those which invest in all players, i.e. the threshold is t = N − 1 [30]. These works however did not study the cost-efficiency of the mixed incentive scheme, which is the focus of the present work.
Overview of contribution of this paper. As mentioned above, in this work, we consider a well-mixed, finite population of self-regarding individuals in which players interact via cooperation dilemmas (DG and PGG), and rigorously study the problem of cost optimisation of hybrid institutional incentives (a combination of reward and punishment) for maximising the level of cooperative behaviour (or guaranteeing at least a certain level of cooperation). This problem is challenging due to the number of parameters involved, such as the number of individuals in the population, the strength of selection, the game-specific quantities, and the efficiency ratios of providing the corresponding incentive. In particular, the Markov chain modelling the evolutionary process is of order equal to the population size, which is large but finite. The calculation of the entries of the corresponding fundamental matrix is intricate, both analytically and computationally.
Our present work provides a rigorous and delicate analysis of this problem, combining techniques from different branches of Mathematics including Markov chain theory, polynomial theory, and numerical analysis. The main results of the paper can be summarised as follows (detailed and precise statements are provided in Section 2).
(i) We show that a mixed incentive scheme can offer a more cost-efficient approach for providing incentives while ensuring the same level or standard of cooperation in the long-run.
(ii) We obtain the asymptotic behaviour of the cost function in the limits of neutral drift, strong selection as well as infinite population sizes.
(iii) We prove the existence of a phase transition for the cost function, obtaining the critical threshold of the strength of selection at which the monotonicity of the cost function changes and finding the optimal value of the individual incentive cost.
Furthermore, we provide numerical simulations to illustrate the analytical results.

Organisation of the paper
The rest of the paper is organised as follows. In Section 2 we present the model, methods, and the main results. Our main results include Proposition 1 on the efficiency of the combined reward and punishment incentives compared to implementing them separately, Theorem 1 on the asymptotic behaviour (neutral drift, strong selection, and infinite population limits) of the cost function, and Theorem 2 on the optimal incentive. In Section 3 we provide detailed computations and proofs of Theorem 2. Proof of Theorem 1 is given in Section 4. Summary and further discussions are provided in Section 5. Finally, Section 6 contains the proof of Proposition 1, detailed computations, and proofs for the technical results.

Model, methods, and main results
In this section, we present the model, methods, and main results of the paper. We first introduce the class of games, namely cooperation dilemmas, that we are interested in throughout this paper.

Cooperation dilemmas
We consider a well-mixed, finite population of N self-regarding individuals (players) who engage with one another using one of the following one-shot (i.e. non-repeated) cooperation dilemmas, the Donation Game (DG) or its multi-player version, the Public Goods Game (PGG). In terms of strategy, each player can choose to either cooperate (C) or defect (D).
Let Π C (j) be the average payoff of a C player (cooperator) and Π D (j) that of a D player (defector), in a population with j C players and (N − j) D players. As can be seen below, the difference in payoffs δ = Π C (j) − Π D (j) in both games does not depend on j. For the two cooperation dilemmas considered in this paper, namely the Donation Games and the Public Goods Games, it is always the case that δ < 0. This does not cover some weak social dilemmas such as the Snowdrift Game, where δ > 0 for some j, the general prisoners' dilemma, and the collective risk game [33], where δ depends on j. We will investigate these games in future research (see Section 5 for further discussion).

Donation Game (DG)
The Donation Game is a form of Prisoners' Dilemma in which cooperation corresponds to offering the other player a benefit B at a personal cost c, with B > c. Defection means offering nothing. The payoff matrix of DG (for the row player) is given as follows:

        C       D
  C   B − c    −c
  D     B       0

Denoting by π_{X,Y} the payoff of a strategist X when playing with a strategist Y in the payoff matrix above, we obtain the average payoffs

Π_C(j) = ((j − 1)π_{C,C} + (N − j)π_{C,D})/(N − 1) = (j − 1)B/(N − 1) − c,
Π_D(j) = (j π_{D,C} + (N − 1 − j)π_{D,D})/(N − 1) = jB/(N − 1).

Thus, δ = Π_C(j) − Π_D(j) = −(c + B/(N − 1)).

Public Goods Game (PGG)

In a Public Goods Game, players interact in a group of size n, where they decide to cooperate, contributing an amount c > 0 to a common pool, or to defect, contributing nothing to the pool.
The total contribution in a group is multiplied by a factor r, where 1 < r < n (for the PGG to be a social dilemma), which is then shared equally among all members of the group, regardless of their strategy. Intuitively, contributing nothing offers one a higher amount of money after redistribution.
The average payoffs, Π_C(j) and Π_D(j), are calculated based on the assumption that the groups engaging in a public goods game are formed by multivariate hypergeometric sampling. Thereby, for transitions between two pure states, this reduces to sampling, without replacement, from a hypergeometric distribution. More precisely, we obtain [13]

Π_C(j) = (rc/n) ((n − 1)(j − 1)/(N − 1) + 1) − c,   Π_D(j) = (rc/n) (n − 1)j/(N − 1).

Thus, δ = Π_C(j) − Π_D(j) = −c (1 − r(N − n)/(n(N − 1))).
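As a numerical sanity check, the key structural property used throughout the paper, that δ does not depend on the state j, can be verified directly. The sketch below (not the paper's code; it assumes the standard hypergeometric-mean payoff formulas for the PGG) computes the average payoffs and confirms that their difference is constant in j and negative.

```python
# Sketch: verify that delta = Pi_C(j) - Pi_D(j) in the PGG is independent of j.
# Average payoffs use the mean of hypergeometric sampling of co-players:
# a focal player meets, on average, (n-1)*j_others/(N-1) cooperators.

def pgg_payoffs(j, N, n, r, c):
    """Average PGG payoffs of a cooperator and a defector when the
    population contains j cooperators (illustrative formulas)."""
    e_coop_for_C = (n - 1) * (j - 1) / (N - 1)   # co-players of a focal C
    e_coop_for_D = (n - 1) * j / (N - 1)         # co-players of a focal D
    pi_C = r * c * (e_coop_for_C + 1) / n - c    # focal C also contributes
    pi_D = r * c * e_coop_for_D / n
    return pi_C, pi_D

N, n, r, c = 50, 5, 3.0, 1.0                     # 1 < r < n: social dilemma
deltas = [pgg_payoffs(j, N, n, r, c)[0] - pgg_payoffs(j, N, n, r, c)[1]
          for j in range(1, N)]
# deltas is constant in j and negative
```

With 1 < r < n, δ is strictly negative, confirming that defection dominates in any mixed state.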

Cost of institutional reward and punishment
To reward a cooperator (respectively, punish a defector), the institution has to pay an amount θ/a (resp., θ/b) so that the cooperator's (defector's) payoff increases (decreases) by θ, where a, b > 0 are constants representing the efficiency ratios of providing the corresponding incentive.
In an institutional enforcement setting, we assume that the institution has full information about the population composition or statistics at the time of decision-making. That is, given the well-mixed population setting, we assume that the number j of cooperators in the population is known. Thus, if both reward and punishment are feasible options (i.e., mixed incentives), the institution can minimise its cost by choosing the minimum of j/a and (N − j)/b. The key question here is: what is the optimal value of the individual incentive cost θ that ensures a sufficient desired level of cooperation in the population (in the long run) while minimising the total cost spent by the institution?
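The per-state decision rule above can be sketched in a few lines. This is an illustrative reading of the scheme (names are ours, not from the paper's code): rewarding all j cooperators costs θ·j/a, punishing all N − j defectors costs θ·(N − j)/b, and the institution picks the cheaper option.

```python
# Sketch: per-state cost of the mixed incentive scheme.
def per_state_cost(j, N, theta, a, b):
    """Cost paid by the institution in the state with j cooperators:
    the cheaper of rewarding all cooperators or punishing all defectors."""
    return theta * min(j / a, (N - j) / b)

N, theta, a, b = 10, 1.0, 1.0, 1.0
costs = [per_state_cost(j, N, theta, a, b) for j in range(1, N)]
# With a = b, rewarding is chosen when cooperators are the minority.
```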
Note that, as discussed above, this mixed incentive, also known as the 'carrot and stick' approach, has been shown efficient for promoting cooperation in both pairwise and multi-player interactions [30,14,33,7,8]. However, these works have not studied cost optimisation and have not shown whether this approach is actually more cost-efficient and by how much.

Deriving the expected cost of providing institutional incentives
In this model, we adopt the finite population dynamics with the Fermi strategy update rule [36], stating that a player X with fitness f_X adopts the strategy of another player Y with fitness f_Y with probability P_{X,Y} = (1 + e^{−β(f_Y − f_X)})^{−1}, where β represents the intensity of selection.
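The Fermi rule can be written down directly; the sketch below (variable names are illustrative) shows its two limiting behaviours: for β → 0 imitation is a coin flip (neutral drift), while for large β the fitter strategy is copied almost surely.

```python
import math

# Sketch of the Fermi imitation probability.
def fermi_prob(f_X, f_Y, beta):
    """Probability that player X imitates player Y under the Fermi rule."""
    return 1.0 / (1.0 + math.exp(-beta * (f_Y - f_X)))

# beta -> 0: probability tends to 1/2 regardless of payoffs (noise-driven);
# beta large: the higher-fitness strategy spreads almost deterministically.
```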
We compute the expected number of times the population contains j C players, 1 ≤ j ≤ N −1.
The entries n_{ik} of the so-called fundamental matrix N = (n_{ik})_{i,k=1}^{N−1} = (I − U)^{−1} of the absorbing Markov chain give the expected number of times the population is in the state S_k if it starts in the transient state S_i [15]. As a mutant can randomly occur either at S_0 or S_N, the expected number of visits at state S_i is thus (1/2)(n_{1i} + n_{N−1,i}). The total cost per generation at state S_j is θ min(j/a, (N − j)/b). Hence, the expected total cost of interference for mixed reward and punishment is

E_mix(θ) = (θ/2) Σ_{j=1}^{N−1} (n_{1j} + n_{N−1,j}) min(j/a, (N − j)/b).   (2)

As a comparison, we recall the costs for the reward and punishment incentives, E_r and E_p respectively, when used separately [6]:

E_r(θ) = (θ/(2a)) Σ_{j=1}^{N−1} (n_{1j} + n_{N−1,j}) j,   E_p(θ) = (θ/(2b)) Σ_{j=1}^{N−1} (n_{1j} + n_{N−1,j}) (N − j).   (3)

By comparing (2) and (3), one clearly expects that the efficiency ratios a and b strongly affect the incentive cost. In the cost functions E_r and E_p, they are just scaling factors and do not affect the analysis of these functions. This is not the case for the combined incentive. One of the main objectives of this paper is to reveal explicitly the influence of a and b on the cost function. From a mathematical point of view, the presence and interplay of a and b make the analysis of the combined incentive much harder than that of the separate ones.
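The expected-cost computation above can be sketched numerically. The code below is an illustrative implementation, not the paper's: it assumes the birth-death transition probabilities of the pairwise-comparison process with a constant payoff gap δ raised by the incentive θ, builds the tridiagonal matrix U over the transient states S_1, …, S_{N−1}, inverts I − U for the fundamental matrix, and weights the expected visits by the per-state incentive cost.

```python
import numpy as np

# Sketch: expected total cost of the mixed incentive via the fundamental matrix.
# Assumptions: Fermi update with effective payoff gap (delta + theta); a C-D
# pair is picked with probability j(N-j)/N^2 in state j.

def e_mix(theta, N, delta, beta, a, b):
    x = beta * (delta + theta)
    up = 1.0 / (1.0 + np.exp(-x))     # weight for j -> j + 1
    down = 1.0 / (1.0 + np.exp(x))    # weight for j -> j - 1
    U = np.zeros((N - 1, N - 1))      # transient part of the absorbing chain
    for j in range(1, N):
        m = j * (N - j) / N**2
        if j < N - 1:
            U[j - 1, j] = m * up
        if j > 1:
            U[j - 1, j - 2] = m * down
        U[j - 1, j - 1] = 1.0 - m * (up + down)
    n_mat = np.linalg.inv(np.eye(N - 1) - U)          # fundamental matrix
    visits = 0.5 * (n_mat[0, :] + n_mat[-1, :])       # mutants at S_0 or S_N
    w = np.array([min(j / a, (N - j) / b) for j in range(1, N)])
    return theta * float(visits @ w)
```

Raising the reward efficiency a can only lower the per-state weights, so the total cost decreases, consistent with the discussion of the roles of a and b above.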
Remark (On the interference scheme). In the mixed incentive setting considered in this paper, the institution either rewards every cooperator or punishes every defector, depending on which one is less costly. Although rather unsophisticated, this incentive strategy is typically considered in the literature on institutional incentives modelling. However, other interference schemes are also investigated in many works; for instance, the institution may only reward C players whenever their frequency or number in the population does not exceed a given threshold t, where 1 ≤ t ≤ N − 1. The scheme studied in this paper corresponds to the case t = N − 1. We refer the reader to [12] and references therein for more information about different interference schemes. We plan to generalise the results of this paper to more complicated incentive strategies in future work; see Section 5 for further discussion.

Cooperation frequency
Since the population consists of only two strategies, the fixation probabilities of a C (D) player in a homogeneous population of D (C) players when the interference scheme is carried out are, respectively [18],

ρ_{D,C} = (1 + Σ_{i=1}^{N−1} Π_{k=1}^{i} u_{k,k−1}/u_{k,k+1})^{−1},   ρ_{C,D} = (1 + Σ_{i=1}^{N−1} Π_{k=1}^{i} u_{k,k+1}/u_{k,k−1})^{−1}.   (4)

Computing the stationary distribution using these fixation probabilities, we obtain the long-run frequency of cooperation, ρ_{D,C}/(ρ_{D,C} + ρ_{C,D}). Hence, this frequency of cooperation can be maximised by maximising the ratio ρ_{D,C}/ρ_{C,D}. The fraction in Equation (4) can be simplified as follows [19]:

ρ_{D,C}/ρ_{C,D} = Π_{i=1}^{N−1} (u_{i,i+1}/u_{i,i−1}) = e^{(N−1)β(δ+θ)}.   (5)

In the above transformation, u_{i,i−1} and u_{i,i+1} are the probabilities to decrease or increase the number i of C players by one in each time step, respectively.
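The resulting cooperation frequency admits a compact closed form under these assumptions; the sketch below (our illustrative code, assuming the ratio of fixation probabilities equals e^{(N−1)β(δ+θ)} under the Fermi rule) evaluates it.

```python
import math

# Sketch: long-run cooperation frequency from the fixation-probability ratio.
def coop_frequency(N, delta, theta, beta):
    x = beta * (delta + theta)        # payoff gap amplified by the incentive
    ratio = math.exp((N - 1) * x)     # rho_DC / rho_CD (assumed closed form)
    return ratio / (1.0 + ratio)      # rho_DC / (rho_DC + rho_CD)
```

When θ = −δ the incentive exactly cancels the cooperators' disadvantage and the frequency is 1/2; larger θ pushes it towards 1.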
We consider non-neutral selection, i.e. β > 0 (under neutral selection, there is no need to use incentives as no player is likely to copy another player and any changes in strategy that happen are due to noise as opposed to payoffs). Assuming that we desire to obtain at least an ω ∈ [0, 1] fraction of cooperation, i.e.
ρ_{D,C}/(ρ_{D,C} + ρ_{C,D}) ≥ ω, it follows from equation (5) that

θ ≥ θ_0(ω) := −δ + (1/(β(N − 1))) ln(ω/(1 − ω)).   (6)

Therefore, it is guaranteed that if θ ≥ θ_0(ω), at least an ω fraction of cooperation can be expected. This condition implies that the lower bound of θ depends monotonically on β: when ω ≥ 0.5 it decreases with β, and when ω < 0.5 it increases with β.
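The threshold and its monotonicity in β can be checked numerically. The sketch assumes the reconstructed closed form θ_0(ω) = −δ + ln(ω/(1 − ω))/((N − 1)β), which follows from solving e^{(N−1)β(δ+θ)} ≥ ω/(1 − ω) for θ.

```python
import math

# Sketch: minimal incentive theta_0 guaranteeing an omega fraction of cooperation.
def theta_0(omega, N, delta, beta):
    """Assumed closed form derived from the fixation-probability ratio."""
    return -delta + math.log(omega / (1.0 - omega)) / ((N - 1) * beta)

# For omega = 0.5 the threshold is exactly -delta: the incentive only has to
# neutralise the cooperators' payoff disadvantage.
```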
To summarise, we obtain the following constrained minimisation problem:

min_{θ ≥ θ_0(ω)} E_mix(θ).   (7)
Remark (On the formula of the cost of the mixed incentive). In the derivation of the cost function (2), we assumed that the population is equally likely to start in the homogeneous state S_0 as in the homogeneous state S_N. In general, this might not be correct. For example, if cooperators are very likely to fixate in a population of defectors, but defectors are unlikely to fixate in a population of cooperators, mutants are on average more likely to appear in the homogeneous cooperative population (that is, in S_N); similarly, the population might be more likely to appear in S_0 than in S_N. In general, in the long run, the population will start at i = 0 (respectively, i = N) with probability equal to the frequency of D (respectively, C) computed at equilibrium, f_D = ρ_{C,D}/(ρ_{D,C} + ρ_{C,D}) and f_C = 1 − f_D. Thus, generally, the expected number of visits at state S_i will be f_D n_{1i} + f_C n_{N−1,i}, and, instead of (2), in the general setting the formula for the cost function should be

E_mix(θ) = θ Σ_{j=1}^{N−1} (f_D n_{1j} + f_C n_{N−1,j}) min(j/a, (N − j)/b).

In practice, in many works based on agent-based simulations [2,5,34,11,30], it is often assumed that mutation is negligible and simulations end whenever the population fixates in a homogeneous state. Moreover, these simulations usually assume that the initial population starts at a homogeneous state or has a uniform distribution of different types. In this work, we thus assume an equal likelihood that the population starts at one of the homogeneous states, and our formula (2) captures such scenarios. This assumption enables us to analytically study the cost function and its behaviour. As will be clear in the subsequent sections, the analysis is already very involved in this simplified setting. Our results encapsulate the intermediate-run dynamics, an approximation that is valid if the time-scale is long enough for one type to reach fixation, but too short for the next mutant to appear.
Our findings might thus be more practically useful for the optimisation of the institutional budget for providing incentives on an intermediate timescale.
We will study this problem in the most general case, where the initial population can start at an arbitrary state, in future work. The cost function can be obtained from extensive agent-based simulations of the evolutionary process. However, this approach is very computationally expensive especially when one wants to analytically study the cost function as a function of the individual incentive cost for large population sizes, which is the focus of this paper. Previous works have already shown that outcomes from our adopted evolutionary processes (small-mutation limit) are in line with extensive agent-based simulations, e.g. in [9,38,13,32].

Main results of the present paper
Proposition 1 (Combined incentives vs separate ones). It is always at least as cost-efficient to use the mixed incentive approach as to use a separate incentive, reward or punishment: E_mix(θ) ≤ min(E_r(θ), E_p(θ)). In particular, if providing reward to a cooperator is much more cost-efficient for the institution than punishing a defector, i.e. when a/b ≥ N − 1, then it is optimal to use reward entirely. Symmetrically, if punishment is much more efficient, i.e. b/a ≥ N − 1, then it is optimal to use punishment entirely. Otherwise, a mixed approach is strictly more cost-efficient. Note, however, that the mixed approach requires the institution to be able to observe the population composition (i.e. the number j of cooperators in the population).
The proof of this Proposition is given in Section 6.2; see Figure 2 for an illustration.
The number H_{N,a,b}, defined in (8), is central to the analysis of this paper. It plays a similar role to the harmonic number H_N in [6], where a similar cost optimisation problem, but for a separate reward or punishment incentive, is studied. However, unlike the harmonic number, which grows like ln N + γ (where γ is the Euler-Mascheroni constant) as N → +∞, we will show that H_{N,a,b} is always bounded; its asymptotic behaviour is given in Proposition 4. Our second main result below establishes the asymptotic behaviour (neutral drift limit, strong selection limit, and infinite population limit) of the cost function E_mix(θ).
Theorem 1 (Asymptotic behaviour of the cost function).
1. (Growth of the cost function) The cost function satisfies lower and upper bound estimates which, since H_{N,a,b} is uniformly bounded (with respect to N), imply that the cost function grows quadratically with respect to N.

Infinite population limit:
It is worth noticing that the neutral drift and strong selection limits of the cost function do not depend on the underlying games, but the infinite population limit does.
The proof of this Theorem will be given in Section 4. The following result provides a detailed analysis for the minimisation problem (7) for a = b.
Note that since N ≥ 2, the case a = b belongs to the interesting regime 1/(N − 1) ≤ b/a ≤ N − 1; see Proposition 1 above. Mathematically, this case is distinguished since it gives rise to many useful and beautiful symmetric properties and cancellations, see Section 3. We also numerically investigate the case a ≠ b and conjecture that the result still holds true.

Theorem 2 (Phase transition and optimal incentive for a = b).

1. (Behaviour below the threshold value) There exists a critical threshold β* of the strength of selection, characterised in terms of the variable u := e^x with x = β(δ + θ), such that θ ↦ E_mix(θ) is non-decreasing for all β ≤ β* and non-monotonic for β > β*.

2. (Behaviour above the threshold value) For β > β*, the number of changes of sign of E′_mix(θ) is at least two for all N, and there exists an N_0 such that the number of changes is exactly two for N ≤ N_0. As a consequence, for N ≤ N_0, there exist θ_1 < θ_2 such that, for β > β*, E_mix(θ) is increasing when θ < θ_1, decreasing when θ_1 < θ < θ_2, and increasing when θ > θ_2.

The proof of this Theorem is detailed in Section 3.

Algorithms
Based on the results above, we describe an algorithm for the computation of the critical threshold β* of the strength of selection and of the optimal incentive:

(1) Compute δ (for DG and for PGG, using the formulas of Section 2).
(2) Compute the critical threshold β*.
(3) Compute the optimal incentive by solving the minimisation problem (7).

Output: θ* and E_mix(θ*). The numerical results obtained are in accordance with Theorem 2: for β ≤ β*, the cost function increases, while for β > β* it is not monotonic; moreover, mixed incentives are less costly than either reward or punishment.

The expected cost function, phase transition, and the minimisation problem

In this section, we study in detail the cost function, establishing the phase transition phenomena and solving the minimisation problem (7) of finding the optimal incentive, thus proving Theorem 2.
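The optimisation step (3) can be sketched numerically: evaluate E_mix(θ) on a grid θ ≥ θ_0 and take the minimiser. The block below is a self-contained illustration (not the paper's code); e_mix rebuilds the cost function from the fundamental matrix under the same assumptions as in Section 2.

```python
import numpy as np

# Sketch: grid search for the optimal incentive theta* in problem (7).
def e_mix(theta, N, delta, beta, a, b):
    x = beta * (delta + theta)
    up, down = 1.0 / (1.0 + np.exp(-x)), 1.0 / (1.0 + np.exp(x))
    U = np.zeros((N - 1, N - 1))
    for j in range(1, N):
        m = j * (N - j) / N**2
        if j < N - 1:
            U[j - 1, j] = m * up
        if j > 1:
            U[j - 1, j - 2] = m * down
        U[j - 1, j - 1] = 1.0 - m * (up + down)
    n_mat = np.linalg.inv(np.eye(N - 1) - U)
    visits = 0.5 * (n_mat[0, :] + n_mat[-1, :])
    w = np.array([min(j / a, (N - j) / b) for j in range(1, N)])
    return theta * float(visits @ w)

def optimal_theta(theta_lo, N, delta, beta, a, b, grid=400, theta_hi=10.0):
    """Minimise E_mix over a grid of admissible incentives theta >= theta_lo."""
    thetas = np.linspace(theta_lo, theta_hi, grid)
    vals = [e_mix(t, N, delta, beta, a, b) for t in thetas]
    i = int(np.argmin(vals))
    return float(thetas[i]), float(vals[i])
```

For β below the critical threshold the cost function is non-decreasing, so the grid minimiser sits at the constraint θ_0, in line with Theorem 2.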
We consider the regime 1/(N − 1) ≤ b/a ≤ N − 1 and focus on the most important case a = b. This is due to Proposition 1: when b/a is not in this interval, the mixed incentive problem reduces to either the reward or the punishment problem, which has been studied in [6].

The cost function and its derivative
In this section, we provide the explicit computation of the cost function and its derivative. The class of cooperation dilemmas, namely DG and PGG, introduced in Section 2 is of crucial importance in the analysis of this paper. This is because, as already mentioned, the difference in payoffs between a cooperator and a defector, δ = Π_C(i) − Π_D(i), does not depend on the state i, which gives rise to explicit and analytically tractable formulas for the entries of the fundamental matrix N of the absorbing Markov chain describing the population dynamics. These entries are given in Lemma 1, whose detailed proof can be found in [6]. Using Lemma 1, we obtain a more explicit formula for the cost function. To study the monotonicity of the cost function, in the following lemma we compute the derivative of E_mix(θ) with respect to θ.
Lemma 2. The total cost of interference for the mixed institutional incentive, and its derivative with respect to θ, are given by explicit formulas. See Section 6.3 for a proof of this lemma.

The polynomial P
This section contains details about the polynomial P defined in (29). The following proposition studies the properties of P.
Proposition 2. Let P(u) be the polynomial defined in (29). Then P is a polynomial of degree 2N − 4 whose leading coefficient p_{2N−4} is positive. When a = b, the coefficients of P are antisymmetric, that is, p_i = −p_{2N−4−i} for all i = 0, . . . , 2N − 4. As a consequence, when a = b, P has exactly one positive root, which is equal to 1.
The proof of this proposition is lengthy and delicate. The case a = b is special since it gives rise to many useful symmetric properties and nice cancellations. To focus on the main points here, we postpone the proof to the Appendix, see Section 6.4.

The derivative of F
In this section, we study the derivative of the function F defined in (31). The analysis of this section will play an important role in the study of the phase transition of the cost function in the next section.
We compute F′(u) explicitly. The sign of F′(u) (and thus the monotonicity of F) is the same as that of a polynomial M. The next proposition presents some properties of M.
1. M is a polynomial of degree 4N − 6 with positive leading coefficient. 2. When a = b, the coefficients of M are symmetric, that is, m_i = m_{4N−6−i} for all i = 0, . . . , 4N − 6. 3. M has at least two positive roots, one less than 1 and the other bigger than 1. For sufficiently small N, namely N ≤ N_0, M has exactly two positive roots, u_1 and u_2, where u_1 < 1 < u_2. As a consequence, for 1 < u < u_2, F′(u) < 0, and thus F is decreasing, while for u > u_2, F′(u) > 0, and thus F is increasing. The proof of this proposition is presented in Section 6.5. We conjecture that the coefficient sequence of M has exactly two changes of sign, and thus, by Descartes' rule of signs, that M has exactly two positive roots for all N.
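The root-counting argument rests on Descartes' rule of signs: the number of positive roots of a polynomial is at most the number of sign changes in its coefficient sequence (and has the same parity). A minimal helper for such checks, with a generic example rather than the paper's specific polynomial M:

```python
# Sketch: count sign changes in a coefficient sequence; by Descartes' rule of
# signs this bounds the number of positive real roots of the polynomial.
def sign_changes(coeffs):
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for s, t in zip(nonzero, nonzero[1:]) if s * t < 0)

# Example: u^2 - 3u + 1 has two sign changes, hence at most two positive roots
# (here exactly two, at (3 +/- sqrt(5))/2).
```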

The phase transition and the minimisation problem
In this section, we study the phase transition problem, which describes the change in the behaviour of the cost function when varying the strength of selection β, and the optimal incentive problem, thus proving Theorem 2. We focus on the case a = b.
Proof of Theorem 2. It follows from Proposition 2 that, for u > 0, P(u) > 0 if and only if u > 1.

Asymptotic behaviour of the expected cost function
In this section, we study the asymptotic behaviour (neutral drift, strong selection, and infinite population limits) of the cost function, proving the main results in Theorem 1. To this end, we will first need some auxiliary technical lemmas.

Some auxiliary lemmas
The first auxiliary lemma is an elementary inequality which will be used to estimate the cost function. Its proof can be found in [6].
In the next lemma, we provide lower and upper bounds for the number H N,a,b defined in (8).
As mentioned in the introduction, this number plays a similar role as the harmonic number H n in [6]. However, unlike the harmonic number, we will show that H N,a,b is always bounded and has a finite limit as N → +∞.
Lemma 4. It holds that H_{N,a,b} is bounded, below and above, uniformly in N. The proof of this lemma is given in Section 6.6.
The following proposition characterises the asymptotic limit of H N,a,b as N → +∞, which will be used later to obtain asymptotic limits of the cost function.

Proposition 4. The number H_{N,a,b} converges, as N → +∞, to a finite limit depending on a and b.
Proof. We recall the definition of H_{N,a,b} in (8). As in the proof of the above lemma, we link H_{N,a,b} to the harmonic number H_N by splitting the sum at an appropriate point, N_{a,b} := Na/(a + b), which allows us to determine the minimum between j/a and (N − j)/b explicitly, and we substitute ĵ = N − j in the second part of the sum. Note that N − N_{a,b} = N(1 − a/(a + b)) = Nb/(a + b) and N_{a,b} = Na/(a + b); thus both N_{a,b} and N − N_{a,b} tend to +∞ as N → +∞. The claim then follows from the well-known asymptotic behaviour of the harmonic number (see (52)). This completes the proof of the proposition.
The following lemma provides lower and upper bounds for the cost function, which show that it grows quadratically with respect to the population size.

Lemma 5. For θ > 0, the cost function E_mix(θ) is bounded below and above by θm and θM respectively, where m and M are the quantities defined in (16). Proof. Using Lemma 3, we obtain the stated estimates, which completes the proof of the lemma.
We can further estimate m and M in (16) from below and above respectively in terms of the population size N as follows.
Lemma 6. For m and M defined in (16), it holds that As a consequence, we have The proof of this lemma is given in Section 6.7.

Neutral drift limit
In this section, we study the neutral drift limit of the cost function, proving the second part of Theorem 1.

Proposition 5 (Neutral drift limit). As β → 0, the cost function E_mix(θ) converges to the explicit limit stated in Theorem 1.
It is worth noting that the neutral drift limit of the cost function depends on the population size N , the reward and punishment efficiency ratios a and b through the number H N,a,b , but does not depend on the underlying game.

Strong selection limits
In this section, we study the strong selection limits of the cost function, proving the third statement of Theorem 1.
Proposition 6 (Strong selection limit). As β → +∞, the cost function converges to explicit limits, with a separate value at θ = −δ (where the effective payoff difference vanishes). Similarly to the neutral drift limit, the strong selection limit of the cost function depends on the population size N and the reward and punishment efficiency ratios a and b, but does not depend on the underlying game.

Infinite population limits
In this section, we establish the infinite population limits of the cost function, proving the fourth statement of Theorem 1.

Proposition 7 (Infinite population limit). As N → +∞, the cost function E_mix(θ) converges to an explicit, game-dependent limit.
Unlike the neutral drift and strong selection limits, the infinite population limit of the cost function strongly depends on the underlying game. The strength of selection, β, is fixed. See Figure 5 below for an illustration of these limits.
Proof. We can estimate the quantity of interest from below and above in terms of $\underline{h} = \min_j h_j$ and $\overline{h} = \max_j h_j$.
We recall that $x = \beta(\theta + \delta)$. Since, for fixed $\theta$, $x$ equals $0$ for only one value of $N$, we may assume $x \neq 0$ when studying the limit $N \to +\infty$. We then compute the limits separately for the PGG (20) and for the DG. Substituting these limits into (22), it follows that, if $\theta - c(1 - r/n) \le 0$, we obtain the stated expression. From the above limits, (22) and (23), we obtain the result.

Discussion
The use of institutional incentives such as reward and punishment is an effective tool for promoting cooperation in social dilemmas, as shown by both theoretical [13,30,32,10,6] and experimental [23,37,41] results. Although past works have employed mixed incentives, they overlooked the problem of minimising the cost of providing them while ensuring a minimum level of cooperation.
Moreover, existing works that do consider this question usually omit the stochastic effects that drive population dynamics, in particular how the dynamics change as the strength of selection, $\beta$, varies.
In this work, we used a stochastic evolutionary game theoretic approach for finite, well-mixed populations and obtained theoretical results for the optimal cost of mixed incentives that ensures a desired level of cooperation for a given intensity of selection, $\beta$. We showed that this cost strongly depends on the value of $\beta$, due to the existence of a phase transition in the cost function for providing mixed incentives; this behaviour is missing in works that adopt a deterministic evolutionary approach [39]. We also characterised the asymptotic behaviour of the cost function and showed that mixed incentives are always more cost-efficient than using either reward or punishment alone. In particular, we obtained the infinite-population limit, as well as the neutral drift and strong selection limits.
For the mathematical analysis of the mixed incentive cost function to be possible, we made some assumptions. Firstly, in order to derive the analytical formula for the frequency of cooperation, we assumed a small mutation limit [27,21,31]. Despite this simplification, the small mutation limit approach has wide applicability to scenarios that go well beyond the strict limit of very small mutation rates [43,13,32,26]; were we to relax this assumption, the derivation of a closed form for the frequency of cooperation would be intractable. Secondly, we focused on two important cooperation dilemmas, the Donation Game and the Public Goods Game. Both have in common that the difference in average payoffs between a cooperator and a defector does not depend on the population composition. This special property allowed us to reduce the fundamental matrix of the Markov chain to tridiagonal form and apply techniques of matrix analysis to obtain a closed form for its inverse. In games with more complex payoff structures, such as the general Prisoner's Dilemma and the collective risk game [33], this property no longer holds (e.g. in the former case the payoff difference $\Pi_C(i) - \Pi_D(i)$ depends additively on $i$), and the technique in this paper cannot be directly applied. In these scenarios, we might consider other approaches to approximate the inverse matrix, exploiting its block structure.
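To illustrate the kind of matrix technique involved, the following is a minimal sketch of the standard Thomas algorithm for solving a tridiagonal linear system in $O(n)$ time; it is a generic illustration of why tridiagonal structure is so convenient, not the paper's closed-form inverse.

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm: solve A x = rhs where A is tridiagonal, with
    sub-diagonal `sub` (length n-1), diagonal `diag` (length n) and
    super-diagonal `sup` (length n-1)."""
    n = len(diag)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = sup[0] / diag[0] if n > 1 else 0.0
    dp[0] = rhs[0] / diag[0]
    # Forward sweep: eliminate the sub-diagonal.
    for i in range(1, n):
        denom = diag[i] - sub[i - 1] * cp[i - 1]
        cp[i] = sup[i] / denom if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / denom
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For diagonally dominant systems the forward sweep never divides by zero, so the algorithm is well defined.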
More recent works have looked at the effect of indirect exclusion on cooperation, finding that its introduction can induce either the stable coexistence of cooperators and defectors or the dominance of cooperators, thereby successfully promoting cooperation [16]. Requiring a prior agreement before an interaction takes place has also been shown to increase cooperation.
In such settings, individuals choose whether to take part in a social dilemma, and those who do are rewarded [22,10]. Our future work will consider cost-efficiency in these different forms of institutional incentives, including the problem of providing incentives whenever the frequency or number of cooperators (or defectors) in the population does not exceed a given threshold.
We furthermore aim to investigate the optimisation of incentives such as reward, punishment, and exclusion in complex networks. Little attention has been paid to providing analytical results for cost-efficient incentives in structured populations or in more complex games, so this is an interesting research avenue. Finally, since time is a precious resource, we intend to explore how long the system needs in order to achieve a desired level of cooperation.

Appendix
In this appendix, we provide explicit calculations for some small population cases, as well as detailed proofs and computations of technical results in the previous sections.

Small population examples
The cost function and the function $F$ are computed explicitly for this small population. The critical threshold is $\beta^* = 3.67$.

Proof of Proposition 1
In this section, we prove Proposition 1.
Proof. We recall the defining formulae. The first statement follows directly from these formulae, the fact that $n_{1,j}, n_{N-1,j} > 0$, and elementary inequalities. Now we prove the second statement; the third one is similar.
Thus, $\min\left(\frac{j}{a}, \frac{N-j}{b}\right) = \frac{j}{a}$ for all $j = 1, 2, \ldots, N-1$, where $N$ is the number of individuals in the population. Therefore,

Proof of Lemma 2
In this section, we give the proof of Lemma 2.
Proof. Since $x = \beta(\theta + \delta)$ and $g(x) = 1 + e^x + \cdots + e^{(N-1)x}$, it is more convenient to express the right-hand side of (28) in terms of $u = e^x$. Therefore, we only need to consider the case $f(x)g'(x) - f'(x)g(x) = uP(u) > 0$ (i.e., when $P(u) > 0$). The next step is to understand the sign of the term $F(u) + \beta\delta$, which is required to understand the polynomials $P$ and $Q$. In the next section, we analyse the polynomial $P$ ($Q$ is explicit and much simpler).

Proof of Proposition 2
In this section, we provide a proof of Proposition 2. We carefully analyse the coefficients of $P$ and employ discrete integration-by-parts techniques from numerical analysis.
Therefore, we obtain, in particular, that the coefficients simplify when $a = b$. Now we will show that when $a = b$, $p_{N-2} = 0$ and $p_k + p_{2N-4-k} = 0$ for all $1 \le k \le N-3$.
We compute the coefficients explicitly for $1 \le k \le N-3$ and for $k = N-2$. It follows from these computations that, in the case $a = b$, we have $p_{N-2} = 0$. Similarly, for $N-1 \le k \le 2N-5$, let $\hat{k} := 2N-4-k$; then $1 \le \hat{k} \le N-3$, and we will show that $p_{2N-4-\hat{k}} + p_{\hat{k}} = 0$ for all $1 \le \hat{k} \le N-3$. To proceed further, we need to simplify the two sums using discrete integration-by-parts techniques.
For the first sum in $S_1$, by changing the index $j$ to $\hat{k} - j + 1$, we obtain a new expression. Similarly, by changing the index $j$ to $\hat{k} - j$, we can rewrite the second sum in $S_1$. By proceeding similarly, we can transform $S_2$. Substituting (38) and (39) back into (37), it follows immediately from the formula for $m_j$ that the claim holds when $a = b$.

So we have the stated identity, where in the last equality we have used the fact that $m_{N-\hat{k}-3} = m_{\hat{k}+1}$.
Next we show that $p_k < 0$ for $k = 1, \ldots, N-3$. Substituting (38) back into (36), we obtain, for $k = 1, \ldots, N-3$, an expression in which only the first term is positive and the other terms are negative. We will show that the first term is dominated by the last term, recalling the definition of $H_{N,a,b}$. Therefore, $p_k < 0$ for all $k = 0, \ldots, N-3$. In conclusion, we have the stated signs for $0 \le k \le N-2$; as a consequence, when $a = b$, the sequence of coefficients of $P$ has exactly one change of sign.
Thus, according to Descartes' rule of signs, $P$ has exactly one positive root. Because $p_k = -p_{2N-4-k}$ for all $k = 0, \ldots, N-2$, this root is equal to $1$. This finishes the proof of the proposition.
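The sign-change count invoked here is easy to check mechanically. The sketch below uses an illustrative antisymmetric coefficient list (not the actual $p_k$): one sign change gives exactly one positive root by Descartes' rule, and the antisymmetry $p_k = -p_{2N-4-k}$ forces $P(1) = \sum_k p_k = 0$, so that root is $u = 1$.

```python
def sign_changes(coeffs):
    """Number of sign changes in a coefficient sequence (zeros skipped),
    as used by Descartes' rule of signs."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def eval_poly(coeffs, u):
    """Evaluate sum_k coeffs[k] * u**k via Horner's scheme."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * u + c
    return acc

# Illustrative antisymmetric coefficients with p_k = -p_{D-k}, D = 4:
# negative for low k, positive for the mirrored indices.
p = [-1.0, -2.0, 0.0, 2.0, 1.0]
```

With these coefficients, `sign_changes(p)` is 1 and `eval_poly(p, 1.0)` vanishes, matching the argument in the proof.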

Proof of Proposition 3
In this section, we provide a detailed proof of Proposition 3.

Proof. Recall that
Thus $Q$ is a polynomial of degree $2N-2$ whose leading coefficient is $q_{2N-2} = a_{N-2}$. Hence $Q'(u)$ is a polynomial of degree $2N-3$ whose leading coefficient is $(2N-2)a_{N-2}$.
According to Proposition 2, $P$ is a polynomial of degree $2N-4$ whose leading coefficient is $a_{N-3}$. Hence $P'$ is a polynomial of degree $2N-5$ whose leading coefficient is $(2N-4)a_{N-3}$.

It follows that
where we have used the fact that $a_j = a_{N-2-j}$ for all $j = 0, \ldots, N-2$. We now perform similar computations for $P$, using the fact that $p_{2N-4-i} = -p_i$ for all $i = 0, \ldots, 2N-4$. It follows that
$$P'(1/u) = \frac{1}{u^{2N-6}}\Big( P'(u) + (2N-4)\, u^{2N-5}\, P(1/u) \Big).$$
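This reflection identity for $P'$ can be verified numerically. The sketch below uses a small illustrative polynomial satisfying the antisymmetry $p_{2N-4-i} = -p_i$ (here $N = 4$, so $\deg P = 2N-4 = 4$); the coefficients are ours, not the paper's.

```python
def eval_poly(coeffs, u):
    # Horner evaluation of sum_i coeffs[i] * u**i.
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * u + c
    return acc

def eval_deriv(coeffs, u):
    # Derivative: sum_i i * coeffs[i] * u**(i-1).
    return sum(i * c * u ** (i - 1) for i, c in enumerate(coeffs) if i > 0)

N = 4                              # illustrative value, so deg P = 2N - 4 = 4
p = [1.0, 2.0, 0.0, -2.0, -1.0]    # antisymmetric: p[2N-4-i] == -p[i]
u = 2.0

# Left- and right-hand sides of the reflection identity for P'.
lhs = eval_deriv(p, 1.0 / u)
rhs = (eval_deriv(p, u)
       + (2 * N - 4) * u ** (2 * N - 5) * eval_poly(p, 1.0 / u)) / u ** (2 * N - 6)
```

The antisymmetry also gives $P(1/u) = -u^{-(2N-4)} P(u)$, which is the relation used to derive the identity.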
(48) Therefore, combining (43) to (48), we can express everything in terms of the harmonic number $H_N$. This important number plays a central role in number theory; its appearance in our analysis, as well as in [6], is rather interesting. Using the above relationship and the well-known estimates
$$\ln N + \gamma < H_N < \ln N + \gamma + \frac{1}{2N},$$
where $\gamma \approx 0.5772156649$ is the celebrated Euler–Mascheroni constant, we obtain lower and upper bounds for $H_{N,1,1}$. Substituting these estimates back into (51), we obtain the desired lower and upper bounds.
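These classical bounds are easy to sanity-check numerically (a quick sketch; $\gamma$ is truncated to the digits quoted above):

```python
import math

GAMMA = 0.5772156649  # Euler-Mascheroni constant (truncated)

def harmonic(N):
    """H_N = 1 + 1/2 + ... + 1/N."""
    return sum(1.0 / j for j in range(1, N + 1))

def bounds_hold(N):
    """Check log N + gamma < H_N < log N + gamma + 1/(2N)."""
    h = harmonic(N)
    return math.log(N) + GAMMA < h < math.log(N) + GAMMA + 1.0 / (2 * N)
```

The bounds hold for every $N \ge 1$, and the gap between them is $1/(2N)$, which is why they pin down the asymptotics so tightly.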