Efficiency in a forced contribution threshold public good game

Open Access
Original Paper

DOI: 10.1007/s00182-017-0570-1

Cite this article as:
Cartwright, E. & Stepanova, A. Int J Game Theory (2017). doi:10.1007/s00182-017-0570-1

Abstract

We contrast and compare three ways of predicting efficiency in a forced contribution threshold public good game. The three alternatives are based on ordinal potential, quantal response and impulse balance theory. We report an experiment designed to test the respective predictions and find that impulse balance gives the best predictions. A simple expression detailing when enforced contributions result in high or low efficiency is provided.

Keywords

Public good · Threshold · Impulse balance theory · Quantal response · Forced contribution · Ordinal potential

JEL Classification

C72 · H41 · C92

1 Introduction

A threshold public good is provided if and only if total contributions towards its provision are sufficiently high. The classic example would be a capital project such as a new community school (Andreoni 1998). The notion of threshold public good is, however, far more general than this classic example. Consider, for example, a charity that requires sufficient funds to cover large fixed costs. Or, consider a political party deciding whether to adopt a policy which is socially efficient but, for some reason, unpopular with voters; the policy will be enacted if and only if enough party members are willing to back the policy (Goeree and Holt 2005).

In a threshold public good game the provision of the public good is consistent with Nash equilibrium. There are, however, typically multiple equilibria (Palfrey and Rosenthal 1984; Alberti and Cartwright 2016). This leads to a coordination problem that creates a natural uncertainty about total contributions. The literature has decomposed this uncertainty into a fear and greed motive for non-contribution (Dawes et al. 1986; Rapoport 1987, see also Coombs 1973). The fear motive recognizes that a person may decide not to contribute because he is pessimistic that sufficiently many others will contribute.1 The greed motive recognizes that a person may decide not to contribute in the hope that others will fund the public good.

Dawes et al. (1986) noted that the fear motive can be alleviated by providing a refund (or money back guarantee) if contributions are short of the threshold (see also Isaac et al. 1989). Similarly, the greed motive can be alleviated by forcing everyone to contribute if sufficiently many people volunteer to contribute. In three independent experimental studies Dawes et al. (1986) observed significantly higher efficiency in a forced contribution game. On this basis they concluded that inefficiency was primarily caused by the greed motive. Rapoport and Eshed-Levy (1989) challenged this conclusion by showing that the fear motive can cause inefficiency (see also Rapoport 1987). They still, however, observed highest efficiency in a forced contribution game.2

These experimental results suggest that enforcing contributions is an effective way to obtain high efficiency. This is a potentially important finding in designing mechanisms for the provision of public goods. Existing evidence, however, is limited to the two papers mentioned above. Our objective in this paper is to explore in detail, both theoretically and experimentally, the conditions under which forced contributions lead to high efficiency in binary threshold public good games.3

Our theoretical contribution consists of applying three alternative approaches to modelling behavior that are, respectively, based on ordinal potential (Monderer and Shapley 1996), quantal response (McKelvey and Palfrey 1995), and impulse balance (Selten 2004). We demonstrate that the three approaches give very different predictions on the efficiency of enforcing contributions. We complement the theory with an experimental study where the number of players and return to the public good are systematically varied in order to test the respective predictions of the three theoretical models. We find that impulse balance provides the best fit with the experimental data. This allows us to derive a simple expression with which to predict when enforced contributions result in high or low efficiency. Our predictions are consistent with the uniformly high efficiency observed in previous studies. We also find, however, that enforced contributions are not a guarantee of high efficiency. The interpretation of this finding will be discussed more in the conclusion.

Our analysis shows that a forced contribution game is of theoretical interest; for instance, its tractability allows a direct test of the predictive power of three commonly used theoretical models. A point we also want to emphasize, however, is that the forced contribution game is of applied interest as well. To motivate this latter point it is important to explain why forced contribution is not inconsistent with the notion of voluntary provision of a public good. A forced contribution game encapsulates the following basic properties: (1a) If enough people voluntarily contribute to the public good then the public good is provided and (1b) everybody gets the same payoff, irrespective of whether they contributed or not.4 (2a) If not enough people voluntarily contribute then the public good is not provided and (2b) those who contributed are worse off than those who did not contribute. Property (2a) means that it is endogenously determined whether the public good is provided; hence, public good provision is voluntary at the level of the group. Property (2b) means that the fear motive for not contributing is present and so it is far from trivial whether the efficient outcome will be obtained.

To illustrate further, we provide three examples of a forced contribution game.5 First, consider an organization or department being run by an incompetent manager. To get rid of the manager will require a sufficiently large number of colleagues to complain. Hence to complain can be interpreted as contributing towards the public good. Suppose that if the manager is removed then everyone benefits and no one (including those who complained) will receive any recrimination. Further, suppose that if the manager is not removed then things carry on as before except that those who complained will receive recriminations. One can readily check that this situation satisfies all the properties required of a forced contribution game (as we more formally show in footnote 10). In particular, those who contribute only earn a lower payoff than those who did not contribute if the manager is not removed.

As a second example, consider a firm attempting a takeover of a competitor. Various rules on the conditions for takeover are possible (e.g. Kale and Noe 1997). Of interest to us is the case where the takeover will proceed if and only if the proportion of shareholders willing to sell reaches some threshold. Moreover, it must be the case that no shares are sold if the threshold is not met while all shares are compulsorily purchased if the threshold is met. This is an all-or-nothing, restricted-conditional offer (Holmström and Nalebuff 1992).6 Again, properties (1a) and (2a) are trivially satisfied and so the focus is on property (2b). For a forced contribution game we require that there are some legal, anticipatory, or other costs that mean a person would prefer not to offer to sell if the takeover will not take place.

As a final example, consider a political party deciding whether to endorse a particular policy. Suppose the policy is unpopular with voters but ultimately beneficial for the party. Also suppose that the party will adopt the policy if and only if sufficiently many members back it. If the policy is not adopted then one could reasonably expect that only those members of the party who were seen to promote the policy will incur a cost with voters. If, however, the policy is adopted and becomes party policy then it is likely that all members of the party will incur a cost. This makes it a forced contribution game.

The preceding examples illustrate that the forced contribution game is of practical relevance, even though we would not want to argue it is the most commonly observed type of threshold public good game. The examples also illustrate that enforcing contributions is a practical possibility in numerous situations. Our analysis will provide insight on when this possibility is worth pursuing. In particular, enforcing contributions is likely to be costly to implement and so it is crucial to know whether enforcement will lead to high efficiency.7

As a final preliminary we highlight that an important contribution of the current paper is to apply impulse balance theory in a novel context. Impulse balance theory, which builds on learning direction theory, says that players will tend to change their behavior in a way that is consistent with ex-post rationality (Selten and Stoecker 1986; Selten 1998, 2004; Ockenfels and Selten 2005; Selten and Chmura 2008, see also Cason and Friedman 1997, 1999). In Alberti et al. (2013) we apply impulse balance to look at continuous threshold public good games. Here we focus on the binary forced contribution game. As already previewed, we find that impulse balance successfully predicts observed efficiency. This is clearly a positive finding in evaluating the merit of impulse balance theory.8 It should be noted, however, that the predictive power of impulse balance is dependent on its one degree of freedom, an issue we discuss more below.

We proceed as follows: in Sect. 2 we describe the forced contribution game. In Sect. 3 we provide some theoretical preliminaries, in Sect. 4 we describe three models to predict efficiency and in Sect. 5 we compare the three models’ predictions. In Sect. 6 we report our experimental results and in Sect. 7 we conclude.

2 Forced contribution game

In this section we describe the forced contribution threshold public good game. There is a set of players \(N=\left\{ 1,\ldots ,n\right\} \). Each player is endowed with E units of private good. Simultaneously, and independently of each other, every player \(i\in N\) chooses whether to contribute 0 or to contribute E towards the provision of a public good. Note that this is a binary, all-or-nothing decision. For any \(i\in N\), let \(a_{i}\in \left\{ 0,1\right\} \) denote the action of player i, where \(a_{i}=0\) indicates his choice to contribute 0 and \(a_{i}=1\) indicates his choice to contribute E. Action profile \(a=\left( a_{1},\ldots ,a_{n}\right) \) details the action of each player. Let A denote the set of action profiles. Given action profile \(a\in A\), let
$$\begin{aligned} c(a)=\sum _{i=1}^{n}a_{i} \end{aligned}$$
denote the number of players who contribute E.
There is an exogenously given threshold level \(1<t<n\).9 The payoff of player i given action profile a is
$$\begin{aligned} u_{i}\left( a\right) =\left\{ \begin{array}{ll} V &{}\text { if }c(a)\ge t \\ E(1-a_{i}) &{}\text { otherwise} \end{array} \right. , \end{aligned}$$
where \(V>E\) is the value of the public good. So, if t or more players contribute E then the public good is provided and every player gets a return of V. In interpretation, every player is forced to contribute E irrespective of whether they chose to contribute 0 or E. If fewer than t players contribute E then the public good is not provided and there is no refund for a player who chose to contribute E.10
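To make the payoff rule concrete, the following is a minimal Python sketch (the function name `payoff` and the tuple encoding of action profiles are ours, not the paper's):

```python
def payoff(i, a, t, E, V):
    """Payoff of player i in the forced contribution game: if at least t
    players volunteer, contributions are enforced on everyone and all
    earn V; otherwise only non-contributors keep their endowment E."""
    c = sum(a)                     # c(a), the number of volunteers
    if c >= t:
        return V                   # good provided; same payoff for all
    return E * (1 - a[i])          # no refund for a wasted contribution

# Illustration with n = 5, t = 3, E = 6, V = 13:
a = (1, 1, 0, 0, 0)                # two volunteers, threshold missed
print(payoff(0, a, 3, 6, 13))      # 0: a contributor loses the endowment
print(payoff(2, a, 3, 6, 13))      # 6: a non-contributor keeps E
```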

For any player \(i\in N\) the strategy of player i is given by \(\sigma _{i}\in [0,1]\) where \(\sigma _{i}\) is the probability with which he chooses to contribute E (and \(1-\sigma _{i}\) is the probability with which he chooses to contribute 0). Let \(\sigma =\left( \sigma _{1},\ldots ,\sigma _{n}\right) \) be a strategy profile. With a slight abuse of notation we use \( u_{i}\left( \sigma _{i},\sigma _{-i}\right) \) to denote the expected payoff of player i given strategy profile \(\sigma \), where \(\sigma _{-i}\) lists the strategies of every player except i.

3 Theoretical preliminaries

We say that a strategy profile \(\sigma =\left( \sigma _{1},\ldots ,\sigma _{n}\right) \) is symmetric if \(\sigma _{i}=\sigma _{j}\) for all \(i,j\in N\). Given that choices are made simultaneously and independently it is natural to impose a homogeneity assumption on beliefs (Rapoport 1987; Rapoport and Eshed-Levy 1989).11 This justifies a focus on symmetric strategy profiles. Symmetric strategy profiles \(\sigma ^{0}=\left( 0,\ldots ,0\right) \) and \(\sigma ^{1}=\left( 1,\ldots ,1\right) \) will prove particularly important in the following. We shall refer to \(\sigma ^{0}\) as the zero contribution strategy profile and \( \sigma ^{1}\) as the full contribution strategy profile.

Any symmetric strategy profile \(\sigma =\left( \sigma _{1},\ldots ,\sigma _{n}\right) \) can be summarized by a real number \(p\left( \sigma \right) \in [0,1]\) where \(p\left( \sigma \right) =\sigma _{1}=\cdots =\sigma _{n}\). In interpretation, \(p(\sigma )\) is the probability that each player independently chooses to contribute E. Where it shall cause no confusion we simplify notation by writing p instead of \(p\left( \sigma \right) \). Given symmetric strategy profile \(\sigma \), the expected payoff of player i if he chooses, ceteris paribus, to contribute E is
$$\begin{aligned} u_{i}\left( 1,\sigma _{-i}\right) &= V\Pr \left( t-1\text { or more other players contribute }E\right) \\ &= V\sum _{y=t-1}^{n-1}\left( {\begin{array}{c}n-1\\ y\end{array}}\right) p^{y}\left( 1-p\right) ^{n-1-y}. \end{aligned}$$
If he chooses to contribute 0 his expected payoff is
$$\begin{aligned} u_{i}\left( 0,\sigma _{-i}\right) &= E+\left( V-E\right) \Pr \left( t\text { or more contribute }E\right) \\ &= E+\left( V-E\right) \sum _{y=t}^{n-1}\left( {\begin{array}{c}n-1\\ y\end{array}}\right) p^{y}\left( 1-p\right) ^{n-1-y}. \end{aligned}$$
Note that player i’s expected payoff from strategy profile \(\sigma \) is
$$\begin{aligned} u_{i}\left( \sigma _{i},\sigma _{-i}\right) =p\left( \sigma \right) u_{i}\left( 1,\sigma _{-i}\right) +\left( 1-p\left( \sigma \right) \right) u_{i}\left( 0,\sigma _{-i}\right) . \end{aligned}$$
The following function will prove useful in the subsequent analysis,
$$\begin{aligned} \Delta \left( p\left( \sigma \right) \right) &= u_{i}\left( 1,\sigma _{-i}\right) -u_{i}\left( 0,\sigma _{-i}\right) \nonumber \\ &= V\left( {\begin{array}{c}n-1\\ t-1\end{array}}\right) p^{t-1}\left( 1-p\right) ^{n-t}-E\sum _{y=0}^{t-1}\left( {\begin{array}{c} n-1\\ y\end{array}}\right) p^{y}\left( 1-p\right) ^{n-1-y}. \end{aligned}$$
(1)
To illustrate, Fig. 1 plots \(\Delta (p)\) for \(p\in [0,1]\) when \( n=5,t=3,E=6\) and \(V=13\). If \(\Delta \left( p\right) <0\) then player i’s expected payoff is highest if he chooses to contribute 0. If \(\Delta \left( p\right) =0\) then player i is indifferent between choosing to contribute 0 and E. Finally, if \(\Delta \left( p\right) >0\) then player i’s expected payoff is highest if he chooses to contribute E.
Fig. 1

The value of \(\Delta \left( p\right) \) when \( n=5,t=3,E=6\) and \(V=13\)
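The function \(\Delta \) is easy to evaluate numerically. A sketch in Python (function names are ours), using the parameters of Fig. 1:

```python
from math import comb

def binom_tail(k, m, p):
    """P(X >= k) for X ~ Binomial(m, p)."""
    return sum(comb(m, y) * p**y * (1 - p)**(m - y) for y in range(k, m + 1))

def delta(p, n, t, E, V):
    """Delta(p) of equation (1): the expected payoff gain from contributing
    E rather than 0 when each other player contributes with probability p."""
    u1 = V * binom_tail(t - 1, n - 1, p)            # contribute E
    u0 = E + (V - E) * binom_tail(t, n - 1, p)      # contribute 0
    return u1 - u0

n, t, E, V = 5, 3, 6, 13
print(delta(0.0, n, t, E, V))   # -6.0: Delta(0) = -E
print(delta(1.0, n, t, E, V))   # 0.0: indifference at full contribution
```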

3.1 Nash equilibrium

Previous theoretical analysis of binary threshold public good games has largely focussed on Nash equilibria (see, in particular, Palfrey and Rosenthal 1984). The set of Nash equilibria for the forced contribution game has not, however, been explicitly studied and so we begin the analysis by considering this. Strategy profile \(\sigma ^{*}=\left( \sigma _{1}^{*},\ldots ,\sigma _{n}^{*}\right) \) is a Nash equilibrium if and only if \( u_{i}\left( \sigma _{i}^{*},\sigma _{-i}^{*}\right) \ge u_{i}\left( s,\sigma _{-i}^{*}\right) \) for any \(s\in [0,1]\) and all \(i\in N\) . In the following we focus on symmetric Nash equilibria.12

The set of symmetric Nash equilibria is easily discernible from the function \(\Delta (p)\). To illustrate, consider again Fig. 1. In this example there are three Nash equilibria. The zero contribution strategy profile \(\sigma ^{0}\) is a Nash equilibrium because \(\Delta (0)<0\). The ‘mixed’ strategy profile \(\sigma ^{m}\) where \(p\left( \sigma ^{m}\right) =0.43\), and the full contribution strategy profile \(\sigma ^{1}\) are also Nash equilibria because \(\Delta (0.43)=\Delta (1)=0\).

Our first result shows that Fig. 1 is representative of the general case (see also Rapoport 1987).

Proposition 1

For any value of \(V>E\) and \(n>t>1\) there are three symmetric Nash equilibria: (i) the zero contribution strategy profile \(\sigma ^{0}\), (ii) a mixed strategy profile \(\sigma ^{m}\) where \(p\left( \sigma ^{m}\right) \in (0,1)\), (iii) the full contribution strategy profile \(\sigma ^{1}\).

Proof

For \(p=0\) it is simple to show that \( \Delta \left( p\right) =-E\). This proves part (i) of the proposition. For \( p=1\) it is simple to show that \(\Delta \left( p\right) =0\). This proves part (iii) of the proposition. In order to prove part (ii) consider separately the two terms in \(\Delta (p)\) by writing \(\Delta (p)=V\alpha (p)-E\beta (p)\) . Term \(\alpha (p)\) is the probability that exactly \(t-1\) out of \(n-1\) players contribute E and so it takes a bell shape. Formally, \(\alpha (0)=0,\alpha (1)=0\) and
$$\begin{aligned} \frac{d}{dp}\alpha \left( p\right) =\left( {\begin{array}{c}n-1\\ t-1\end{array}}\right) p^{t-2}\left( 1-p\right) ^{n-t-1}\left( t-1-p(n-1)\right) \end{aligned}$$
implying \(\frac{d}{dp}\alpha \left( p\right) \gtrless 0\) for \(p\lessgtr \frac{t-1}{n-1}\). Term \(\beta (p)\) is the probability that \(t-1\) or fewer of the \(n-1\) other players contribute E and so is a decreasing function of p. Formally, \(\beta (0)=1,\beta (1)=0\) and, because the derivative of the binomial distribution function telescopes to a single term,
$$\begin{aligned} \frac{d}{dp}\beta \left( p\right) =-\left( n-1\right) \left( {\begin{array}{c}n-2\\ t-1\end{array}}\right) p^{t-1}\left( 1-p\right) ^{n-t-1}<0 \end{aligned}$$
for \(p\in (0,1)\).
For \(p<1\) it is clear that \(\alpha \left( p\right) <\beta \left( p\right) \). As \(p\rightarrow 1\) we know \(\beta \left( p\right) -\alpha \left( p\right) \rightarrow 0\). Given that \(V>E\) this means there exists some \({\overline{p}} \in (0,1)\) such that \(\Delta \left( {\overline{p}}\right) >0\). Since \(\Delta (0)=-E<0\) and \(\Delta \) is continuous, the intermediate value theorem then gives a root of \(\Delta \) in \((0,{\overline{p}})\). This proves part (ii) of the proposition. Note that we have also done enough to show that there exists a unique value \(p^{*}\in (0,1)\) where \(\Delta \left( p^{*}\right) =0\). \(\square \)
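The interior equilibrium can be computed by bisection, since \(\Delta \) is negative at 0 and, by the proof above, crosses zero exactly once on (0, 1). A sketch (our code, not the paper's), reproducing the mixed equilibrium of Fig. 1:

```python
from math import comb

def binom_tail(k, m, p):
    """P(X >= k) for X ~ Binomial(m, p)."""
    return sum(comb(m, y) * p**y * (1 - p)**(m - y) for y in range(k, m + 1))

def delta(p, n, t, E, V):
    """Delta(p) of equation (1)."""
    return V * binom_tail(t - 1, n - 1, p) - (E + (V - E) * binom_tail(t, n - 1, p))

def mixed_nash(n, t, E, V):
    """Bisection for the unique interior root p* of Delta (Proposition 1).
    Delta(0) = -E < 0; we bracket with a grid point where Delta > 0."""
    hi = max((k / 100 for k in range(1, 100)), key=lambda p: delta(p, n, t, E, V))
    lo = 1e-9
    assert delta(lo, n, t, E, V) < 0 < delta(hi, n, t, E, V)
    for _ in range(100):
        mid = (lo + hi) / 2
        if delta(mid, n, t, E, V) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p_star = mixed_nash(5, 3, 6, 13)
print(round(p_star, 2))   # 0.43, the mixed equilibrium of Fig. 1
```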

Proposition 1 shows that there are multiple symmetric Nash equilibria. In the following section we shall consider and contrast three possible approaches to predict which, if any, of these equilibria are most likely to occur. Before doing that let us briefly comment on the experimental evidence concerning \(\Delta \left( p\right) \). Rapoport (1987) and Rapoport and Eshed-Levy (1989) proposed the relatively weak hypothesis (their monotonicity hypothesis) that a player is more likely to contribute the higher is \(\Delta \left( p\right) \). Rapoport and Eshed-Levy (1989) experimentally elicit subjects’ beliefs in order to test this hypothesis and find only weak support for it. Offerman et al. (2001) obtain similar results. The challenge, therefore, is to develop a model that can not only predict outcomes but also capture the forces behind individual choice.

4 Main theoretical analysis

In this section we describe three alternative approaches to ‘predict’ behavior in a forced contribution game. The three alternatives are based on ordinal potential, logit equilibrium and impulse balance theory.

4.1 Ordinal potential

A potential game is one in which a single function, called the ordinal potential of the game, can capture the change in payoff that any player obtains from a unilateral change in action (Monderer and Shapley 1996). Examples of potential games include the minimum effort game, Cournot quantity competition and congestion games (e.g. Rosenthal 1973). If a game is a potential game then the set of Nash equilibria can be refined by finding the Nash equilibria that maximize potential (Monderer and Shapley 1996). We now demonstrate that this idea can be applied to the forced contribution game.

Using the definition of Monderer and Shapley (1996), see their equation (2.1), function \(W:A\rightarrow {\mathbb {R}} \) is an ordinal potential of the forced contribution game if for every \(i\in N\) and \(a\in A\)
$$\begin{aligned} u_{i}\left( a_{i},a_{-i}\right)>u_{i}\left( 1-a_{i},a_{-i}\right) \text { if and only if }W(a_{i},a_{-i})>W\left( 1-a_{i},a_{-i}\right) . \end{aligned}$$
Our next result shows that the forced contribution game admits an ordinal potential and is, therefore, a potential game. Moreover, the full contribution strategy profile maximizes potential. In this sense the full contribution Nash equilibrium is ‘selected’.

Proposition 2

The forced contribution game is a potential game and the ordinal potential is maximized at the full contribution strategy profile \(\sigma ^{1}\).

Proof

The aggregate payoff, given action profile \(a=\left( a_{1},\ldots ,a_{n}\right) \), is
$$\begin{aligned} W\left( a\right) =\left\{ \begin{array}{ll} nV &{} \text { if }c(a)\ge t \\ E(n-c\left( a\right) ) &{} \text { otherwise} \end{array} \right. . \end{aligned}$$
If W is an ordinal potential then the potential is maximized for \(c(a)\ge t\). In order to verify that W is an ordinal potential there are five cases to consider:
(i) If \(c\left( a\right) >t\), or \(c(a)=t\) and \(a_{i}=0\), then \(c\left( 1-a_{i},a_{-i}\right) \ge t\) implying \(u_{i}\left( a_{i},a_{-i}\right) =u_{i}\left( 1-a_{i},a_{-i}\right) =V\) and \(W(a_{i},a_{-i})=W\left( 1-a_{i},a_{-i}\right) =nV\).

(ii) If \(c\left( a\right) =t\) and \(a_{i}=1\) then \(c\left( 1-a_{i},a_{-i}\right) =t-1\) implying \(u_{i}\left( a_{i},a_{-i}\right) =V>u_{i}\left( 1-a_{i},a_{-i}\right) =E\) and \(W(a_{i},a_{-i})=nV>W\left( 1-a_{i},a_{-i}\right) =E\left( n-t+1\right) \).

(iii) If \(c\left( a\right) =t-1\) and \(a_{i}=0\) then \(c\left( 1-a_{i},a_{-i}\right) =t\) implying \(u_{i}\left( a_{i},a_{-i}\right) =E<u_{i}\left( 1-a_{i},a_{-i}\right) =V\) and \(W(a_{i},a_{-i})=E\left( n-t+1\right) <W\left( 1-a_{i},a_{-i}\right) =nV\).

(iv) If \(c\left( a\right) \le t-1\) and \(a_{i}=1\) then \(u_{i}\left( a_{i},a_{-i}\right) =0<u_{i}\left( 1-a_{i},a_{-i}\right) =E\) and \( W(a_{i},a_{-i})=E\left( n-c(a)\right) <W\left( 1-a_{i},a_{-i}\right) =E\left( n-c\left( a\right) +1\right) \).

(v) If \(c\left( a\right) <t-1\) and \(a_{i}=0\) then \(u_{i}\left( a_{i},a_{-i}\right) =E>u_{i}\left( 1-a_{i},a_{-i}\right) =0\) and \( W(a_{i},a_{-i})=E\left( n-c(a)\right) >W\left( 1-a_{i},a_{-i}\right) =E\left( n-c\left( a\right) -1\right) \). \(\square \)
With a slight abuse of terminology we shall interpret Proposition 2 as saying ordinal potential predicts perfect efficiency in the forced contribution game. Interestingly, this prediction is consistent with the prior experimental evidence (Dawes et al. 1986; Rapoport and Eshed-Levy 1989). However, while Monderer and Shapley (1996) show that ordinal potential can be used to refine the set of Nash equilibria they also openly admit that they have no explanation for why ordinal potential would be maximized. So, to paraphrase Monderer and Shapley (p. 136), ‘it may be just a coincidence’ that ordinal potential is consistent with the prior evidence. The conjecture that ordinal potential can predict behavior in the forced contribution game needs a more rigorous empirical test.
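For given parameters the ordinal potential condition involves only finitely many profiles and deviations, so it can be verified exhaustively. A brute-force sketch in Python (function names are ours):

```python
from itertools import product

def u(i, a, n, t, E, V):
    """Player i's payoff under action profile a."""
    c = sum(a)
    return V if c >= t else E * (1 - a[i])

def W(a, n, t, E, V):
    """Candidate ordinal potential: the aggregate payoff."""
    c = sum(a)
    return n * V if c >= t else E * (n - c)

n, t, E, V = 5, 3, 6, 13
for a in product((0, 1), repeat=n):
    for i in range(n):
        b = a[:i] + (1 - a[i],) + a[i + 1:]    # unilateral deviation by i
        # Monderer and Shapley's ordinal potential condition, eq. (2.1):
        assert (u(i, a, n, t, E, V) > u(i, b, n, t, E, V)) == \
               (W(a, n, t, E, V) > W(b, n, t, E, V))

best = max(W(a, n, t, E, V) for a in product((0, 1), repeat=n))
print(best == n * V)   # True: potential is maximized when the good is provided
```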

4.2 Logit equilibrium

Quantal response provides a way to model behavior that allows for ‘noisy’ decision making (McKelvey and Palfrey 1995). In particular, quantal response equilibrium (QRE) is a generalization of Nash equilibrium that allows for mistakes or random perturbations to payoffs, while maintaining an assumption of rational expectations. QRE has proved successful in explaining deviations from Nash equilibrium in a number of settings including auctions and coordination games (Goeree et al. 2008). Offerman et al. (1998) apply a quantal response model to a no refund threshold public good game (see also Goeree and Holt 2005).13 Here we apply the approach to a forced contribution game. Specifically, we consider the logit equilibrium (McKelvey and Palfrey 1995).

Symmetric contribution profile \(\sigma \) is a logit equilibrium if
$$\begin{aligned} p\left( \sigma \right) =\frac{e^{\gamma u_{i}\left( 1,\sigma _{-i}\right) }}{ e^{\gamma u_{i}\left( 1,\sigma _{-i}\right) }+e^{\gamma u_{i}\left( 0,\sigma _{-i}\right) }}=\frac{1}{1+e^{-\gamma \Delta \left( p\left( \sigma \right) \right) }} \end{aligned}$$
for any player \(i\in N\) where \(\gamma \ge 0\) is a parameter. In interpretation, \(\gamma \) is inversely related to the level of error, where error can be thought of as resulting from random mistakes in calculating expected payoff.14 Figure 2 plots the logit equilibria for the example \(n=5,t=3,E=6\) and \(V=13\). We see that there is a unique equilibrium for small \(\gamma \) (i.e. a high level of error) and three equilibria for large \(\gamma \). If there is no error (\(\gamma \rightarrow \infty \)) the set of logit equilibria coincides with the set of Nash equilibria (Tumennasan 2013). The higher the level of error (the smaller is \(\gamma \)) the more the set of logit equilibria diverges from the set of Nash equilibria.
Fig. 2

Logit equilibria when \(n=5,t=3,E=6\) and \(V=13\)
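For a given \(\gamma \), a logit equilibrium is a fixed point of the logistic map above and can be found by bisection. A sketch (our code, not the paper's); the bracket [0.5, 1) used here is valid because \(\Delta (0.5)>0\) for the Fig. 2 parameters:

```python
from math import comb, exp

def binom_tail(k, m, p):
    """P(X >= k) for X ~ Binomial(m, p)."""
    return sum(comb(m, y) * p**y * (1 - p)**(m - y) for y in range(k, m + 1))

def delta(p, n, t, E, V):
    """Delta(p) of equation (1)."""
    return V * binom_tail(t - 1, n - 1, p) - (E + (V - E) * binom_tail(t, n - 1, p))

def logit_equilibrium(gamma, n, t, E, V):
    """One logit equilibrium on [0.5, 1): a root of
    g(p) = p - 1/(1 + e^{-gamma * Delta(p)}), found by bisection."""
    g = lambda p: p - 1.0 / (1.0 + exp(-gamma * delta(p, n, t, E, V)))
    lo, hi = 0.5, 1.0 - 1e-9
    assert g(lo) < 0 < g(hi)          # bracket valid when Delta(0.5) > 0
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = logit_equilibrium(1.0, 5, 3, 6, 13)   # a moderate level of error
print(p)   # an equilibrium strictly between 0.5 and 1
```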

One criticism of quantal response is that it can rationalize any behavior (Haile et al. 2008, see also Goeree et al. 2005). This criticism does not always apply to logit equilibrium but it is a concern in our case. In Fig. 2, for instance, we see that just about any value of p is consistent with logit equilibrium. To obtain a testable prediction we, therefore, need to either fix \(\gamma \) or restrict attention to a particular set of logit equilibria. We shall focus on the latter option here (although in the data analysis we also explore the former option). McKelvey and Palfrey (1995) demonstrate that a graph of the logit equilibrium can be used to select a Nash equilibrium. Specifically, the graph of logit equilibria contains a unique branch starting at 0.5 and converging to a Nash equilibrium as \(\gamma \rightarrow \infty \).15 The resultant Nash equilibrium is called the limiting logit equilibrium. If players are initially inexperienced (\(\gamma \) is near 0) and become more experienced over time (\(\gamma \) increases) then one can argue play should move along this branch of equilibria towards the limiting logit equilibrium (McKelvey and Palfrey 1995). Offerman et al. (1998) found that their experimental data did lie on the branch starting at 0.5, although there was little evidence of learning with experience.

In the example of Fig. 2 the limiting logit equilibrium is the full contribution Nash equilibrium \(\sigma ^{1}\). For different parameter values the limiting logit equilibrium can be the zero contribution Nash equilibrium \(\sigma ^{0}\). To illustrate, Fig. 3 plots the logit equilibria when \( n=7,t=5,E=6\) and \(V=13\). The preceding examples demonstrate that the limiting logit equilibrium can be \(\sigma ^{0}\) or \(\sigma ^{1}\) depending on the parameters of the game. We shall pick up on this point further in Sect. 5. For now we note that (for fixed values of n, t and E) there exists a critical value \({\widetilde{V}}\) such that \(\sigma ^{0}\) is the limiting logit equilibrium for \(V<{\widetilde{V}}\) and \(\sigma ^{1}\) is the limiting logit equilibrium for \(V>{\widetilde{V}}\). We highlight that this critical value is also relevant for interpreting the branch of logit equilibria starting at 0.5. More specifically, if we restrict attention to this branch of equilibria then, for any \(\gamma \), the logit equilibrium value of p is less than 0.5 if \(V<{\widetilde{V}}\) and greater than 0.5 if \(V>{\widetilde{V}}\).
Fig. 3

Logit equilibria when \(n=7,t=5,E=6\) and \(V=13\)
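One way to compute the critical value is to note that if \(\Delta (0.5)=0\) then \(p=0.5\) is a logit equilibrium for every \(\gamma \), so \({\widetilde{V}}\) solves \(\Delta (0.5)=0\), giving \({\widetilde{V}}=E\beta (0.5)/\alpha (0.5)\) in the notation of the proof of Proposition 1. A sketch of this derivation (ours, but consistent with Figs. 2 and 3):

```python
from math import comb

def v_tilde(n, t, E):
    """Critical value of V at which Delta(0.5) = 0: below it the branch of
    logit equilibria through 0.5 lies below 0.5 (towards sigma^0), above it
    the branch lies above 0.5 (towards sigma^1)."""
    alpha = comb(n - 1, t - 1) * 0.5 ** (n - 1)                    # P(exactly t-1 of n-1)
    beta = sum(comb(n - 1, y) for y in range(t)) * 0.5 ** (n - 1)  # P(t-1 or fewer)
    return E * beta / alpha

print(v_tilde(5, 3, 6))   # 11.0: V = 13 exceeds it, consistent with Fig. 2
print(v_tilde(7, 5, 6))   # 22.8: V = 13 is below it, consistent with Fig. 3
```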

4.3 Impulse balance theory

A key contribution of the current paper is to apply impulse balance theory. Impulse balance theory provides a quantitative prediction on outcomes based on ex-post rationality (Ockenfels and Selten 2005; Selten and Chmura 2008; Chmura et al. 2012). It posits that a player who could have gained by playing a different action will have an impulse to change his action the next time he plays the game. The size of impulse is proportional to the difference between the payoff he could have received and the one he did. The player is said to have an upward or downward impulse depending on whether a ‘higher’ or ‘lower’ action is ex-post rational. At a (weighted) impulse balance equilibrium the expected upward and (weighted) downward impulse are equalized. Impulse balance theory has been applied in many contexts including first price auctions, the newsvendor game and minimum effort game (Ockenfels and Selten 2005, 2014, 2015; Goerg et al. 2016).

To apply impulse balance theory to a forced contribution game we need to determine the direction and strength of impulse of each player for any action profile (Selten 1998). In order to do this we distinguish the four experience conditions defined below. Take as given an action profile \( (a_{1},\ldots ,a_{n})\) and a player \(i\in N\). Let \({\overline{u}}_{i}=u_{i}\left( a_{i},a_{-i}\right) \) denote the realized payoff of player i and let \( {\overline{gu}}_{i}=u_{i}\left( 1-a_{i},a_{-i}\right) \) denote the payoff player i would have got from choosing the alternative action.

Zero no: Player i faces the zero no experience condition if \(c(a)<t-1\) and \(a_{i}=0\). In this case \({\overline{u}}_{i}=E\) and \({\overline{gu}}_{i}=0\). Given that \({\overline{gu}}_{i}<{\overline{u}}_{i}\) we say that player i has no impulse. Equivalently, the strength of impulse is 0.

Wasted contribution: Player i faces the wasted contribution experience condition if \(c(a)\le t-1\) and \(a_{i}=1\). In this case \({\overline{u}}_{i}=0\) while \({\overline{gu}}_{i}=E>0\). We say that player i has a downward impulse of strength \({\overline{gu}}_{i}-{\overline{u}}_{i}=E\).

Lost opportunity: Player i faces the lost opportunity experience condition if \(c(a)=t-1\) and \(a_{i}=0\). In this case \( {\overline{u}}_{i}=E\) while \({\overline{gu}}_{i}=V>E\). We say that player i has an upward impulse of strength \({\overline{gu}}_{i}-{\overline{u}}_{i}=V-E\).

Spot on: Player i faces the spot on experience condition if \(c(a)\ge t\). In this case \({\overline{u}}_{i}=V\) and \({\overline{gu}}_{i}\le V\) so we say player i has no impulse.

The direction and size of impulse for each of the experience conditions are summarized in Table 1.
Table 1

The conditions on \(a_{i}\) and c(a), and the direction and size of impulse, for each experience condition

Experience condition | \(a_{i}\) | c(a) | Impulse direction | Impulse size
Zero no | 0 | \(<t-1\) | none | 0
Wasted contribution | 1 | \(\le t-1\) | \(\downarrow \) | E
Lost opportunity | 0 | \(t-1\) | \(\uparrow \) | \(V-E\)
Spot on | 0 or 1 | \(\ge t\) | none | 0

We can now define expected upward and downward impulse. In doing this we retain a focus on symmetric strategy profiles. The upward impulse of player \(i\in N\) comes from the lost opportunity experience condition. So, given a symmetric strategy profile \(\sigma \) the expected upward impulse of player i is
$$\begin{aligned} I^{+}(p\left( \sigma \right) ) &= \left( V-E\right) \Pr (i\text { chooses to contribute }0)\Pr (t-1\text { others contribute }E) \\ &= \left( V-E\right) \left( {\begin{array}{c}n-1\\ t-1\end{array}}\right) p^{t-1}\left( 1-p\right) ^{n-t+1}. \end{aligned}$$
We note at this point that
$$\begin{aligned} \frac{dI^{+}(p)}{dp}=\left( V-E\right) \left( {\begin{array}{c}n-1\\ t-1\end{array}}\right) \left( 1-p\right) ^{n-t}p^{t-2}\left( t-1-pn\right) \end{aligned}$$
(2)
implying that
$$\begin{aligned} \frac{dI^{+}(p)}{dp}\gtrless 0\text { as }p\lessgtr \frac{t-1}{n}. \end{aligned}$$
Thus, the upward impulse is an inverse-U-shaped function of p on the interval [0, 1]. To illustrate, Fig. 4 plots \(I^{+}(p)\) (and \(I^{-}(p)\), to be defined shortly) for the example \(n=5,t=3,E=6\) and \(V=13\).
Fig. 4 Upward and downward impulse as a function of p when \(n=5,t=3,E=6\) and \(V=13\)

The expected downward impulse of player i comes from the wasted contribution experience condition. It is given by
$$\begin{aligned} I^{-}(p\left( \sigma \right) ) &= E\Pr (i\text { contributes }E)\Pr (t-2\text { or fewer others contribute }E) \\ &= E\sum _{y=0}^{t-2}\left( {\begin{array}{c}n-1\\ y\end{array}}\right) p^{y+1}\left( 1-p\right) ^{n-1-y}. \end{aligned}$$
Note that
$$\begin{aligned} \frac{dI^{-}(p)}{dp}=E\sum _{y=0}^{t-2}\left( {\begin{array}{c}n-1\\ y\end{array}}\right) p^{y}\left( 1-p\right) ^{n-2-y}\left( y+1-np\right) \end{aligned}$$
(3)
and so the downward impulse is also an inverse-U-shaped function of p. Moreover,
$$\begin{aligned} \frac{dI^{-}(p)}{dp}<0\text { if }p>\frac{t-1}{n} \end{aligned}$$
implying that the maximum downward impulse occurs for a lower value of p than the maximum upward impulse. This is readily apparent in Fig. 4.
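These shape properties are easy to verify numerically. The following sketch (our own code, using the parameter values of Fig. 4) evaluates both impulse functions on a fine grid and locates their maxima:

```python
from math import comb

n, t, E, V = 5, 3, 6, 13  # the example of Fig. 4

def upward(p):
    # I+(p) = (V - E) C(n-1, t-1) p^(t-1) (1-p)^(n-t+1)
    return (V - E) * comb(n - 1, t - 1) * p ** (t - 1) * (1 - p) ** (n - t + 1)

def downward(p):
    # I-(p) = E * sum_{y=0}^{t-2} C(n-1, y) p^(y+1) (1-p)^(n-1-y)
    return E * sum(comb(n - 1, y) * p ** (y + 1) * (1 - p) ** (n - 1 - y)
                   for y in range(t - 1))

grid = [i / 10000 for i in range(10001)]
p_up = max(grid, key=upward)      # maximizer of the upward impulse
p_down = max(grid, key=downward)  # maximizer of the downward impulse
```

For these parameters the upward impulse peaks at \(p=(t-1)/n=0.4\), while the downward impulse peaks lower, at \(p=1/3\), consistent with the ordering visible in Fig. 4.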

Symmetric strategy profile \(\sigma ^{*}\) is a weighted impulse balance equilibrium if \(I^{+}(p\left( \sigma ^{*}\right) )=\lambda I^{-}(p\left( \sigma ^{*}\right) )\), where \(\lambda \) is an exogenously given weight on the downward impulse. Note that a value of \(\lambda <1\) means that, in equilibrium, the downward impulse must be larger than the upward impulse. One interpretation is that players are less responsive in the wasted contribution condition than in the lost opportunity condition. This could reflect a desire to contribute, or to provide the public good, which is not captured in monetary payoffs (Rapoport 1987). Under this interpretation \( \lambda \) is a ‘psychological’ parameter to be estimated empirically from individual behavior (Ockenfels and Selten 2005).

We shall say that an impulse balance equilibrium \(\sigma ^{*}\) is stable if \(I^{+}(p)>\lambda I^{-}\left( p\right) \) for \(p\in (p\left( \sigma ^{*}\right) -\varepsilon ,p\left( \sigma ^{*}\right) )\) and \( I^{+}(p)<\lambda I^{-}\left( p\right) \) for \(p\in (p\left( \sigma ^{*}\right) ,p\left( \sigma ^{*}\right) +\varepsilon )\) for some \( \varepsilon >0\).16 Otherwise we say the equilibrium is unstable. Intuitively, an equilibrium is stable if a small deviation from the equilibrium does not result in impulses that drive strategies further away from the equilibrium. In Fig. 4, where \( \lambda =1\), there are two stable impulse balance equilibria: (i) the zero strategy profile \(\sigma ^{0}\), and (ii) full contribution strategy profile \( \sigma ^{1}\). There is also (iii) an unstable mixed strategy equilibrium \( \sigma ^{m}\) where \(p\left( \sigma ^{m}\right) =0.25\). Note that this mixed strategy impulse balance equilibrium takes a different value of p to the mixed strategy Nash equilibrium.
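For the example of Fig. 4 the interior impulse balance equilibrium can be found by bisection on \(DI(p)=I^{+}(p)-\lambda I^{-}(p)\); the sketch below (our own code) recovers \(p\left( \sigma ^{m}\right) =0.25\):

```python
from math import comb

n, t, E, V, lam = 5, 3, 6, 13, 1.0  # the example of Fig. 4

def DI(p):
    # difference between upward and downward impulse, I+(p) - lam * I-(p)
    up = (V - E) * comb(n - 1, t - 1) * p ** (t - 1) * (1 - p) ** (n - t + 1)
    down = E * sum(comb(n - 1, y) * p ** (y + 1) * (1 - p) ** (n - 1 - y)
                   for y in range(t - 1))
    return up - lam * down

# DI vanishes at p = 0 and p = 1 (the two pure impulse balance equilibria);
# bisect for the interior root, where DI changes sign from - to +.
lo, hi = 0.01, 0.99
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if DI(mid) < 0 else (lo, mid)
p_mixed = (lo + hi) / 2  # the unstable mixed equilibrium, 0.25 here
```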

We are now in a position to state our main theoretical result.

Proposition 3

(a) If \(V\le \overline{ V}\left( \lambda \right) \) where
$$\begin{aligned} {\overline{V}}\left( \lambda \right) =\frac{E\left( n-\left( t-1\right) \left( 1-\lambda \right) \right) }{n-t+1} \end{aligned}$$
then there are two impulse balance equilibria: the zero strategy profile \( \sigma ^{0}\) is a stable equilibrium, and the full contribution strategy profile \(\sigma ^{1}\) is an unstable equilibrium.

(b) If \(V>{\overline{V}}\left( \lambda \right) \) and \(t\ge 3\) there are three impulse balance equilibria: the zero strategy profile \(\sigma ^{0}\) is a stable equilibrium, the full contribution strategy profile \(\sigma ^{1}\) is a stable equilibrium, and there is an unstable mixed strategy equilibrium \(\sigma ^{m}\) where \(p\left( \sigma ^{m}\right) \in \left( 0,1\right) \).

(c) If \(V>{\overline{V}}\left( \lambda \right) \) and \(t=2\) there are two impulse balance equilibria: the zero strategy profile \(\sigma ^{0}\) is an unstable equilibrium, and the full contribution strategy profile \(\sigma ^{1}\) is a stable equilibrium.

Proof

Let \(C_{k}^{\nu }=\left( {\begin{array}{c}\nu \\ k\end{array}}\right) \) and let
$$\begin{aligned} DI\left( p\right) =I^{+}\left( p\right) -\lambda I^{-}\left( p\right) \end{aligned}$$
denote the difference between upward and downward impulse. We have
$$\begin{aligned} DI\left( p\right) =C_{t-1}^{n-1}p^{t-1}\left( 1-p\right) ^{n-t+1}\left( V-E\right) -\lambda E\sum _{y=0}^{t-2}C_{y}^{n-1}p^{y+1}\left( 1-p\right) ^{n-1-y}. \end{aligned}$$
Symmetric strategy profile \(\sigma \) is an impulse balance equilibrium if and only if \(DI\left( p\left( \sigma \right) \right) =0\). If \(p=0\) then \( DI\left( p\right) =0\) implying the zero strategy profile is an impulse balance equilibrium. If \(p=1\) then \(DI\left( p\right) =0\) implying the full contribution strategy profile is also an impulse balance equilibrium.
Suppose for now that \(t\ge 3\). Then
$$\begin{aligned} DI\left( p\right) &= p^{t-1}\left( 1-p\right) ^{n-t+1}\left[ C_{t-1}^{n-1}\left( V-E\right) -\lambda EC_{t-2}^{n-1}\right] -\lambda E\sum _{y=0}^{t-3}C_{y}^{n-1}p^{y+1}\left( 1-p\right) ^{n-1-y} \\ &= p^{t-1}\left( 1-p\right) ^{n-t+1}C_{t-1}^{n-1}\left( V-{\overline{V}} \right) -\lambda E\sum _{y=0}^{t-3}C_{y}^{n-1}p^{y+1}\left( 1-p\right) ^{n-1-y} \\ &= p\left( 1-p\right) ^{n-t+1}\left( p^{t-2}C_{t-1}^{n-1}\left( V-{\overline{V}}\right) -\lambda E\sum _{y=0}^{t-3}C_{y}^{n-1}p^{y}\left( 1-p\right) ^{t-2-y}\right) . \end{aligned}$$
If \(V\le {\overline{V}}\) then \(DI\left( p\right) <0\) for all \(p\in (0,1)\). This implies that there is no mixed strategy impulse balance equilibrium. It also implies that the zero strategy profile is stable and the full contribution strategy profile is unstable.
If \(V>{\overline{V}}\) we need to look in more detail at
$$\begin{aligned} G\left( p\right) =p^{t-2}C_{t-1}^{n-1}\left( V-{\overline{V}}\right) -\lambda E\sum _{y=0}^{t-3}C_{y}^{n-1}p^{y}\left( 1-p\right) ^{t-2-y}. \end{aligned}$$
It is simple to see that \(G(0)<0\) and \(G(1)>0\). Continuity of G(p) then implies there is at least one value \(p^{*}\in \left( 0,1\right) \) such that \(G\left( p^{*}\right) =0\); at \(p^{*}\) we obtain a mixed strategy impulse balance equilibrium. Moreover, \(G(0)<0\) implies \(DI(p)<0\) near \(p=0\), and \(G(1)>0\) implies \(DI(p)>0\) near \(p=1\), so the equilibria at \(p=0\) and \(p=1\) are both stable.
It remains to consider the case \(t=2\). Now
$$\begin{aligned} DI\left( p\right) &= \left( n-1\right) p\left( 1-p\right) ^{n-1}\left( V-E\right) -\lambda Ep\left( 1-p\right) ^{n-1} \\ &= p\left( 1-p\right) ^{n-1}\left( n-1\right) \left( V-{\overline{V}}\right) . \end{aligned}$$
As before, if \(V\le {\overline{V}}\) then \(DI\left( p\right) <0\) for all \(p\in (0,1)\). In this case there are two equilibria: the zero strategy profile is stable and the full contribution strategy profile is unstable. If \(V> {\overline{V}}\) then \(DI\left( p\right) >0\) for all \(p\in (0,1)\). In this case there are still only two equilibria but the zero strategy profile is unstable and the full contribution strategy profile is stable. \(\square \)

Proposition 3 shows that if \(V\le {\overline{V}}\left( \lambda \right) \) then impulse balance theory gives a sharp prediction: the zero strategy profile is the unique stable impulse balance equilibrium. If \(V> {\overline{V}}\left( \lambda \right) \) then, with the exception of the extreme case \(t=2\), we obtain a less sharp prediction: both the zero and full contribution equilibria are stable. In this case we shall hypothesize that play converges to the Pareto optimal, full contribution, impulse balance equilibrium. Given this hypothesis, we shall informally say that impulse balance theory predicts perfect efficiency if \(V>{\overline{V}}\left( \lambda \right) \) and zero contributions if \(V\le {\overline{V}}\left( \lambda \right) .\)
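The cut-off in Proposition 3 can be checked numerically. In the sketch below (our own code, with \(n=5\), \(t=3\), \(E=6\), \(\lambda =1\), so \({\overline{V}}(1)=10\)), \(DI(p)\) is negative throughout (0, 1) for V just below the cut-off, while there is an interior sign change for V just above it:

```python
from math import comb

def v_bar(n, t, E, lam):
    # the critical value of Proposition 3
    return E * (n - (t - 1) * (1 - lam)) / (n - t + 1)

def sign_change(n, t, E, V, lam, steps=2000):
    # does DI(p) = I+(p) - lam * I-(p) change sign on (0, 1)?
    def DI(p):
        up = (V - E) * comb(n - 1, t - 1) * p ** (t - 1) * (1 - p) ** (n - t + 1)
        down = E * sum(comb(n - 1, y) * p ** (y + 1) * (1 - p) ** (n - 1 - y)
                       for y in range(t - 1))
        return up - lam * down
    signs = [DI(i / steps) > 0 for i in range(1, steps)]
    return any(signs) and not all(signs)

n, t, E, lam = 5, 3, 6, 1.0
cutoff = v_bar(n, t, E, lam)                      # 10.0 for these parameters
below = sign_change(n, t, E, cutoff - 0.5, lam)   # no interior equilibrium
above = sign_change(n, t, E, cutoff + 0.5, lam)   # interior equilibrium exists
```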

In justifying our hypothesis that play will converge on the Pareto optimal impulse balance equilibrium we first emphasize that this hypothesis differs from saying play will converge on the Pareto optimal Nash equilibrium. To appreciate this point note that the full contribution strategy profile is the Pareto optimal Nash equilibrium for any \(V>E\). So, the Pareto optimal stable impulse balance equilibrium is the same as the Pareto optimal Nash equilibrium if and only if \(V>{\overline{V}}\left( \lambda \right) \). If \(V\le {\overline{V}}\left( \lambda \right) \) the Pareto optimal stable impulse balance equilibrium is the zero strategy profile while the Pareto optimal Nash equilibrium is the full contribution strategy profile. Our approach, therefore, makes a testable prediction. Moreover, our approach is not inconsistent with the evidence that play in many games, such as the minimum effort game, does not converge to the Pareto optimal Nash equilibrium. It remains an open question whether play converges on Pareto optimal, stable impulse balance equilibria. For further insight on this issue we quote from Harsanyi and Selten (1988, p. 356), ‘[O]ur theory in general gives precedence to payoff dominance. ... [P]ayoff dominance is based on collective rationality: it is based on the assumption that in the absence of special reasons to the contrary, rational players will choose an equilibrium point yielding all of them higher payoffs, rather than one yielding them lower payoffs’. Essentially, we are suggesting that instability of the full contribution equilibrium counts as ‘special reasons to the contrary’.

5 Comparing model predictions

Having introduced three alternative approaches to modelling behavior in the forced contribution game, we now demonstrate that they can give very different predictions. To do so we begin by analyzing the four games detailed in Table 2. This analysis will serve to illustrate the stark differences between model predictions. Our focus is on games with \(n=5\) or 7 players where we vary V keeping \(E=6\) and \(n-t=2\) fixed. When comparing models we shall focus on predicted efficiency measured by the probability of the public good being provided.17
Table 2 Parameters in the four games

| Name | n | t | V | E |
|---|---|---|---|---|
| Few-small | 5 | 3 | 7 | 6 |
| Few-large | 5 | 3 | 13 | 6 |
| Many-small | 7 | 5 | 7 | 6 |
| Many-large | 7 | 5 | 13 | 6 |

Ordinal potential (see Proposition 2) predicts perfect efficiency for all four games. Consider next quantal response. For \(n=5\) and \(t=3\) one can show numerically that the full contribution strategy profile is the limiting logit equilibrium if and only if \(V>{\widetilde{V}}\) where \( {\widetilde{V}}\approx 11\). Otherwise, the zero strategy profile is the limiting logit equilibrium. For \(n=7\) and \(t=5\) the analogous cut-off point is \({\widetilde{V}}\approx 22.8\). Only in the few-large game, therefore, is efficiency predicted to be high. This prediction does not change significantly if we consider (non-limiting) logit equilibria (on the branch of equilibria starting at 0.5). To illustrate, Table 3 details predicted efficiency for a range of values of \(\gamma \). There is clearly a stark contrast between the predictions obtained using ordinal potential and quantal response. Note that Offerman et al. (1998) obtained fitted estimates of \(\gamma \) between 0.001 and 0.34 in threshold public good games (while McKelvey and Palfrey (1995) obtain estimates of \(\gamma \) consistently above 0.2 and as high as 3).
Table 3 Predicted efficiency with logit equilibrium

| Game | \(\gamma =0.05\) | \(\gamma =0.1\) | \(\gamma =0.2\) | \(\gamma =0.5\) | \( \gamma =1\) | \(\gamma =4\) | \(\gamma =\infty \) |
|---|---|---|---|---|---|---|---|
| Few-small | 0.46 | 0.41 | 0.24 | 0.001 | 0 | 0 | 0 |
| Few-large | 0.52 | 0.54 | 0.61 | 0.79 | 0.90 | 0.98 | 1 |
| Many-small | 0.15 | 0.08 | 0.01 | 0 | 0 | 0 | 0 |
| Many-large | 0.17 | 0.10 | 0.01 | 0 | 0 | 0 | 0 |
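The logit numbers come from a fixed point of the quantal response map. As a sketch (our own code; it assumes the forced contribution payoffs described earlier: a contributor earns 0 if the good is not provided, a non-contributor keeps E, and everyone earns V if it is provided), the expected gain from contributing in the few-large game and the induced logit response are:

```python
from math import comb, exp

n, t, E, V = 5, 3, 6, 13  # the few-large game

def delta(p):
    # expected payoff gain from contributing when each other player
    # contributes with probability p:
    #   V * Pr(exactly t-1 others contribute) - E * Pr(at most t-1 others contribute)
    pivotal = comb(n - 1, t - 1) * p ** (t - 1) * (1 - p) ** (n - t)
    forgo_E = sum(comb(n - 1, y) * p ** y * (1 - p) ** (n - 1 - y)
                  for y in range(t))
    return V * pivotal - E * forgo_E

def logit_response(p, gamma):
    # probability of contributing under a logit (quantal) response
    return 1.0 / (1.0 + exp(-gamma * delta(p)))

# The symmetric mixed Nash equilibrium solves delta(p) = 0; here the root
# lies between 0.01 and 0.5, so bisection applies.
lo, hi = 0.01, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if delta(mid) < 0 else (lo, mid)
p_nash = (lo + hi) / 2
```

Note that \(p_{nash}\approx 0.43\), which differs from the mixed impulse balance value of 0.25 for the same parameters; a logit equilibrium is then a fixed point \(p=\) `logit_response` \((p,\gamma )\), computed numerically to produce Table 3.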

Consider next impulse balance and the case \(n=5\) and \(t=3\). From Proposition 3 we know that the full contribution strategy profile \(\sigma ^{1}\) is a stable impulse balance equilibrium if and only if
$$\begin{aligned} \lambda <\frac{3}{2}\left( \frac{V}{E}-1\right) . \end{aligned}$$
So, if \(V=7\) (recalling \(E=6\)) the equilibrium \(\sigma ^{1}\) is stable if and only if \(\lambda <0.25\). If \(V=13\) the equilibrium \(\sigma ^{1}\) is stable if and only if \(\lambda <\frac{7}{4}\). When \(n=7\) and \(t=5\) we obtain the analogous condition
$$\begin{aligned} \lambda <\frac{3}{4}\left( \frac{V}{E}-1\right) . \end{aligned}$$
So, if \(V=7\) the equilibrium \(\sigma ^{1}\) is stable if and only if \(\lambda <\frac{1}{8}\) and if \(V=13\) it is stable if and only if \(\lambda <\frac{7}{8} \).
Prior estimates of \(\lambda \) are in the range 0.3 to 1 (Ockenfels and Selten 2005; Alberti et al. 2013). Recall that we predict play will converge to the full contribution equilibrium if and only if it is stable. Table 4 summarizes predicted efficiency for five different values of \(\lambda \). Efficiency is predicted to be high in the few-large game and low in the many-small game. In the few-small and many-large games predictions depend on \(\lambda \). Comparing Tables 3 and 4 we see that predicted efficiency with impulse balance lies somewhere in between the extremes obtained with ordinal potential and quantal response.
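The stability conditions above come from solving \(V={\overline{V}}\left( \lambda \right) \) for \(\lambda \). A small helper (our own code) reproduces the four cut-offs:

```python
def lambda_cutoff(n, t, V, E):
    # largest weight on the downward impulse for which full contribution
    # is a stable impulse balance equilibrium (solve V = V_bar(lambda))
    return (n - t + 1) / (t - 1) * (V / E - 1)

cutoffs = {
    "few-small":  lambda_cutoff(5, 3, 7, 6),   # 0.25
    "few-large":  lambda_cutoff(5, 3, 13, 6),  # 7/4
    "many-small": lambda_cutoff(7, 5, 7, 6),   # 1/8
    "many-large": lambda_cutoff(7, 5, 13, 6),  # 7/8
}
```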
Table 4 Predicted efficiency with impulse balance

| Game | \(\lambda =0.2\) | \(\lambda =0.4\) | \(\lambda =0.6\) | \(\lambda =0.8\) | \(\lambda =1.0\) |
|---|---|---|---|---|---|
| Few-small | 1 | 0 | 0 | 0 | 0 |
| Few-large | 1 | 1 | 1 | 1 | 1 |
| Many-small | 0 | 0 | 0 | 0 | 0 |
| Many-large | 1 | 1 | 1 | 1 | 0 |

Having looked at the four games above as illustrative examples let us now turn to the general setting. We have already shown (Proposition 2) that ordinal potential gives the ‘optimistic’ prediction of perfect efficiency. We have also shown (Proposition 3) that impulse balance gives a less optimistic prediction of zero efficiency if \(V<{\overline{V}}\left( \lambda \right) \). While a general prediction for quantal response is not possible, one can show numerically that it gives the least optimistic prediction. In particular, the critical value above which \(\sigma ^{1}\) is the limiting logit equilibrium is greater than the critical value above which \(\sigma ^{1} \) is a stable impulse balance equilibrium, \({\widetilde{V}}>{\overline{V}} \left( \lambda \right) \) for \(\lambda \le 1\). This is clear in the examples, and illustrated more generally in Fig. 5.

Figure 5 plots the critical values \({\overline{V}}\left( \lambda \right) /E\) and \({\widetilde{V}}/E\) above which the full contribution equilibrium \(\sigma ^{1}\) is a stable impulse balance equilibrium (for \(\lambda =0.2\) and 1) and a limiting logit equilibrium. We consider 6 possible values of n and all relevant values of t. As one would expect, the higher is the threshold t the higher has to be the return on the public good V in order to predict full efficiency. The main thing we wish to highlight is that \( {\widetilde{V}}>{\overline{V}}\left( 1\right) \) across the entire range of n and t. In other words, there are always values of V where the full contribution strategy profile is a stable impulse balance equilibrium but not the limiting logit equilibrium. This gap between \({\widetilde{V}}\) and \( {\overline{V}}\left( 1\right) \) widens the higher is t.

Recall, see the introduction, that forced contributions have been suggested as a means to promote efficiency in public good games. This conjecture is consistent with the predictions of ordinal potential but not with those of impulse balance or quantal response. It is natural, therefore, to test which model better predicts observed efficiency, and to explore whether efficiency can be low despite forced contributions. That motivates the experiments we shall discuss shortly. Before doing so we briefly comment on experimental results from the previous literature.
Fig. 5 The critical value \({\widetilde{V}}/E\) for the limiting logit equilibrium (LLE) and of \({\overline{V}}\left( 1\right) /E\) and \({\overline{V}}\left( 0.2\right) /E\) for the impulse balance equilibria IBE(\( \lambda =1\)) and IBE(\(\lambda =0.2\)) for different combinations of n and t

Table 5 Parameters, critical values of V and observed efficiency for games considered in the literature

| Study | n | t | E | V | \({\overline{V}}\left( 0.2\right) \) | \({\overline{V}} \left( 1\right) \) | \({\widetilde{V}}\) | Observed efficiency |
|---|---|---|---|---|---|---|---|---|
| Dawes et al. experiment 1 | 7 | 3 | 5 | 10 | 5.4 | 7.0 | 7.3 | 1.00 |
| Dawes et al. experiment 2 | 7 | 5 | 5 | 10 | 6.4 | 11.7 | 19.0 | 0.93 |
| Rapoport and Eshed-Levy | 5 | 3 | 2 | 5 | 2.3 | 3.3 | 3.7 | 0.72 |
Table 5 summarizes the forced contribution experiments reported by Dawes et al. (1986) and Rapoport and Eshed-Levy (1989).18 For the game in experiment 1 of Dawes et al. (1986) and that of Rapoport and Eshed-Levy (1989) all three approaches we have considered predict high efficiency and this is what was observed. Experiment 2 of Dawes et al. (1986) is more interesting in that the zero contribution profile is the limiting logit equilibrium while the full contribution profile is a stable impulse balance equilibrium for low values of \(\lambda \) (but not for values of \(\lambda \) near 1). The observed high efficiency appears inconsistent with the former prediction. It is difficult, however, to infer much from this one experiment. We shall now introduce our experiments, which provide a more detailed test of the three models.
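The critical values \({\overline{V}}\left( 0.2\right) \) and \({\overline{V}}\left( 1\right) \) in Table 5 follow directly from the expression in Proposition 3; the sketch below (our own code) reproduces them up to rounding:

```python
def v_bar(n, t, E, lam):
    # critical value from Proposition 3
    return E * (n - (t - 1) * (1 - lam)) / (n - t + 1)

studies = {
    "Dawes et al. experiment 1": (7, 3, 5),  # (n, t, E)
    "Dawes et al. experiment 2": (7, 5, 5),
    "Rapoport and Eshed-Levy":   (5, 3, 2),
}
critical = {name: (round(v_bar(n, t, E, 0.2), 1), round(v_bar(n, t, E, 1.0), 1))
            for name, (n, t, E) in studies.items()}
```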

6 Experiment design and results

Our experiment was designed to test the predictive power of the three theoretical approaches discussed above. In order to do this we used a between subject design in which the four games introduced in Table 2 were compared. This gives four treatments corresponding to the four games.

Subjects were randomly assigned to a group and interacted anonymously via computer. We used z-Tree (Fischbacher 2007). The instructions given to subjects were game specific, in detailing n, t and V, and so subjects could not have known that these differed across groups. In order to observe dynamic effects subjects played the game for 30 periods in fixed groups. The instructions given to subjects are available in the appendix. As detailed in Table 6, we observed a total of 27 groups and 155 subjects. A typical session lasted 30–40 min and the average payoff was £9.
Table 6 Treatments and the number of observations per treatment

| Treatment | Subjects | Groups |
|---|---|---|
| Few-small | 45 | 9 |
| Few-large | 40 | 8 |
| Many-small | 35 | 5 |
| Many-large | 35 | 5 |
| Total | 155 | 27 |

6.1 Observed efficiency

Table 7 summarizes average efficiency (measured by the proportion of periods out of 30 the public good was provided) in the four treatments. In interpreting these numbers we highlight that in the last 10 periods, every group provided the public good either (i) 8, 9 or 10 times or (ii) 0 or 1 time. We observed, therefore, a very clear distinction between groups that, we shall say, converged on efficiency and those that converged on inefficiency. (Group specific data is provided in Table 10 in the appendix). This means that observed efficiency in periods 21–30 is essentially measuring the proportion of groups that converged on efficiency.

In the few-large treatment efficiency was very high, with 7 of the 8 groups converging on efficiency. This result is consistent with the predictions derived from all three theoretical approaches. In the many-small treatment efficiency was very low, with all of the 5 groups converging on inefficiency. Efficiency was significantly lower than in all other treatments (\(p\le 0.02\), proportions test).19 This matches the predictions derived from impulse balance and quantal response but not that of ordinal potential. Let us remark at this point that the very low efficiency we observed in the many-small treatment is clear evidence that enforcing contributions does not guarantee high efficiency.

In the few-small and many-large treatments efficiency was not as high as that in the few-large treatment but the differences are statistically insignificant (\(p>0.15\), proportions test). A total of 7 out of 9 and 3 out of 5 groups, respectively, converged on efficiency. The success rate in the many-large treatment did decline over the 30 periods (\(p=0.02\), LR test). Even if we focus on periods 11 to 30, however, the differences between the many-large, few-small and few-large treatments are insignificant (\(p>0.1\), proportions test). The relatively high level of efficiency in the few-small and many-large treatments matches our predictions derived from ordinal potential and impulse balance (provided the weight on the downward impulse is within the bound, \(0.125<\lambda <0.25\)) but not that from quantal response.
Table 7 Average efficiency in the four treatments

| Treatment | Overall | Periods 1–10 | Periods 11–20 | Periods 21–30 |
|---|---|---|---|---|
| Few-small | 0.71 | 0.68 | 0.71 | 0.73 |
| Few-large | 0.90 | 0.89 | 0.93 | 0.88 |
| Many-small | 0.04 | 0.12 | 0.00 | 0.00 |
| Many-large | 0.69 | 0.84 | 0.66 | 0.58 |

Efficiency is measured as the proportion of periods the public good is provided

The preceding discussion suggests that the approach most consistent with observed efficiency across all four games is impulse balance. Ordinal potential does not capture the low efficiency in the many-small treatment and quantal response does not capture the high efficiency in the few-small and many-large treatments. This interpretation, however, is focussed primarily on limiting logit equilibria. Moreover, the predictive power of impulse balance is dependent on \(\lambda \) being relatively small. We shall now look at each of these issues in turn in the following two sections.

6.2 Goodness of fit

Both impulse balance and quantal response have one degree of freedom, the weight on downward impulse \(\lambda \) and the inverse error rate \(\gamma \), respectively. Our claim that impulse balance is the only approach (of the three we consider) that is consistent with observed efficiency was based on \( 0.125<\lambda <0.25\) and \(\gamma =\infty \). In this section we consider alternative values of \(\gamma \) in order to give a fair comparison across models. Before we get to the analysis let us make one remark.

Recall that every group converged to either efficiency or inefficiency in terms of aggregate success at providing the public good. This is not the same as saying groups converged on the full contribution or zero contribution strategy profile. In some groups that were highly efficient (providing the public good 10 times in the last 10 periods) we see an average probability of contributing around 70–80%. Similarly, in some groups that were highly inefficient (never providing the public good in the last 10 periods) we see an average probability of contributing around 20–30%. (See Table 10 in the appendix for the full data.) The only stable impulse balance equilibria are the zero contribution and full contribution strategy profiles and so impulse balance suggests convergence on one of these equilibria. This, as we have said, was not the case in all groups. Quantal response, by contrast, is a story of noisy decision making and so can more easily accommodate non-convergence to the zero or full contribution strategy profiles.

Estimates of \(\gamma \) can be obtained for each treatment by finding the value of \(\gamma \) that best fits the observed probability with which subjects contributed to the public good (Offerman et al. 1998). In estimating \(\gamma \) we do not restrict attention to the branch of equilibria starting at 0.5.20 Table 8 provides the estimates of \(\gamma \) we obtain and the corresponding log likelihoods. In the few-small treatment the logit equilibrium performs no better than a random model (in which each subject chooses to contribute E with probability 0.5). In the other three treatments the logit equilibrium does outperform a random model (\(p<0.001\), LR test). Clearly, however, the estimates of \(\gamma \) differ across treatments. While one can argue that \(\gamma \) (and \(\lambda \)) may vary across different experimental studies because of framing or subject effects, it is harder to make this argument within a particular study. We, therefore, also solve for the value of \(\gamma \) that maximizes the likelihood of observed contributions across all four treatments. This is given by the aggregate estimate in Table 8. Interestingly, we do see evidence of \(\gamma \) increasing through the 30 periods (\(p<0.001\), LR test).
Table 8 Estimates of \(\gamma \) and corresponding log likelihood

| Treatment | \(\gamma ^{*}\) (all) | Log L (all) | \(\gamma ^{*}\) (1–10) | Log L (1–10) | \(\gamma ^{*}\) (11–20) | Log L (11–20) | \(\gamma ^{*}\) (21–30) | Log L (21–30) |
|---|---|---|---|---|---|---|---|---|
| Few-small | 0 | \(-936\) | 0 | \(-312\) | 0 | \(-312\) | 0 | \(-312\) |
| Few-large | 2.55 | \(-523\) | 2.11 | \(-229\) | 3.06 | \(-166\) | 2.64 | \(-173\) |
| Many-small | 0.31 | \(-412\) | 0.19 | \(-192\) | 0.41 | \(-95\) | 0.42 | \(-93\) |
| Many-large | 1.44 | \(-660\) | 1.12 | \(-210\) | 1.63 | \(-222\) | \(\infty \) | \(-239\) |
| Aggregate | 0.097 | \(-3141\) | 0.065 | \(-1065\) | 0.112 | \(-1034\) | 0.130 | \(-1019\) |
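As a sanity check on the few-small rows, note that \(\gamma ^{*}=0\) corresponds exactly to the random model: every one of the N binary contribution decisions has likelihood 0.5, so the log likelihood is \(N\ln 0.5\):

```python
from math import log

subjects = 45  # few-small treatment (Table 6)

# Each 10-period block of Table 8 contains 45 * 10 binary decisions,
# and the 'all' column 45 * 30.
log_l_block = subjects * 10 * log(0.5)  # close to -312, as in Table 8
log_l_all = subjects * 30 * log(0.5)    # close to -936
```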

We now compare the predictive power of the three theoretical approaches. Following the method of Erev et al. (2010) we derive the mean squared deviation of observed from predicted values. We focus on predictions of group efficiency (proportion of times the public good is provided) and individual contributions (proportion of times a player chooses to contribute E). Table 9 presents the results (and Table 11 in the appendix provides the relevant observations and predictions). In terms of predicting group efficiency we see that impulse balance performs best, followed by the random model and then the logit equilibrium. This is consistent with the analysis of the preceding section (Sect. 6.1). In terms of predicting individual contributions we see that the logit equilibrium is best, followed by the random model and impulse balance. This is consistent with the preceding discussion of quantal response's ability to capture noisy decision making. In terms of overall performance we see that impulse balance is best, followed by the random model and the logit equilibrium. Impulse balance does well because it predicts both efficiency and contributions relatively well.
Table 9 Mean squared deviation of model prediction from observed efficiency and probability of choosing to contribute E

| Model | Efficiency | Contributions | Overall |
|---|---|---|---|
| Random choice (\(p=0.5\)) | 0.113 | 0.070 | 0.092 |
| Ordinal potential | 0.278 | 0.268 | 0.273 |
| Impulse balance (\(\lambda <0.25\)) | 0.048 | 0.085 | 0.066 |
| Limiting logit and impulse balance (\(\lambda =1\)) | 0.248 | 0.204 | 0.226 |
| Logit equilibrium (\(\gamma =0.1\)) | 0.142 | 0.062 | 0.102 |
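The random model's efficiency entry can be reproduced at the treatment level (our own reconstruction, which agrees with the table): with \(p=0.5\) the provision probability is \(\Pr (\mathrm{Bin}(5,0.5)\ge 3)=0.5\) in the few games and \(\Pr (\mathrm{Bin}(7,0.5)\ge 5)=29/128\) in the many games, compared against the observed efficiencies of Table 7:

```python
from math import comb

def provision_prob(n, t, p):
    # probability at least t of n players contribute when each does so with prob p
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(t, n + 1))

observed = [0.71, 0.90, 0.04, 0.69]        # Table 7, overall efficiency
params = [(5, 3), (5, 3), (7, 5), (7, 5)]  # few-small, few-large, many-small, many-large
predicted = [provision_prob(n, t, 0.5) for n, t in params]
msd = sum((o - q) ** 2 for o, q in zip(observed, predicted)) / len(observed)
```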

6.3 Impulse and behavior

It remains to ask why the weight on the downward impulse appears to be relatively low, \(\lambda <0.25\). To get some insight on this issue we shall look at how subjects changed contribution from one period to the next. Recall that impulse balance theory assumes players will change contribution based on ex-post rationality. We want to check whether subjects behaved consistently with this assumption. A relatively low weight on the downward impulse would imply that subjects are less responsive to a downward impulse than to an upward impulse. Figure 6 details the proportion of players who changed contribution, aggregating across all four treatments. We distinguish three cases. Recall that c(a) denotes the number of players who contributed E.
(i) If \(c(a)<t-1\) then any player who contributed E has a downward impulse (because they face the wasted contribution experience condition) and any player who contributed 0 has no impulse (zero no). Consistent with this we see, in Fig. 6, a strong tendency for those who contributed E to reduce their contribution and a weak tendency for those who contributed 0 to increase their contribution.

(ii) If \(c(a)=t-1\) then any player who contributed E has a downward impulse (wasted contribution) and any player who contributed 0 has an upward impulse (lost opportunity). Consistent with this we see a strong tendency for both those who contributed E and those who contributed 0 to change their contribution. Importantly, those who contributed 0 are more likely to increase contribution than those who contributed E are to decrease contribution. This is consistent with a low weight on the downward impulse and pushes the group towards successful provision of the public good in the next period.

(iii) If \(c(a)\ge t\) then no player has an impulse (spot on). What we observe is a relatively strong tendency for those who contributed 0 to increase their contribution, particularly when \(c(a)=t\). This could be interpreted as a reaction to the ‘near-miss’ of the lost opportunity experience condition (Kahneman and Miller 1986; De Cremer and van Dijk 2011). The effect is to push the group towards sustained provision of the public good.
Fig. 6 The proportion of subjects who changed contribution from one period to the next, distinguishing by initial contribution and the number of players who contributed E. The number of observations is given in square brackets. ZN denotes zero no, LO lost opportunity, SO spot on and WC wasted contribution

In all three cases discussed above we observe that subjects change contribution consistently with ex-post rationality. Of particular note is that for \(c(a)\ge t-1\) we see a stronger tendency to increase than to decrease contributions. This explains why we find that the weight on the downward impulse is relatively low. Impulse balance theory, therefore, not only predicts aggregate success rates; it is also consistent with individual behavior.

7 Conclusion

In this paper we contrast three approaches to predicting efficiency in a forced contribution threshold public good game. The three approaches are based on ordinal potential, quantal response and impulse balance theory. We also report an experiment to test the respective predictions. We found that impulse balance theory provides the best overall predictions. The predictive power of impulse balance is, however, highly dependent on its one degree of freedom, the weight on the downward impulse, \(\lambda \). Our estimate of \( 0.125<\lambda <0.25\) is lower than those (\(\lambda =0.32\) and 0.37) obtained by Ockenfels and Selten (2005) or those (\(\lambda =0.5\) and 1) obtained by Alberti et al. (2013). We take the view that \( \lambda \) can differ depending on the game, and the framing of the game, and so different estimates of \(\lambda \) are not unexpected. Application of impulse balance theory is, though, almost entirely reliant on knowing the appropriate value of \(\lambda \) and so it should be a priority for future work to build a better understanding of the determinants of \(\lambda \).
Fig. 7 The value of V / E above which high efficiency is predicted, where \(\alpha =t/n\)

To put our results in context we highlight that impulse balance theory allows us to derive a simple expression with which we can predict when forced contributions result in high or low efficiency. This prediction depends on the number of players n, threshold t, relative return to the public good V / E and weight on the downward impulse \(\lambda \). If we set \( \lambda =0.25\) then we get a prediction of high or low efficiency as
$$\begin{aligned} \frac{V}{E}\gtrless \frac{n-\frac{3}{4}\left( t-1\right) }{n-\left( t-1\right) }. \end{aligned}$$
Thus, a ceteris paribus increase in the number of players lowers the critical value of the return to the public good. In other words, an increase in the number of players is predicted to enhance efficiency. Conversely, a ceteris paribus increase in the threshold is predicted to lower efficiency.
Consider next what happens if we fix the ratio between t and n at \( t=\alpha n\). Figure 7 plots the critical value of the return to the public good as a function of \(\alpha \). One can also derive that high efficiency is predicted if
$$\begin{aligned} \frac{V}{E}\ge \frac{1-\frac{3}{4}\alpha }{1-\alpha }. \end{aligned}$$
High efficiency is predicted, therefore, provided t is not ‘too large’ a proportion of n. For example, if the relative return to the public good is 2 then we need \(\alpha \le 0.8\). This prediction is consistent with the high efficiency observed in previous forced contribution experiments (Dawes et al. 1986; Rapoport and Eshed-Levy 1989). It also shows, however, that enforcing contributions does not always lead to high efficiency. This is clearly demonstrated in our many-small treatment where \(\alpha =5/7\approx 0.71\), \(V/E=7/6\approx 1.17\) and efficiency is near zero.
Footnotes

1. This has also been called the assurance problem (Isaac et al. 1989; Bchir and Willinger 2013).

2. For a general overview of the experimental literature on threshold public goods see Croson and Marks (2000), Schram et al. (2008), and Cadsby et al. (2008). For more on the role of refunds see Cartwright and Stepanova (2015).

3. Our focus in this paper will be on binary threshold public good games where people decide either to contribute or not towards the public good. The alternative, a continuous threshold public good game, is one where people can choose how much to contribute on a continuum (e.g. Suleiman and Rapoport 1992).

4. Property (1b) captures the notion of ‘forced’ contribution in that there is no gain from not volunteering to contribute to a public good that is provided. In specific situations, see for instance the example in the next paragraph, the term ‘forced’ need not be taken literally.

5. A further example, looking at a firm trying to acquire an apartment block for redevelopment, is considered by Dawes et al. (1986).

6. To provide some background: consider a simple, any-and-all takeover bid where a raider offers to buy any shares sold but only takes over the company if sufficiently many shares (e.g. 50%) are sold. This structure does not give rise to a threshold public good game, let alone a forced contribution game. There are two basic reasons why a raider may not prefer an any-and-all bid. First, it can give incentives for shareholders to not sell in the hope the takeover will increase the value of the firm (Grossman and Hart 1980). A freezout rule is one way to overcome this problem (Amihud et al. 2004) and essentially involves forcing those who hold out to sell in the event of a takeover. A second issue is that the raider may end up buying shares and yet fall short of the threshold for ownership. One way to potentially overcome this problem is for the firm to only buy shares conditional on the takeover going ahead (e.g. Cadsby and Maynes 1998). An all-or-nothing bid involves both a freezout rule and conditional buying of shares (Bagnoli and Lipman 1988, see also Holmström and Nalebuff 1992).

7. Voting may be a simple solution to obtaining efficiency if forced contributions are possible. (We thank a referee for pointing this out.) Voting, however, may not be practicable. For instance, in the takeover example there may be no way to implement a binding vote. Moreover, as our first and third examples illustrate, if there is an asymmetry whereby voting ‘yes for the public good’ is more costly than voting no we still have a forced contribution game. This asymmetry may arise if abstention is treated as a no vote.

8. Impulse balance theory and quantal response are compared by Selten and Chmura (2008) (see also Chmura et al. 2012) and Berninghaus et al. (2014). No strong difference in predictive power is found between the two.

9. If \(t=n\) then we have the weak link game. If \(t=1\) then we have a form of best shot game. For simplicity we exclude these ‘special cases’ from the analysis.
10. To see how this description of the game relates to our earlier examples, consider our first example of an incompetent manager. Getting rid of the manager requires t or more colleagues to complain. Hence to complain can be interpreted as contributing towards the public good. If the manager is removed then everyone benefits and no one (including those who complained) will receive any recrimination. Let X denote payoffs in this case. If the manager is not removed then things carry on as before except that those who complained will receive recriminations. Let Y denote current payoffs and R the cost of recrimination. So,
$$\begin{aligned} u_{i}\left( a\right) =\left\{ \begin{array}{ll} X &{} \text { if }c(a)\ge t \\ Y-a_{i}R &{} \text { otherwise} \end{array} \right. . \end{aligned}$$
To fit this into our framework, we can set \(E=R\) and \(V=X-Y+R\). This gives,
$$\begin{aligned} u_{i}\left( a\right) =\left\{ \begin{array}{ll} V+Y-R &{}\text { if }c(a)\ge t \\ E(1-a_{i})+Y-R &{}\text { otherwise} \end{array} \right. . \end{aligned}$$
The linearity of payoffs means we can subtract the fixed term \(Y-R\).
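The equivalence of the two payoff expressions can be checked numerically. The sketch below is our own (the function names and the values of X, Y, R are arbitrary illustrations, not taken from the paper) and compares both forms for every combination of action and outcome:

```python
def payoff_original(a_i, provided, X, Y, R):
    """Payoff as in the first display: X if the good is provided,
    Y - a_i * R otherwise."""
    return X if provided else Y - a_i * R

def payoff_rewritten(a_i, provided, X, Y, R):
    """The same payoff after substituting E = R and V = X - Y + R."""
    E, V = R, X - Y + R
    return (V + Y - R) if provided else E * (1 - a_i) + Y - R

# The two forms coincide for every action/outcome combination.
X, Y, R = 10.0, 4.0, 1.5  # arbitrary illustrative values
for a_i in (0, 1):
    for provided in (False, True):
        assert payoff_original(a_i, provided, X, Y, R) == \
            payoff_rewritten(a_i, provided, X, Y, R)
```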
 
11. See Offerman et al. (1996) for an alternative perspective.

12. There are many asymmetric Nash equilibria. For example, it is a Nash equilibrium for t players to contribute E (with probability 1) and \(n-t\) players to contribute 0 (with probability 1). If players have some form of pre-play communication such equilibria have been seen to arise in related games (Van de Kragt et al. 1983). If, however, players choose simultaneously and independently it is difficult to see how players could coordinate on such equilibria.

13. They also consider a naive Bayesian quantal response model.

14. Conventionally \(\lambda \) is used rather than \(\gamma \). We use \(\gamma \) to avoid confusion with a \(\lambda \) term used in impulse balance theory.

15. More formally, they state that it holds for ‘almost all games’. The games that we consider in this paper do have this property.

16. If \(p^{*}=0\) or \(p^{*}=1\) the definition is amended as appropriate.

17. The logit equilibrium and impulse balance equilibrium give a value for p, the probability of a player choosing to contribute E. From this one can obtain the probability of the public good being provided.

18. Dawes et al. (1986) report the results of 3 experiments. We have combined their experiments 2 and 3 because they are identical for our purposes.

19. All of the statistical tests in this section treat the group as the unit of observation. We, thus, have 27 observations in total (see Table 6).

20. For the few-large and many-small treatments the best fit does lie on the branch of equilibria starting at 0.5. For the many-large it does not. In the few-small treatment the best fit is the random model, \(p=0.5\).

Funding information

Funder: University of Kent

Copyright information

© The Author(s) 2017

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

1. School of Economics, University of Kent, Canterbury, UK
