Designing feedback in voluntary contribution games: the role of transparency

Abstract

We analyze the effects of limited feedback on beliefs and contributions in a repeated public goods game. In a first experiment, we test whether exogenously determined feedback about a good example (i.e., the maximum contribution in a period), in contrast to a bad example (i.e., the minimum contribution in a period), induces higher contributions. We find that when the type of feedback is not transparent to the group members, good examples boost cooperation while bad examples hamper it. There is no difference when the type of feedback is transparent. In a second experiment, feedback is endogenously chosen by a group leader. The results show that a large majority of group leaders count on the positive effect of providing a good example. This holds regardless of whether they choose the feedback type to be transparent or non-transparent. Half of the group leaders make the type of feedback transparent. With endogenously chosen feedback about good examples, no difference in contributions can be observed between transparent and non-transparent feedback selection. In both experiments, feedback shapes subjects’ beliefs. With exogenously chosen feedback, transparent feedback tends to reduce beliefs when good examples are provided as feedback and tends to increase beliefs when bad examples are provided as feedback, compared to the respective non-transparent cases. Our results shed new light on the design of feedback provision in public goods settings.

Notes

  1.

    Neugebauer et al. (2009) use the term ‘selfishly-biased conditional cooperation’. Because imperfect cooperation could have several underlying motives, we stick to the term ‘imperfect conditional cooperation’ throughout the paper.

  2.

    Experimental economists have put considerable effort into investigating mechanisms that stabilize contribution behavior over time. Chaudhuri (2011) provides an excellent survey of laboratory experiments on this topic. Research focuses on how punishment (Xiao and Houser 2011; Fehr and Gächter 2000; Gürerk et al. 2006), incentives (Bracht et al. 2008), communication (Bochet et al. 2006; Isaac and Walker 1988), sorting and group formation (Ahn et al. 2009; Gächter and Thöni 2005; Page et al. 2005), moral suasion (Masclet et al. 2003), and recommendations (Chaudhuri and Paichayontvijit 2010) affect contribution levels.

  3.

    See, for example, the City of London’s ‘Clean City Awards Scheme’ initiative (www.cityoflondon.gov.uk). In addition, consider the field experiment by Costa and Kahn (2013), where households receive tailored feedback about the energy consumption of their neighbors.

  4.

    See, e.g., npENGAGE, a platform for nonprofit fundraising (www.npengage.com; see Figure A.1 of the electronic supplementary material in the “Appendix” for a screenshot), or INDIEGOGO and Crowdfunder, two major crowdfunding platforms (www.indiegogo.com, www.crowdfunder.co.uk; see Figures A.2 and A.3 of the electronic supplementary material in the “Appendix” for screenshots).

  5.

    One recent exception is Thomas and Thornock (2015) who study the role of feedback in team production.

  6.

    As argued above, in many environments people have only limited information about the contributions of others and about the feedback rules in use. They might only be able to conjecture what others do or which feedback rules are implemented by whom. To control for this, we make the available feedback rules salient.

  7.

    Experimental instructions and quiz questions are taken from Fehr and Gächter (2000) and adapted to our experiment. Original instructions were in German. These and an English translation are available from the authors upon request.

  8.

    Belief elicitation is incentivized in our experiment. Subjects receive 5 additional points if they either correctly predict the average contribution of the other three participants or their prediction lies within a \(\pm\) 0.5 range of the actual average. Since feedback about the accuracy of their estimates may influence subjects’ decisions in subsequent periods, we deliver feedback about the accuracy of their predictions only after the final period.
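    The incentive rule for belief elicitation can be sketched as follows (a minimal illustration of the payoff rule described above; the function name and the inclusive tolerance boundary are our assumptions, not the experiment's actual implementation):

    ```python
    def belief_bonus(prediction, others_contributions, tolerance=0.5, bonus=5):
        """Return the extra points earned for a belief prediction.

        A subject earns the bonus (5 points) if the prediction lies within
        +/- 0.5 of the actual average contribution of the other three group
        members, and nothing otherwise. Whether the boundary is inclusive
        is an assumption made for this sketch.
        """
        actual_average = sum(others_contributions) / len(others_contributions)
        return bonus if abs(prediction - actual_average) <= tolerance else 0
    ```

    For example, with the other three group members contributing 9, 10, and 11 points (actual average 10), a prediction of 10 earns the bonus, while a prediction of 8 does not.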

  9.

    In our experiment, feedback on period earnings is provided only at the end of the last period.

  10.

    When the feedback selection rule is non-transparent, the feedback on the screen reads "A contribution of one person was: XX", whereas with transparent feedback the sentence "The person with the maximum (minimum) contribution has contributed: XX" is displayed. For the feedback selection rule RAND it reads "A randomly drawn contribution of one person was: XX".

  11.

    If not denoted otherwise, all reported significance levels are based on two-sided Mann–Whitney U tests. Moreover, non-parametric comparisons are conducted with group averages as independent observations. Standard errors in our regressions are clustered on independent subject groups.
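    The test statistic underlying these comparisons can be sketched in a few lines (an illustration only, with hypothetical group averages as the independent observations; p-values would in practice come from exact tables or a normal approximation, e.g. via standard statistics software):

    ```python
    def mann_whitney_u(x, y):
        """Mann-Whitney U statistic for sample x versus sample y,
        using average ranks for ties. Returns U for sample x."""
        values = x + y
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            # Find the run of tied values and assign each the average rank.
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # ranks are 1-based
            for k in range(i, j + 1):
                ranks[order[k]] = avg_rank
            i = j + 1
        rank_sum_x = sum(ranks[:len(x)])  # first len(x) entries belong to x
        return rank_sum_x - len(x) * (len(x) + 1) / 2

    # Hypothetical per-group average contributions (one observation per group):
    max_groups = [14.2, 12.8, 15.1, 13.0]
    min_groups = [9.5, 10.1, 8.7, 11.2]
    u = mann_whitney_u(max_groups, min_groups)  # u == 16.0 (= n1 * n2, no overlap)
    ```

    In the hypothetical example every group in the first sample exceeds every group in the second, so U attains its maximum of n1 × n2 = 16.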

  12.

    For pairwise comparisons between the treatments regarding group average contributions, beliefs and feedback for all periods, the first and last period, see Table 5 in the “Appendix”.

  13.

    According to a Kruskal–Wallis test, we find no difference in the distribution of subjects’ first period contributions between the feedback selection rules for non-transparent (\(p=.140\)) and transparent (\(p=.720\)) feedback selection.

  14.

    Because there is only one observation each for the MIN and RAND feedback selections, we refrain from running statistical tests.

  15.

    This also holds if the data from MIN and RAND are pooled. All pairwise comparisons for endogenous feedback selection can be found in Table 6 in the “Appendix”.

  16.

    According to a Kruskal–Wallis test, we find no difference in the distribution of subjects’ first period contributions between the feedback selection rules for non-transparent (\(p=.303\)) and transparent (\(p=.452\)) feedback selection.

  17.

    For the feedback rules MIN and RAND, we cannot make such a comparison when they are not made transparent because of the low number of observations.

  18.

    The interaction terms “XXX-Transparent × \(\text {Feedback}_{t-1}\)” capture the influence of the previous period’s feedback under the respective feedback selection rule, relative to the reference category (here: non-transparent feedback selection), on contributions. A positive coefficient of “XXX-Transparent × \(\text {Feedback}_{t-1}\)” indicates that a one-unit increase in the previous period’s feedback raises contributions in XXX-Transparent by that coefficient, in addition to the change in contributions caused by the feedback change in the reference category.
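    The interpretation above can be written out under the assumption of a linear specification (a sketch of the coefficient logic only; the paper's regressions include further controls not shown here):

    ```latex
    c_{i,t} = \beta_0
            + \beta_1\,\text{Feedback}_{t-1}
            + \beta_2\,\bigl(\text{XXX-Transparent}_i \times \text{Feedback}_{t-1}\bigr)
            + \varepsilon_{i,t}
    ```

    The marginal effect of previous-period feedback is then \(\beta_1\) in the reference category (non-transparent feedback selection) and \(\beta_1 + \beta_2\) in XXX-Transparent groups.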

  19.

    We also find that the coefficient for “Belief\(_{i;t-1}\)” is highly significant and positive. Note that in Fischbacher and Gächter (2010) the feedback subjects received during the experiment consisted of the sum of contributions of all group members. Nevertheless, in their regressions the coefficients of previous beliefs (“Belief\(_{i;t-1}\)”) and feedback (“\(\text {Feedback}_{t-1}\)”) are very similar to ours (see p. 548, Table 1).

  20.

    The latter variable captures a potentially non-linear impact of beliefs on contributions. We thank an anonymous referee for suggesting the inclusion of this variable.

  21.

    Similar to Bigoni and Suetens (2012), Nikiforakis (2010) studies the difference between these feedback formats but includes a punishment option. His results also suggest a negative influence of earnings feedback on cooperation but no difference in the frequency of punishment. In a real-effort experiment, Thomas and Thornock (2015) vary the feedback free-riders receive, contrasting input (time) with output (production output) feedback. They observe that good examples conveyed via output feedback affect free-riders’ efforts more than those conveyed via input feedback.

  22.

    Somewhat related is the experimental literature on leadership in public goods settings (Güth et al. 2007; Potters et al. 2007; Gächter and Renner 2004). Typically, in these studies one member of the group—the leader—contributes first. All other members—the followers—observe the contribution of the leader and subsequently decide on their own contributions. At the end of a period, all group members receive feedback about all decisions in the group. The results suggest that followers tend to contribute somewhat less than leaders, which, on average, drives contributions down.

  23.

    Sass and Weimann (2012) show in a public goods experiment, conducted with the same participants four times a week, that even when participants receive no feedback at all, contributions tend to decline.

  24.

    To elicit donations, non-profit organizations frequently emphasize donations of previous donors or name previous donors on their websites. For potential new donors, it is often unclear how to classify these examples (see, e.g., our introductory examples). The same argument applies to the provision of average feedback, which is typically used in repeated public goods games.

References

  1. Ahn, T., Isaac, R. M., & Salmon, T. C. (2009). Coming and going: Experiments on endogenous group sizes for excludable public goods. Journal of Public Economics, 93, 336–351.

  2. Bigoni, M., & Suetens, S. (2012). Feedback and dynamics in public good experiments. Journal of Economic Behavior & Organization, 82, 86–95.

  3. Bochet, O., Page, T., & Putterman, L. (2006). Communication and punishment in voluntary contribution experiments. Journal of Economic Behavior & Organization, 60(1), 11–26.

  4. Bracht, J., Figuières, C., & Ratto, M. (2008). Relative performance of two simple incentive mechanisms in a public goods experiment. Journal of Public Economics, 92, 54–90.

  5. Bradler, C., Dur, R., Neckermann, S., & Non, A. (2013). Employee recognition and performance: A field experiment. ZEW-Centre for European Economic Research Discussion Paper (13-017).

  6. Chaudhuri, A. (2011). Sustaining cooperation in laboratory public goods experiments: A selective survey of literature. Experimental Economics, 14, 47–83.

  7. Chaudhuri, A., & Paichayontvijit, T. (2010). Recommended play and performance bonuses in the minimum effort coordination game. Experimental Economics, 13, 1–18.

  8. Costa, D. L., & Kahn, M. E. (2013). Energy conservation “nudges” and environmentalist ideology: Evidence from a randomized residential electricity field experiment. Journal of the European Economic Association, 11(3), 680–702.

  9. Croson, R., & Shang, J. (2008). The impact of downward social information on contribution decisions. Experimental Economics, 11(3), 221–233.

  10. Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments. The American Economic Review, 90(4), 980–994.

  11. Fehr, E., & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415(6868), 137–140.

  12. Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178.

  13. Fischbacher, U., & Gächter, S. (2010). Social preferences, beliefs, and the dynamics of free riding in public goods experiments. The American Economic Review, 100(1), 541–556.

  14. Fischbacher, U., Gächter, S., & Fehr, E. (2001). Are people conditionally cooperative? Evidence from a public goods experiment. Economics Letters, 71(3), 397–404.

  15. Frey, B. S., & Meier, S. (2004). Social comparisons and pro-social behavior: Testing “conditional cooperation” in a field experiment. The American Economic Review, 94(5), 1717–1722.

  16. Gächter, S., & Renner, E. (2004). Leading by example in the presence of free rider incentives. Working Paper.

  17. Gächter, S., & Thöni, C. (2005). Social learning and voluntary cooperation among like-minded people. Journal of the European Economic Association, 3(2–3), 303–314.

  18. Greiner, B. (2015). Subject pool recruitment procedures: Organizing experiments with ORSEE. Journal of the Economic Science Association, 1(1), 114–125.

  19. Gürerk, O., Irlenbusch, B., & Rockenbach, B. (2006). The competitive advantage of sanctioning institutions. Science, 312(5770), 108–111.

  20. Güth, W., Levati, V. M., Sutter, M., & Van Der Heijden, E. (2007). Leading by example with and without exclusion power in voluntary contribution experiments. Journal of Public Economics, 91(5–6), 1023–1042.

  21. Hartig, B., Irlenbusch, B., & Kölle, F. (2015). Conditioning on what? The effects of heterogeneous contributions on cooperation. Journal of Behavioral and Experimental Economics, 55, 48–64.

  22. Hoffmann, M., Lauer, T., & Rockenbach, B. (2013). The royal lie. Journal of Economic Behavior & Organization, 93, 305–313.

  23. Huck, S., & Rasul, I. (2011). Matched fundraising: Evidence from a natural field experiment. Journal of Public Economics, 95, 351–362.

  24. Isaac, R. M., & Walker, J. M. (1988). Communication and free-riding behavior: The voluntary contribution mechanism. Economic Inquiry, 26(4), 585–608.

  25. Johnson, D. A., & Dickinson, A. M. (2010). Employee-of-the-month programs: Do they really work? Journal of Organizational Behavior Management, 30(4), 308–324.

  26. Kosfeld, M., & Neckermann, S. (2011). Getting more work for nothing? Symbolic awards and worker performance. American Economic Journal: Microeconomics, 3, 86–99.

  27. Luthans, K. (2000). Recognition: A powerful, but often overlooked, leadership tool to improve employee performance. Journal of Leadership & Organizational Studies, 7(1), 31–39.

  28. Masclet, D., Noussair, C., Tucker, S., & Villeval, M.-C. (2003). Monetary and nonmonetary punishment in the voluntary contributions mechanism. The American Economic Review, 93(1), 366–380.

  29. Muller, L., Sefton, M., Steinberg, R., & Vesterlund, L. (2008). Strategic behavior and learning in repeated voluntary contribution experiments. Journal of Economic Behavior & Organization, 67, 782–793.

  30. Neugebauer, T., Perote, J., Schmidt, U., & Loos, M. (2009). Selfish-biased conditional cooperation: On the decline of contributions in repeated public goods experiments. Journal of Economic Psychology, 30(1), 52–60.

  31. Nikiforakis, N. (2010). Feedback, punishment and cooperation in public good experiments. Games and Economic Behavior, 68(2), 689–702.

  32. Nordstrom, R., Lorenzi, P., & Hall, R. V. (1991). A review of public posting of performance feedback in work settings. Journal of Organizational Behavior Management, 11(2), 101–124.

  33. Page, T., Putterman, L., & Unel, B. (2005). Voluntary association in public goods experiments: Reciprocity, mimicry and efficiency. The Economic Journal, 115(506), 1032–1053.

  34. Potters, J., Sefton, M., & Vesterlund, L. (2007). Leading-by-example and signaling in voluntary contribution games: An experimental study. Economic Theory, 33(1), 169–182.

  35. Samak, A., & Sheremeta, R. M. (2013). Visibility of contributions and cost of information: An experiment on public goods. Working Paper.

  36. Sass, M., & Weimann, J. (2012). The dynamics of individual preferences in repeated public good experiments. Working Paper.

  37. Steiger, E.-M., & Zultan, R. (2014). See no evil: Information chains and reciprocity. Journal of Public Economics, 109, 1–12.

  38. Thomas, T. F., & Thornock, T. A. (2015). Me versus we: The effect of incomplete team member feedback on cooperation of self-regarding individuals. AAA 2016 Management Accounting Section (MAS) Meeting Paper.

  39. Weimann, J. (1994). Individual behaviour in a free riding experiment. Journal of Public Economics, 54(2), 185–200.

  40. Xiao, E., & Houser, D. (2011). Punish in public. Journal of Public Economics, 95, 1006–1017.

Author information

Corresponding author

Correspondence to Rainer Michael Rilke.

Additional information

We would like to thank Julian Conrads, Patrick Kampkötter, David Cooper, two anonymous referees, and participants of the Research Seminar in Applied Microeconomics at the University of Cologne for helpful comments. We are grateful to Katrin Recktenwald for her excellent research assistance. Financial support of the German Research Foundation through the research unit “Design and Behavior” (FOR 1371) and of the University of Cologne by the Center of Social and Economic Behavior (C-SEB) is gratefully acknowledged.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1351 KB)

Appendix

See Tables 5, 6 and 7.

Table 5 Exogenous feedback selection: Group members’ average contributions, beliefs, provided feedback
Table 6 Endogenous feedback selection: Group members’ average contributions, beliefs, provided feedback
Table 7 Estimating contributions with previously provided feedback

About this article

Cite this article

Irlenbusch, B., Rilke, R.M. & Walkowitz, G. Designing feedback in voluntary contribution games: the role of transparency. Exp Econ 22, 552–576 (2019). https://doi.org/10.1007/s10683-018-9575-2

Keywords

  • Feedback design
  • Transparency
  • Public goods
  • Imperfect conditional cooperation
  • Experiment

JEL Classification

  • H41
  • C92
  • D82