
Experimental Economics, Volume 18, Issue 2, pp 195–214

Self-selection into laboratory experiments: pro-social motives versus monetary incentives

Original Paper

Abstract

Laboratory experiments have become a widespread tool in economic research. Yet there is still doubt about how well results from lab experiments generalize to other settings. In this paper, we investigate the self-selection process of potential subjects into the subject pool. We alter the recruitment email sent to first-year students, either mentioning the monetary reward associated with participation in experiments, appealing to the importance of helping research, or doing both. We find that the sign-up rate drops by two-thirds if we do not mention monetary rewards, while appealing to subjects' willingness to help research has no effect on sign-up. We then invite the recruited subjects to the laboratory to measure their pro-social and approval motivations using incentivized experiments. We find no differences between the groups, suggesting that neither adding an appeal to help research nor mentioning monetary incentives affects the level of social preferences and approval seeking of experimental subjects.

Keywords

Methodology · Selection bias · Laboratory experiment · Field experiment · Other-regarding behavior · Social preferences · Social approval · Experimenter demand

JEL

C90 · D03

Notes

Acknowledgments

We thank Jacob K. Goeree, two anonymous referees, Steffen Altmann, Stephen V. Burks, Simon Gächter, David Gill, David Huffman, John List, Nikos Nikiforakis, Collin Raymond, and Chris Starmer for helpful comments. We gratefully acknowledge support from the Leverhulme Trust (ECF/2010/0636).

Supplementary material

Supplementary material 1 (DOCX 61 kb)


Copyright information

© Economic Science Association 2014

Authors and Affiliations

  1. University of Oxford, IZA, and CESifo, Oxford, UK
  2. University of Nottingham, Nottingham, UK
