Laboratory experiments have become a widespread tool in economic research. Yet, there is still doubt about how well the results from lab experiments generalize to other settings. In this paper, we investigate the self-selection process of potential subjects into the subject pool. We alter the recruitment email sent to first-year students, either mentioning the monetary reward associated with participation in experiments, appealing to the importance of helping research, or both. We find that the sign-up rate drops by two-thirds if we do not mention monetary rewards. Appealing to subjects’ willingness to help research has no effect on sign-up. We then invite the subjects recruited in this way to the laboratory to measure their pro-social and approval motivations using incentivized experiments. We do not find any differences between the groups, suggesting that neither adding an appeal to help research nor mentioning monetary incentives affects the level of social preferences and approval seeking of experimental subjects.
Recruitment into lab experiments is usually a two-step process: first, students sign up for the subject pool, which only registers their interest in receiving invitations for experiments. Second, invitations to actual experiments, including exact time slots, are sent to sub-groups of the subject pool, and recipients of these invitations decide whether or not to participate.
Other studies have investigated how differences in recruitment procedures may affect participation rates and subsequent behavior. Harrison et al. (2009), for instance, study how information about a guaranteed show-up fee for participating in experiments affects selection on risk attitudes. There has also been research on the effects of various forms of recruitment outside economics. Tomporowski et al. (1993), for example, compare performance on laboratory tests of attention and memory between subjects who received monetary incentives, subjects who received course credit, and subjects for whom participation was a course requirement. Tishler and Bartholomae (2002) review the literature on the role of financial incentives in healthy volunteers’ decisions to participate in clinical trials.
Relatedly, Cubitt et al. (2011) report that monetary incentives increase rates of survey participation. They also find that the different participation incentives do not affect responses in the survey. Also related is the study by Guillén and Veszteg (2012) who find that the decision to repeatedly participate in laboratory experiments is positively related to subjects’ financial performance in previous experiments.
Since each student ID number can end with one of ten possible digits and there are only three treatments, we had to assign four end-digits to one of the treatments.
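A minimal sketch of this digit-based assignment rule. The specific mapping of digits to treatments is an assumption made for illustration (the paper does not report which digits went to which treatment); only the split of 3 + 3 + 4 digits across the three treatments follows the text.

```python
# Hypothetical digit-to-treatment mapping: three treatments cover the ten
# possible end-digits of a student ID, so one treatment must receive four.
TREATMENT_BY_END_DIGIT = {
    0: "MoneyOnly", 1: "MoneyOnly", 2: "MoneyOnly",
    3: "AppealOnly", 4: "AppealOnly", 5: "AppealOnly",
    6: "Money&Appeal", 7: "Money&Appeal", 8: "Money&Appeal", 9: "Money&Appeal",
}

def assign_treatment(student_id: str) -> str:
    """Return the treatment for a student, based on the ID's last digit."""
    return TREATMENT_BY_END_DIGIT[int(student_id[-1])]

print(assign_treatment("20149387"))  # -> Money&Appeal
```

Because end-digits of ID numbers are effectively arbitrary, this rule yields a quasi-random assignment without requiring the experimenters to draw and store a separate randomization list.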
For a comprehensive review of this literature see Rosenthal and Rosnow (1975).
There are several reasons why people may not act on their pro-social motivations in certain situations. For example, people may be altruistic towards others only if those others are in their reference group. Chen and Li (2009), for instance, find that subjects are more altruistic when matched with an ‘ingroup’ member than with an ‘outgroup’ member. In the context of our study, students may perceive the researchers/experimenters as ‘outgroup’ members and may therefore not act altruistically towards them. The laboratory experiment by Frank (1998) is consistent with this view, as it shows that subjects do not seem to care about the experimenter’s welfare. Heyman and Ariely (2004) propose that social relationships can be based on economic or social exchanges, where the former are regulated by monetary and market considerations while the latter rely on nonmonetary exchanges. The extent to which subjects are willing to act on their pro-social motivation may thus depend on whether they view participation in laboratory experiments as an economic or a social exchange.
We thank an anonymous referee for pointing this out.
There may also be a potential composition effect between Money&Appeal and MoneyOnly that we can investigate using the lab experiment. Assume that subjects do not know the level of \( (a + 1)m \). After all, the emails only mentioned that there will be a payment but did not specify the level. Subjects may then try to infer \( (a + 1)m \) from the invitation email. Subjects may conclude that \( (a + 1)m \) is lower in Money&Appeal than in MoneyOnly, since the experimenters apparently believe they need to offer both a monetary incentive and an appeal to make subjects sign up (this idea is similar to Bénabou and Tirole 2003). This could lead to different types of subjects selecting into the two Money treatments, even if the overall sign-up rate is the same. This channel relies on the assumption that subjects in Money&Appeal believe that the experimenters believe that \( \beta b > 0 \), which is not an unreasonable belief given the email we sent, but it does not rely on the true values of \( \beta \) and \( b \).
The difference between MoneyOnly and AppealOnly is also highly significant (χ2(1) = 108.01, p < 0.001).
For each first-year undergraduate student we contacted via email we have information on gender, field of study and fee status (Home/EU or Overseas).
Given the discussion in the literature about economics students behaving more selfishly than other students in experiments (e.g. Marwell and Ames 1981; Frank et al. 1993; Frey and Meier 2003), one may wonder whether economics students are also more sensitive to the email containing information about financial incentives. To test for this, we ran an additional regression where we interact the treatment dummies with the Economics dummy in Table 1. We find that neither interaction term is significantly different from zero (all p > 0.712).
One referee pointed out that the drop in registrations in AppealOnly may reflect a "social learning" effect: if subjects in AppealOnly had somehow become aware that economics experiments usually involve monetary incentives (e.g., by word-of-mouth from students from previous years), they may have declined the invitation to sign up for the unpaid experiments while waiting to receive an invitation for the paid studies. While we think this is not very likely (the recruitment took place at the very beginning of the academic year, when first-year students had only limited opportunities to engage with students from previous years), we view such a mechanism as in line with our conclusion that selection into experiments is mostly driven by the desire to earn money.
To check whether the order in which subjects took the questionnaires affected responses, we had half of the subjects in a session complete the SDS-17 scale first, while the other half completed the GCS scale first. We do not find significant differences between the two orderings (p > 0.173 using Wilcoxon rank-sum tests), and we thus pool the data from the two sub-groups.
No subjects could be classified as ‘aggressive’ or ‘altruistic’ in our experiment.
We also find no significant differences in subjects’ unconditional contributions (Mann–Whitney tests; Money&Appeal vs. MoneyOnly: p = 0.989; Money&Appeal vs. AppealOnly: p = 0.383) or subjects’ self-reported trust attitudes as measured by the GSS trust question included in the post-experimental questionnaire (Fisher’s exact tests; Money&Appeal vs. MoneyOnly: p = 0.407; Money&Appeal vs. AppealOnly: p = 0.119).
Money&Appeal versus MoneyOnly: lottery task: p = 0.574 using Fisher’s exact test; SOEP risk question: p = 0.353; cognitive reflection test: p = 0.813; Financial Literacy test: p = 0.185, all using Mann–Whitney tests. Money&Appeal versus AppealOnly: lottery task: p = 0.562 using Fisher’s exact test; SOEP risk question: p = 0.524; Cognitive Reflection test: p = 0.420; Financial Literacy test: p = 0.843, all using Mann–Whitney tests.
Comparisons of pooled Money treatments versus AppealOnly (using the same tests as for individual treatment comparisons mentioned above): social value orientation: p = 0.579, cooperativeness type in the public-goods game: p = 0.807, unconditional contribution in the public-goods game: p = 0.373, GSS trust question: p = 0.098, GCS score: p = 0.118, SDS score: p = 0.828, Tsutsui and Zizzo task: p = 0.830.
Anderson, J., Burks, S. V., Carpenter, J., Götte, L., Maurer, K., Nosenzo, D., et al. (2013). Self-selection and variations in the laboratory measurement of other-regarding preferences across subject pools: Evidence from one college student and two adult samples. Experimental Economics, 16(2), 170–189.
Banks, J., & Oldfield, Z. (2007). Understanding pensions: Cognitive function, numerical ability and retirement saving. Fiscal Studies, 28(2), 143–170.
Bauman, Y., & Rose, E. (2011). Selection or indoctrination: Why do economics students donate less than the rest? Journal of Economic Behavior and Organization, 79(3), 318–327.
Belot, M., Duch, R., & Miller, L. (2010). Who should be called to the lab? A comprehensive comparison of students and non-students in classic experimental games. University of Oxford, Nuffield College Discussion Papers (2010-001).
Bénabou, R., & Tirole, J. (2003). Intrinsic and extrinsic motivation. Review of Economic Studies, 70(3), 489–520.
Bohnet, I., & Frey, B. S. (1999). Social distance and other-regarding behavior in dictator games: Comment. American Economic Review, 89(1), 335–339.
Brosig, J. (2002). Identifying cooperative behavior: Some experimental results in a prisoner’s dilemma game. Journal of Economic Behavior and Organization, 47(3), 275–290.
Burks, S., Carpenter, J., & Götte, L. (2009). Performance pay and worker cooperation: Evidence from an artefactual field experiment. Journal of Economic Behavior and Organization, 70(3), 458–469.
Camerer, C. (2011). The promise and success of lab-field generalizability in experimental economics: A critical reply to Levitt and List. SSRN Working Paper.
Charness, G., & Gneezy, U. (2008). What’s in a name? Anonymity and social distance in dictator and ultimatum games. Journal of Economic Behavior and Organization, 68(1), 29–35.
Chen, Y., & Li, S. X. (2009). Group identity and social preferences. American Economic Review, 99(1), 431–457.
Cleave, B. L., Nikiforakis, N., & Slonim, R. (2013). Is there selection bias in laboratory experiments? The case of social and risk preferences. Experimental Economics, 16(3), 372–382.
Coppock, A., & Green, D. P. (2013). Assessing the correspondence between experimental results obtained in the lab and field: A review of recent social science. Mimeo: Columbia University.
Cubitt, R. P., Drouvelis, M., Gächter, S., & Kabalin, R. (2011). Moral judgments in social dilemmas: How bad is free riding? Journal of Public Economics, 95(3–4), 253–264.
Dohmen, T. J., Falk, A., Huffman, D., Sunde, U., Schupp, J., & Wagner, G. G. (2011). Individual risk attitudes: Measurement, determinants, and behavioral consequences. Journal of the European Economic Association, 9(3), 522–550.
Eckel, C. C., & Grossman, P. J. (2000). Volunteers and pseudo-volunteers: The effect of recruitment method in dictator experiments. Experimental Economics, 3(2), 107–120.
Falk, A., & Heckman, J. J. (2009). Lab experiments are a major source of knowledge in the social sciences. Science, 326(5952), 535–538.
Falk, A., Meier, S., & Zehnder, C. (2013). Do lab experiments misrepresent social preferences? The case of self-selected student samples. Journal of the European Economic Association, 11(4), 839–852.
Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178.
Fischbacher, U., Gächter, S., & Fehr, E. (2001). Are people conditionally cooperative? Evidence from a public goods experiment. Economics Letters, 71(3), 397–404.
Frank, B. (1998). Good news for experimenters: Subjects do not care about your welfare. Economics Letters, 61(2), 171–174.
Frank, R. H., Gilovich, T., & Regan, D. T. (1993). Does studying economics inhibit cooperation? Journal of Economic Perspectives, 7(2), 159–171.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42.
Frey, B. S., & Meier, S. (2003). Are political economists selfish and indoctrinated? Evidence from a natural experiment. Economic Inquiry, 41(3), 448–462.
Glaeser, E. L., Laibson, D. I., Scheinkman, J. A., & Soutter, C. L. (2000). Measuring trust. Quarterly Journal of Economics, 115(3), 811–846.
Greiner, B. (2004). An online recruitment system for economic experiments. In K. Kremer & V. Macho (Eds.), Forschung und wissenschaftliches Rechnen. GWDG Bericht 63 (pp. 79–93). Göttingen: Ges. für Wiss. Datenverarbeitung.
Gudjonsson, G. H. (1989). Compliance in an interrogative situation: A new scale. Personality and Individual Differences, 10(5), 535–540.
Guillén, P., & Veszteg, R. F. (2012). On “lab rats”. The Journal of Socio-Economics, 41(5), 714–720.
Harrison, G. W., Lau, M. I., & Rutström, E. E. (2009). Risk attitudes, randomization to treatment, and self-selection into experiments. Journal of Economic Behavior and Organization, 70(3), 498–507.
Heyman, J., & Ariely, D. (2004). Effort for payment: A tale of two markets. Psychological Science, 15(11), 787–793.
Hoffman, E., McCabe, K., & Smith, V. L. (1996). Social distance and other-regarding behavior in dictator games. American Economic Review, 86(3), 653–660.
Jenni, K. E., & Loewenstein, G. (1997). Explaining the identifiable victim effect. Journal of Risk and Uncertainty, 14(3), 235–257.
Kagel, J. H., Battalio, R. C., & Walker, J. M. (1979). Volunteer artifacts in experiments in economics: Specification of the problem and some initial data from a small-scale field experiment. In V. L. Smith (Ed.), Research in experimental economics (Vol. 1, pp. 169–197). Greenwich: JAI Press.
Krawczyk, M. (2011). What brings your subjects to the lab? A field experiment. Experimental Economics, 14(4), 482–489.
Levati, M. V., Ploner, M., & Traub, S. (2011). Are conditional cooperators willing to forgo efficiency gains? Evidence from a public goods experiment. New Zealand Economic Papers, 45(1), 47–57.
Levitt, S. D., & List, J. A. (2007). What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives, 21(2), 153–174.
Liebrand, W. B. G. (1984). The effect of social motives, communication and group size on behaviour in an N-person multi-stage mixed-motive game. European Journal of Social Psychology, 14(3), 239–264.
List, J. A. (2007). Field experiments: A bridge between lab and naturally occurring data. The B.E. Journal of Economic Analysis & Policy, 6(2).
Marwell, G., & Ames, R. E. (1981). Economists free ride, does anyone else? Experiments on the provision of public goods, IV. Journal of Public Economics, 15(3), 295–310.
Offerman, T., Sonnemans, J., & Schram, A. (1996). Value orientations, expectations and voluntary contributions in public goods. Economic Journal, 106, 817–845.
Park, E.-S. (2000). Warm-glow versus cold-prickle: A further experimental study of framing effects on free-riding. Journal of Economic Behavior and Organization, 43, 405–421.
Rosenthal, R., & Rosnow, R. L. (1969). The volunteer subject. In Robert Rosenthal & Ralph L. Rosnow (Eds.), Artifact in behavioral research (pp. 61–118). New York: Academic Press.
Rosenthal, R., & Rosnow, R. L. (1975). The volunteer subject. Wiley series of personality processes. New York: Wiley.
Rustagi, D., Engel, S., & Kosfeld, M. (2010). Conditional cooperation and costly monitoring explain success in forest commons management. Science, 330(6006), 961–965.
Slonim, R., Wang, C., Garbarino, E., & Merrett, D. (2013). Opting-In: Participation biases in the lab. Journal of Economic Behavior and Organization, 90, 43–70.
Small, D. A., & Loewenstein, G. (2003). Helping a victim or helping the victim: Altruism and identifiability. Journal of Risk and Uncertainty, 26(1), 5–16.
Stöber, J. (2001). The social desirability scale-17 (SDS-17): Convergent validity, discriminant validity, and relationship with age. European Journal of Psychological Assessment, 17(3), 222–232.
Stoop, J., Noussair, C. N., & van Soest, D. (2012). From the lab to the field: Cooperation among fishermen. Journal of Political Economy, 120(6), 1027–1056.
Tishler, C. L., & Bartholomae, S. (2002). The recruitment of normal healthy volunteers: A review of the literature on the use of financial incentives. Journal of Clinical Pharmacology, 42(4), 365–375.
Tomporowski, P. D., Simpson, R. G., & Hager, L. (1993). Method of recruiting subjects and performance on cognitive tests. The American Journal of Psychology, 106(4), 499–521.
Tsutsui, K., & Zizzo, D. J. (2013). Group status, minorities and trust. Experimental Economics. doi:10.1007/s10683-013-9364-x
von Gaudecker, H.-M., van Soest, A., & Wengström, E. (2012). Experts in experiments. Journal of Risk and Uncertainty, 45(2), 159–190.
Zizzo, D. J. (2010). Experimenter demand effects in economic experiments. Experimental Economics, 13(1), 75–98.
We thank Jacob K. Goeree, two anonymous referees, Steffen Altmann, Stephen V. Burks, Simon Gächter, David Gill, David Huffman, John List, Nikos Nikiforakis, Collin Raymond, and Chris Starmer for helpful comments. We gratefully acknowledge support from the Leverhulme Trust (ECF/2010/0636).
Abeler, J., Nosenzo, D. Self-selection into laboratory experiments: pro-social motives versus monetary incentives. Exp Econ 18, 195–214 (2015). https://doi.org/10.1007/s10683-014-9397-9
- Selection bias
- Laboratory experiment
- Field experiment
- Other-regarding behavior
- Social preferences
- Social approval
- Experimenter demand