Social preferences in the online laboratory: a randomized experiment

Abstract

The Internet is an attractive technology for the implementation of experiments, both to obtain larger and more diverse samples and as a field of economic research in its own right. This paper reports on an experiment performed both online and in the laboratory, designed to strengthen the internal validity of decisions elicited over the Internet. We use the same subject pool, the same monetary stakes and the same decision interface, and control the assignment of subjects between the Internet and a traditional university laboratory. We apply the comparison to the elicitation of social preferences in a Public Good game, a Dictator game, an Ultimatum Bargaining game and a Trust game, coupled with an elicitation of risk aversion. The comparison supports the reliability of behaviors elicited through the Internet: we find a strong overall parallelism between the preferences elicited in the two settings. The paper also reports some quantitative differences in the point estimates, which always go in the direction of more other-regarding decisions from online subjects. This observation challenges either the predictions of social distance theory or the commonly assumed increase in social distance in Internet interactions.


Notes

  1. In a recent paper, Henrich et al. (2010) warned against behavioral scientists’ current over-reliance on data overwhelmingly gathered from populations of Western undergraduate students and recommended a major effort to broaden the sample base. The Internet is a promising medium for conducting experiments with large and diverse samples. It is now possible to reach 78.3 % of the North American population through the Internet, and while only 11.4 % of the African population can currently be reached through this method, the exponential growth of its user base (from 4 million users in 2000 to 118 million users in 2011) could soon make it an attractive tool for conducting experiments in the developing world as well (source: www.Internetworldstats.com).

  2. The second decision is a variant of the “strategy method” (Selten 1967), introduced by Fischbacher et al. (2001) to elicit conditional cooperation. As in the original strategy method, subjects state a decision for each possible state of the world, but these states are reduced to the average contribution of the other subjects instead of all possible combinations of their individual decisions. To give subjects a monetary incentive to take both decisions seriously, we applied the same compensation rule as in Fischbacher et al. (2001): for one randomly chosen subject, the table of conditional decisions is binding; for the other three, the relevant decisions are the unconditional ones. The realization of this draw determines the monetary outcome of this stage for each subject.
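
To fix ideas, here is a minimal sketch of this compensation rule, under illustrative assumptions (a four-person group, an endowment of 20 tokens and an MPCR of 0.4; none of these parameter values are taken from the paper):

```python
import random

ENDOWMENT = 20  # illustrative endowment, not the paper's parameter
MPCR = 0.4      # illustrative marginal per-capita return

def stage_payoffs(unconditional, conditional_tables, rng=random):
    """Fischbacher et al. (2001) rule: one randomly drawn subject plays
    their conditional table; the others play their unconditional choice."""
    subjects = list(unconditional)
    chosen = rng.choice(subjects)

    contributions = {}
    for s in subjects:
        if s == chosen:
            # The table maps the rounded average contribution of the
            # other group members to a contribution level.
            others_avg = round(sum(unconditional[o] for o in subjects if o != s)
                               / (len(subjects) - 1))
            contributions[s] = conditional_tables[s][others_avg]
        else:
            contributions[s] = unconditional[s]

    public_return = MPCR * sum(contributions.values())
    return {s: ENDOWMENT - contributions[s] + public_return for s in subjects}

# Example: four subjects whose (hypothetical) conditional tables simply
# match the others' average contribution.
unconditional = {'a': 10, 'b': 0, 'c': 20, 'd': 5}
tables = {s: {avg: avg for avg in range(ENDOWMENT + 1)} for s in unconditional}
print(stage_payoffs(unconditional, tables))
```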

  3. The interface was developed with LimeSurvey (http://www.limesurvey.org/), a highly customizable open-source survey tool.

  4. The system considered the mouse inactive when it was moving over screens not belonging to the experimental economics platform.

  5. The database is managed using ORSEE (Greiner 2004).

  6. Since we apply a sequential matching rule for online subjects, the queue has to be initialized somewhere. We used data from three pilot sessions run in the laboratory during summer 2010 in preparation for the current study.
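
A rule of this kind can be pictured as a first-in-first-out queue seeded with the pilot decisions; the sketch below is a stylized illustration under that assumption, not the authors' actual implementation (decision values are placeholders):

```python
from collections import deque

# Seed the queue with decisions from the pilot laboratory sessions
# (placeholder values, for illustration only).
pilot_decisions = [10, 0, 5]
queue = deque(pilot_decisions)

def match(new_decision):
    """Pair an incoming online subject with the earliest queued decision,
    then enqueue their own decision for later arrivals."""
    partner_decision = queue.popleft()
    queue.append(new_decision)
    return partner_decision

print(match(8))   # -> 10: the first online subject is matched against pilot data
print(match(12))  # -> 0: and so on, until online decisions fill the queue
```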

  7. Overall, 208 subjects logged into the platform to participate in the online experiment, of whom 6 dropped out before completion.

  8. Our robustness treatments, presented in Section 4.3, provide some preliminary insights on this issue.

  9. The 2010 version of the experimental economics platform did not elicit subjects’ level of confidence in the experimental instructions, nor did it collect detailed data on the time spent by subjects on each screen of the interface. After observing that overall response times did indeed significantly differ between treatments, we decided to include those features before conducting further sessions.

  10. Note that in constructing this figure, we excluded from the analysis the 5 laboratory and 22 Internet subjects who arguably misunderstood the task and chose option A in decision 10. Apart from the last data point, including those subjects has no impact on the figure.

  11. The table actually reports two statistically significant coefficients: one associated with the fact of not being born in France, the other associated with the fact of having a father not born in France. These two variables are strongly correlated in the sample (corr = 0.51; p < 0.001).

  12. Here we define confusion as either choosing the secure option (i.e. option A) in the last decision or switching back from option B to option A at least once. Results are available from the authors upon request.
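
On the standard 10-row Holt and Laury (2002) choice list, this definition can be checked mechanically; a minimal sketch (the 10-row structure is the task's usual design, assumed here):

```python
def is_confused(choices):
    """Flag a subject as confused in the Holt-Laury choice list.

    choices: the subject's picks ('A' = safe, 'B' = risky) over the
    ordered decision rows. Confusion = choosing A in the last row,
    or switching back from B to A at least once.
    """
    if choices[-1] == 'A':
        return True
    return any(prev == 'B' and cur == 'A'
               for prev, cur in zip(choices, choices[1:]))

# A single switch from A to B (here at row 5) is consistent behavior.
assert not is_confused(['A'] * 4 + ['B'] * 6)
# Switching back from B to A flags confusion.
assert is_confused(['A', 'B', 'A'] + ['B'] * 7)
```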

  13. We ran two additional robustness checks confirming the reliability of these results (results are available from the authors upon request). First, we excluded from the Internet sample all subjects who logged into the online platform after the target of 20 participants per experimental session had already been reached, so as to obtain a perfectly balanced sample between laboratory and Internet subjects. This addresses the possibility that our findings were driven by the Internet subjects who logged in to the experiment last in each session. Second, we ran the analysis on social preferences while explicitly controlling for individual levels of risk aversion in the Holt and Laury task. Unlike Internet subjects, laboratory subjects had to incur some physical and monetary costs in order to get to the lab and play. These upfront costs could have made laboratory subjects relatively more eager to secure their earnings from the experiment, which would explain the higher levels of risk aversion we observed among them; this higher risk aversion, in turn, could have induced laboratory subjects to behave more conservatively (i.e. less pro-socially) in certain games. In neither case do we find any impact on the magnitude or significance of our estimates.

  14. Even if online subjects do seem to play faster on average, some of them spent quite a lot of time on the experiment. One extreme case was a subject who spent more than 3 hours on the experiment without once triggering the 5-minute inactivity indicator.

  15. The evidence reported in Piovesan and Wengstrom (2009) is an exception.

  16. The change in the magnitude of these coefficients is explained by the negative correlation between the Internet treatment and average decision time, which is found to be positively and significantly associated with our measures of trust and trustworthiness.

  17. These measures are very likely to be correlated with unobserved factors determining behavior in our games, so we do not include them as control variables in the regressions.

  18. We only report a short summary of the main results from these treatments. A complete description of their design and a detailed analysis of the results are available in an online Appendix.

  19. For instance, less than 20 % of subjects make no transfer in the Dictator game in the online treatment, while this proportion more than doubles in the three laboratory treatments. Similarly, less than 10 % of subjects make no transfer in the Trust game in the online condition, while this proportion again more than doubles in the other treatments.

  20. Parametric regressions on pooled data confirm the qualitative conclusions. First, some of the previously significant differences are no longer significant once the laboratory sessions incorporate the differences in design. Focusing on social preferences, sequential matching in the laboratory seems to replicate the higher levels of trust and trustworthiness found online in the Trust game. The higher level of donation in the Dictator game, by contrast, is robust to both changes and appears specific to the online elicitation field. Last, the risk preferences elicited online are no longer different from the ones observed in the lab once it features either PayPal compensation or sequential matching.

  21. The exact p value on the test of mean equality in transfers in the Dictator game from Table 3 is 7.39e-7, which drives rejection even if one accounts for more than 1,000 outcomes. If we instead focus separately on positive transfers and conditional transfers, i.e. restricting to positive contributions only, the p value of the difference in contributions in the Dictator game is 0.0003, leading to more mixed conclusions (in the Trust game, the p value on the share of positive returns is 0.015, and it is 0.0212 for the comparison in mean amounts returned when positive). For instance, the equality in social preferences between the in-lab and online treatments is rejected at the 1 % level if we consider that each game yields one outcome of interest per decision role (i.e. k = 6, adjusted threshold = 0.0017), or if we consider each variable reported in Table 3 as one outcome of interest (i.e. k = 14, adjusted threshold = 0.0007). The conclusion is reversed if the variance of outcome behavior (14 outcomes), as well as the beliefs over the experiment (5) and the self-reported measures of trust (5), are accounted for (k = 38, adjusted threshold = 0.00026).
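
These adjusted thresholds follow the Bonferroni rule (Bland and Altman 1995): divide the nominal 1 % level by the number k of outcomes. A quick check reproduces the note's figures:

```python
def bonferroni_threshold(alpha, k):
    """Per-outcome threshold controlling the family-wise error rate
    at level alpha across k outcomes."""
    return alpha / k

for k in (6, 14, 38):
    print(k, bonferroni_threshold(0.01, k))
# 6  -> 0.00166...  (the note's 0.0017)
# 14 -> 0.00071...  (the note's 0.0007)
# 38 -> 0.00026...  (the note's 0.00026)
```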

  22. The lack of an “institutional” way of securing social and economic interactions over the Internet is often invoked as a reason why many Internet users who value their anonymity online are nonetheless willing to stick to and invest in a unique online identity or pseudonym.

References

  1. Akerlof, G. A. (1997). Social distance and social decisions. Econometrica, 65(5), 1005–1027.

  2. Amir, O., Rand, D. G., & Gal, Y. K. (2012). Economic games on the Internet: The effect of $1 stakes. PLoS ONE, 7(2), 1–4.

  3. Anderhub, V., Müller, R., & Schmidt, C. (2001). Design and evaluation of an economic experiment via the Internet. Journal of Economic Behavior & Organization, 46(2), 227–247.

  4. Bainbridge, W. S. (2007). The scientific research potential of virtual worlds. Science, 317(5837), 472–476.

  5. Bland, J. M., & Altman, D. G. (1995). Multiple significance tests: The Bonferroni method. BMJ: British Medical Journal, 310(6973), 170.

  6. Charness, G., Haruvy, E., & Sonsino, D. (2007). Social distance and reciprocity: An Internet experiment. Journal of Economic Behavior & Organization, 63(1), 88–103.

  7. Chesney, T., Chuah, S.-H., & Hoffmann, R. (2009). Virtual world experimentation: An exploratory study. Journal of Economic Behavior & Organization, 72(1), 618–635.

  8. Cooper, D. J., & Saral, K. J. (2013). Entrepreneurship and team participation: An experimental study. European Economic Review, 59, 126–140.

  9. Dohmen, T., et al. (2011). Individual risk attitudes: Measurement, determinants and behavioral consequences. Journal of the European Economic Association, 9(3), 522–550.

  10. Eckel, C. C., & Wilson, R. K. (2006). Internet cautions: Experimental games with Internet partners. Experimental Economics, 9(1), 53–66.

  11. Fehr, E., & Camerer, C. F. (2004). Measuring social norms and preferences using experimental games: A guide for social scientists. Foundations of Human Sociality, 1(9), 55–96.

  12. Fiedler, M., & Haruvy, E. (2009). The lab versus the virtual lab and virtual field—An experimental investigation of trust games with communication. Journal of Economic Behavior & Organization, 72(2), 716–724.

  13. Fiedler, M., Haruvy, E., & Li, S. X. (2011). Social distance in a virtual world experiment. Games and Economic Behavior, 72(2), 400–426.

  14. Fischbacher, U., Gächter, S., & Fehr, E. (2001). Are people conditionally cooperative? Evidence from a public goods experiment. Economics Letters, 71(3), 397–404.

  15. Glaeser, E. L., et al. (2000). Measuring Trust. The Quarterly Journal of Economics, 115(3), 811–846.

  16. Greif, A. (2006). Institutions and the path to the modern economy: Lessons from medieval trade. New York: Cambridge University Press.

  17. Greiner, B. (2004). An online recruitment system for economic experiments. Germany: University Library of Munich.

  18. Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.

  19. Hoffman, E., McCabe, K., & Smith, V. L. (1996). Social distance and other-regarding behavior in dictator games. The American Economic Review, 86(3), 653–660.

  20. Hoffman, M., & Morgan, J. (2011). Who’s naughty? Who’s nice? Social preferences in online industries. UC Berkeley Working Paper.

  21. Holt, C. A., & Laury, S. K. (2002). Risk aversion and incentive effects. The American Economic Review, 92(5), 1644–1655.

  22. Horton, J. J., Rand, D. G., & Zeckhauser, R. J. (2011). The online laboratory: Conducting experiments in a real labor market. Experimental Economics, 14(3), 399–425.

  23. Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. The American Economic Review, 93(5), 1449–1475.

  24. Lotito, G., Migheli, M., & Ortona, G. (2013). Is cooperation instinctive? Evidence from the response times in a public goods game. Journal of Bioeconomics, 15(2), 123–133.

  25. Piovesan, M., & Wengstrom, E. (2009). Fast or fair? A study of response times. Economics Letters, 105(2), 193–196.

  26. Rand, D. G., Greene, J. D., & Nowak, M. A. (2012). Spontaneous giving and calculated greed. Nature, 489(7416), 427–430.

  27. Resnick, P., et al. (2006). The value of reputation on eBay: A controlled experiment. Experimental Economics, 9(2), 79–101.

  28. Rubinstein, A. (2007). Instinctive and cognitive reasoning: A study of response times. The Economic Journal, 117(523), 1243–1259.

  29. Selten, R. (1967). Die Strategiemethode zur Erforschung des eingeschränkt rationalen Verhaltens im Rahmen eines Oligopolexperiments. In H. Sauermann (Ed.), Beiträge zur experimentellen Wirtschaftsforschung (pp. 136–168). Tübingen: J.C.B. Mohr.

  30. Shavit, T., Sonsino, D., & Benzion, U. (2001). A comparative study of lotteries-evaluation in class and on the Web. Journal of Economic Psychology, 22(4), 483–491.

Acknowledgments

This paper is a revised and augmented version of CES Working Paper n° 2012-70. We are grateful to Anne l’Hôte, Andrews-Junior Kimbembe and Ivan Ouss for their outstanding research assistance, as well as Maxim Frolov and Joyce Sultan for their help in running the laboratory experimental sessions. We are especially indebted to Yann Algan for his help during the development of this project. We thank the editor, Jacob Goeree, two anonymous referees and Guillaume Fréchette, Olivier L’haridon, Stéphane Luchini, David Margolis, Ken Boum My, Paul Pézanis-Christou, Dave Rand, Al Roth, Antoine Terracol, Laurent Weill and the members of the Berkman Cooperation group for helpful remarks and discussions. We also thank seminar participants at the Berkman Center for Internet & Society at Harvard and the 2012 North American Economic Science Association conference for their comments. We gratefully acknowledge financial support from the European Research Council (ERC Starting Grant). Jacquemet acknowledges the Institut Universitaire de France.

Author information

Corresponding author

Correspondence to Nicolas Jacquemet.

Electronic supplementary material

Supplementary material 1 (DOCX 195 kb)

Cite this article

Hergueux, J., Jacquemet, N. Social preferences in the online laboratory: a randomized experiment. Exp Econ 18, 251–283 (2015). https://doi.org/10.1007/s10683-014-9400-5

Keywords

  • Social experiment
  • Field experiment
  • Internet
  • Methodology
  • Randomized assignment

JEL Classification

  • C90
  • C93
  • C70