Experienced vs. inexperienced participants in the lab: do they behave differently?

Abstract

We analyze whether subjects with extensive laboratory experience and first-time participants, all of whom registered voluntarily for the experiment, differ in their behavior. Subjects play four one-shot, two-player games: a trust game, a beauty contest, an ultimatum game, and a traveler’s dilemma; in addition, we conduct a single-player lying task and elicit risk preferences. We find few significant differences. In the trust game, experienced subjects are less trustworthy and also trust less. Furthermore, experienced subjects submit fewer non-monotonic strategies in the risk-elicitation task. We find no differences whatsoever in the other decisions. Nevertheless, even the minor differences observed between experienced and inexperienced subjects may be relevant, because we document a potential recruitment bias: the share of inexperienced subjects may be lower in the early recruitment waves.

Notes

  1.

    Employing the strategy method in the trust game may have an impact on decisions; see Burks et al. (2003), Brandts and Charness (2011), or Casari and Cason (2009). The fact that participants played both roles in the trust game might also affect fairness considerations: playing both roles may reduce the degree of trust and trustworthiness, see, for example, Burks et al. (2003) or Johnson and Mislin (2011).

  2.

    Since the UG was effectively conducted as a simultaneous-move game, there are multiple Nash equilibria, but selection criteria such as trembling-hand perfection or weak dominance would yield the equilibrium reported above.

  3.

    Using the strategy method might also influence behavior in the UG; see Oxoby and McLeish (2004), Oosterbeek et al. (2004), or Brandts and Charness (2011).

  4.

    After the payoff-relevant throw, we encouraged the subjects to keep throwing the die in order to test whether it was fair. In the end, subjects had to report the result of their first throw. Participants were seated in individual cubicles and were not monitored by anyone during this task, so lying can be detected only at the aggregate level, not at the individual level.
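
To illustrate what aggregate-level detection means here: under honest reporting, each face of a fair die is reported with probability 1/6, so an excess of the highest-paying outcome can be tested with an exact one-sided binomial test. The counts below are hypothetical placeholders, not the paper’s data:

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: of n = 120 reported first throws, 35 subjects report
# the highest-paying outcome. Under honest reporting each face has
# probability 1/6, so we would expect about 20 such reports.
n, k = 120, 35
p_value = binom_sf(k, n, 1 / 6)
print(p_value < 0.05)  # True: aggregate evidence of over-reporting
```

No individual report can be classified as a lie; only the distribution of reports can deviate detectably from the fair-die benchmark.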

  5.

    An English translation of the instructions can be found in the online appendix.

  6.

    Our power calculations are (with the exception of BC) based on treatment effects reported in the literature, where subjects are randomly assigned to treatments, whereas our comparison of experienced vs. inexperienced subjects does not involve random assignment.

  7.

    We note that two of our hypotheses bear on UG1 and might, and in our data actually do, give rise to different predictions. Hypothesis 1 (2) suggests lower offers by experienced subjects, but since the payoff-maximizing offer turns out to be above the average, Hypothesis 2 (3) suggests that experienced subjects make higher offers. We do not maintain hypotheses regarding RE. If experienced subjects earn higher payoffs, this might translate into less risk-averse attitudes. This, however, would be an indirect conclusion about preferences, whereas direct evidence on risk preferences (Cleave et al., 2013) does not support such a hypothesis.

  8.

    Since we conduct eight different tests here, we may encounter false positives due to multiple testing. If we Bonferroni-correct our p-values, we obtain a critical p-value of \(0.05/8=0.00625\), and so only TG1 would remain significant. The Bonferroni method controls for the family-wise error rate and is known to be rather conservative. Following Benjamini and Hochberg (1995), we can alternatively control for the false discovery rate. This suggests a critical p-value of \(0.056\cdot2/8 = 0.014\), on the basis of a significance level of 0.056, and we can maintain significance of our results for TG1 and TG2.
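
The two correction procedures can be sketched in a few lines. The p-values below are hypothetical placeholders (not the paper’s test statistics), and the sketch uses a significance level of 0.05 for illustration:

```python
# Bonferroni vs. Benjamini-Hochberg for m = 8 tests (hypothetical p-values).

def bonferroni_threshold(alpha, m):
    """Family-wise error rate control: compare every p-value to alpha / m."""
    return alpha / m

def bh_rejections(pvals, q):
    """Benjamini-Hochberg: sort p-values and reject all hypotheses up to the
    largest rank i with p_(i) <= i * q / m (false discovery rate control)."""
    m = len(pvals)
    ranked = sorted(pvals)
    k = 0
    for i, p in enumerate(ranked, start=1):
        if p <= i * q / m:
            k = i
    return ranked[:k]  # p-values of the rejected hypotheses

pvals = [0.004, 0.012, 0.09, 0.21, 0.34, 0.48, 0.62, 0.81]
print(bonferroni_threshold(0.05, 8))  # 0.00625, as in the note
print(bh_rejections(pvals, 0.05))     # [0.004, 0.012]: two rejections
```

Because the BH threshold grows with the rank of the p-value, it rejects the two smallest p-values here, whereas the fixed Bonferroni cutoff rejects only one — mirroring the pattern reported for TG1 and TG2.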

  9.

    This result does not change when we include minimum acceptable offers (MAOs) of one (which are also payoff maximizing) in the analysis.

  10.

    Choices in TG1 and TG2 are strongly correlated (\(\rho =0.51\), \(p<0.01\)). Blanco et al. (2014) suggest that a consensus effect may be driving this correlation: subjects who choose exploit in TG2 overestimate the share of people who exploit. In other words, the belief when making the TG1 choice is biased toward players’ own TG2 decision and the two choices are positively correlated. This would explain why, in TG1, experienced subjects choose optimally less often than inexperienced subjects.

  11.

    Following Holt and Laury (2002), we simply count the number of safe choices, also for subjects with non-monotone decisions.
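
The counting rule can be sketched as follows; the choice patterns are hypothetical illustrations, with 'S' denoting the safe and 'R' the risky lottery in each of the ten rows:

```python
# Holt-Laury risk measure as used in the note: the number of safe choices,
# counted for all subjects, including those with non-monotone patterns.

def safe_count(choices):
    """Number of safe choices across the ten decision rows."""
    return sum(1 for c in choices if c == 'S')

def is_monotone(choices):
    """Monotone subjects switch from safe to risky at most once."""
    switches = sum(1 for a, b in zip(choices, choices[1:]) if a != b)
    return switches <= 1 and (switches == 0 or choices[0] == 'S')

monotone     = list('SSSSSRRRRR')  # single switch: 5 safe choices
non_monotone = list('SSRSSRRRRR')  # switches back: still counted, 4 safe
print(safe_count(monotone), is_monotone(monotone))          # 5 True
print(safe_count(non_monotone), is_monotone(non_monotone))  # 4 False
```

Counting safe choices even for non-monotone subjects avoids dropping observations, at the cost of a noisier measure for those subjects.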

  12.

    The direction of causality is not obvious here: inexperienced subjects may be slow because they are inexperienced, or inexperienced subjects may be (or remain) inexperienced because they respond slowly.

  13.

    While we do not study treatments, the recruitment bias we find has implications when (in-)experienced subjects are unevenly distributed across treatments.

References

  1. Andersen, S., Ertaç, S., Gneezy, U., Hoffman, M., & List, J. A. (2011). Stakes matter in ultimatum games. The American Economic Review, 101(7), 3427–3439.

  2. Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1), 289–300.

  3. Blanco, M., Engelmann, D., & Normann, H. T. (2011). A within-subject analysis of other-regarding preferences. Games and Economic Behavior, 72(2), 321–338.

  4. Blanco, M., Engelmann, D., Koch, A. K., & Normann, H. T. (2014). Preferences and beliefs in a sequential social dilemma: a within-subjects analysis. Games and Economic Behavior, 87, 122–135.

  5. Bolton, G. E., Katok, E., & Ockenfels, A. (2004). How effective are electronic reputation mechanisms? An experimental investigation. Management Science, 50(11), 1587–1602.

  6. Brandts, J., & Charness, G. (2011). The strategy versus the direct-response method: a first survey of experimental comparisons. Experimental Economics, 14(3), 375–398.

  7. Burks, S. V., Carpenter, J. P., & Verhoogen, E. (2003). Playing both roles in the trust game. Journal of Economic Behavior & Organization, 51(2), 195–216.

  8. Capra, C. M., Goeree, J. K., Gomez, R., & Holt, C. A. (1999). Anomalous behavior in a traveler’s dilemma? American Economic Review, 89(3), 678–690.

  9. Casari, M., & Cason, T. N. (2009). The strategy method lowers measured trustworthy behavior. Economics Letters, 103(3), 157–159.

  10. Casari, M., Ham, J. C., & Kagel, J. H. (2007). Selection bias, demographic effects, and ability effects in common value auction experiments. American Economic Review, 97(4), 1278–1304.

  11. Cleave, B. L., Nikiforakis, N., & Slonim, R. (2013). Is there selection bias in laboratory experiments? the case of social and risk preferences. Experimental Economics, 16(3), 372–382.

  12. Eckel, C. C., & Grossman, P. J. (2000). Volunteers and pseudo-volunteers: The effect of recruitment method in dictator experiments. Experimental Economics, 3(2), 107–120.

  13. Falk, A., Meier, S., & Zehnder, C. (2013). Do lab experiments misrepresent social preferences? the case of self-selected student samples. Journal of the European Economic Association, 11(4), 839–852.

  14. Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. Quarterly Journal of Economics, 114(3), 817–868.

  15. Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178.

  16. Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in disguise: An experimental study on cheating. Journal of the European Economic Association, 11(3), 525–547.

  17. Garbarino, E., Slonim, R., & Villeval, M. C. (2016). Loss aversion and lying behavior: theory, estimation and empirical evidence. Mimeo.

  18. Greiner, B. (2015). Subject pool recruitment procedures: organizing experiments with ORSEE. Journal of the Economic Science Association, 1(1), 114–125.

  19. Grosskopf, B., & Nagel, R. (2008). The two-person beauty contest. Games and Economic Behavior, 62(1), 93–99.

  20. Guillén, P., & Veszteg, R. F. (2012). On “lab rats”. Journal of Socio-Economics, 41(5), 714–720.

  21. Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization, 3(4), 367–388.

  22. Holt, C. A., & Laury, S. K. (2002). Risk aversion and incentive effects. American Economic Review, 92(5), 1644–1655.

  23. Johnson, N. D., & Mislin, A. A. (2011). Trust games: a meta-analysis. Journal of Economic Psychology, 32(5), 865–889.

  24. Matthey, A., & Regner, T. (2013). On the independence of history: experience spill-overs between experiments. Theory and Decision, 75(3), 403–419.

  25. Nagel, R. (1995). Unraveling in guessing games: an experimental study. American Economic Review, 85(5), 1313–1326.

  26. Oosterbeek, H., Sloof, R., & Van De Kuilen, G. (2004). Cultural differences in ultimatum game experiments: evidence from a meta-analysis. Experimental Economics, 7(2), 171–188.

  27. Oxoby, R. J., & McLeish, K. N. (2004). Sequential decision and strategy vector methods in ultimatum bargaining: evidence on the strength of other-regarding behavior. Economics Letters, 84(3), 399–405.

  28. Slonim, R., Wang, C., Garbarino, E., & Merrett, D. (2013). Opting-in: participation bias in economic experiments. Journal of Economic Behavior & Organization, 90, 43–70.

Acknowledgements

We are grateful to our editor, Bob Slonim, and two anonymous referees for helpful comments. Comments by Tim Cason and Hannah Schildberg-Hörisch also helped improve the paper. Thanks also to Brit Grosskopf and Rosemarie Nagel for sharing their data with us.

Author information

Corresponding author

Correspondence to Hans-Theo Normann.

Electronic supplementary material

Supplementary material 1 (PDF 303 kb)

About this article

Cite this article

Benndorf, V., Moellers, C., & Normann, H. Experienced vs. inexperienced participants in the lab: do they behave differently? J Econ Sci Assoc 3, 12–25 (2017). https://doi.org/10.1007/s40881-017-0036-z

Keywords

  • Dilemma
  • Experienced subjects
  • Laboratory methods
  • Trust game

JEL Classification

  • C90
  • C70
  • C72