Abstract
While both simultaneous and sequential contests are mechanisms widely used in practice, for example in crowdsourcing, job interviews, and sporting competitions, few studies have directly compared their performance. Modeling contests as incomplete-information all-pay auctions with linear costs, we show analytically and experimentally that the expected maximum effort is higher in simultaneous contests, in which contestants choose their effort levels independently and simultaneously, than in sequential contests, in which late entrants make their effort choices after observing all prior participants’ choices. Our experimental results also show that efficiency is higher in simultaneous contests than in sequential ones, and that sequential contests’ efficiency drops significantly as the number of contestants increases. We also find that when participants’ ability follows a power distribution, high-ability players facing multiple opponents in simultaneous contests tend to under-exert effort relative to theoretical predictions. We explain this observation with a simple model of overconfidence.
Notes
We assume that each participant’s effort unambiguously determines the quality of the submission, and that the quality is objectively quantifiable. Therefore, we use the words “bid,” “quality,” and “effort” interchangeably throughout the paper. We also use “participant,” “contestant,” and “player” as synonyms.
This assumption enables the existence of an explicit BNE in sequential contests (Segev and Sela 2014).
When there is a tie, the winner is randomly selected.
As a special case of ours (\(c=1\)), Chawla et al. (2012) show that with a uniform distribution, the EME monotonically increases in n.
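To illustrate this monotonicity: in the standard symmetric benchmark with valuations drawn i.i.d. from \(U(0,1)\) (a simplification of our setup), the BNE bid function is \(b(v)=\frac{n-1}{n}v^{n}\), so the EME equals \(\frac{n-1}{n}E[V_{(n)}^{n}]=\frac{n-1}{2n}\), which increases in \(n\). A minimal Monte Carlo sketch of this benchmark (the function name is ours, purely illustrative):

```python
import random

def eme_uniform(n, draws=200_000, seed=7):
    """Monte Carlo estimate of the expected maximum effort (EME) in a
    symmetric all-pay auction with n bidders and valuations i.i.d. U(0,1).
    Standard BNE bid function: b(v) = ((n - 1) / n) * v**n; since b is
    increasing, the maximum bid equals b applied to the maximum valuation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        vmax = max(rng.random() for _ in range(n))
        total += (n - 1) / n * vmax ** n
    return total / draws

for n in (2, 3, 4):
    # Analytic EME is (n - 1) / (2 * n): 0.25, 0.333..., 0.375
    print(n, round(eme_uniform(n), 3), (n - 1) / (2 * n))
```

The simulated estimates track the closed form \((n-1)/(2n)\) and rise monotonically in \(n\), consistent with Chawla et al. (2012).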
It is worth noting that with more general distribution functions, the EME in simultaneous all-pay auctions is not necessarily monotonic (Moldovanu and Sela 2006).
Segev and Sela (2014) derive the EME for \(n = 2\) and 3 under the condition that each participant has a different c, although they do not provide a generalized explicit solution for n players.
Two SIM2 sessions (simultaneous contests with two players) did not recruit the planned number of subjects, so we ran one session with 10 subjects and another with 8. The observed behaviors in these two sessions were not statistically significantly different from those in other sessions, so we pooled them with the other sessions of the same treatment.
Four decimal places were kept for the display of ability factors, i.e., \(a_{i},\) and for subjects’ input of their effort levels, i.e., \(q_{i}.\)
We chose two tokens as the maximum earning because it was far below the average per-round earning in our auctions (146.67 tokens), so as to avoid the hedging biases discussed by Blanco et al. (2010).
In some literature this measure is simply called “efficiency” (e.g., Noussair and Silver 2006). Here, we call it value efficiency to differentiate it from the first measure.
Seven subjects went bankrupt during the experiment, and we decided to pay them a $10 flat fee to compensate them for their time, although such a payment was not pre-announced.
To benchmark our experimental contests’ performance against their BNE predictions, we report two-sided p-values from one-sample signed rank tests. In both simultaneous and sequential contests, the aggregate maximum effort does not differ significantly from BNE predictions (\(p>0.1,\) two-sided tests).
The comparison remains insignificant for the contests with two players (SIM2 vs. SEQ2: 5.80 vs. 3.12, \(p=0.149,\) two-sided rank-sum test).
To benchmark the efficiency measures with their theoretical predictions (Table 2), we report results using one-sample signed rank tests. Overall, simultaneous contests’ efficiency levels are lower than expected, i.e., achieving a proportion of efficient allocations lower than 100% (SIM2: 78%, \(p=0.068;\) SIM3: 74%, \(p=0.068,\) two-sided tests). SEQ2 contests’ allocative efficiency levels are very close to their theoretical predictions, i.e., reaching a proportion of efficient allocations at 75% (vs. 77%, \(p=0.26,\) two-sided test). The SEQ3 contests, however, are less efficient than predicted, i.e., achieving a proportion of efficient allocations of 56% (vs. 65%, \(p=0.07,\) two-sided test).
Because z-Tree requires specifying a maximum number of decimal places for any numerical input, subjects’ efforts were restricted to four decimal places in our experiment. Therefore, we use 0.0001 as the increment of best responses: when a player tries to match the existing highest effort, she should simply exert an effort equal to the existing highest effort plus 0.0001. To account for subjects’ imprecision in entering their efforts, we re-checked our results using increments of 0.001, 0.01, and 0.1, and found no significant changes.
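The matching rule above can be sketched as a simple decision function (a hypothetical helper of our own, not the actual z-Tree code; `value` and `marginal_cost` stand in for the prize value and linear cost parameter): the last mover tops the standing highest effort by the increment if winning at that effort is profitable, and otherwise exits with zero effort.

```python
def match_or_exit(value, marginal_cost, highest, eps=0.0001):
    """Hypothetical sketch of the best-response rule described above:
    top the standing highest effort by eps if winning at that effort
    yields a positive payoff under linear costs; otherwise exit with 0."""
    bid = highest + eps
    if value - marginal_cost * bid > 0:
        return bid
    return 0.0

# Example: with a prize worth 1 and unit marginal cost, a standing
# highest effort of 0.5 is topped, while 1.2 triggers exit.
print(match_or_exit(1.0, 1.0, 0.5), match_or_exit(1.0, 1.0, 1.2))
```

Varying `eps` over 0.001, 0.01, and 0.1 changes only the size of the top-up, not whether the player matches or exits, which is why the robustness check above leaves the results unchanged.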
We also ran probit regressions where the dependent variable was whether overbidding occurred, and the results were qualitatively consistent.
Since the standard errors are clustered at the session level, they could be biased due to the small number of clusters in our data. As a robustness check, we ran regression analyses with a wild cluster bootstrap procedure to correct for the small number of clusters (Cameron et al. 2008) and found that the results remained the same.
Following Muller and Schotter (2010), we used a switching regression model to examine whether the individuals’ effort function was continuous. The switching regression model fit the data significantly better than BNE predictions based on the sum of squared deviations (SSDs) measure (SIM2: 21.37 vs. 549.83, \(p < 0.01;\) SIM3: 25.04 vs. 695.77, \(p < 0.01,\) two-sided signed rank tests). Nevertheless, this switching regression model could not explain why the effort levels of high ability contestants in SIM3 were lower than BNE predictions.
Under the assumption of loss-aversion, Mermer (2013) analytically shows that high ability contestants over-exert effort while low ability contestants under-exert effort.
The power distribution used in our experiment was described by its quantiles in the experimental instructions. See Appendix F in the online supplementary material for details.
Consistently, results from Table 8 show that the BLF model does not fit the data significantly better than BNE.
References
Alicke, M. D., Klotz, M. L., Breitenbecher, D. L., Yurak, T. J., & Vredenburg, D. S. (1995). Personal contact, individuation, and the better-than-average effect. Journal of Personality and Social Psychology, 68(5), 804–825.
Altmann, S., Falk, A., & Wibral, M. (2012). Promotions and incentives: The case of multistage elimination tournaments. Journal of Labor Economics, 30(1), 149–174.
Amegashie, J. A., Cadsby, C. B., & Song, Y. (2007). Competitive burnout: Theory and experimental evidence. Games and Economic Behavior, 59(2), 213–239.
Anderson, S. P., Goeree, J. K., & Holt, C. A. (1998). Rent seeking with bounded rationality: An analysis of the all-pay auction. Journal of Political Economy, 106(4), 828–853.
Archak, N., & Sundararajan, A. (2009). Attracting the best and the brightest: Asymptotics and optimal prize structure of all-pay contests with heterogeneous participants. Working paper.
Barut, Y., Kovenock, D., & Noussair, C. (2002). A comparison of multiple-unit all-pay and winner-pay auctions under incomplete information. International Economic Review, 43(3), 675–708.
Baye, M. R., Kovenock, D., & de Vries, C. G. (2005). Comparative analysis of litigation systems: An auction-theoretic approach. The Economic Journal, 115(505), 583–601.
Benz, M., & Meier, S. (2008). Do people behave in experiments as in the field? Evidence from donations. Experimental Economics, 11(3), 268–281.
Blanco, M., Engelmann, D., Koch, A. K., & Normann, H. T. (2010). Belief elicitation in experiments: Is there a hedging problem? Experimental Economics, 13(4), 412–438.
Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2008). Bootstrap-based improvements for inference with clustered errors. The Review of Economics and Statistics, 90(3), 414–427.
Carpenter, J., Holmes, J., & Matthews, P. H. (2008). Charity auctions: A field experiment. The Economic Journal, 118(525), 92–113.
Charness, G., Gneezy, U., & Imas, A. (2013). Experimental methods: Eliciting risk preferences. Journal of Economic Behavior and Organization, 87, 43–51.
Chawla, S., & Hartline, J. D. (2013). Auctions with unique equilibria. In Proceedings of the fourteenth ACM conference on electronic commerce, 2013 (pp. 181–196).
Chawla, S., Hartline, J. D., & Sivan, B. (2012). Optimal crowdsourcing contests. In Proceedings of the twenty-third annual ACM–SIAM symposium on discrete algorithms, 2012 (pp. 856–868).
Che, Y.-K., & Gale, I. (2003). Optimal design of research contests. American Economic Review, 93(3), 646–671.
Cooper, D., & Fang, H. (2008). Understanding overbidding in second price auctions: An experimental study. The Economic Journal, 118(532), 1572–1595.
Dasgupta, P. (1986). The theory of technological competition. In J. E. Stiglitz & F. Mathewson (Eds.), New developments in the analysis of market structures (pp. 519–548). London: Macmillan.
Davis, D. D., & Reilly, R. J. (1998). Do too many cooks spoil the stew? An experimental analysis of rent-seeking and the role of a strategic buyer. Public Choice, 95(1), 89–115.
Dechenaux, E., Kovenock, D., & Sheremeta, R. M. (2015). A survey of experimental research on contests, all-pay auctions and tournaments. Experimental Economics, 18(4), 609–669.
Dixit, A. (1987). Strategic behavior in contests. The American Economic Review, 77(5), 891–898.
Eckel, C. C., & Grossman, P. J. (2002). Sex differences and statistical stereotyping in attitudes toward financial risk. Evolution and Human Behavior, 23(4), 281–295.
Fibich, G., Gavious, A., & Sela, A. (2006). All-pay auctions with risk-averse players. International Journal of Game Theory, 34(4), 583–599.
Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178.
Fu, Q. (2006). Endogenous timing of contest with asymmetric information. Public Choice, 129(1), 1–23.
Fu, Q., & Lu, J. (2012). The optimal multi-stage contest. Economic Theory, 51(2), 351–382.
Gneezy, U., & Smorodinsky, R. (2006). All-pay auction: An experimental study. Journal of Economic Behavior and Organization, 61(2), 255–275.
Gradstein, M., & Konrad, K. A. (1999). Orchestrating rent seeking contests. The Economic Journal, 109(458), 536–545.
Grosskopf, B., Rentschler, L., & Sarin, R. (2010). Asymmetric information in contests: Theory and experiments. Working paper.
Harris, C., & Vickers, J. (1985). Perfect equilibrium in a model of a race. Review of Economic Studies, 52(2), 193–209.
Harris, C., & Vickers, J. (1987). Racing with uncertainty. Review of Economic Studies, 54(1), 1–21.
Harrison, G. W., & List, J. A. (2004). Field experiments. Journal of Economic Literature, 42(4), 1009–1055.
Hillman, A. L., & Riley, J. G. (1989). Politically contestable rents and transfers. Economics and Politics, 1(1), 17–40.
Holt, C. A., & Laury, S. K. (2002). Risk aversion and incentive effects. American Economic Review, 92(5), 1644–1655.
Irfanoglu, Z., Mago, S., & Sheremeta, R. M. (2015). New Hampshire effect: Behavior in sequential and simultaneous election contests. MPRA paper, October 2015 (pp. 1–36).
Jeppesen, L. B., & Lakhani, K. R. (2010). Marginality and problem-solving effectiveness in broadcast search. Organization Science, 21(5), 1016–1033.
Klumpp, T., & Polborn, M. K. (2006). Primaries and the New Hampshire effect. Journal of Public Economics, 90(6–7), 1073–1114.
Konrad, K. A. (2009). Strategy and dynamics in contests. New York: Oxford University Press.
Konrad, K. A., & Leininger, W. (2007). The generalized Stackelberg equilibrium of the all-pay auction with complete information. Review of Economic Design, 11(2), 165–174.
Krishna, V., & Morgan, J. (1997). An analysis of the war of attrition and the all-pay auction. Journal of Economic Theory, 72(2), 343–362.
Lazear, E. P., & Rosen, S. (1981). Rank-order tournaments as optimum labor contracts. The Journal of Political Economy, 89(5), 841–864.
Leininger, W. (1993). More efficient rent-seeking—A Munchhausen solution. Public Choice, 75(1), 43–62.
Liu, T. X. (2015). All-pay auctions with endogenous bid timing: An experimental study. Working paper.
Liu, T. X., Yang, J., Adamic, L. A., & Chen, Y. (2014). Crowdsourcing with all-pay auctions: A field experiment on Taskcn. Management Science, 60(8), 2020–2037.
Mago, S. D., Sheremeta, R. M., & Yates, A. (2013). Best-of-three contest experiments: Strategic versus psychological momentum. International Journal of Industrial Organization, 31(3), 287–296.
Masiliunas, A., Mengel, F., & Reiss, J. P. (2014). Behavioral variation in Tullock contests. Working paper.
Mermer, A. G. (2013). Contests with expectation-based loss-averse players. Working paper.
Moldovanu, B., & Sela, A. (2006). Contest architecture. Journal of Economic Theory, 126(1), 70–96.
Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115(2), 502–517.
Morgan, J. (2003). Sequential contests. Public Choice, 116(1), 1–18.
Muller, W., & Schotter, A. (2010). Workaholics and dropouts in organizations. Journal of the European Economic Association, 8(4), 717–743.
Myerson, R. B. (1981). Optimal auction design. Mathematics of Operations Research, 6(1), 58–73.
Nalebuff, B. J., & Stiglitz, J. E. (1983). Prizes and incentives: Towards a general theory of compensation and competition. The Bell Journal of Economics, 14(1), 21–43.
Noussair, C., & Silver, J. (2006). Behavior in all-pay auctions with incomplete information. Games and Economic Behavior, 55(1), 189–206.
Nyarko, Y., & Schotter, A. (2002). An experimental study of belief learning using elicited beliefs. Econometrica, 70(3), 971–1005.
Plott, C., & Smith, V. (1978). An experimental examination of two exchange institutions. Review of Economic Studies, 45(1), 133–153.
Potters, J., de Vries, C. G., & van Winden, F. (1998). An experimental examination of rational rent-seeking. European Journal of Political Economy, 14(4), 783–800.
Rosen, S. (1986). Prizes and incentives in elimination tournaments. American Economic Review, 76(4), 701–715.
Segev, E., & Sela, A. (2014). Multi-stage sequential all-pay auctions. European Economic Review, 70, 371–382.
Terwiesch, C., & Xu, Y. (2008). Innovation contests, open innovation, and multiagent problem solving. Management Science, 54(9), 1529–1543.
Tullock, G. (1980). Efficient rent seeking. In J. M. Buchanan, R. D. Tollison, & G. Tullock (Eds.), Toward a theory of the rent-seeking society (pp. 97–112). College Station, TX: Texas A&M University Press.
Weber, R. J. (1985). Auctions and competitive bidding. In H. Peyton Young (Ed.), Fair allocation, American Mathematical Society proceedings of symposia in applied mathematics (Vol. 33, pp. 143–170). Providence, RI: American Mathematical Society.
Zhang, J., & Wang, R. Q. (2009). The role of information revelation in elimination contests. Economic Journal, 119(536), 613–641.
Acknowledgements
We thank Juan Carrillo and Isabelle Brocas for providing us with access to the Los Angeles Behavioral Economics Laboratory (LABEL). We would also like to thank Yan Chen, Jeffrey Mackie-Mason, Lijia Wei, and seminar participants at the 2013 North American ESA Meetings, the 2013 Annual Xiamen University International Workshop on Experimental Economics, and the 2014 Asian-Pacific ESA Meetings for helpful discussions and comments, and Chao Tang for excellent research assistance. We thank the Editor, David Cooper, and two anonymous referees for their constructive comments and suggestions. Financial support from Tsinghua University and the National Natural Science Foundation of China (NSFC) under Grants 71403140 and 71432004 is gratefully acknowledged. Lian Jian gratefully acknowledges financial support from the APOC Program at the Annenberg School of Communication, University of Southern California.
Electronic supplementary material
Jian, L., Li, Z. & Liu, T.X. Simultaneous versus sequential all-pay auctions: an experimental study. Exp Econ 20, 648–669 (2017). https://doi.org/10.1007/s10683-016-9504-1