
Autonomous Agents and Multi-Agent Systems

Volume 33, Issue 1–2, pp 192–215

Strategic behavior and learning in all-pay auctions: an empirical study using crowdsourced data

  • Yoram Bachrach
  • Ian A. Kash
  • Peter Key
  • Joel Oren
Article

Abstract

We analyze human behavior in crowdsourcing contests using an all-pay auction model, in which all participants exert effort but only the highest bidder receives the reward. We let workers sourced from Amazon Mechanical Turk participate in an all-pay auction, and contrast the game-theoretic equilibrium with the choices of the human participants. We examine how people competing in the contest learn and adapt their bids, comparing their behavior to well-established online learning algorithms in a novel approach to quantifying the performance of humans as learners. For the crowdsourcing contest designer, our results show that a bimodal distribution of effort should be expected, with some very high effort and some very low effort, and that humans have a tendency to overbid. Our results suggest that humans are weak learners in this setting, so it may be important to educate participants about the strategic implications of crowdsourcing contests.
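The game-theoretic benchmark referred to above can be sketched numerically. In the two-player complete-information all-pay auction with prize value v, the mixed-strategy equilibrium has each player bid uniformly on [0, v], so both players earn zero expected payoff. The short Python simulation below (function and variable names are illustrative, not from the paper) is a minimal sketch under those assumptions, checking that equilibrium play yields roughly zero average payoff and an average bid of v/2:

```python
import random

def simulate_all_pay_auction(value=1.0, rounds=100_000, seed=0):
    """Monte Carlo simulation of a two-player complete-information
    all-pay auction in which both players follow the equilibrium
    mixed strategy: bid uniformly at random on [0, value]."""
    rng = random.Random(seed)
    total_payoff = 0.0
    total_bid = 0.0
    for _ in range(rounds):
        bid_1 = rng.uniform(0.0, value)  # player 1's equilibrium draw
        bid_2 = rng.uniform(0.0, value)  # player 2's equilibrium draw
        # All-pay rule: player 1 forfeits her bid regardless of the
        # outcome, and wins the prize only with the higher bid.
        total_payoff += (value if bid_1 > bid_2 else 0.0) - bid_1
        total_bid += bid_1
    return total_payoff / rounds, total_bid / rounds

if __name__ == "__main__":
    mean_payoff, mean_bid = simulate_all_pay_auction()
    print(f"mean payoff: {mean_payoff:+.4f}, mean bid: {mean_bid:.4f}")
```

In this benchmark, overbidding relative to the uniform distribution (a tendency the study attributes to human participants) would push the average payoff below zero.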

Keywords

All-pay auctions · Crowdsourcing contests · Learning


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  • Yoram Bachrach (DeepMind, London, UK)
  • Ian A. Kash (The University of Illinois at Chicago, Chicago, USA)
  • Peter Key (Cambridge, UK)
  • Joel Oren (Yahoo! Research, Haifa, Israel)
