Abstract
Crowdsourcing applications frequently employ many individual workers, each performing a small amount of work. In such settings, individually determining the reward for each assignment and worker may seem economically beneficial, but is infeasible to perform manually. We thus consider the problem of designing automated agents for reward determination and negotiation in such settings. We formally describe this problem and show that it is NP-hard. We therefore present two automated agents for the problem, based on two different models of human behavior. The first, the Reservation Price Based Agent (RPBA), is based on the concept of a reservation price (RP); the second, the No Bargaining Agent (NBA), attempts to avoid negotiation altogether. The performance of the agents is tested in extensive experiments with real human subjects, where both NBA and RPBA outperform strategies developed by human experts.
Notes
In most cases the assignments are not completely identical, but they are sufficiently similar so that the associated reward is identical (e.g. checking if a given link is stale).
We consider only types where \(d_t\) can be represented by a description which is polynomial in the size of the game.
For a comparison between AMT and other recruitment methods see [42].
References
Howe, J. (2006). The rise of crowdsourcing. Wired Magazine, 14(6), 1–4.
Adda, G., Sagot, B., Fort, K., Mariani, J., et al. (2011). Crowdsourcing for language resource development: Critical analysis of Amazon Mechanical Turk overpowering use. In Proceedings of the 5th Language and Technology Conference (LTC 2011). Poznan, Poland.
Rubinstein, A. (1982). Perfect equilibrium in a bargaining model. Econometrica, 50(1), 97–109. doi:10.2307/1912531.
Raiffa, H. (1982). The art and science of negotiation. Cambridge, MA: Belknap Press of Harvard University Press.
Stahl, I. (1972). Bargaining theory (pp. 153–157). Stockholm: Stockholm School of Economics.
Di Giunta, F., & Gatti, N. (2006). Bargaining over multiple issues in finite horizon alternating-offers protocol. Annals of Mathematics and Artificial Intelligence, 47, 251–271.
Lin, R., Kraus, S., Wilkenfeld, J., & Barry, J. (2008). Negotiating with bounded rational agents in environments with incomplete information using an automated agent. Artificial Intelligence, 172(6–7), 823–851.
Ramchurn, S. D., Sierra, C., Godo, L., & Jennings, N. R. (2007). Negotiating using rewards. Artificial Intelligence, 171, 805–837.
Davis, R., & Smith, R. G. (1983). Negotiation as a metaphor for distributed problem solving. Artificial intelligence, 20(1), 63–109.
Zlotkin, G., & Rosenschein, J. S. (1996). Mechanism design for automated negotiation, and its application to task oriented domains. Artificial Intelligence, 86(2), 195–244.
Sarne, D., & Kraus, S. (2005). Solving the auction-based task allocation problem in an open environment. In Proceedings of AAAI, Boston, MA (pp. 164–169).
Chevaleyre, Y., Dunne, P. E., Endriss, U., Lang, J., Lemaître, M., Maudet, N., et al. (2006). Issues in multiagent resource allocation. Informatica (Slovenia), 30(1), 3–31.
Airiau, S., & Endriss, U. (2010). Multiagent resource allocation with sharable items: Simple protocols and Nash equilibria. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-2010), Toronto, Canada (pp. 167–174).
Ferguson, T. S. (1989). Who solved the secretary problem? Statistical Science, 4(3), 282–289.
Sandholm, T. W., & Gilpin, A. (2006). Sequences of take-it-or-leave-it offers: Near-optimal auctions without full valuation revelation. In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 1127–1134).
Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. Journal of Economic Behavior and Organization, 3, 367–388.
Cameron, L. A. (1999). Raising the stakes in the ultimatum game: Experimental evidence from Indonesia. Economic Inquiry, 37(1), 47–59.
Katz, R., & Kraus, S. (2006). Efficient agents for cliff-edge environments with a large set of decision options. In AAMAS (pp. 697–704).
Gneezy, U., Haruvy, E., & Roth, A. (2003). Bargaining under a deadline: Evidence from the reverse ultimatum game. Games and Economic Behavior, 45, 347–368.
Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686–688.
Galuscak, K., Keeney, M., Nicolitsas, D., Smets, F. R., Strzelecki, P., & Vodopivec, M. (2010). The determination of wages of newly hired employees: Survey evidence on internal versus external factors. Working Paper Series 1153, European Central Bank.
Walque, G. D., Pierrard, O., Sneessens, H., & Wouters, R. (2009). Sequential bargaining in a New Keynesian model with frictional unemployment and staggered wage negotiation. Working Paper Research 157, National Bank of Belgium.
Horton, J. J., & Zeckhauser, R. J. (2010). Algorithmic wage negotiations: Applications to paid crowdsourcing. In Proceedings of CrowdConf, San Francisco, CA.
Amazon Mechanical Turk services. http://www.mturk.com/ (2010).
Faridani, S., Hartmann, B., & Ipeirotis, P. (2011). What’s the right price? Pricing tasks for finishing on time. In Proceedings of the AAAI Workshop on Human Computation, Toronto, Canada.
Riley, J., & Zeckhauser, R. (1983). When to haggle, when to hold firm. The Quarterly Journal of Economics, 98(2), 267–289.
Azaria, A., Aumann, Y., & Kraus, S. (2012). Automated strategies for determining rewards for human work. In AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press.
Overstreet, H. (1925). Influencing human behavior. New York: The People’s Institute Publishing.
Carnegie, D. (1964). How to win friends and influence people. New York: Simon and Schuster.
Ariely, D., Loewenstein, G., & Prelec, D. (2006). Coherent arbitrariness: Stable demand curves without stable preferences. Journal of Business Research, 59(10–11), 1053–1062.
Karp, R. M. (1972). Reducibility among combinatorial problems. In Complexity of computer computations (pp. 85–103). New York: Plenum Press.
Weitzman, M. L. (1979). Optimal search for the best alternative. Econometrica, 47(3), 641–654.
Vickrey, W. (1961). Counterspeculation, and competitive sealed tenders. Journal of Finance, 16(1), 8–37.
Becker, G. M., DeGroot, M. H., & Marschak, J. (1964). Measuring utility by a single-response sequential method. Behavioral Sciences, 9(3), 226–232.
Noussair, C., Robin, S., & Ruffieux, B. (2004). Revealing consumers’ willingness-to-pay: A comparison of the BDM mechanism and the Vickrey auction. Journal of Economic Psychology, 25, 725–741.
Byers, S., & Raftery, A. (1998). Nearest-neighbor clutter removal for estimating features in spatial point processes. Journal of the American Statistical Association, 93, 577–584.
Fraley, C., & Raftery, A. E. (2002). Model-based clustering, discriminant analysis and density estimation. Journal of the American Statistical Association, 97, 611–631.
Fraley, C., & Raftery, A. E. (1998). How many clusters? Which clustering method? Answers via model-based cluster analysis. The Computer Journal, 41(8), 578–588.
Peled, N., Gal, Y., & Kraus, S. (2011). A study of computational and human strategies in revelation games. In AAMAS-11 (pp. 345–352).
Gal, Y., & Pfeffer, A. (2007). Modeling reciprocity in human bilateral negotiation. In Proceedings of AAAI (pp. 815–820). Menlo Park, CA: AAAI Press.
Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411–419.
Arkes, H. R., & Blumer, C. (1985). The psychology of sunk cost. Organizational Behavior and Human Decision Processes, 35(1), 124–140.
Kachelmeier, S. J., & Shehata, M. (1992). Examining risk preferences under high monetary incentives: Experimental evidence from the People’s Republic of China. American Economic Review, 82(5), 1120–1141.
Acknowledgments
This paper has evolved from a paper presented at the AAAI-2012 conference [27]. We thank Avi Rosenfeld and Shira Abuhatzera for their helpful comments. This work is supported in part by the following grants: ERC grant #267523, MURI grant #W911NF-08-1-0144, ARO grants W911NF0910206 and W911NF1110344, and ISF grant #1401/09.
Cite this article
Azaria, A., Aumann, Y. & Kraus, S. Automated agents for reward determination for human work in crowdsourcing applications. Auton Agent Multi-Agent Syst 28, 934–955 (2014). https://doi.org/10.1007/s10458-013-9244-y