
Automated agents for reward determination for human work in crowdsourcing applications

Published in: Autonomous Agents and Multi-Agent Systems

Abstract

Crowdsourcing applications frequently employ many individual workers, each performing a small amount of work. In such settings, individually determining the reward for each assignment and worker may seem economically beneficial, but is impractical to do manually. We therefore consider the problem of designing automated agents for automatic reward determination and negotiation in such settings. We formally describe this problem and show that it is NP-hard. We then present two automated agents for the problem, based on two different models of human behavior: the Reservation Price Based Agent (RPBA), built on the concept of a reservation price (RP), and the No Bargaining Agent (NBA), which tries to avoid negotiation altogether. The performance of the agents was tested in extensive experiments with real human subjects, in which both NBA and RPBA outperformed strategies developed by human experts.
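To make the reservation-price idea concrete, here is a minimal sketch (not the paper's RPBA algorithm) of how an employer might pick a single take-it-or-leave-it reward when worker reservation prices are assumed, purely for illustration, to be uniformly distributed on [0, 1]: a worker accepts a reward r exactly when r is at least her reservation price, so the employer maximizes acceptance probability times net value.

```python
# Illustrative sketch only, NOT the paper's RPBA: choosing a single
# take-it-or-leave-it reward under an assumed Uniform(0, 1)
# distribution of worker reservation prices.

def expected_profit(reward, task_value):
    """Employer's expected profit: P(accept) * (task_value - reward).

    Under Uniform(0, 1) reservation prices, P(accept) = F(reward) = reward
    (clamped to [0, 1]).
    """
    accept_prob = min(max(reward, 0.0), 1.0)
    return accept_prob * (task_value - reward)

def best_offer(task_value, grid=1000):
    """Grid-search the reward that maximizes expected profit."""
    candidates = (i / grid for i in range(grid + 1))
    return max(candidates, key=lambda r: expected_profit(r, task_value))

# Maximizing r * (v - r) analytically gives r* = v / 2; the grid
# search agrees for v = 1:
print(best_offer(1.0))  # → 0.5
```

The actual RPBA described in the paper learns from and negotiates with workers; this sketch only shows why a reservation-price distribution turns reward setting into an optimization problem.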


Notes

  1. In most cases the assignments are not completely identical, but they are sufficiently similar so that the associated reward is identical (e.g. checking if a given link is stale).

  2. We consider only types where \(d_t\) can be represented by a description which is polynomial in the size of the game.

  3. For a comparison between AMT and other recruitment methods see [42].

References

  1. Howe, J. (2006). The rise of crowdsourcing. Wired Magazine, 14(6), 1–4.

  2. Adda, G., Sagot, B., Fort, K., Mariani, J., et al. (2011). Crowdsourcing for language resource development: Critical analysis of Amazon Mechanical Turk overpowering use. In Proceedings of the 5th Language and Technology Conference (LTC 2011), Poznan, Poland.

  3. Rubinstein, A. (1982). Perfect equilibrium in a bargaining model. Econometrica, 50(1), 97–109. doi:10.2307/1912531.

  4. Raiffa, H. (1982). The art and science of negotiation. Cambridge, MA: Belknap Press of Harvard University Press.

  5. Ståhl, I. (1972). Bargaining theory (pp. 153–157). Stockholm: Stockholm School of Economics.

  6. Di Giunta, F., & Gatti, N. (2006). Bargaining over multiple issues in finite horizon alternating-offers protocol. Annals of Mathematics and Artificial Intelligence, 47, 251–271.

  7. Lin, R., Kraus, S., Wilkenfeld, J., & Barry, J. (2008). Negotiating with bounded rational agents in environments with incomplete information using an automated agent. Artificial Intelligence, 172(6–7), 823–851.

  8. Ramchurn, S. D., Sierra, C., Godo, L., & Jennings, N. R. (2007). Negotiating using rewards. Artificial Intelligence, 171, 805–837.

  9. Davis, R., & Smith, R. G. (1983). Negotiation as a metaphor for distributed problem solving. Artificial intelligence, 20(1), 63–109.

  10. Zlotkin, G., & Rosenschein, J. S. (1996). Mechanism design for automated negotiation, and its application to task oriented domains. Artificial Intelligence, 86(2), 195–244.

  11. Sarne, D., & Kraus, S. (2005). Solving the auction-based task allocation problem in an open environment. In Proceedings of AAAI, Boston, MA (pp. 164–169).

  12. Chevaleyre, Y., Dunne, P. E., Endriss, U., Lang, J., Lemaître, M., Maudet, N., et al. (2006). Issues in multiagent resource allocation. Informatica (Slovenia), 30(1), 3–31.

  13. Airiau, S., & Endriss, U. (2010). Multiagent resource allocation with sharable items: Simple protocols and nash equilibria. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-2010), Toronto, Canada (pp. 167–174).

  14. Ferguson, T. S. (1989). Who solved the secretary problem? Statistical Science, 4(3), 282–289.

  15. Sandholm, T. W., & Gilpin, A. (2006). Sequences of take-it-or-leave-it offers: Near-optimal auctions without full valuation revelation. In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS) (pp. 1127–1134).

  16. Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. Journal of Economic Behavior and Organization, 3(4), 367–388.

  17. Cameron, L. A. (1999). Raising the stakes in the ultimatum game: Experimental evidence from Indonesia. Economic Inquiry, 37(1), 47–59.

  18. Katz, R., & Kraus, S. (2006). Efficient agents for cliff-edge environments with a large set of decision options. In AAMAS (pp. 697–704).

  19. Gneezy, U., Haruvy, E., & Roth, A. (2003). Bargaining under a deadline: Evidence from the reverse ultimatum game. Games and Economic Behavior, 45, 347–368.

  20. Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686–688.

  21. Galuscak, K., Keeney, M., Nicolitsas, D., Smets, F. R., Strzelecki, P., & Vodopivec, M. (2010). The determination of wages of newly hired employees: Survey evidence on internal versus external factors. Working Paper Series 1153, European Central Bank.

  22. Walque, G. D., Pierrard, O., Sneessens, H., & Wouters, R. (2009). Sequential bargaining in a New Keynesian model with frictional unemployment and staggered wage negotiation. Working Paper Research 157, National Bank of Belgium.

  23. Horton, J. J., & Zeckhauser, R. J. (2010). Algorithmic wage negotiations: Applications to paid crowdsourcing. In Proceedings of CrowdConf, San Francisco, CA.

  24. Amazon. (2010). Mechanical Turk services. http://www.mturk.com/.

  25. Faridani, S., Hartmann, B., & Ipeirotis, P. (2011). What’s the right price? Pricing tasks for finishing on time. In Proceedings of the AAAI Workshop on Human Computation, Toronto, Canada.

  26. Riley, J., & Zeckhauser, R. (1983). When to haggle, when to hold firm. The Quarterly Journal of Economics, 98(2), 267–289.

  27. Azaria, A., Aumann, Y., & Kraus, S. (2012). Automated strategies for determining rewards for human work. In AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press.

  28. Overstreet, H. (1925). Influencing human behavior. New York: The People’s Institute Publishing.

  29. Carnegie, D. (1964). How to win friends and influence people. New York: Simon and Schuster.

  30. Ariely, D., Loewenstein, G., & Prelec, D. (2006). Coherent arbitrariness: Stable demand curves without stable preferences. Journal of Business Research, 59(10–11), 1053–1062.

  31. Karp, R. M. (1972). Reducibility among combinatorial problems. In R. E. Miller & J. W. Thatcher (Eds.), Complexity of computer computations (pp. 85–103). New York: Plenum Press.

  32. Weitzman, M. L. (1979). Optimal search for the best alternative. Econometrica, 47(3), 641–654.

  33. Vickrey, W. (1961). Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16(1), 8–37.

  34. Sarne, D., & Kraus, S. (2005). Solving the auction-based task allocation problem in an open environment, In AAAI (pp. 164–169). Menlo Park, CA: AAAI Press.

  35. Becker, G. M., DeGroot, M. H., & Marschak, J. (1964). Measuring utility by a single-response sequential method. Behavioral Sciences, 9(3), 226–232.

  36. Noussair, C., Robin, S., & Ruffieux, B. (2004). Revealing consumers’ willingness-to-pay: A comparison of the BDM mechanism and the Vickrey auction. Journal of Economic Psychology, 25, 725–741.

  37. Byers, S., & Raftery, A. (1998). Nearest-neighbor clutter removal for estimating features in spatial point processes. Journal of the American Statistical Association, 93, 577–584.

  38. Fraley, C., & Raftery, A. E. (2002). Model-based clustering, discriminant analysis and density estimation. Journal of the American Statistical Association, 97, 611–631.

  39. Fraley, C., & Raftery, A. E. (1998). How many clusters? Which clustering method? Answers via model-based cluster analysis. The Computer Journal, 41(8), 578–588.

  40. Peled, N., Gal, Y., & Kraus, S. (2011). A study of computational and human strategies in revelation games. In AAMAS-11 (pp. 345–352).

  41. Gal, Y., & Pfeffer, A. (2007). Modeling reciprocity in human bilateral negotiation, In Proceedings of AAAI (pp. 815–820). Menlo Park, CA: AAAI Press.

  42. Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411–419.

  43. Arkes, H. R., & Blumer, C. (1985). The psychology of sunk cost. Organizational Behavior and Human Decision Processes, 35(1), 124–140.

  44. Kachelmeier, S. J., & Shehata, M. (1992). Examining risk preferences under high monetary incentives: Experimental evidence from the People’s Republic of China. American Economic Review, 82(5), 1120–1141.


Acknowledgments

This paper has evolved from a paper presented at the AAAI-2012 conference [27]. We thank Avi Rosenfeld and Shira Abuhatzera for their helpful comments. This work is supported in part by the following grants: ERC grant #267523, MURI grant #W911NF-08-1-0144, ARO grants W911NF0910206 and W911NF1110344, and ISF grant #1401/09.

Author information

Correspondence to Amos Azaria.


Cite this article

Azaria, A., Aumann, Y. & Kraus, S. Automated agents for reward determination for human work in crowdsourcing applications. Auton Agent Multi-Agent Syst 28, 934–955 (2014). https://doi.org/10.1007/s10458-013-9244-y
