Amazon Mechanical Turk: A Research Tool for Organizations and Information Systems Scholars

  • Kevin Crowston
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 389)

Abstract

Amazon Mechanical Turk (AMT), a system for crowdsourcing work, has been used in many academic fields to support research and could be similarly useful for information systems research. This paper briefly describes the functioning of the AMT system and presents a simple typology of research data collected using AMT. For each kind of data, it discusses potential threats to reliability and validity and possible ways to address those threats. The paper concludes with a brief discussion of possible applications of AMT to research on organizations and information systems.
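To give a rough sense of what "crowdsourcing work" on AMT looks like from the requester's side, the sketch below posts a single human intelligence task (HIT) programmatically. This is a minimal, hypothetical illustration using the current boto3 MTurk client (which post-dates this paper); the task title, reward, and survey URL are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (assumption: boto3 MTurk client; all values are illustrative).
    # A requester posts a HIT that workers can accept, complete, and be paid for.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # An ExternalQuestion sends the worker to a page hosted by the requester,
    # e.g. an online survey (the URL below is a placeholder).
    question_xml = """
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.org/survey</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>
    """

    hit = mturk.create_hit(
        Title="Short research survey (about 5 minutes)",
        Description="Answer a brief questionnaire for an academic study.",
        Keywords="survey, research",
        Reward="0.50",                     # payment per assignment, in USD
        MaxAssignments=100,                # number of distinct workers sought
        LifetimeInSeconds=7 * 24 * 3600,   # how long the HIT stays listed
        AssignmentDurationInSeconds=1800,  # time a worker has after accepting
        Question=question_xml,
    )
    print("HIT posted:", hit["HIT"]["HITId"])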

Keywords

Amazon Mechanical Turk, crowd sourcing, research methods

Copyright information

© IFIP International Federation for Information Processing 2012

Authors and Affiliations

  • Kevin Crowston
  1. Syracuse University School of Information Studies, Syracuse, USA
