Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8444)


Crowdsourcing is a promising solution to problems that are difficult for computers but relatively easy for humans. One of the biggest challenges in crowdsourcing is quality control, since high-quality results cannot be expected from crowdworkers, who are not necessarily capable or motivated. Several statistical quality-control methods for binary and multinomial crowdsourcing questions have been proposed. In this paper, we consider tasks where crowdworkers are asked to arrange multiple items in the correct order. We propose a probabilistic generative model of crowd answers that extends a distance-based order model to incorporate worker ability, along with an efficient estimation algorithm. Experiments on real crowdsourced datasets show the advantage of the proposed method over a baseline method.
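The paper's own model is not reproduced on this page, but the classical distance-based order model it builds on, the Mallows model, can be sketched briefly. In a Mallows model, the probability of an observed ordering decays exponentially with its Kendall tau distance from a reference (true) order, governed by a concentration parameter θ; a worker-ability extension of the kind described in the abstract would, roughly, give each worker w their own θ_w, so that able workers produce orderings close to the truth. The sketch below (names and the per-worker θ are illustrative assumptions, not the paper's formulation) samples from a Mallows model via the standard repeated-insertion construction:

```python
import math
import random


def kendall_tau(a, b):
    """Number of item pairs ordered differently in the two rankings."""
    pos = {item: idx for idx, item in enumerate(b)}
    d = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            if pos[a[i]] > pos[a[j]]:
                d += 1
    return d


def sample_mallows(reference, theta, rng=random):
    """Sample an ordering from a Mallows model with concentration theta.

    Uses the repeated-insertion construction: the i-th item of the
    reference order is inserted at position j (1-based, j <= i) with
    probability proportional to exp(-theta * (i - j)).  As theta grows,
    samples concentrate on the reference order; theta = 0 is uniform.
    A worker-specific theta_w would model that worker's ability.
    """
    order = []
    for i, item in enumerate(reference, start=1):
        weights = [math.exp(-theta * (i - j)) for j in range(1, i + 1)]
        r = rng.random() * sum(weights)
        acc = 0.0
        for j, w in enumerate(weights, start=1):
            acc += w
            if r <= acc:
                order.insert(j - 1, item)
                break
        else:  # guard against floating-point round-off
            order.append(item)
    return order
```

For example, a highly able worker (large θ) almost always reproduces the reference order, while θ = 0 yields a uniformly random permutation; aggregation then amounts to inferring the reference order and each θ_w jointly from the observed answers.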




Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. The University of Tokyo, Japan
  2. National Institute of Advanced Industrial Science and Technology (AIST), Japan
  3. JST PRESTO, Japan