
An Evaluation of Aggregation Techniques in Crowdsourcing

  • Conference paper
Web Information Systems Engineering – WISE 2013 (WISE 2013)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 8181)

Abstract

As the volumes of AI problems involving human knowledge are likely to soar, crowdsourcing has become essential in a wide range of World Wide Web applications. One of the biggest challenges of crowdsourcing is aggregating the answers collected from the crowd, since the workers might have wide-ranging levels of expertise. To tackle this challenge, many aggregation techniques have been proposed. These techniques, however, have never been compared and analyzed under the same setting, making it very difficult to choose the ‘right’ technique for a particular application. Addressing this problem, this paper presents a benchmark that offers a comprehensive empirical study comparing the performance of these aggregation techniques. Specifically, we integrated several state-of-the-art methods in a comparable manner and measured various performance metrics with our benchmark, including computation time, accuracy, robustness to spammers, and adaptivity to multi-labeling. We then provide an in-depth analysis of the benchmarking results, obtained by simulating the crowdsourcing process with different types of workers. We believe that the findings from the benchmark can serve as a practical guideline for crowdsourcing applications.
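
To make the aggregation setting concrete, the sketch below is an illustration under our own assumptions, not the authors' benchmark code: it simulates binary labeling tasks answered by workers of varying reliability (including two spammer-like workers who answer essentially at random), aggregates the collected answers with simple majority voting as a baseline technique, and measures accuracy against the hidden ground truth, which is one of the metrics the benchmark reports.

```python
# Illustrative sketch only: simulate a crowdsourcing run with workers of varying
# reliability on binary labeling tasks, aggregate answers by majority voting,
# and measure accuracy against the ground truth. All parameters are assumptions.
import random

def simulate_answers(truth, reliabilities, rng):
    """Each worker answers every task correctly with probability equal to his or
    her reliability; a reliability near 0.5 behaves like a random spammer."""
    return [[t if rng.random() < r else 1 - t for t in truth] for r in reliabilities]

def majority_vote(answers):
    """Aggregate the per-task answers by simple majority (ties broken towards 1)."""
    n_workers, n_tasks = len(answers), len(answers[0])
    return [1 if sum(a[j] for a in answers) * 2 >= n_workers else 0
            for j in range(n_tasks)]

if __name__ == "__main__":
    rng = random.Random(42)
    truth = [rng.randint(0, 1) for _ in range(1000)]      # hidden ground-truth labels
    reliabilities = [0.9, 0.8, 0.75, 0.5, 0.5]            # last two act as spammers
    answers = simulate_answers(truth, reliabilities, rng)
    estimate = majority_vote(answers)
    accuracy = sum(e == t for e, t in zip(estimate, truth)) / len(truth)
    print(f"Majority-vote accuracy over {len(truth)} tasks: {accuracy:.3f}")
```

More sophisticated techniques of the kind the paper evaluates additionally estimate each worker's reliability from the answer matrix and weight votes accordingly, which is what makes them more robust to the spammer-like workers in this simulation.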




Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Quoc Viet Hung, N., Tam, N.T., Tran, L.N., Aberer, K. (2013). An Evaluation of Aggregation Techniques in Crowdsourcing. In: Lin, X., Manolopoulos, Y., Srivastava, D., Huang, G. (eds) Web Information Systems Engineering – WISE 2013. WISE 2013. Lecture Notes in Computer Science, vol 8181. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41154-0_1

  • DOI: https://doi.org/10.1007/978-3-642-41154-0_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-41153-3

  • Online ISBN: 978-3-642-41154-0

  • eBook Packages: Computer Science (R0)
