Learning from Crowds under Experts’ Supervision

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8443)

Abstract

Crowdsourcing services have proven efficient at collecting large amounts of labeled data for supervised learning, but the low cost of crowd workers often leads to unreliable labels. Although various methods have been proposed to infer the ground truth or to learn directly from crowd data, there is no guarantee that these methods work well when the crowd labels are highly biased or noisy. Motivated by this limitation of crowd data, we propose to improve the performance of crowdsourcing learning tasks with some additional expert labels, treating each labeler as a personal classifier and combining all labelers' opinions from a model-combination perspective. Experiments show that our method significantly improves learning quality compared with methods that use crowd labels alone.
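
The paper's full model is not reproduced on this page; the following is a minimal sketch, assuming scikit-learn is available, of the general idea the abstract describes: fitting one "personal classifier" per crowd labeler and then combining their opinions with the help of a small set of expert labels. The function names (fit_personal_classifiers, fit_combiner, predict) and the stacking-style combiner are illustrative assumptions, not the authors' actual formulation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_personal_classifiers(X, crowd_labels):
    """Fit one classifier per crowd labeler on the items that labeler annotated.

    X            : (n_items, n_features) feature matrix.
    crowd_labels : (n_items, n_labelers) array of 0/1 labels, with -1 marking
                   items a labeler did not annotate.
    """
    classifiers = []
    for j in range(crowd_labels.shape[1]):
        mask = crowd_labels[:, j] != -1          # items this labeler saw
        clf = LogisticRegression().fit(X[mask], crowd_labels[mask, j])
        classifiers.append(clf)
    return classifiers

def fit_combiner(classifiers, X_expert, y_expert):
    """Learn how to weight the personal classifiers from a small expert-labeled
    set (a stacking-style combiner; one of several possible model-combination
    schemes, assumed here for illustration)."""
    # Each column of P is one labeler's predicted probability of the positive class.
    P = np.column_stack([c.predict_proba(X_expert)[:, 1] for c in classifiers])
    return LogisticRegression().fit(P, y_expert)

def predict(classifiers, combiner, X_new):
    """Combine all labelers' opinions on new items."""
    P = np.column_stack([c.predict_proba(X_new)[:, 1] for c in classifiers])
    return combiner.predict(P)

In this sketch the expert labels are used only to train the combiner, so even a small expert-labeled set can correct for systematically biased or noisy crowd labelers by down-weighting their personal classifiers.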

Keywords

Crowdsourcing · Multiple annotators · Model combination · Classification

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. College of Computer Science and Technology, Zhejiang University, Hangzhou, China
  2. School of Computing, National University of Singapore, Singapore
  3. Provident Technology Pte. Ltd., Singapore