
Modeling for Noisy Labels of Crowd Workers

  • Qian Yan
  • Hao Huang (Email author)
  • Yunjun Gao
  • Chen Ying
  • Qingyang Hu
  • Tieyun Qian
  • Qinming He
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9932)

Abstract

Crowdsourcing services can collect a large amount of labeled data at a low cost. Nonetheless, due to influence factors such as unqualified crowd workers and the controversiality of the instances to be labeled, the collected labels are often noisy, i.e., they may be randomly given, incorrect, or missing. Although approaches have been proposed to infer these influence factors and thereby better model the labeling results, the inferences are not guaranteed to reflect the true effects of the influence factors on the uncertainty and errors in the labels. In this paper, we propose to conduct probability fitting over the noisy labeled data with a Bernoulli Mixture Model. Workers with similar behaviors correspond to the same Bernoulli component in the mixture model. The effects of the influence factors are fused into the Bernoulli parameter of each component, which directly reflects the uncertainty of the labels and can help identify labeling errors, predict real labels, and reveal the behavior patterns of crowd workers. Experiments on both benchmark and real datasets verify the efficacy of our model.
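To make the modeling idea concrete, below is a minimal sketch (not the authors' implementation) of fitting a Bernoulli Mixture Model to crowd workers' label vectors with EM. It assumes each worker is represented by a complete binary vector of labels over a shared set of instances and ignores missing labels for simplicity; the function name fit_bmm, the component count k, and the iteration budget are illustrative assumptions.

    import numpy as np

    def fit_bmm(X, k, n_iter=100, seed=0):
        """Fit a Bernoulli Mixture Model to binary worker-label vectors via EM.

        X : (n_workers, n_instances) binary matrix of collected labels (0/1).
        k : number of Bernoulli components (worker behavior patterns).
        Returns mixing weights pi (k,), Bernoulli parameters mu (k, n_instances),
        and responsibilities gamma (n_workers, k).
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        pi = np.full(k, 1.0 / k)                   # mixing weights
        mu = rng.uniform(0.25, 0.75, size=(k, d))  # per-component Bernoulli parameters
        eps = 1e-9
        for _ in range(n_iter):
            # E-step: responsibility of each component for each worker (log domain for stability)
            log_p = (X @ np.log(mu + eps).T
                     + (1 - X) @ np.log(1 - mu + eps).T
                     + np.log(pi + eps))
            log_p -= log_p.max(axis=1, keepdims=True)
            gamma = np.exp(log_p)
            gamma /= gamma.sum(axis=1, keepdims=True)
            # M-step: update mixing weights and Bernoulli parameters
            nk = gamma.sum(axis=0)
            pi = nk / n
            mu = (gamma.T @ X) / (nk[:, None] + eps)
        return pi, mu, gamma

Under this sketch, workers that receive high responsibility for the same component share a behavior pattern; a component's Bernoulli parameter mu[c, i] near 0.5 signals high uncertainty about instance i, while values near 0 or 1 indicate a confident labeling tendency that can be compared against other components to flag likely errors and to predict the real labels.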


Acknowledgements

This work was supported in part by NSFC Grants (61502347, 61522208, 61572376, 61472359, and 61379033), the Fundamental Research Funds for the Central Universities (2015XZZX005-07, 2015XZZX004-18, and 2042015kf0038), the Research Funds for Introduced Talents of Wuhan University, and the International Academic Cooperation Training Program of Wuhan University.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Qian Yan (1)
  • Hao Huang (1), Email author
  • Yunjun Gao (2)
  • Chen Ying (1)
  • Qingyang Hu (2)
  • Tieyun Qian (1)
  • Qinming He (2)
  1. State Key Laboratory of Software Engineering, Wuhan University, Wuhan, China
  2. College of Computer Science, Zhejiang University, Hangzhou, China
