Artificial Intelligence Review, Volume 46, Issue 4, pp 543–576

Learning from crowdsourced labeled data: a survey

Abstract

With the rapid growth of crowdsourcing systems, many applications based on the supervised learning paradigm can obtain massive labeled data at relatively low cost. However, because the reliability of crowdsourced labelers varies widely, learning procedures face great challenges. Improving the quality of both labels and learning models therefore plays a key role in learning from crowdsourced labeled data. In this survey, we first introduce the basic concepts of label quality and learning model quality. Then, by reviewing recently proposed models and algorithms for ground truth inference and learning from crowds, we analyze the connections and distinctions among these techniques and clarify the current state of research in this field. To facilitate further studies, we also introduce openly accessible real-world datasets collected from crowdsourcing systems, along with open source libraries and tools. Finally, we discuss some potential issues for future research.
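
As a concrete illustration of the ground truth inference problem described above, the following sketch implements majority voting, the simplest label integration strategy: each instance receives whichever label most of its crowd labelers assigned. This is a minimal Python sketch written for this summary; the majority_vote function and the crowd_labels data are hypothetical and do not come from any of the surveyed systems.

    from collections import Counter

    def majority_vote(labels):
        """Return the most frequent label among one instance's noisy crowd labels.

        Ties are broken arbitrarily (by first-insertion order in Counter).
        """
        return Counter(labels).most_common(1)[0][0]

    # Hypothetical crowd labels for three instances, three to four labelers each.
    crowd_labels = [
        ["pos", "pos", "neg"],
        ["neg", "neg", "neg"],
        ["pos", "neg", "pos", "pos"],
    ]
    integrated = [majority_vote(labels) for labels in crowd_labels]
    print(integrated)  # -> ['pos', 'neg', 'pos']

More sophisticated inference methods go further by modeling each labeler's reliability, as in the EM-based Dawid-Skene estimator, rather than weighting all votes equally.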

Keywords

Crowdsourcing · Learning from crowds · Multiple noisy labeling · Label quality · Learning model quality · Ground truth inference

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
  2. School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
  3. Department of Computer Science, University of Central Arkansas, Conway, USA
  4. Jiangsu Engineering Center of Network Monitoring, Nanjing University of Information Science and Technology, Nanjing, China
