Iterative Aggregation of Crowdsourced Tasks Within the Belief Function Theory

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10369)

Abstract

With the growth of crowdsourcing services, gathering training data for supervised machine learning has become cheaper and faster than engaging experts. However, the quality of crowd-generated labels remains an open issue, largely because of the wide-ranging expertise levels of the participants in the labeling process. In this paper, we present an iterative approach to label aggregation based on the belief function theory that simultaneously estimates the labels, the reliability of the participants, and the difficulty of each task. Our empirical evaluation demonstrates the efficiency of our method, as it yields better-quality labels.
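The paper's own algorithm is not reproduced on this page. As a rough, hypothetical sketch of the kind of iterative scheme the abstract describes, the Python below models each worker's vote on a binary frame as a simple support mass function weighted by that worker's estimated reliability, fuses the masses per task with Dempster's rule, decides labels via the pignistic transform, and then re-estimates reliabilities from agreement with the current labels. The function names (`aggregate`, `dempster_combine`), the binary frame, and the agreement-based reliability update are illustrative assumptions; the paper additionally models per-task difficulty, which this sketch omits for brevity.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass functions on the binary frame {0, 1}.

    Masses are dicts over the focal sets '0', '1', and 'theta',
    where 'theta' is the whole frame and carries ignorance.
    """
    combined = {'0': 0.0, '1': 0.0, 'theta': 0.0}
    conflict = 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            if a == 'theta':
                combined[b] += va * vb
            elif b == 'theta' or a == b:
                combined[a] += va * vb
            else:  # {0} against {1}: fully conflicting mass
                conflict += va * vb
    if conflict < 1.0:  # Dempster normalisation; skip the degenerate case
        for k in combined:
            combined[k] /= (1.0 - conflict)
    return combined

def aggregate(answers, n_iter=20):
    """answers[i][j] = label in {0, 1} given by worker j on task i."""
    n_tasks = len(answers)
    n_workers = len(answers[0])
    reliability = np.full(n_workers, 0.8)  # optimistic prior reliability
    for _ in range(n_iter):
        labels = []
        for i in range(n_tasks):
            m = {'0': 0.0, '1': 0.0, 'theta': 1.0}  # vacuous mass
            for j in range(n_workers):
                # simple support function: mass r_j on the reported label
                mj = {'0': 0.0, '1': 0.0, 'theta': 1.0 - reliability[j]}
                mj[str(answers[i][j])] = reliability[j]
                m = dempster_combine(m, mj)
            # pignistic decision: split the ignorance mass evenly
            p1 = m['1'] + m['theta'] / 2.0
            labels.append(1 if p1 >= 0.5 else 0)
        # re-estimate each worker's reliability as agreement with labels
        for j in range(n_workers):
            agree = sum(answers[i][j] == labels[i] for i in range(n_tasks))
            reliability[j] = agree / n_tasks
    return labels, reliability

if __name__ == "__main__":
    # three workers, five tasks; the third worker is unreliable
    votes = [[1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0], [1, 1, 0]]
    labels, rel = aggregate(votes)
    print(labels, rel)
```

On this toy input the loop quickly converges: the two consistent workers end up with reliability 1.0, the dissenting worker drops to 0.4, and the aggregated labels follow the reliable pair rather than a plain majority vote.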

Keywords

Aggregation · Crowd · Expectation-Maximization · Belief function theory · Expertise

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. LARODEC, Institut Supérieur de Gestion de Tunis, Université de Tunis, Tunis, Tunisia
