
Imprecise Label Aggregation Approach Under the Belief Function Theory

  • Lina Abassi
  • Imen Boukhris
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 941)

Abstract

Crowdsourcing has become a phenomenon of increasing interest in several research fields such as artificial intelligence. It typically draws on human cognitive abilities to solve tasks that can hardly be addressed by automated computation. A major problem, however, is that existing studies cannot fully control the quality of the collected data, since the reliability of contributors is uncertain. In this work, we propose an approach that aggregates labels within the belief function theory, under the assumption that these labels may be partial and hence imprecise. Experiments on simulated data show that our method produces more reliable aggregation results.
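The abstract does not detail the aggregation scheme, so the following minimal Python sketch only illustrates the general idea of handling imprecise (set-valued) crowd labels with belief functions: each answer is modelled as a mass function on subsets of the label set, discounted by a worker reliability, and fused with Dempster's rule. The label set, reliabilities, and the discounting step are illustrative assumptions, not the authors' exact method.

    # Minimal sketch (assumptions, not the authors' exact method): imprecise crowd
    # labels are modelled as Dempster-Shafer mass functions on subsets of the frame,
    # discounted by a hypothetical worker reliability, and fused with Dempster's rule.

    FRAME = frozenset({"cat", "dog", "bird"})  # frame of discernment: possible labels

    def discount(mass, reliability):
        """Shafer discounting: move (1 - reliability) of the mass to the whole frame."""
        out = {subset: reliability * m for subset, m in mass.items()}
        out[FRAME] = out.get(FRAME, 0.0) + (1.0 - reliability)
        return out

    def dempster_combine(m1, m2):
        """Dempster's rule: conjunctive combination followed by normalisation."""
        combined, conflict = {}, 0.0
        for a_set, a in m1.items():
            for b_set, b in m2.items():
                inter = a_set & b_set
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + a * b
                else:
                    conflict += a * b
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    # Three workers label the same task; the second gives a partial, imprecise answer.
    answers = [
        ({frozenset({"cat"}): 1.0}, 0.9),          # precise label, reliability 0.9
        ({frozenset({"cat", "dog"}): 1.0}, 0.7),   # imprecise label {cat, dog}
        ({frozenset({"dog"}): 1.0}, 0.6),
    ]

    fused = None
    for mass, reliability in answers:
        m = discount(mass, reliability)
        fused = m if fused is None else dempster_combine(fused, m)

    print(fused)  # most of the fused mass ends up on {"cat"}

In this toy run, the imprecise answer {cat, dog} still contributes useful evidence: after fusion, the singleton {cat} carries most of the belief, which is the kind of behaviour an imprecise-label aggregation scheme aims for.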

Keywords

Crowdsourcing · Label aggregation · Belief function theory · Precision · Exactitude


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. LARODEC Laboratory, Institut Supérieur de Gestion de Tunis, University of Tunis, Tunis, Tunisia
