
Scaling up the learning-from-crowds GLAD algorithm using instance-difficulty clustering

  • Enrique González Rodrigo
  • Juan A. Aledo
  • José A. Gámez
Regular Paper

Abstract

The main goal of this article is to improve the results obtained by the GLAD algorithm on large datasets. GLAD learns from instances labeled by multiple annotators, taking into account both the quality of the annotators and the difficulty of the instances. Despite its many advantages, this study shows that GLAD does not scale well to a large number of instances, since it estimates one parameter per instance in the dataset. Clustering is an alternative that reduces the number of parameters to be estimated, making the learning process more efficient. However, because the features of crowdsourced datasets are not usually available, classical clustering procedures cannot be applied directly. To address this issue, we propose clustering the instance vectors produced by a matrix factorization of the annotation data. Our analysis shows that this clustering step improves the results obtained by GLAD in terms of both accuracy and execution time, especially in large-data scenarios. We also compare our approach with other algorithms that pursue a similar goal.
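
To make the idea concrete, the following is a minimal sketch of the pipeline the abstract describes: build a sparse instance-by-annotator matrix from the crowdsourced labels, factorize it to obtain one vector per instance, and cluster those vectors so that GLAD can estimate one difficulty parameter per cluster instead of one per instance. The function name instance_difficulty_clusters, and the choice of truncated SVD for the factorization and k-means for the clustering, are illustrative assumptions here, not the authors' exact implementation.

```python
# Illustrative sketch (assumed components, not the paper's exact method):
# embed instances via a low-rank factorization of the annotation matrix,
# then cluster the embeddings so difficulty parameters are shared per cluster.
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def instance_difficulty_clusters(annotations, n_instances, n_annotators,
                                 rank=8, n_clusters=50, seed=0):
    """annotations: iterable of (instance_id, annotator_id, label) triples
    with binary labels in {0, 1}. Returns one cluster id per instance."""
    rows, cols, vals = zip(*[(i, a, 1.0 if y == 1 else -1.0)
                             for i, a, y in annotations])
    # Sparse instance-by-annotator matrix; unannotated pairs remain 0.
    M = csr_matrix((vals, (rows, cols)), shape=(n_instances, n_annotators))
    # Low-rank factorization: one `rank`-dimensional vector per instance.
    vecs = TruncatedSVD(n_components=rank, random_state=seed).fit_transform(M)
    # Instances in the same cluster will share a GLAD difficulty parameter.
    return KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(vecs)
```

With k clusters, GLAD's EM procedure then maintains k difficulty parameters rather than one per instance, which is the source of the efficiency gain on large datasets.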

Keywords

Non-standard classification · Crowdsourcing · Multiple annotators · Weakly supervised

References

  1. Aydin, B.I., Yilmaz, Y.S., Li, Y., Li, Q., Gao, J., Demirbas, M.: Crowdsourcing for multiple-choice question answering. In: Twenty-Sixth IAAI Conference (2014)
  2. Charte, D., Charte, F., García, S., Herrera, F.: A snapshot on nonstandard supervised learning problems: taxonomy, relationships, problem transformations and algorithm adaptations. Prog. Artif. Intell. (2018). https://doi.org/10.1007/s13748-018-00167-7
  3. Chen, X., Bennett, P.N., Collins-Thompson, K., Horvitz, E.: Pairwise ranking aggregation in a crowdsourced setting. In: Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, pp. 193–202. ACM (2013)
  4. Dawid, A.P., Skene, A.M.: Maximum likelihood estimation of observer error-rates using the EM algorithm. Appl. Stat. 28(1), 20–28 (1979)
  5. Demartini, G., Difallah, D.E., Cudré-Mauroux, P.: ZenCrowd: leveraging probabilistic reasoning and crowdsourcing techniques for large-scale entity linking. In: Proceedings of the 21st International Conference on World Wide Web, pp. 469–478. ACM (2012)
  6. Hernández-González, J., Inza, I., Lozano, J.A.: Weak supervision and other non-standard classification problems: a taxonomy. Pattern Recognit. Lett. 69, 49–55 (2016)
  7. Ipeirotis, P.G., Provost, F., Wang, J.: Quality management on Amazon Mechanical Turk. In: Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP '10, pp. 64–67. ACM, New York (2010). https://doi.org/10.1145/1837885.1837906
  8. Karger, D.R., Oh, S., Shah, D.: Iterative learning for reliable crowdsourcing systems. In: Advances in Neural Information Processing Systems, pp. 1953–1961 (2011)
  9. Kim, H.C., Ghahramani, Z.: Bayesian classifier combination. In: Artificial Intelligence and Statistics, pp. 619–627 (2012)
  10. Li, Q., Li, Y., Gao, J., Su, L., Zhao, B., Demirbas, M., Fan, W., Han, J.: A confidence-aware approach for truth discovery on long-tail data. Proc. VLDB Endow. 8(4), 425–436 (2014)
  11. Li, Q., Li, Y., Gao, J., Zhao, B., Fan, W., Han, J.: Resolving conflicts in heterogeneous data by truth discovery and source reliability estimation. In: Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, pp. 1187–1198. ACM (2014)
  12. Liu, Q., Peng, J., Ihler, A.T.: Variational inference for crowdsourcing. In: Advances in Neural Information Processing Systems, pp. 692–700 (2012)
  13. Luna-Romera, J.M., García-Gutiérrez, J., Martínez-Ballesteros, M., Riquelme Santos, J.C.: An approach to validity indices for clustering techniques in big data. Prog. Artif. Intell. 7(2), 81–94 (2018)
  14. Raykar, V.C., Yu, S., Zhao, L.H., Valadez, G.H., Florin, C., Bogoni, L., Moy, L.: Learning from crowds. J. Mach. Learn. Res. 11(Apr), 1297–1322 (2010)
  15. Rodrigo, E.G., Aledo, J.A., Gámez, J.A.: CGLAD: using GLAD in crowdsourced large datasets. In: Lecture Notes in Computer Science, vol. 11314 (IDEAL 2018), pp. 783–791 (2018)
  16. Rodrigo, E.G., Aledo, J.A., Gámez, J.A.: spark-crowd: a Spark package for learning from crowdsourced big data. J. Mach. Learn. Res. 20(19), 1–5 (2019)
  17. Rodrigo, E.G., Aledo, J.A., Gámez, J.A.: Machine learning from crowds: a systematic review of its applications. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. (2019). https://doi.org/10.1002/widm.1288
  18. Snow, R., O'Connor, B., Jurafsky, D., Ng, A.Y.: Cheap and fast—but is it good? Evaluating non-expert annotations for natural language tasks. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 254–263. Association for Computational Linguistics, Honolulu (2008)
  19. Venanzi, M., Guiver, J., Kazai, G., Kohli, P., Shokouhi, M.: Community-based Bayesian aggregation models for crowdsourcing. In: Proceedings of the 23rd International Conference on World Wide Web, pp. 155–164. ACM (2014)
  20. Whitehill, J., Wu, T.-F., Bergsma, J., Movellan, J.R., Ruvolo, P.L.: Whose vote should count more: optimal integration of labels from labelers of unknown expertise. In: Advances in Neural Information Processing Systems, pp. 2035–2043 (2009)
  21. Xu, R., Wunsch, D.: Survey of clustering algorithms. IEEE Trans. Neural Netw. 16(3), 645–678 (2005)
  22. Zhang, J., Wu, X.: Multi-label inference for crowdsourcing. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, pp. 2738–2747. ACM, New York (2018). https://doi.org/10.1145/3219819.3219958
  23. Zhang, J., Wu, X., Sheng, V.S.: Learning from crowdsourced labeled data: a survey. Artif. Intell. Rev. 46(4), 543–576 (2016)
  24. Zheng, Y., Li, G., Li, Y., Shan, C., Cheng, R.: Truth inference in crowdsourcing: is the problem solved? Proc. VLDB Endow. 10(5), 541–552 (2017)
  25. Zhou, Y., Wilkinson, D., Schreiber, R., Pan, R.: Large-scale parallel collaborative filtering for the Netflix Prize. In: International Conference on Algorithmic Applications in Management, pp. 337–348. Springer, Berlin (2008)

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Albacete, Spain
