The Cropland Capture Game: Good Annotators Versus Vote Aggregation Methods

  • Artem Baklanov
  • Steffen Fritz
  • Michael Khachay
  • Oleg Nurmukhametov
  • Linda See
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 453)

Abstract

The Cropland Capture game, a recently developed Geo-Wiki game, aims to map cultivated land using around 17,000 satellite images of the Earth's surface. Using a perceptual hash and a blur detection algorithm, we improve the quality of the Cropland Capture dataset. We then benchmark state-of-the-art vote aggregation algorithms, using the results of well-known machine learning algorithms as a baseline. We demonstrate that the volunteer-image assignment is highly irregular and that only good annotators are present (there are no spammers or malicious voters). We conjecture that this is the main reason for the surprisingly similar accuracy levels across all examined algorithms. Finally, we increase the estimated consistency with expert opinion from 77% to 91%, and up to 96% if we restrict our attention to images with more than 9 votes.
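
As a purely illustrative sketch (not the paper's actual code or data format): the snippet below shows a baseline majority-vote aggregation per image with a minimum-vote threshold, together with the consistency-with-experts measure quoted above. The tuple layout (image_id, volunteer_id, binary label), the function names, and the commented example data are assumptions; the aggregation algorithms benchmarked in the paper are more elaborate than this baseline.

    from collections import Counter, defaultdict

    def aggregate_majority(votes, min_votes=1):
        """votes: iterable of (image_id, volunteer_id, label) with label in {0, 1}.

        Returns {image_id: majority label} for images with at least min_votes
        votes; exact ties are skipped (left undecided).
        """
        per_image = defaultdict(list)
        for image_id, _volunteer_id, label in votes:
            per_image[image_id].append(label)

        aggregated = {}
        for image_id, labels in per_image.items():
            if len(labels) < min_votes:
                continue
            ranked = Counter(labels).most_common()
            if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
                continue  # tie between "cropland" and "not cropland"
            aggregated[image_id] = ranked[0][0]
        return aggregated

    def consistency_with_experts(aggregated, expert_labels):
        """Fraction of expert-labelled images on which the aggregate agrees."""
        common = aggregated.keys() & expert_labels.keys()
        if not common:
            return float("nan")
        return sum(aggregated[i] == expert_labels[i] for i in common) / len(common)

    # Hypothetical usage, mirroring the ">9 votes" restriction in the abstract:
    # votes = [("img_1", "vol_7", 1), ("img_1", "vol_3", 1), ("img_2", "vol_7", 0)]
    # experts = {"img_1": 1, "img_2": 0}
    # labels = aggregate_majority(votes, min_votes=10)
    # print(consistency_with_experts(labels, experts))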

Keywords

Crowdsourcing · Image processing · Vote aggregation

Acknowledgments

This research was supported by the Russian Science Foundation, grant no. 14-11-00109, and by the EU FP7-funded ERC CrowdLand project, grant no. 617754.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Artem Baklanov (1, 2, 3)
  • Steffen Fritz (1)
  • Michael Khachay (2, 3)
  • Oleg Nurmukhametov (2)
  • Linda See (1)
  1. International Institute for Applied Systems Analysis (IIASA), Laxenburg, Austria
  2. Krasovsky Institute of Mathematics and Mechanics, Ekaterinburg, Russia
  3. Ural Federal University, Ekaterinburg, Russia
