Role of Task Complexity and Training in Crowdsourced Image Annotation
Accurate annotation of anatomical structures or pathological changes in microscopic images is an important task in computational pathology. Crowdsourcing holds promise to address this demand, but so far feasibility has only been shown for simple tasks, not for high-quality annotation of complex structures, which is often limited by a shortage of experts. Third-year medical students participated in solving two complex tasks: labeling of images and delineation of relevant image objects in breast cancer and kidney tissue. We evaluated their performance and examined the requirements regarding task complexity and training phases. Our results show feasibility and high agreement between students and experts. The training phase improved the accuracy of image labeling.
Keywords: Crowdsourcing · Human decision making · Image classification · Image delineation · Digital pathology · Annotation
We thank all students for their contribution; M. Temerinac-Ott, Icube; R. Schönmeyer, C. Vanegas, Definiens for help in data selection; G. Stiller, M. Behrends, Peter L. Reichertz Institute for Medical Informatics; and A.-K. Rieke for the video.