Exploiting Disagreement Through Open-Ended Tasks for Capturing Interpretation Spaces

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9678)

Abstract

An important aspect of the Semantic Web is that systems understand the content and context of text, images, sounds and videos. Although research in these fields has progressed in recent years, there is still a semantic gap between the multimedia data that is available and the human-annotated metadata describing its content. This research investigates how the complete space of human interpretations of the content and context of such data can be captured. The methodology combines open-ended crowdsourcing tasks, designed to elicit multiple interpretations, with disagreement-based metrics for evaluating the results. The resulting descriptions can be used to improve information retrieval and recommendation of multimedia, to train and evaluate machine learning components, and to support the training and assessment of experts.
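
To make the disagreement-based evaluation concrete: in CrowdTruth-style setups (the framework this work builds on), each worker's annotations of a media unit are encoded as a binary vector over the candidate labels, and a worker's agreement with the crowd is the cosine similarity between that vector and the sum of the other workers' vectors. The sketch below is illustrative only and assumes this vector representation; the function name, vote matrix, and tag vocabulary are hypothetical and not taken from the paper.

    import numpy as np

    def worker_unit_agreement(worker_vectors):
        # worker_vectors: one binary row per worker over the candidate
        # labels of a single media unit. Returns, per worker, the cosine
        # similarity between that worker's vector and the summed vector
        # of all other workers. A low score flags either a low-quality
        # worker or a genuinely ambiguous unit, which is exactly the
        # distinction disagreement metrics are designed to surface.
        unit_vector = worker_vectors.sum(axis=0)
        scores = []
        for w in worker_vectors:
            rest = unit_vector - w          # the rest of the crowd
            denom = np.linalg.norm(w) * np.linalg.norm(rest)
            scores.append(float(w @ rest) / denom if denom else 0.0)
        return scores

    # Hypothetical example: four workers tagging one sound clip against
    # a vocabulary of five candidate tags (1 = tag selected).
    votes = np.array([
        [1, 1, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 1, 0, 0, 0],
        [0, 0, 0, 1, 1],   # a minority, but possibly valid, interpretation
    ])
    print(worker_unit_agreement(votes))

In open-ended tasks the last worker's low score is not automatically treated as noise: it may mark a valid alternative interpretation, which is why such metrics are combined with worker behaviour filters rather than used as a simple accept/reject threshold.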

Keywords

Semantic interpretation · Multimedia · Crowdsourcing · Disagreement


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

VU University, Amsterdam, The Netherlands
