Skill Ontology-Based Model for Quality Assurance in Crowdsourcing

  • Kinda El Maarry
  • Wolf-Tilo Balke
  • Hyunsouk Cho
  • Seung-won Hwang
  • Yukino Baba
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8505)

Abstract

Crowdsourcing continues to gain momentum as its potential becomes more widely recognized. Nevertheless, quality remains a valid concern, introducing uncertainty into the results obtained from the crowd. We identify the different aspects that dynamically affect the overall quality of a crowdsourcing task. Accordingly, we propose a skill ontology-based model that caters for these aspects, as a management technique to be adopted by crowdsourcing platforms. The model maintains a dynamically evolving ontology of skills, with libraries of standardized and personalized assessments for awarding skills to workers. Aligning a worker’s set of skills with those required by a task boosts the resulting quality. We visualize the model’s components and workflow, and consider how to guard it against malicious or unqualified workers, whose responses introduce uncertainty and degrade overall quality.
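The full text is not reproduced here, but the alignment step the abstract describes, checking a worker’s awarded skills against the skills a task requires over an ontology, can be sketched. The following minimal Python sketch is a hypothetical illustration, not the authors’ implementation: it assumes skills form a specialization hierarchy in which holding a specific skill implies its more general ancestors, and the names `Skill` and `qualifies` are invented for illustration.

```python
# Hypothetical sketch of skill-ontology-based worker/task matching,
# under the assumption that a specialized skill implies its ancestors.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Skill:
    name: str
    parent: Optional["Skill"] = None  # more general skill, if any

    def implies(self, other: "Skill") -> bool:
        """True if this skill is `other` or a specialization of it."""
        node: Optional[Skill] = self
        while node is not None:
            if node == other:
                return True
            node = node.parent
        return False

def qualifies(worker_skills: set[Skill], required: set[Skill]) -> bool:
    """A worker qualifies when every required skill is covered by
    at least one awarded skill (possibly a more specific one)."""
    return all(any(s.implies(r) for s in worker_skills) for r in required)

# Tiny example ontology: translation -> legal translation.
translation = Skill("translation")
legal_translation = Skill("legal translation", parent=translation)

worker = {legal_translation}
assert qualifies(worker, {translation})        # specialized skill suffices
assert not qualifies(worker, {Skill("OCR")})   # unrelated requirement fails
```

Resolving implication by walking ancestor links keeps matching linear in the ontology’s depth; a real platform would instead query its dynamically evolving ontology store.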

Keywords

Crowdsourcing · Quality assurance · Skill ontology · Uncertain data

References

  1. Surowiecki, J.: The Wisdom of Crowds, p. 336. Anchor, New York (2005)
  2. Howe, J.: The rise of crowdsourcing. Wired Mag. 14(6), 1–4 (2006)
  3. Brabham, D.C.: Crowdsourcing as a model for problem solving: an introduction and cases. Convergence: Int. J. Res. New Media Technol. 14(1), 75–90 (2008)
  4. Kamps, J., Geva, S., Peters, C., Sakai, T., Trotman, A., Voorhees, E.: Report on the SIGIR 2009 workshop on the future of IR evaluation. ACM SIGIR Forum 43(2), 13 (2009)
  5. Zhu, D., Carterette, B.: An analysis of assessor behavior in crowdsourced preference judgments. In: SIGIR 2010 Workshop on Crowdsourcing for Search Evaluation, pp. 17–20 (2010)
  6. Liu, Q., Ihler, A.T., Steyvers, M.: Scoring workers in crowdsourcing: how many control questions are enough? In: Advances in Neural Information Processing Systems (NIPS 2013) (2013)
  7. Lofi, C., Selke, J., Balke, W.-T.: Information extraction meets crowdsourcing: a promising couple. Datenbank-Spektrum 12(1), 109–120 (2012)
  8. Kuncheva, L.I., Whitaker, C.J., Shipp, C.A., Duin, R.P.W.: Limits on the majority vote accuracy in classifier fusion. Pattern Anal. Appl. 6(1), 22–31 (2003)
  9. Kazai, G.: In search of quality in crowdsourcing for search engine evaluation. SIGIR Forum 44(2), 165–176 (2011)
  10. Mason, W., Watts, D.J.: Financial incentives and the ‘performance of crowds’. ACM SIGKDD Explor. Newsl. 11(2), 100 (2010)
  11. Brabham, D.C.: Moving the crowd at Threadless. Inf. Commun. Soc. 13(8), 1122–1145 (2010)
  12. Ogata, J., Goto, M.: PodCastle: collaborative training of acoustic models on the basis of wisdom of crowds for podcast transcription. In: Proceedings of Interspeech 2009 (2009). https://staff.aist.go.jp/m.goto/PAPER/INTERSPEECH2009ogata.pdf
  13. Goto, M., Ogata, J.: PodCastle: recent advances of a spoken document retrieval service improved by anonymous user contributions. In: Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), pp. 3073–3076 (2011)
  14. Schall, D.: Service-Oriented Crowdsourcing: Architecture, Protocols and Algorithms, p. 105. Springer, New York (2012)
  15. Lai, C.: Endorsements, licensing, and insurance for distributed system services. J. Electron. Publishing 2(1) (1996)
  16. Ludwig, H., Keller, A., Dan, A., King, R.: A service level agreement language for dynamic electronic services. In: Proceedings of the 4th IEEE International Workshop on Advanced Issues of E-Commerce and Web-Based Information Systems (WECWIS 2002) (2002)
  17. Sahai, A., Machiraju, V., Anna, D.: Towards automated SLA management for web services (2002). http://www.hpl.hp.com/techreports/2001/HPL-2001-310R1.pdf
  18. Dawid, A.P., Skene, A.M.: Maximum likelihood estimation of observer error-rates using the EM algorithm. J. Roy. Stat. Soc.: Ser. C (Appl. Stat.) 28(1), 20–28 (1979)
  19. Raykar, V.C., Yu, S., Zhao, L.H., Valadez, G.H., Florin, C., Bogoni, L., Moy, L.: Learning from crowds. J. Mach. Learn. Res. 11, 1297–1322 (2010)
  20. Whitehill, J., Ruvolo, P., Wu, T., Bergsma, J., Movellan, J.: Whose vote should count more: optimal integration of labels from labelers of unknown expertise. Adv. Neural Inf. Process. Syst. 22(1), 1–9 (2009)
  21. Ipeirotis, P.G., Provost, F., Wang, J.: Quality management on Amazon Mechanical Turk. In: Proceedings of the ACM SIGKDD Workshop on Human Computation, pp. 64–67. ACM, New York (2010)
  22. Campion, M.A., Fink, A.A., Ruggeberg, B.J., Carr, L., Phillips, G.M., Odman, R.B.: Doing competencies well: best practices in competency modeling. Pers. Psychol. 64(1), 225–262 (2011)
  23. Shippmann, J.S., Ash, R.A., Battista, M., Carr, L., Eyde, L.D., Hesketh, B., Kehoe, J., Pearlman, K., Prien, E.P., Sanchez, J.I.: The practice of competency modeling. Pers. Psychol. 53, 703–740 (2000)
  24. De Coi, J.L., Herder, E., Koesling, A., Lofi, C., Olmedilla, D., Papapetrou, O., Siberski, W.: A model for competence gap analysis. In: WEBIST 2007: Proceedings of the 3rd International Conference on Web Information Systems and Technologies, pp. 304–312 (2007)
  25. Colucci, S., Di Noia, T., Di Sciascio, E., Donini, F.M., Mongiello, M., Mottola, M.: A formal approach to ontology-based semantic match of skills descriptions. J. Univ. Comput. Sci. 9(12), 1437–1454 (2003)
  26. Koeppen, K., Hartig, J., Klieme, E., Leutner, D.: Current issues in competence modeling and assessment. Zeitschrift für Psychologie/J. Psychol. 216(2), 61–73 (2008)
  27. Allahbakhsh, M., Benatallah, B., Ignjatovic, A., Motahari-Nezhad, H.R., Bertino, E., Dustdar, S.: Quality control in crowdsourcing systems: issues and directions. IEEE Internet Comput. 17(2), 76–81 (2013)
  28. Allahbakhsh, M., Ignjatovic, A., Benatallah, B., Foo, N., Beheshti, S.M.R., Bertino, E.: Reputation management in crowdsourcing systems (2012)
  29. Ignjatovic, A., Foo, N., Lee, C.T.: An analytic approach to reputation ranking of participants in online transactions. In: IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, vol. 1 (2008)
  30. Noorian, Z., Ulieru, M.: The state of the art in trust and reputation systems: a framework for comparison. J. Theor. Appl. Electron. Commer. Res. 5(2), 97–117 (2010)
  31. Liu, X., Song, Y., Liu, S., Wang, H.: Automatic taxonomy construction from keywords. In: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1433–1441. ACM (2012)

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Kinda El Maarry (1)
  • Wolf-Tilo Balke (1)
  • Hyunsouk Cho (2)
  • Seung-won Hwang (2)
  • Yukino Baba (3)
  1. Institut für Informationssysteme, TU Braunschweig, Brunswick, Germany
  2. Department of Computer Science and Engineering, POSTECH, Pohang-si, Korea
  3. The University of Tokyo, Tokyo, Japan
