Crowdsourcing Technology to Support Academic Research

  • Matthias Hirth
  • Jason Jacques
  • Peter Rodgers
  • Ognjen Scekic
  • Michael Wybrow
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10264)


Current crowdsourcing platforms typically concentrate on simple microtasks and so poorly meet the needs of academic research, where more complex, time-consuming studies are required. This has led to the development of specialised software tools to support academic research on such platforms. However, the loose coupling of this software with the crowdsourcing site means that there is only limited access to the features of the platform. In addition, the specialised nature of these tools means that technical knowledge is needed to operate them. Hence there is great potential to enrich the features of crowdsourcing platforms from an academic perspective. In this chapter we discuss the possibilities for practical improvement of academic crowdsourced studies through the adaptation of technological solutions.
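In practice, this loose coupling usually means the study itself is hosted on the researcher's own server, while the crowdsourcing platform merely links or frames it. As a minimal sketch of that pattern (the study URL, reward values, and the `external_question` helper are illustrative assumptions; the ExternalQuestion XML schema itself is defined by Amazon Mechanical Turk), an externally hosted study could be posted as a task like this:

```python
# Sketch: posting an externally hosted academic study as an MTurk task.
# The study URL and reward parameters below are placeholders; the
# ExternalQuestion XML schema is Amazon Mechanical Turk's.

def external_question(url: str, frame_height: int = 600) -> str:
    """Wrap an externally hosted study page in MTurk's ExternalQuestion XML."""
    return (
        '<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/'
        'AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">'
        f"<ExternalURL>{url}</ExternalURL>"
        f"<FrameHeight>{frame_height}</FrameHeight>"
        "</ExternalQuestion>"
    )

question_xml = external_question("https://study.example.org/session")

# With AWS credentials configured, the task could then be created via boto3:
# import boto3
# mturk = boto3.client("mturk", region_name="us-east-1")
# mturk.create_hit(
#     Title="Academic study (approx. 20 minutes)",
#     Description="Participate in a university research study.",
#     Reward="2.00",
#     MaxAssignments=50,
#     LifetimeInSeconds=7 * 24 * 3600,
#     AssignmentDurationInSeconds=3600,
#     Question=question_xml,
# )
```

The platform only renders the external page in a frame, which illustrates the limitation discussed above: worker recruitment, qualifications, and payment stay on the platform side, while all study logic, logging, and data collection must be implemented and operated on the researcher's server.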


Keywords: Academic research support · Crowdsourcing technology · Crowdsourcing platform · Crowdsourcing studies · Crowd workers



The genesis and planning of this chapter took place at the Dagstuhl Seminar #15481, “Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments” held in November 2015. Jason Jacques was supported by a studentship from the Engineering and Physical Sciences Research Council. Ognjen Scekic was supported by the EU FP7 SmartSociety project under grant #600854. Michael Wybrow was supported by the Australian Research Council Discovery Project grant DP140100077. This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG) under Grants HO4770/2-2 and TR257/38-2. The authors alone are responsible for the content.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. University of Würzburg, Würzburg, Germany
  2. University of Cambridge, Cambridge, UK
  3. University of Kent, Canterbury, UK
  4. TU Wien, Vienna, Austria
  5. Monash University, Melbourne, Australia
