Hybrid Machine-Crowd Interaction for Handling Complexity: Steps Toward a Scaffolding Design Framework

  • António Correia
  • Shoaib Jameel
  • Hugo Paredes
  • Benjamim Fonseca
  • Daniel Schneider
Chapter
Part of the Human–Computer Interaction Series (HCIS)

Abstract

Much of the research attention on crowd work has been devoted to developing solutions that enhance microtask crowdsourcing settings. Although decomposing difficult problems into microtasks is appropriate in many situations, some problems are non-decomposable and demand a high degree of coordination among crowd workers. In this chapter, we aim to gain a better understanding of the macrotask crowdsourcing problem and of how crowd-AI mechanisms can be integrated to solve complex tasks distributed across expert crowds and machines. We also explore design implications for macrotask crowdsourcing systems, taking into account their ability to scale in support of complex scientific work.

Keywords

AI · Complex work · Crowdsourcing · Crowd-AI hybrids · Crowd science · Hybrid crowd-machine interaction · Macrotask crowdsourcing · Mixed-initiative systems

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • António Correia (1, 2, 3) — corresponding author
  • Shoaib Jameel (3)
  • Hugo Paredes (1, 2)
  • Benjamim Fonseca (1, 2)
  • Daniel Schneider (4)

  1. University of Trás-os-Montes e Alto Douro (UTAD), Vila Real, Portugal
  2. INESC TEC, Porto, Portugal
  3. School of Computing, University of Kent, Canterbury, UK
  4. Tércio Pacitti Institute of Computer Applications and Research (NCE), Federal University of Rio de Janeiro, Rio de Janeiro, Brazil