Crowd intelligence in AI 2.0 era

  • Wei Li
  • Wen-jun Wu
  • Huai-min Wang
  • Xue-qi Cheng
  • Hua-jun Chen
  • Zhi-hua Zhou
  • Rong Ding
Review

Abstract

The Internet-based cyber-physical world has profoundly changed the information environment for the development of artificial intelligence (AI), bringing a new wave of AI research and ushering it into the era of AI 2.0. As one of the most prominent characteristics of research in the AI 2.0 era, crowd intelligence has attracted much attention from both industry and the research community. Specifically, crowd intelligence provides a novel problem-solving paradigm that gathers the intelligence of crowds to address challenges. In particular, driven by the rapid development of the sharing economy, crowd intelligence has not only become a new approach to solving scientific challenges, but has also been integrated into all kinds of application scenarios in daily life, e.g., online-to-offline (O2O) applications, real-time traffic monitoring, and logistics management. In this paper, we survey existing studies of crowd intelligence. First, we describe the concept of crowd intelligence and explain its relationship to related concepts such as crowdsourcing and human computation. Then, we introduce four categories of representative crowd intelligence platforms. Next, we summarize three core research problems of crowd intelligence and the state-of-the-art techniques for addressing them. Finally, we discuss promising future research directions for crowd intelligence.

Key words

Crowd intelligence; Artificial intelligence 2.0; Crowdsourcing; Human computation

CLC number

TP18 

Copyright information

© Journal of Zhejiang University Science Editorial Office and Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  • Wei Li (1)
  • Wen-jun Wu (1)
  • Huai-min Wang (2)
  • Xue-qi Cheng (3)
  • Hua-jun Chen (4)
  • Zhi-hua Zhou (5)
  • Rong Ding (5)

  1. State Key Laboratory of Software Development Environment, Beihang University, Beijing, China
  2. National Laboratory for Parallel and Distributed Processing, College of Computer, National University of Defense Technology, Changsha, China
  3. Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  4. College of Computer Science and Technology, Zhejiang University, Hangzhou, China
  5. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
