Abstract
Software crowdsourcing has recently emerged as a new area of software engineering, yet few papers have presented a systematic analysis of its practices. This paper first presents a framework for evaluating software crowdsourcing projects with respect to software quality, cost, diversity of solutions, and the competitive nature of crowdsourcing. Specifically, competitions are evaluated through the min-max relationship from game theory, in which one party tries to minimize an objective function while the other tries to maximize the same function. The paper then defines a game-theoretic model to analyze the primary factors in these min-max competition rules that affect both the nature of participation and software quality. Finally, using the proposed evaluation framework, the paper examines two crowdsourcing processes, Harvard-TopCoder and AppStori. The framework reveals sharp contrasts between the two processes, as participants behave very differently when engaging in these projects.
Author information
Wenjun Wu is a professor in the School of Computer Science and Engineering at Beihang University. From 2006 to 2010 he was a research scientist at the Computation Institute (CI) of the University of Chicago and Argonne National Laboratory, and from 2002 to 2006 he was a member of technical staff and post-doctoral research associate at the Community Grids Lab at Indiana University. He received his BS, MS, and PhD degrees in computer science from Beihang University in 1994, 1997, and 2001, respectively. He has published over 50 peer-reviewed papers in journals and conference proceedings. His research interests include crowdsourcing, green computing, cloud computing, eScience and cyberinfrastructure, and multimedia collaboration.
Wei-Tek Tsai is currently a professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University, USA. He received his PhD and MS in computer science from the University of California at Berkeley, and his SB in computer science and engineering from MIT, Cambridge. He has published over 300 papers in various journals and conferences, received two Best Paper awards, and has been awarded several guest professorships. His work has been supported by the US Department of Defense, the Department of Education, the National Science Foundation, the EU, and industrial companies such as Intel, Fujitsu, and Guidant. In the last ten years he has focused on service-oriented computing and SaaS, and has worked on various aspects of software engineering including requirements, architecture, testing, and maintenance.
Professor Wei Li is a member of the Chinese Academy of Sciences. He received his PhD in computer science from the University of Edinburgh and his BS degree in mathematics from Peking University. He is the director of the State Key Laboratory of Software Development Environment and vice-chair of the Chinese Institute of Electronics. He was president of Beihang University from 2002 to 2009. His research interests focus on theoretical computer science, including open logic for scientific discovery, formal semantics, revision calculus, and program debugging. He has published over 100 papers and one book.
Cite this article
Wu, W., Tsai, WT. & Li, W. An evaluation framework for software crowdsourcing. Front. Comput. Sci. 7, 694–709 (2013). https://doi.org/10.1007/s11704-013-2320-2