
A Review on the Methods to Evaluate Crowd Contributions in Crowdsourcing Applications

  • Hazleen Aris
  • Aqilah Azizan
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1073)

Abstract

Due to the open nature of crowdsourcing, which accepts contributions from anyone in the crowd, these contributions must be evaluated to ensure their reliability. A number of evaluation methods are used in existing crowdsourcing applications for this purpose. This study aims to identify and document these methods. To do so, 50 crowdsourcing applications obtained from an extensive literature and online search were reviewed. Analysis of the applications found that three different methods are in use, depending on the type of crowdsourcing application, whether simple, complex or creative: expert judgement, rating and feedback. While expert judgement is mostly used in complex and creative crowdsourcing initiatives, rating is widely used in simple ones. This paper is, to the best of our knowledge, the only reference to date that documents the current state of evaluation methods in existing crowdsourcing applications. It is useful in determining the way forward for research in the area, such as designing a new evaluation method, and it justifies the need for an automated evaluation method for crowdsourced contributions.
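As an illustration of the rating-based evaluation the review describes, the sketch below aggregates crowd ratings on a single contribution into a Beta reputation score, the reputation system named in the acknowledgement. This is a minimal sketch under stated assumptions, not the authors' method: the function name and the mapping of 4-5 stars to a positive outcome are illustrative choices.

```python
# Illustrative sketch (not the authors' method): aggregating crowd ratings
# on a contribution into a Beta reputation score. Ratings of 4-5 stars are
# treated as positive evidence, 1-3 stars as negative -- an assumed mapping.

def beta_reputation(ratings, threshold=4):
    """Return the expected reliability E = (r + 1) / (r + s + 2),
    where r counts positive ratings and s counts negative ones."""
    r = sum(1 for stars in ratings if stars >= threshold)  # positive evidence
    s = len(ratings) - r                                   # negative evidence
    return (r + 1) / (r + s + 2)                           # mean of Beta(r+1, s+1)

if __name__ == "__main__":
    # A contribution rated by five crowd members on a 5-star scale.
    print(f"reputation score: {beta_reputation([5, 4, 4, 2, 5]):.2f}")  # ~0.71
```

With no ratings the score defaults to 0.5 (complete uncertainty), and it moves toward 1 or 0 as positive or negative evidence accumulates, which is why such scores can stand in for manual evaluation in simple crowdsourcing tasks.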

Keywords

Evaluation method · Crowdsourcing evaluation · Survey · Systematic review · Grounded theory


Acknowledgement

Information presented in this paper forms part of the research work funded by Universiti Tenaga Nasional entitled "Formulation of a Trust Building Mechanism for Trustworthy Non-profit Mobile Crowdsourcing Initiatives Using Beta Reputation System" (J510050668).


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Universiti Tenaga Nasional, Kajang, Malaysia
