Utilizing crowdsourcing and machine learning in education: Literature review


Abstract

Learning remains a vital and evolving field: it is a key measure of a society's development and has an enormous effect on both individuals and communities. Improving existing learning activities can therefore have a significant impact on literacy rates worldwide. Among these activities, assessment is crucial, since it is the primary means of evaluating students during their studies. The main purpose of this review is to examine existing learning and e-learning approaches that use crowdsourcing, machine learning, or both in their proposed solutions. The review also surveys the applications these approaches address in order to identify existing research on assessment; mapping all existing applications helps reveal unexplored gaps and limitations. We present a systematic literature review of 30 papers drawn from the IEEE and ACM Digital Library databases. The analysis shows that crowdsourcing is used in 47.8% of the investigated learning activities, while machine learning and hybrid solutions are each used in 26% of them. We also identify all existing approaches to the exam-assessment problem that use machine learning or crowdsourcing: some assessment systems rely on crowdsourcing and others on machine learning, but none offers a hybrid assessment system that combines the two. Finally, we find that using either crowdsourcing or machine learning in online courses enhances interaction between students. We conclude that current learning activities need improvement, as they directly affect student performance, and that combining machine learning with the wisdom of the crowd can increase the accuracy and efficiency of education.
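The review identifies the absence of a hybrid assessment system as a gap but does not specify how one would work. Purely as an illustrative sketch, not drawn from any of the surveyed systems, the Python function below blends a model-predicted grade with crowdsourced peer grades, shifting weight toward the model when the peers disagree; the function name, the 0-100 grade scale, and the weighting scheme are all assumptions made for illustration.

```python
from statistics import mean, pstdev

def hybrid_score(ml_score: float, crowd_scores: list[float],
                 base_ml_weight: float = 0.4) -> float:
    """Blend a machine-predicted grade with crowdsourced peer grades.

    All scores are assumed to share a 0-100 scale (an assumption for
    this sketch). When the peers disagree (high spread), weight shifts
    toward the model; when they agree, the crowd mean dominates.
    """
    if not crowd_scores:
        # No peer grades yet: fall back to the model's prediction.
        return ml_score

    crowd_mean = mean(crowd_scores)
    # Disagreement in [0, 1]: population std-dev capped at half the scale.
    disagreement = min(pstdev(crowd_scores) / 50.0, 1.0)
    # Shift up to half of the remaining weight toward the model
    # as peer disagreement grows.
    ml_weight = base_ml_weight + (1.0 - base_ml_weight) * disagreement * 0.5
    return ml_weight * ml_score + (1.0 - ml_weight) * crowd_mean


# Example: the model predicts 72; five peers grade 70, 75, 68, 74, 71.
print(round(hybrid_score(72.0, [70, 75, 68, 74, 71]), 1))
```

With these example inputs the peers largely agree, so the blended grade lands near the peer mean; a real system would also need to handle peer-reviewer reliability and calibration, which this sketch omits.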



Author information

Corresponding author

Correspondence to Hadeel S. Alenezi.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Alenezi, H.S., Faisal, M.H. Utilizing crowdsourcing and machine learning in education: Literature review. Educ Inf Technol 25, 2971–2986 (2020). https://doi.org/10.1007/s10639-020-10102-w
