A Run-Time Detector of Hardworking E-Learners with Underperformance

  • Diego García-Saiz
  • Marta Zorrilla
  • Alfonso de la Vega
  • Pablo Sánchez
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 804)

Abstract

Due to the lack of face-to-face interaction between teachers and students in virtual courses, identifying at-risk learners among those who appear to show normal activity is a challenge. In particular, we refer to those who are very active in the Learning Management System but whose performance is low in comparison with their peers. To address this issue, we describe a method aimed at discovering learners whose performance is inconsistent with their activity, using an ensemble of classifiers. Its effectiveness is shown by applying it to data from virtual courses and by comparing its results with those achieved by two well-known outlier detection techniques.
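The abstract summarizes the approach without implementation detail. As a minimal sketch of the idea, assuming scikit-learn and a per-student activity-feature matrix, the fragment below trains an ensemble of classifiers on activity data and flags students who failed even though a majority of the classifiers, judging from activity alone, predicted a pass. The classifier choices, the pass/fail encoding, and the function name are illustrative assumptions, not the authors' actual configuration.

```python
# Illustrative sketch only: flag "hardworking underperformers", i.e.
# students whose LMS activity resembles that of passing students but
# whose recorded outcome is a fail. Not the paper's exact method.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

def detect_inconsistent_learners(X, y, min_votes=2):
    """X: per-student activity features (e.g. logins, forum posts,
    resources viewed); y: outcomes (1 = pass, 0 = fail).
    Returns indices of students who failed although at least
    `min_votes` classifiers predicted a pass from their activity."""
    y = np.asarray(y)
    classifiers = [
        DecisionTreeClassifier(random_state=0),
        GaussianNB(),
        RandomForestClassifier(n_estimators=100, random_state=0),
    ]
    # Out-of-fold predictions: each student is scored by models that
    # never saw their own record during training.
    votes = np.column_stack(
        [cross_val_predict(clf, X, y, cv=5) for clf in classifiers]
    )
    predicted_pass = votes.sum(axis=1) >= min_votes  # majority vote
    actually_failed = y == 0
    return np.where(predicted_pass & actually_failed)[0]
```

Students returned by this sketch are, in effect, class outliers: their labels disagree with the consensus of models trained on their peers, which is the kind of activity/performance inconsistency the paper's ensemble-based detector targets.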

Keywords

At-risk students · Warning system · Educational data mining

Acknowledgements

This work has been partially funded by the Spanish Government under grant TIN2014-56158-C4-2-P (M2C2).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Diego García-Saiz (1)
  • Marta Zorrilla (1)
  • Alfonso de la Vega (1)
  • Pablo Sánchez (1)

  1. University of Cantabria, Santander, Spain
