
Computing

pp 1–15

Improved randomized learning algorithms for imbalanced and noisy educational data classification

  • Ming Li
  • Changqin Huang
  • Dianhui Wang
  • Qintai Hu
  • Jia Zhu
  • Yong Tang
Article

Abstract

Although neural networks have demonstrated strong potential for constructing learners with good predictive performance, several uncertainty issues can greatly reduce the effectiveness of supervised learning algorithms, notably class imbalance and labeling errors (class noise). Technically, an imbalanced data source makes it harder for a learning algorithm to distinguish between classes, while data with labeling errors can lead to an unreasonable problem formulation built on incorrect hypotheses. Both problems are pervasive in educational data analytics. This study develops improved randomized learning algorithms by investigating a novel type of cost function that captures the combined effects of class imbalance and class noise. Instead of treating these uncertainty issues in isolation, we present a convex combination of robust and imbalance-aware modelling objectives, yielding a generalized formulation of weighted least squares problems from which the improved randomized learner models can be built. Our experimental study on several educational data classification tasks verifies the advantages of the proposed algorithms over existing methods that either take no account of class imbalance and labeling errors or consider only one of the two aspects.
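To make the abstract's formulation concrete, the sketch below shows one way a convex combination of robustness and imbalance weights could enter a weighted least-squares solve for the output weights of an RVFL-style randomized learner. This is a minimal illustration, not the paper's actual algorithm: the function name rvfl_weighted_fit, the mixing parameter lam, and the residual-based Gaussian reweighting are assumptions chosen for the sketch.

```python
import numpy as np

def rvfl_weighted_fit(X, y, n_hidden=100, lam=0.5, ridge=1e-3, seed=0):
    """Hypothetical sketch: an RVFL-style randomized classifier whose output
    weights solve a weighted least-squares problem. Per-sample weights are a
    convex combination (lam in [0, 1]) of
      (a) residual-based robustness weights that downweight likely label noise,
      (b) inverse-class-frequency weights that compensate for class imbalance.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    classes, counts = np.unique(y, return_counts=True)
    T = (y[:, None] == classes[None, :]).astype(float)  # one-hot targets

    # Random hidden layer: input weights and biases drawn once, never trained.
    W_in = rng.uniform(-1.0, 1.0, size=(d, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W_in + b)

    # (b) imbalance weights: rarer classes receive larger weight.
    w_imb = (n / (len(classes) * counts))[np.searchsorted(classes, y)]

    # Initial unweighted ridge fit, used only to obtain per-sample residuals.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    resid = np.linalg.norm(H @ beta - T, axis=1)

    # (a) robustness weights: large residuals (suspected label noise) are
    # downweighted; a Gaussian kernel of the residual is one simple choice.
    sigma = np.median(resid) + 1e-12
    w_rob = np.exp(-(resid / sigma) ** 2)

    # Convex combination of the two weighting objectives (scales normalized).
    w = lam * w_rob + (1.0 - lam) * w_imb / w_imb.max()

    # Weighted ridge least squares: beta = (H' diag(w) H + ridge I)^{-1} H' diag(w) T
    HW = H * w[:, None]
    beta = np.linalg.solve(H.T @ HW + ridge * np.eye(n_hidden), HW.T @ T)

    def predict(Xt):
        return classes[np.argmax(np.tanh(Xt @ W_in + b) @ beta, axis=1)]
    return predict
```

Setting lam = 1 recovers a purely robustness-weighted fit and lam = 0 a purely imbalance-weighted one; because only the output weights are retrained, the closed-form weighted solve keeps such reweighting cheap for randomized learners.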

Keywords

Imbalanced data classification · Randomized algorithms · Noisy data classification · Educational data analytics

Mathematics Subject Classification

68W20 · 68T01


Copyright information

© Springer-Verlag GmbH Austria, part of Springer Nature 2019

Authors and Affiliations

  1. School of Information Technology in Education, South China Normal University, Guangzhou, China
  2. Guangdong Engineering Research Center for Smart Learning, South China Normal University, Guangzhou, China
  3. Department of Computer Science and Information Technology, La Trobe University, Melbourne, Australia
  4. The State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang, China
