An Academic Achievement Prediction Model Enhanced by Stacking Network

  • Shaofeng Zhang
  • Meng Liu
  • Jingtao Zhang
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1181)

Abstract

This article focuses on the use of data mining and machine learning in AI education to improve the prediction accuracy of students' academic achievement. Many well-established gradient boosting machines already exist for prediction on small data sets, such as LightGBM and XGBoost. Building on these, we present and evaluate a new method for regression prediction. Our Stacking Network combines traditional ensemble models with the idea of a deep neural network. Compared with the original Stacking method, the Stacking Network can increase the number of layers without limit, so its performance substantially exceeds that of traditional Stacking. At the same time, compared with a deep neural network, the Stacking Network inherits the advantages of boosting machines. We applied this approach to achieve higher accuracy and better speed than a conventional deep neural network, and we achieved the highest rank on the Middle School Grade Dataset provided by Shanghai Telecom Corporation.
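To make the layered stacking idea concrete, the sketch below is a minimal, hypothetical Python illustration, not the authors' implementation: each layer trains several gradient boosting regressors, concatenates their out-of-fold predictions with the original features, and passes the result to the next layer, with a simple meta-regressor on top. Class and function names (e.g. StackingNetworkSketch) are assumptions introduced here, and scikit-learn's GradientBoostingRegressor stands in for LightGBM/XGBoost base learners.

```python
# Minimal sketch of a multi-layer "stacking network" for regression (assumed
# structure; the paper's exact architecture and hyperparameters may differ).
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold


def fit_stacking_layer(models, X, y, n_splits=5, seed=0):
    """Fit one layer; return per-fold fitted models and out-of-fold predictions."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    oof = np.zeros((X.shape[0], len(models)))
    fitted = [[] for _ in models]
    for train_idx, valid_idx in kf.split(X):
        for j, model in enumerate(models):
            m = clone(model).fit(X[train_idx], y[train_idx])
            oof[valid_idx, j] = m.predict(X[valid_idx])
            fitted[j].append(m)
    return fitted, oof


def predict_stacking_layer(fitted, X):
    """Average each base learner's fold models to get layer outputs for new data."""
    return np.column_stack([
        np.mean([m.predict(X) for m in fold_models], axis=0)
        for fold_models in fitted
    ])


class StackingNetworkSketch:
    """Several stacking layers of boosting machines, topped by a linear meta-regressor."""

    def __init__(self, n_layers=2, n_base=3):
        self.n_layers = n_layers
        self.n_base = n_base

    def fit(self, X, y):
        # X and y are assumed to be NumPy arrays.
        self.layers_ = []
        features = X
        for layer in range(self.n_layers):
            base = [GradientBoostingRegressor(random_state=layer * 10 + k)
                    for k in range(self.n_base)]
            fitted, oof = fit_stacking_layer(base, features, y, seed=layer)
            self.layers_.append(fitted)
            # Feed the original features plus this layer's predictions to the next layer.
            features = np.hstack([X, oof])
        self.meta_ = Ridge().fit(features, y)
        return self

    def predict(self, X):
        features = X
        for fitted in self.layers_:
            features = np.hstack([X, predict_stacking_layer(fitted, features)])
        return self.meta_.predict(features)
```

One design choice worth noting in this sketch: every layer is built from out-of-fold predictions, so later layers never see predictions that were fitted on their own targets, which is what allows the stack to be deepened without the upper layers simply memorizing the training labels.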

Keywords

Machine learning · Data mining · Stacking network

References

  1. Ke, G., Meng, Q., Finley, T., et al.: LightGBM: a highly efficient gradient boosting decision tree. In: Advances in Neural Information Processing Systems, pp. 3146–3154 (2017)
  2. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794. ACM (2016)
  3. Lemley, M.A., Shapiro, C.: Patent holdup and royalty stacking. Tex. L. Rev. 85, 1991 (2007)
  4. Breiman, L.: Bagging predictors. Mach. Learn. 24(2), 123–140 (1996)
  5. Fauconnier, G., Turner, M.: The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. Basic Books, New York (2008)
  6. Rowley, H.A., Baluja, S., Kanade, T.: Neural network-based face detection. IEEE Trans. Pattern Anal. Mach. Intell. 20(1), 23–38 (1998)
  7. Specht, D.F.: A general regression neural network. IEEE Trans. Neural Netw. 2(6), 568–576 (1991)
  8. Krogh, A., Vedelsby, J.: Neural network ensembles, cross validation, and active learning. In: Advances in Neural Information Processing Systems, pp. 231–238 (1995)
  9. Li, J., Chang, H., Yang, J.: Sparse deep stacking network for image classification. In: Twenty-Ninth AAAI Conference on Artificial Intelligence (2015)
  10. Prokhorenkova, L., Gusev, G., Vorobev, A., et al.: CatBoost: unbiased boosting with categorical features. In: Advances in Neural Information Processing Systems, pp. 6638–6648 (2018)
  11. Odom, M.D., Sharda, R.: A neural network model for bankruptcy prediction. In: 1990 IJCNN International Joint Conference on Neural Networks, pp. 163–168. IEEE (1990)
  12. Rose, S.: Mortality risk score prediction in an elderly population using machine learning. Am. J. Epidemiol. 177(5), 443–452 (2013)
  13. Grady, J., Oakley, T., Coulson, S.: Blending and metaphor. Amst. Stud. Theory Hist. Linguist. Sci. Ser. 4, 101–124 (1999)
  14. Freund, Y., Iyer, R., Schapire, R.E., et al.: An efficient boosting algorithm for combining preferences. J. Mach. Learn. Res. 4(Nov), 933–969 (2003)
  15. Schapire, R.E.: A brief introduction to boosting. In: IJCAI, vol. 99, pp. 1401–1406 (1999)
  16. Solomatine, D.P., Shrestha, D.L.: AdaBoost.RT: a boosting algorithm for regression problems. In: 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541), vol. 2, pp. 1163–1168. IEEE (2004)
  17. Kudo, T., Matsumoto, Y.: A boosting algorithm for classification of semi-structured text. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 301–308 (2004)
  18. Yosinski, J., Clune, J., Bengio, Y., et al.: How transferable are features in deep neural networks? In: Advances in Neural Information Processing Systems, pp. 3320–3328 (2014)
  19. Esteva, A., Kuprel, B., Novoa, R.A., et al.: Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639), 115 (2017)
  20. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256 (2010)
  21. Hecht-Nielsen, R.: Theory of the backpropagation neural network. In: Neural Networks for Perception, pp. 65–93. Academic Press (1992)
  22. Maas, A.L., Hannun, A.Y., Ng, A.Y.: Rectifier nonlinearities improve neural network acoustic models. In: Proceedings of ICML, vol. 30, no. 1, p. 3 (2013)
  23. Psaltis, D., Sideris, A., Yamamura, A.A.: A multilayered neural network controller. IEEE Control Syst. Mag. 8(2), 17–21 (1988)
  24. Kalchbrenner, N., Grefenstette, E., Blunsom, P.: A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188 (2014)
  25. Saposnik, G., Cote, R., Mamdani, M., et al.: JURaSSiC: accuracy of clinician vs risk score prediction of ischemic stroke outcomes. Neurology 81(5), 448–455 (2013)
  26. Holland, P.W., Hoskens, M.: Classical test theory as a first-order item response theory: application to true-score prediction from a possibly nonparallel test. Psychometrika 68(1), 123–149 (2003)
  27. Liu, Y., An, A., Huang, X.: Boosting prediction accuracy on imbalanced datasets with SVM ensembles. In: Ng, W.-K., Kitsuregawa, M., Li, J., Chang, K. (eds.) PAKDD 2006. LNCS (LNAI), vol. 3918, pp. 107–118. Springer, Heidelberg (2006). https://doi.org/10.1007/11731139_15
  28. Chawla, N.V., Lazarevic, A., Hall, L.O., Bowyer, K.W.: SMOTEBoost: improving prediction of the minority class in boosting. In: Lavrač, N., Gamberger, D., Todorovski, L., Blockeel, H. (eds.) PKDD 2003. LNCS (LNAI), vol. 2838, pp. 107–119. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-39804-2_12
  29. Bühlmann, P., Hothorn, T.: Boosting algorithms: regularization, prediction and model fitting. Stat. Sci. 22(4), 477–505 (2007)
  30. Bagnell, J.A., Chestnutt, J., Bradley, D.M., et al.: Boosting structured prediction for imitation learning. In: Advances in Neural Information Processing Systems, pp. 1153–1160 (2007)
  31. Du, X., Sun, S., Hu, C., et al.: DeepPPI: boosting prediction of protein-protein interactions with deep neural networks. J. Chem. Inf. Model. 57(6), 1499–1510 (2017)
  32. Lu, N., Lin, H., Lu, J., et al.: A customer churn prediction model in telecom industry using boosting. IEEE Trans. Industr. Inf. 10(2), 1659–1665 (2012)
  33. Bühlmann, P., Hothorn, T.: Twin boosting: improved feature selection and prediction. Stat. Comput. 20(2), 119–138 (2010)
  34. Friedman, J.H.: Stochastic gradient boosting. Comput. Stat. Data Anal. 38(4), 367–378 (2002)

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. University of Electronic Science and Technology of China, Chengdu, China