Missing Features Reconstruction and Its Impact on Classification Accuracy

  • Magda Friedjungová
  • Marcel Jiřina
  • Daniel Vašata
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11538)

Abstract

In real-world applications, we can encounter situations in which a well-trained model has to make predictions on a damaged dataset. The damage caused by missing or corrupted values can occur either at the level of individual instances or at the level of entire features. Both situations negatively impact the usability of the model on such a dataset. This paper focuses on the scenario where entire features are missing, which can be understood as a specific case of transfer learning. Our aim is to experimentally investigate the influence of various imputation methods on the performance of several classification models. The impact of imputation is studied for traditional methods such as k-NN, linear regression, and MICE, compared with modern imputation methods such as the multi-layer perceptron (MLP) and gradient boosted trees (XGBT). For linear regression, MLP, and XGBT we also propose two approaches to using them for the imputation of multiple features. The experiments were performed on both real-world and artificial datasets with continuous features, with the number of missing features varying from a single feature up to \(50\%\) of all features. The results show that MICE and linear regression are generally good imputers regardless of the conditions. In contrast, the performance of MLP and XGBT is strongly dataset dependent: they perform best in some cases, but more often they perform worse than MICE or linear regression.
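The regression-based imputation setting described above can be sketched as follows. This is a minimal illustration, not the paper's experimental pipeline: it assumes a toy dataset with one synthetically correlated feature and uses scikit-learn's `LinearRegression` as the imputer that reconstructs an entirely missing feature from the observed ones.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy complete training data: feature 2 is a noisy linear
# combination of features 0 and 1 (assumed relationship).
X_train = rng.normal(size=(200, 3))
X_train[:, 2] = 0.7 * X_train[:, 0] - 0.4 * X_train[:, 1] \
    + 0.1 * rng.normal(size=200)

# Damaged dataset at prediction time: feature 2 is missing entirely.
X_test = rng.normal(size=(50, 3))
X_test[:, 2] = 0.7 * X_test[:, 0] - 0.4 * X_test[:, 1] \
    + 0.1 * rng.normal(size=50)
observed = X_test[:, :2]       # what the damaged dataset provides
true_missing = X_test[:, 2]    # held out here only to evaluate the imputer

# Train the imputer on the complete data: predict the missing
# feature from the observed ones, then reconstruct the dataset
# so the pre-trained classifier can be applied unchanged.
imputer = LinearRegression().fit(X_train[:, :2], X_train[:, 2])
imputed = imputer.predict(observed)
X_reconstructed = np.column_stack([observed, imputed])

rmse = float(np.sqrt(np.mean((imputed - true_missing) ** 2)))
print(f"imputation RMSE: {rmse:.3f}")
```

The same scheme generalizes to the other single-output imputers the paper compares (k-NN, MLP, XGBT) by swapping the regressor; imputing several missing features at once requires one of the multi-feature strategies the paper proposes.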

Keywords

Missing features · Imputation methods · Feature reconstruction · Transfer learning

Notes

Acknowledgements

This research was supported by SGS grant No. SGS17/210/OHK3/3T/18 and by GACR grant No. GA18-18080S.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Faculty of Information Technology, Czech Technical University in Prague, Prague, Czech Republic
