Journal of Classification

Volume 29, Issue 2, pp. 227–258

Accurate Tree-based Missing Data Imputation and Data Fusion within the Statistical Learning Paradigm

  • Antonio D’Ambrosio
  • Massimo Aria
  • Roberta Siciliano

Abstract

The framework of this paper is statistical data editing, specifically how to edit or impute missing or contradictory data and how to merge two independent data sets that each present some lack of information. Assuming a missing-at-random mechanism, this paper provides an accurate tree-based methodology for both missing data imputation and data fusion that is justified within the Statistical Learning Theory of Vapnik. It combines an incremental variable imputation method, which improves computational efficiency, with boosted trees, which gain in prediction accuracy with respect to other methods. As a result, the best approximation of the structural risk (also known as the irreducible error) is reached, thus reducing the generalization (or prediction) error of imputation to a minimum. Moreover, the method is distribution free: it holds independently of the underlying probability law generating the missing data values. Performance is assessed on simulated case studies and real-world applications.
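The incremental, boosted-tree imputation strategy described above can be sketched roughly as follows. This is an illustrative sketch only, not the authors' FAST-based implementation: it uses scikit-learn's GradientBoostingRegressor as a stand-in for the paper's boosted trees, and it orders variables by their number of missing values (one plausible incremental ordering), imputing each column from the columns already completed.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy data: three correlated variables with values missing at random (MAR-style toy setup).
n = 300
X = rng.normal(size=(n, 3))
X[:, 1] += 0.8 * X[:, 0]
X[:, 2] += 0.5 * X[:, 0] + 0.5 * X[:, 1]
mask = rng.random(X.shape) < 0.15
mask[:, 0] = False              # keep one fully observed column
X_miss = X.copy()
X_miss[mask] = np.nan

def incremental_tree_impute(X_miss):
    """Impute columns one at a time, fewest missing values first,
    using a boosted-tree regressor trained on the already-complete columns."""
    X_imp = X_miss.copy()
    order = np.argsort(np.isnan(X_miss).sum(axis=0))  # incremental ordering
    done = []                                         # columns already complete
    for j in order:
        miss = np.isnan(X_imp[:, j])
        if miss.any():
            if not done:
                # No complete predictors yet: fall back to the column mean.
                X_imp[miss, j] = np.nanmean(X_imp[:, j])
            else:
                model = GradientBoostingRegressor(
                    n_estimators=100, max_depth=3, random_state=0)
                model.fit(X_imp[np.ix_(~miss, done)], X_imp[~miss, j])
                X_imp[miss, j] = model.predict(X_imp[np.ix_(miss, done)])
        done.append(j)                                # column j is now complete
    return X_imp

X_imp = incremental_tree_impute(X_miss)
print(np.isnan(X_imp).sum())    # 0: every column is complete after its pass
```

Because each imputed column is completed before it is used as a predictor, the data matrix is guaranteed to be fully filled in after a single pass, which is the computational appeal of the incremental scheme.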

Keywords

Data editing, Tree-based methods, Boosting algorithm, FAST algorithm, Incremental imputation, Generalization error


References

  1. ALUJA-BANET, T., MORINEAU, A., and RIUS, R. (1997), “La Greffe de Fichiers et Ses Conditions D’application. Méthode et Exemple”, in Enquêtes et Sondages, eds. G. Brossier and A.M. Dussaix, Paris: Dunod, pp. 94–102.
  2. ALUJA-BANET, T., RIUS, R., NONELL, R., and MARTÍNEZ-ABARCA, M.J. (1998), “Data Fusion and File Grafting”, in Analyses Multidimensionelles des Données (1st ed.), NGUS 97, eds. A. Morineau and K. Fernández Aguirre, Paris: CISIA-CERESTA, pp. 7–14.
  3. ALUJA-BANET, T., DAUNIS-I-ESTADELLA, J., and PELLICER, D. (2007), “GRAFT, a Complete System for Data Fusion”, Computational Statistics and Data Analysis, 52, 635–649.
  4. BARCENA, M.J., and TUSELL, F. (1999), “Enlace de Encuestas: Una Propuesta Metodológica y Aplicación a la Encuesta de Presupuestos de Tiempo”, Qüestiió, 23(2), 297–320.
  5. BARCENA, M.J., and TUSELL, F. (2000), “Tree-based Algorithms for Missing Data Imputation”, in Proceedings in Computational Statistics, COMPSTAT 2000, eds. J.G. Bethlehem and P.G.M. van der Heijden, Heidelberg: Physica-Verlag, pp. 193–198.
  6. BREIMAN, L. (1996), “Bagging Predictors”, Machine Learning, 26, 46–59.
  7. BREIMAN, L. (1998), “Arcing Classifiers”, The Annals of Statistics, 26(3), 801–849.
  8. BREIMAN, L., FRIEDMAN, J.H., OLSHEN, R.A., and STONE, C.J. (1984), Classification and Regression Trees, Belmont CA: Wadsworth International Group.
  9. CAPPELLI, C., MOLA, F., and SICILIANO, R. (2002), “A Statistical Approach to Growing a Reliable Honest Tree”, Computational Statistics and Data Analysis, 38, 285–299.
  10. CHU, C.K., and CHENG, P.E. (1995), “Nonparametric Regression Estimation with Missing Data”, Journal of Statistical Planning and Inference, 48, 85–99.
  11. CONTI, P.L., MARELLA, D., and SCANU, M. (2008), “Evaluation of Matching Noise for Imputation Techniques Based on Nonparametric Local Linear Regression Estimators”, Computational Statistics and Data Analysis, 43, 354–365.
  12. CONVERSANO, C., and SICILIANO, R. (2008), “Statistical Data Editing”, in Data Warehousing and Mining: Concepts, Methodologies, Tools, and Applications (Vol. 4), ed. J. Wang, Hershey PA: Information Science Reference, pp. 1835–1840.
  13. CONVERSANO, C., and SICILIANO, R. (2009), “Incremental Tree-Based Missing Data Imputation with Lexicographic Ordering”, Journal of Classification, 26(3), 361–379.
  14. D’AMBROSIO, A., ARIA, M., and SICILIANO, R. (2007), “Robust Tree-based Incremental Imputation Method for Data Fusion”, in LNCS 4273: Advances in Intelligent Data Analysis, Berlin/Heidelberg: Springer-Verlag, pp. 174–183.
  15. DAVID, M.H., LITTLE, R.J.A., SAMUEL, M.E., and TRIEST, R.K. (1986), “Alternative Methods for CPS Income Imputation”, Journal of the American Statistical Association, 81, 29–41.
  16. DE WAAL, T., PANNEKOEK, J., and SCHOLTUS, S. (2011), Handbook of Statistical Data Editing and Imputation, New York: Wiley.
  17. DEMPSTER, A.P., LAIRD, N.M., and RUBIN, D.B. (1977), “Maximum Likelihood Estimation from Incomplete Data via the EM Algorithm (with Discussion)”, Journal of the Royal Statistical Society, Series B, 39, 1–38.
  18. DIETTERICH, T.G. (2000), “Ensemble Methods in Machine Learning”, in First International Workshop on Multiple Classifier Systems, eds. J. Kittler and F. Roli, Springer-Verlag, pp. 1–15.
  19. D’ORAZIO, M., DI ZIO, M., and SCANU, M. (2006), Statistical Matching: Theory and Practice, Chichester: John Wiley & Sons.
  20. EIBL, G., and PFEIFFER, K.P. (2002), “How to Make AdaBoost.M1 Work for Weak Base Classifiers by Changing Only One Line of the Code”, in Machine Learning: ECML 2002, Lecture Notes in Artificial Intelligence, Heidelberg: Springer.
  21. FELLEGI, I.P., and HOLT, D. (1976), “A Systematic Approach to Automatic Edit and Imputation”, Journal of the American Statistical Association, 71, 17–35.
  22. FORD, B.N. (1983), “An Overview of Hot Deck Procedures”, in Incomplete Data in Sample Surveys, Vol. II: Theory and Annotated Bibliography, eds. G. Madow, I. Olkin, and D.B. Rubin, New York: Academic Press.
  23. FREUND, Y., and SCHAPIRE, R.E. (1997), “A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting”, Journal of Computer and System Sciences, 55(1), 119–139.
  24. GEY, S., and POGGI, J.M. (2006), “Boosting and Instability for Regression Trees”, Computational Statistics and Data Analysis, 50, 533–550.
  25. HASTIE, T.J., TIBSHIRANI, R.J., and FRIEDMAN, J.H. (2009), The Elements of Statistical Learning (2nd ed.), New York: Springer Verlag.
  26. IBRAHIM, J.G. (1990), “Incomplete Data in Generalized Linear Models”, Journal of the American Statistical Association, 85, 765–769.
  27. IBRAHIM, J.G., LIPSITZ, S.R., and CHEN, M.H. (1999), “Missing Covariates in Generalized Linear Models When the Missing Data Mechanism Is Non-Ignorable”, Journal of the Royal Statistical Society, Series B, 61(1), 173–190.
  28. KOHAVI, R., and WOLPERT, D. (1996), “Bias Plus Variance for Zero-One Loss Functions”, in Proceedings of the 13th International Machine Learning Conference, San Mateo CA: Morgan Kaufmann, pp. 275–283.
  29. KONG, E., and DIETTERICH, T.G. (1995), “Error-Correcting Output Coding Corrects Bias and Variance”, in The XII International Conference on Machine Learning, San Francisco CA: Morgan Kaufmann, pp. 313–321.
  30. LAKSHMINARAYAN, K., HARP, S.A., GOLDMAN, R., and SAMAD, T. (1996), “Imputation of Missing Data Using Machine Learning Techniques”, in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, eds. Simoudis, Han, and Fayyad, Menlo Park CA: AAAI Press, pp. 140–145.
  31. LITTLE, R.J.A. (1992), “Regression with Missing X’s: A Review”, Journal of the American Statistical Association, 87(420), 1227–1237.
  32. LITTLE, R.J.A., and RUBIN, D.B. (1987), Statistical Analysis with Missing Data, New York: John Wiley and Sons.
  33. McKNIGHT, P.E., McKNIGHT, K.M., SIDANI, S., and FIGUEREDO, A.J. (2007), Missing Data: A Gentle Introduction, New York: The Guilford Press.
  34. MARELLA, D., SCANU, M., and CONTI, P.L. (2008), “On the Matching Noise of Some Nonparametric Imputation Procedures”, Statistics & Probability Letters, 78(12), 1593–1600.
  35. MOLA, F., and SICILIANO, R. (1992), “A Two-Stage Predictive Splitting Algorithm in Binary Segmentation”, in Computational Statistics: COMPSTAT 92, 1, eds. Y. Dodge and J. Whittaker, Heidelberg: Physica Verlag, pp. 179–184.
  36. MOLA, F., and SICILIANO, R. (1997), “A Fast Splitting Procedure for Classification and Regression Trees”, Statistics and Computing, 7, 208–216.
  37. OUDSHOORN, C.G.M., VAN BUUREN, S., and VAN RIJCKEVORSEL, J.L.A. (1999), “Flexible Multiple Imputation by Chained Equations of the AVO-95 Survey”, TNO Preventie en Gezondheid, TNO/PG 99.045.
  38. PAAS, G. (1985), “Statistical Record Linkage Methodology, State of the Art and Future Prospects”, Bulletin of the International Statistical Institute, Proceedings of the 45th Session, LI, Book 2.
  39. PETRAKOS, G., CONVERSANO, C., FARMAKIS, G., MOLA, F., SICILIANO, R., and STAVROPOULOS, P. (2004), “New Ways to Specify Data Edits”, Journal of the Royal Statistical Society, Series A, 167(2), 249–274.
  40. RASSLER, S. (2002), Statistical Matching: A Frequentist Theory, Practical Applications and Alternative Bayesian Approaches, New York: Springer-Verlag.
  41. RASSLER, S. (2004), “Data Fusion: Identification Problems, Validity, and Multiple Imputation”, Austrian Journal of Statistics, 33(1–2), 153–171.
  42. RUBIN, D.B. (1976), “Inference and Missing Data (with Discussion)”, Biometrika, 63, 581–592.
  43. RUBIN, D.B. (1987), Multiple Imputation for Nonresponse in Surveys, New York: Wiley.
  44. SANDE, I.G. (1983), “Hot Deck Imputation Procedures”, in Incomplete Data in Sample Surveys, Vol. III: Symposium on Incomplete Data: Proceedings, New York: Academic Press.
  45. SAPORTA, G. (2002), “Data Fusion and Data Grafting”, Computational Statistics and Data Analysis, 38, 465–473.
  46. SCHAPIRE, R.E., FREUND, Y., BARTLETT, P., and LEE, W.S. (1998), “Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods”, The Annals of Statistics, 26(5), 1651–1686.
  47. SICILIANO, R., and CONVERSANO, C. (2002), “Tree-Based Classifiers for Conditional Missing Data Incremental Imputation”, in Proceedings of the International Conference on Data Clean (Jyväskylä, May 29–31, 2002), University of Jyväskylä, Finland.
  48. SICILIANO, R., and CONVERSANO, C. (2008), “Decision Tree Induction”, in Data Warehousing and Mining: Concepts, Methodologies, Tools, and Applications (Vol. 2), ed. J. Wang, Hershey PA: Information Science Reference, pp. 624–629.
  49. SICILIANO, R., and MOLA, F. (1996), “A Fast Regression Tree Procedure”, in Statistical Modelling, Proceedings of the 11th International Workshop on Statistical Modelling, eds. A. Forcina, G.M. Marchetti, R. Hatzinger, and G. Galmacci, Orvieto, July 15–19, Graphos, Città di Castello, pp. 332–340.
  50. TIBSHIRANI, R. (1996), “Bias, Variance and Prediction Error for Classification Rules”, Technical Report, University of Toronto, Department of Statistics.
  51. VAPNIK, V.N. (1995), The Nature of Statistical Learning Theory, New York: Springer Verlag.
  52. VAPNIK, V.N. (1998), Statistical Learning Theory, New York: Wiley.
  53. VAPNIK, V.N., and CHERVONENKIS, A.J. (1989), “The Necessary and Sufficient Conditions for Consistency of the Method of Empirical Risk Minimization”, Pattern Recognition and Image Analysis, 284–305.
  54. VAN BUUREN, S., BRAND, J.P.L., GROOTHUIS-OUDSHOORN, C.G.M., and RUBIN, D.B. (2006), “Fully Conditional Specification in Multivariate Imputation”, Journal of Statistical Computation and Simulation, 76(12), 1049–1064.
  55. WINKLER, W.E. (1999), “State of Statistical Data Editing and Current Research Problems”, Working Paper No. 29, UN/ECE Work Session on Statistical Data Editing, Rome, June 2–4, 1999.

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Antonio D’Ambrosio (1)
  • Massimo Aria (1)
  • Roberta Siciliano (1)

  1. Department of Mathematics and Statistics, University of Naples Federico II, Naples, Italy
