
Building robust models for small data containing nominal inputs and continuous outputs based on possibility distributions

  • Der-Chiang Li
  • Qi-Shi Shi
  • Hung-Yu Chen
Original Article

Abstract

Learning from small data is challenging for most algorithms when it comes to building statistically robust models. In previous studies, virtual sample generation (VSG) approaches have proven effective in meeting this challenge; however, most VSG methods were developed for numerical inputs. To address situations where data have nominal inputs and continuous outputs, we propose a systematic VSG procedure that generates samples based on fuzzy techniques to further enhance modelling capability. Building on the data-preprocessing concept of the M5′ model tree, we present a procedure for extracting the fuzzy relations between nominal inputs and continuous outputs. Drawing on the idea of nonparametric operations, we then employ trend similarity to characterize these fuzzy relations, represent them as possibility distributions, and create sample candidates from those distributions. Finally, the candidates that pass an \(\alpha\)-cut filter are accepted as qualified virtual samples. In the experiments, we demonstrate the effectiveness of our approach through a comparison with two other VSG approaches on five public datasets and two prediction models, and we discuss the three parameters used in our method. How to determine the best-fitting parameter values remains a topic for future study.
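
For illustration, the generate-and-filter idea described above can be sketched in a few lines of Python. This is a minimal sketch under simplifying assumptions of our own: a triangular possibility distribution per nominal category (support from the observed min and max, peak at the median) and a uniform candidate sampler. The names (`triangular_possibility`, `generate_virtual_samples`) and default parameter values are hypothetical; the paper's actual distributions are derived from M5′-style preprocessing and trend similarity, which this sketch does not reproduce.

```python
import numpy as np

def triangular_possibility(y, low, mode, high):
    """Possibility degree of output y under a triangular distribution
    with support [low, high] and peak at mode."""
    if y < low or y > high:
        return 0.0
    if y <= mode:
        return 1.0 if mode == low else (y - low) / (mode - low)
    return 1.0 if high == mode else (high - y) / (high - mode)

def generate_virtual_samples(categories, outputs, alpha=0.3,
                             n_candidates=200, seed=0):
    """For each nominal input value, fit a triangular possibility
    distribution to its observed continuous outputs, draw candidate
    outputs uniformly over the support, and keep only candidates
    whose possibility degree passes the alpha-cut."""
    rng = np.random.default_rng(seed)
    virtual = []
    for cat in sorted(set(categories)):
        ys = np.array([y for c, y in zip(categories, outputs) if c == cat])
        low, mode, high = ys.min(), float(np.median(ys)), ys.max()
        candidates = rng.uniform(low, high, n_candidates)
        virtual += [(cat, float(y)) for y in candidates
                    if triangular_possibility(y, low, mode, high) >= alpha]
    return virtual

# Toy usage: one nominal input, continuous output.
cats = ["A", "A", "A", "B", "B", "B"]
ys = [1.2, 1.5, 1.9, 3.0, 3.4, 3.9]
print(len(generate_virtual_samples(cats, ys, alpha=0.5)))
```

Raising \(\alpha\) tightens the cut, keeping only candidates near the peak of each category's distribution; this mirrors the role of the \(\alpha\)-cut filter in the proposed procedure.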

Keywords

Small data · Virtual sample · Possibility distribution · Nominal input

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Industrial and Information Management, National Cheng Kung University, Tainan, Taiwan
  2. Institute of Information Management, National Cheng Kung University, Tainan, Taiwan
