
Pattern Analysis and Applications, Volume 20, Issue 2, pp 441–452

MoNGEL: monotonic nested generalized exemplar learning

  • Javier García
  • Habib M. Fardoun
  • Daniyal M. Alghazzawi
  • José-Ramón Cano
  • Salvador García
Theoretical Advances

Abstract

In supervised prediction problems, the response attribute depends on certain explanatory attributes. Some real-world problems additionally require the response attribute to represent ordinal values that must not decrease as some of the explanatory attributes increase; these are called classification problems with monotonicity constraints. In this paper, we formalize the approach of nested generalized exemplar learning under monotonicity constraints and propose the monotonic nested generalized exemplar learning (MoNGEL) method. It accomplishes learning by storing objects in \({\mathbb {R}}^n\), hybridizing instance-based learning and rule learning into a combined model. An experimental analysis is carried out over a wide range of monotonic data sets. The results, verified by non-parametric statistical tests, show that MoNGEL outperforms well-known techniques for monotonic classification, such as the ordinal learning model, the ordinal stochastic dominance learner and the k-nearest neighbor classifier, in terms of accuracy, mean absolute error and simplicity of the constructed models.
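To make the two ingredients of the abstract concrete, the following minimal Python sketch illustrates (a) classification by the nearest hyperrectangle, the generalized-exemplar idea, and (b) a monotonicity constraint, under which an input that dominates another input component-wise may never receive a smaller ordinal label. This is an illustrative sketch only, not the authors' MoNGEL implementation; the names Hyperrectangle, distance, dominates and predict_monotone are hypothetical.

# Toy nearest-hyperrectangle classifier with a monotonicity-aware prediction step.
# Illustrative only: not the MoNGEL algorithm itself.
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class Hyperrectangle:
    lower: Sequence[float]   # per-attribute lower bounds of the exemplar
    upper: Sequence[float]   # per-attribute upper bounds of the exemplar
    label: int               # ordinal class assigned to the exemplar

def distance(x: Sequence[float], h: Hyperrectangle) -> float:
    """Distance from a point to a hyperrectangle: 0 inside, otherwise the
    summed per-attribute gap to the nearest face."""
    d = 0.0
    for xi, lo, hi in zip(x, h.lower, h.upper):
        if xi < lo:
            d += lo - xi
        elif xi > hi:
            d += xi - hi
    return d

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a dominates b when every attribute of a is >= the matching attribute of b."""
    return all(ai >= bi for ai, bi in zip(a, b))

def predict_monotone(x: Sequence[float], rectangles: List[Hyperrectangle],
                     train_X: List[Sequence[float]], train_y: List[int]) -> int:
    """Predict with the nearest hyperrectangle, then clip the label so the
    prediction cannot violate monotonicity against the stored examples."""
    nearest = min(rectangles, key=lambda h: distance(x, h))
    label = nearest.label
    # Monotonicity constraint: x >= x' component-wise must imply y(x) >= y(x').
    lower_bound = max((y for xi, y in zip(train_X, train_y) if dominates(x, xi)),
                      default=label)
    upper_bound = min((y for xi, y in zip(train_X, train_y) if dominates(xi, x)),
                      default=label)
    return min(max(label, lower_bound), upper_bound)

if __name__ == "__main__":
    rects = [Hyperrectangle([0.0, 0.0], [0.4, 0.4], 0),
             Hyperrectangle([0.5, 0.5], [1.0, 1.0], 2)]
    X = [[0.1, 0.2], [0.8, 0.9]]
    y = [0, 2]
    print(predict_monotone([0.6, 0.7], rects, X, y))  # prints 2

In this toy example the query point falls inside the second hyperrectangle and its label already respects the ordering induced by the stored examples, so no clipping is needed; on noisy or non-monotone data the clipping step is what enforces the constraint.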

Keywords

Monotonic classification · Instance-based learning · Rule induction · Nested generalized examples

Notes

Acknowledgments

The authors are very grateful to the anonymous reviewers for their valuable suggestions and comments to improve the quality of this paper.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.


Copyright information

© Springer-Verlag London 2015

Authors and Affiliations

  • Javier García (1)
  • Habib M. Fardoun (2)
  • Daniyal M. Alghazzawi (2)
  • José-Ramón Cano (3)
  • Salvador García (4)

  1. Department of Computer Science, University of Jaén, Jaén, Spain
  2. Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
  3. Department of Computer Science, EPS of Linares, University of Jaén, Linares, Spain
  4. Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
