Memetic Computing, Volume 4, Issue 2, pp 135–147

An immune-inspired instance selection mechanism for supervised classification

  • Grazziela P. Figueredo
  • Nelson F. F. Ebecken
  • Douglas A. Augusto
  • Helio J. C. Barbosa
Regular Research Paper

Abstract

One issue in data classification problems is finding an optimal subset of instances with which to train a classifier. Training sets that represent the characteristics of each class well are more likely to yield a successful predictor. In many cases, however, the data are redundant or consume large amounts of computing time during learning. Instance selection techniques have been proposed to overcome this issue: they remove examples from the data set so that classifiers are built faster and, in some cases, with better accuracy. Existing techniques are based on nearest neighbors, ordered removal, random sampling and evolutionary methods, and their weaknesses generally involve poor accuracy, overfitting, lack of robustness as the data set size increases, and high computational complexity. This work proposes SeleSup, a simple and fast immune-inspired suppressive algorithm for instance selection. Under the immune system's self-regulation mechanisms, cells unable to neutralize danger tend to disappear from the organism; by analogy, data not relevant to the learning of a classifier are eliminated from the training process. The proposed method was compared with three important instance selection algorithms on a number of data sets. The experiments showed that our mechanism substantially reduces the data set size and is accurate and robust, especially on larger data sets.
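
The abstract describes the suppressive mechanism only by analogy, so the following Python sketch gives one plausible reading of it under stated assumptions; it is not the authors' SeleSup algorithm. The function suppressive_selection and its survival criterion (an instance persists only if it acts as the nearest same-class neighbor of another instance) are hypothetical illustrations of the idea that unhelpful "cells" disappear.

```python
# A minimal, hypothetical sketch of immune-inspired suppressive instance
# selection, loosely following the abstract's analogy. It is NOT the
# authors' SeleSup algorithm: the function name and the survival
# criterion below are illustrative assumptions.
import numpy as np

def suppressive_selection(X, y):
    """Keep only the instances ("cells") that neutralize danger at least
    once, here taken to mean: being the nearest same-class neighbor of
    some other instance. All other instances are suppressed."""
    n = len(X)
    useful = np.zeros(n, dtype=bool)
    # Pairwise squared Euclidean distances between all instances.
    dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(dist, np.inf)  # an instance cannot defend itself
    for i in range(n):
        j = np.argmin(dist[i])      # nearest neighbor of instance i
        if y[j] == y[i]:            # j "neutralizes" i correctly,
            useful[j] = True        # so j survives the suppression step
    return np.where(useful)[0]

# Usage: reduce a toy two-class training set before fitting a classifier.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
keep = suppressive_selection(X, y)
print(f"kept {len(keep)} of {len(X)} instances")
```

Under this reading, redundant points deep inside dense clusters are never anyone's nearest defender and so tend to be suppressed, which is one way the substantial size reductions reported in the abstract could arise.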

Keywords

Instance selection · Data reduction · Artificial immune systems · Data classification

Copyright information

© Springer-Verlag 2012

Authors and Affiliations

  • Grazziela P. Figueredo (1, corresponding author)
  • Nelson F. F. Ebecken (1)
  • Douglas A. Augusto (2)
  • Helio J. C. Barbosa (2)

  1. Federal University of Rio de Janeiro, COPPE, Rio de Janeiro, Brazil
  2. LNCC-MCT, Rio de Janeiro, Brazil