Analysis and Improvements of the Adaptive Discretization Intervals Knowledge Representation

  • Jaume Bacardit
  • Josep Maria Garrell
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3103)


Abstract

To handle classification problems with real-valued attributes using discretization algorithms, it is necessary to obtain a good and compact set of cut points in order to learn successfully. In recent years a discretization-based knowledge representation called Adaptive Discretization Intervals has been developed; it can use several discretizers at the same time and also merges adjacent cut points. In this paper we analyze several aspects of its behavior. Based on this analysis, we propose some fixes and new operators that improve the performance of the representation across a large set of domains.
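The abstract's two core ideas can be illustrated with a small sketch (this is hypothetical code, not the paper's implementation): cut points proposed by a discretizer partition an attribute's range into micro-intervals, and an operator can merge adjacent intervals to coarsen the representation. All function names here are invented for illustration.

```python
# Illustrative sketch of a discretization-interval representation.
# NOT the paper's ADI algorithm; names and structure are assumptions.

def intervals_from_cuts(cuts, low, high):
    """Turn a sorted list of cut points into contiguous micro-intervals."""
    bounds = [low] + sorted(cuts) + [high]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def merge_adjacent(intervals, i):
    """Merge interval i with its right neighbour (coarsens the attribute)."""
    fused = (intervals[i][0], intervals[i + 1][1])
    return intervals[:i] + [fused] + intervals[i + 2:]

# Two discretizers may propose different cut points for the same [0, 1]
# attribute; a representation using several discretizers can draw on either.
equal_width = intervals_from_cuts([0.25, 0.5, 0.75], 0.0, 1.0)
entropy_based = intervals_from_cuts([0.31, 0.62], 0.0, 1.0)

# Fuse the first two micro-intervals of the equal-width partition.
coarser = merge_adjacent(equal_width, 0)
```

Here `equal_width` yields four micro-intervals and `coarser` three, showing how merging reduces the number of cut points a rule has to manage.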


Keywords: Knowledge Representation, Conjunctive Normal Form, Discretization Algorithm, Rule Representation, Parallel Evolutionary Algorithm




References

  1. Holland, J.H.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor (1975)
  2. Wilson, S.W.: Classifier fitness based on accuracy. Evolutionary Computation 3, 149–175 (1995)
  3. Llorà, X., Garrell, J.M.: Knowledge-independent data mining with fine-grained parallel evolutionary algorithms. In: Proceedings of the Third Genetic and Evolutionary Computation Conference, pp. 461–468. Morgan Kaufmann, San Francisco (2001)
  4. Giráldez, R., Aguilar-Ruiz, J., Riquelme, J.: Natural coding: A more efficient representation for evolutionary learning. In: GECCO 2003: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 979–990. Springer, Heidelberg (2003)
  5. Divina, F., Keijzer, M., Marchiori, E.: A method for handling numerical attributes in GA-based inductive concept learners. In: GECCO 2003: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 898–908. Springer, Heidelberg (2003)
  6. DeJong, K.A., Spears, W.M.: Learning concept classification rules using genetic algorithms. In: Proceedings of the International Joint Conference on Artificial Intelligence, pp. 651–656 (1991)
  7. Bacardit, J., Garrell, J.M.: Evolving multiple discretizations with adaptive intervals for a Pittsburgh rule-based learning classifier system. In: Cantú-Paz, E., et al. (eds.) GECCO 2003. LNCS, vol. 2724, pp. 1818–1831. Springer, Heidelberg (2003)
  8. Bacardit, J., Garrell, J.M.: Bloat control and generalization pressure using the minimum description length principle for a Pittsburgh approach learning classifier system. In: Kovacs, T., et al. (eds.) IWLCS 2003. LNCS (LNAI), vol. 4399, pp. 59–79. Springer, Heidelberg (2007)
  9. Blake, C., Keogh, E., Merz, C.: UCI repository of machine learning databases (1998)
  10. Martínez Marroquín, E., Vos, C., et al.: Morphological analysis of mammary biopsy images. In: Proceedings of the IEEE International Conference on Image Processing, pp. 943–947 (1996)
  11. Martí, J., Cufí, X., Regincós, J., et al.: Shape-based feature selection for microcalcification evaluation. In: Imaging Conference on Image Processing, vol. 3338, pp. 1215–1224 (1998)
  12. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, San Francisco (2000)
  13. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco (1993)
  14. Aha, D.W., Kibler, D.F., Albert, M.K.: Instance-based learning algorithms. Machine Learning 6, 37–66 (1991)
  15. Brodley, C.: Addressing the selective superiority problem: Automatic algorithm/model class selection (1993)
  16. Cantú-Paz, E., Kamath, C.: Inducing oblique decision trees with evolutionary algorithms. IEEE Transactions on Evolutionary Computation 7, 54–68 (2003)
  17. Wilson, S.W.: Get real! XCS with continuous-valued inputs. In: Booker, L., Forrest, S., Mitchell, M., Riolo, R.L. (eds.) Festschrift in Honor of John H. Holland, Center for the Study of Complex Systems, pp. 111–121 (1999)
  18. Stone, C., Bull, L.: For real! XCS with continuous-valued inputs. Evolutionary Computation Journal 11, 298–336 (2003)
  19. Aguilar, J., Bacardit, J., Divina, F.: Experimental evaluation of discretization schemes for rule induction. In: Deb, K., et al. (eds.) GECCO 2004. LNCS, vol. 3102, pp. 828–839. Springer, Heidelberg (2004) (to appear)

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Jaume Bacardit (1)
  • Josep Maria Garrell (1)

  1. Intelligent Systems Research Group, Universitat Ramon Llull, Barcelona, Spain
