Creating Abstract Concepts for Classification by Finding Top-N Maximal Weighted Cliques

  • Yoshiaki Okubo
  • Makoto Haraguchi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2843)


This paper presents a method for creating abstract concepts for classification rule mining. We seek abstract concepts that are useful for classification in the sense that such a concept discriminates a target class well while covering as much of the data as possible. The task of finding useful concepts is formalized as an optimization problem whose constraint and objective function are given by the entropy and the class probability of distributions, respectively. The concepts to be found can be stated in terms of maximal weighted cliques in a graph constructed from the possible distributions. From this graph, the top-N maximal weighted cliques are efficiently extracted as useful abstract concepts, using two pruning techniques: branch-and-bound pruning and entropy-based pruning. We show that the entropy-based pruning safely discards only useless cliques, by adding distributions in increasing order of their entropy during clique expansion. Preliminary experimental results show that useful concepts can be created in our framework.
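The top-N search described above can be illustrated with a generic sketch. The code below is not the authors' algorithm: it runs a Bron–Kerbosch-style maximal clique enumeration over an arbitrary weighted graph with a branch-and-bound cut (a branch is abandoned when even absorbing every remaining candidate cannot beat the current n-th best weight), and it merely mimics the paper's entropy-based ordering by expanding lighter vertices first. The function name, graph encoding, and ordering heuristic are all illustrative assumptions.

```python
import heapq

def top_n_maximal_weighted_cliques(adj, weight, n):
    """Return the n heaviest maximal cliques, heaviest first.

    adj    : dict vertex -> set of adjacent vertices (undirected graph)
    weight : dict vertex -> positive vertex weight
    n      : how many top cliques to keep

    Illustrative sketch only; in the paper, vertices would be class
    distributions and the expansion order is given by their entropy.
    """
    best = []  # min-heap of (clique_weight, sorted vertex list)

    def expand(R, P, X, w):
        # Branch-and-bound prune: even taking every remaining candidate
        # in P cannot beat the current n-th best weight.
        if len(best) == n and w + sum(weight[v] for v in P) <= best[0][0]:
            return
        if not P and not X:  # R is maximal: no vertex can extend it
            if len(best) < n:
                heapq.heappush(best, (w, sorted(R)))
            elif w > best[0][0]:
                heapq.heapreplace(best, (w, sorted(R)))
            return
        # Expand lighter vertices first -- a crude stand-in for the
        # paper's entropy-increasing expansion order.
        for v in sorted(P, key=weight.get):
            expand(R | {v}, P & adj[v], X & adj[v], w + weight[v])
            P = P - {v}
            X = X | {v}

    expand(set(), set(adj), set(), 0)
    return sorted(best, reverse=True)
```

For example, on a triangle {a, b, c} plus a pendant edge c–d with weights 1, 2, 3, 5, the two maximal cliques are {c, d} (weight 8) and {a, b, c} (weight 6), and the search returns them in that order.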


Keywords: Abstract Concept, Class Distribution, Rule Mining, Target Attribute, Discrimination Ability





Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Yoshiaki Okubo¹
  • Makoto Haraguchi¹

  1. Division of Electronics and Information Engineering, Hokkaido University, Sapporo, Japan
